Want to see Blaine's Sample AI Policies?
You're not alone. Many organizations know they need to incorporate guidelines on the use of AI into their company policies, but aren't sure where to start.
Here at Five Nines, we faced the same challenge. We understand that creating policies around the use of new and evolving technologies can be difficult, and that's why our Sample AI Policy is available to you to help you get started.
This is an example policy and may not be suitable for your industry, regulations, or specific business needs. We recommend reviewing all of its text through your own policy approval procedure.
1. As a small community bank, how do I protect our customers and the bank from someone using AI against us? How do we stop them from using AI to impersonate our clients?
Blaine Kahle:
While we can't stop attackers from doing this, we can look at new AI-enabled attacks through the lens of social engineering. These attacks are predicated on what we call pretexts: the attacker needs to convince you that the request is legitimate. In a low-technology environment, we authenticate that request by recognizing a face, a voice, and so on. With AI, those options are out. We can no longer authenticate on those signals alone, because they can easily be faked. Accessing systems will now require more rigorous verification processes, not just for your clients but even for your own staff.
This has to be addressed both procedurally and technically. We need some way to prove that the person making a request is real. As with more traditional technologies, we rely on something they know or something they have (think MFA, sketched below). While this adds inconvenience for the user or customer, the good news is that this security need is universal, and users are already accustomed to these kinds of challenges in the access and authentication processes they use every day.
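For illustration only, here is a minimal sketch of the "something they have" factor: verifying a time-based one-time password (TOTP) with the pyotp library. This is a generic example, not any specific vendor's MFA implementation, and the helper names are made up.

```python
# Minimal TOTP ("something they have") sketch using the pyotp library.
# In practice the secret is generated once, shared with the user's
# authenticator app, and stored server-side.
import pyotp

def provision_user() -> str:
    """Generate a new base32 secret for a user's authenticator app."""
    return pyotp.random_base32()

def verify_second_factor(secret: str, submitted_code: str) -> bool:
    """Return True only if the submitted code matches the current TOTP window."""
    totp = pyotp.TOTP(secret)
    # valid_window=1 tolerates small clock drift between client and server.
    return totp.verify(submitted_code, valid_window=1)

if __name__ == "__main__":
    secret = provision_user()
    current_code = pyotp.TOTP(secret).now()   # what the user's app would display
    print(verify_second_factor(secret, current_code))  # True
    print(verify_second_factor(secret, "000000"))      # almost certainly False
```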
Ross Rosenzweig:
AI is also a real enabling factor here, on the defense side. AI can learn locally to authenticate a user beyond usernames and passwords by learning what that person's behavior looks like:
- What's your cadence on the keyboard?
- How do you move the mouse?
- How do you typically conduct yourself on the endpoint?
- What times of day, and from which locations, are connections coming in?
Are these activities normal? This is the type of prediction I've been talking about: stopping, or at least alerting on, suspicious behavior as it happens. A rough sketch of the idea follows below.
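As a simplified illustration (not BlackBerry's actual model), the sketch below trains an anomaly detector on per-session behavioral features such as typing cadence, mouse speed, and login hour, then flags sessions that don't match the user's normal pattern. The feature names and numbers are assumptions for the example.

```python
# Toy behavioral-anomaly sketch: learn a user's "normal" sessions, flag outliers.
# Features per session (all illustrative): mean keystroke interval (ms),
# mean mouse speed (px/s), login hour (0-23).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Historical sessions for one user: daytime logins, consistent typing/mouse habits.
normal_sessions = np.column_stack([
    rng.normal(180, 15, 500),   # keystroke interval around 180 ms
    rng.normal(420, 40, 500),   # mouse speed around 420 px/s
    rng.normal(10, 2, 500),     # logs in around 10:00
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

# A 3 a.m. session with very different cadence should look suspicious.
new_sessions = np.array([
    [182, 415, 11],   # looks like the real user
    [ 60, 900,  3],   # fast scripted typing, odd hour
])
print(detector.predict(new_sessions))  # 1 = normal, -1 = anomalous
```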
2. With AI also being used to obfuscate malware, isn't there a new opportunity for an exploit after updates are pushed to commonly used software and libraries that the AI security tooling doesn't yet have a model for?
Ross Rosenzweig:
The idea behind our (BlackBerry Cylance) approach is to stop attacks before they ever start. If you have too many features in a model, it becomes very prone to false positives. If you have too few, it's prone to the same problem, or potentially to false negatives.
Like I mentioned, any particular file, even a zero-day that has been obfuscated, is still going to have the same static features and characteristics we train our models against: resources, keywords, compilers, header information, signatures, even things that we as human beings would never be able to see, like the entropy between sections. The models can be very effective even before a piece of software gets to the point of calling an API. In the case of CylanceAI, we provide static analysis that is smart enough to stop an attack, even a zero-day or an unfamiliar one, before it even starts.
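To make that concrete, here is a rough sketch of the kinds of static signals such a model might look at, extracted from a Windows PE file with the pefile library (header fields, section names, and per-section entropy). This is a generic illustration of static feature extraction, not Cylance's actual feature set or model.

```python
# Rough sketch: extract static features from a PE file without executing it.
# These are generic examples of signals a static ML model could use;
# they are not BlackBerry Cylance's actual feature set.
import math
from collections import Counter

import pefile  # third-party: pip install pefile

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte; packed/obfuscated sections tend to score high."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def static_features(path: str) -> dict:
    pe = pefile.PE(path)
    features = {
        "entry_point": pe.OPTIONAL_HEADER.AddressOfEntryPoint,
        "timestamp": pe.FILE_HEADER.TimeDateStamp,
        "num_sections": len(pe.sections),
    }
    for section in pe.sections:
        name = section.Name.rstrip(b"\x00").decode(errors="replace")
        features[f"entropy_{name}"] = round(shannon_entropy(section.get_data()), 3)
    return features

if __name__ == "__main__":
    # Hypothetical path; any PE binary works for the demonstration.
    print(static_features("sample.exe"))
```

In practice, features like these would feed a trained classifier; the sketch only shows where that kind of signal comes from.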
3. Should I have a policy inside my organization governing the use of AI?
Blaine Kahle:
Yes. As you saw in the presentation, AI is a really broad term, so you may want to focus specifically on how you think this technology might be used within your company. I've written one internally for Five Nines and made it available as a sample for anyone who wants it; it's specifically about the use of ChatGPT and similar large language model AI.
Something you may want to be aware of is the platform's privacy policy. You don't want people putting private customer or patient information into a platform that uses your data publicly. (ChatGPT uses prompts and conversations to further train the AI.) If your users do that, the platform might use your employees' prompts to train the next version of the model, and suddenly your private or proprietary data could show up as the answer to somebody else's inquiry.
You need to be aware of which tools you're going to authorize your people to use based on those privacy policies. They also need to be aware of the limitations of these tools. ChatGPT has no concept of truth. It's like a super version of autocomplete on your phone, suggesting the next word. It will absolutely make something up because it looked like the right next set of words, just based upon your inquiry.
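As a toy illustration of that "autocomplete" point (nothing to do with how ChatGPT is actually built, which uses a far larger neural model), here is a tiny bigram next-word predictor: it picks whichever word most often followed the previous one in its training text, with no notion of whether the result is true.

```python
# Toy bigram "autocomplete": predicts the next word purely from co-occurrence counts.
# It has no concept of truth, only of which word tends to follow which.
from collections import Counter, defaultdict

training_text = (
    "the bank approved the loan the bank denied the request "
    "the customer called the bank the customer signed the loan"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower, or a made-up fallback if unseen."""
    if follows[word]:
        return follows[word].most_common(1)[0][0]
    return "the"  # it will happily guess even when it knows nothing

print(predict_next("the"))       # whichever word most often followed "the"
print(predict_next("customer"))  # "called" or "signed", by count
print(predict_next("zebra"))     # still answers, despite never seeing "zebra"
```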
So what you don't want to say is that you can't use AI, ever, because then you've got to look at the fact that your antivirus probably uses AI, you might use Grammarly, which analyzes your grammar based on a model, and so on. You're using AI already. Just be conscious of the business objective you're trying to meet with your policy. It's probably data and privacy protection, and ensuring that your users don't put something out publicly that is false or makes you look silly.
4. How can businesses strike a balance between providing a convenient user experience versus maintaining robust cybersecurity measures?
Ross Rosenzweig:
This is exactly what we (BlackBerry) are trying to help folks do with our approach. It aligns with the Zero Trust architecture we were just talking about. The ability to authenticate users beyond just a username and password is important to achieving that balance. By understanding what users are doing and what makes sense, aligning that with policies, and adapting based on authorization and need, we can achieve the best possible outcome. We need to avoid risk while keeping our users productive, and ensure that they stay safe and that we stay safe as an organization as well.
Machine learning is an enabling capability here, and there are plenty of technologies available today that help us strike that balance.
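As a simplified, hypothetical example of what "adapting based on authorization and need" can look like in a Zero Trust setup, here is a small policy function that combines a behavioral risk score with the sensitivity of the resource to decide whether to allow access, require a step-up MFA challenge, or deny. The scores, thresholds, and field names are all assumptions for illustration, not any vendor's actual policy engine.

```python
# Simplified risk-adaptive access decision, in the spirit of Zero Trust.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    STEP_UP_MFA = "step_up_mfa"   # ask for an additional factor
    DENY = "deny"

@dataclass
class AccessRequest:
    user: str
    resource_sensitivity: int   # 1 = low (intranet wiki) .. 3 = high (core banking)
    risk_score: float           # 0.0 = looks like the user .. 1.0 = highly anomalous
    device_managed: bool        # is the endpoint enrolled in management?

def decide(req: AccessRequest) -> Decision:
    """Balance convenience and security: only challenge or block when risk warrants it."""
    if req.risk_score >= 0.8:
        return Decision.DENY
    if not req.device_managed and req.resource_sensitivity >= 2:
        return Decision.STEP_UP_MFA
    if req.risk_score >= 0.4 and req.resource_sensitivity >= 2:
        return Decision.STEP_UP_MFA
    return Decision.ALLOW

print(decide(AccessRequest("alice", 1, 0.1, True)))    # Decision.ALLOW
print(decide(AccessRequest("alice", 3, 0.5, True)))    # Decision.STEP_UP_MFA
print(decide(AccessRequest("alice", 3, 0.9, False)))   # Decision.DENY
```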
5. Our EMR announced they are adding AI technology to their software. What kind of risk does that present to us outside of the EMR line?
Blaine Kahle:
What does the company mean when they say they're "adding AI" to their software? Does that mean they're integrating chat to give you suggestions based upon your data?
AI is such a hot buzzword right now that everyone is capitalizing on it, because nobody wants to be seen as falling behind on AI. I think you need to figure out what exactly they're doing. Are they going to end up collecting more data from you, or using your data differently? You'll want to understand that.
Question 5 Continued Offline:
eClinicalWorks (EMR) is incorporating Sunoh.ai into the software
Blaine Kahle:
My main concern would be to find out whether your data is in any way transmitted back to Sunoh.ai or accessible by them, and if so, whether it is used to train future AI models. Sunoh.ai is surely aware of the medical community's privacy concerns, but that is something you'll want to confirm rather than assume.