Generative artificial intelligence has been among the most impressive breakthroughs in years. But for corporate security chiefs, it’s causing angst.
That’s because for all its genius, generative AI – which can produce human-quality text and unlock new efficiencies – also heaps new risks and potential liabilities upon companies.
As AI moves rapidly through companies and society as a whole, the risks will accelerate as well. Yet adopting AI isn’t optional; it’s a must. It would be a strategic blunder for any company to avoid generative AI across its business – from human resources to analytics to product development.
As a result, a company’s first order of business is to conduct due diligence and protect against risks ranging from cybersecurity threats and intellectual property infringement to unreliable data sets and false information.
The responsibility to protect against potential generative AI threats falls on the shoulders of the security team. A security chief’s priority is to understand the different ways AI tools could undermine a company. Bad actors will employ them, so a mitigation strategy must be put in place.
Unreliable information
The fundamental problem with the large language models that underpin generative AI tools lies in the way they are trained. They are fed massive amounts of data from various sources on the internet, ranging from Wikipedia and PubMed to EDGAR and the FreeLaw Project.
Using this information, the model creates a “worldview” to generate content. But the supposed facts embedded in that worldview are sometimes fuzzy, and newly generated content can reflect bias or ethical blind spots.
How can that happen? Artificial intelligence, while able to mimic human language, does not fundamentally understand the text it generates. It uses algorithms to predict which word comes next in a sentence. So if, for example, the entire internet is used as the foundation, the output will reflect both the good and the bad.
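To make that mechanism concrete, here is a minimal sketch of next-word prediction, assuming the open-source Hugging Face transformers library and the public GPT-2 model (any causal language model would illustrate the same point). Note that the model’s entire “decision” is nothing more than a probability distribution over possible next words:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    # Load the publicly available GPT-2 model and its tokenizer.
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The fourth-quarter results were"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        # A score for every vocabulary token at each position in the prompt.
        logits = model(**inputs).logits

    # The "knowledge" behind the next word is just a probability distribution.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, 5)
    for p, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")

Nothing in that loop checks whether the highest-probability word is true; it only checks whether it is likely, given the training data.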
Machines’ lack of understanding means humans must evaluate the veracity of the information. Because the underlying data can yield unpredictable patterns, predictions, or outcomes, it is people who need to handle the last mile of any process that leads to a decision.
From a risk-management perspective, there are several implications for companies. For starters, if they’re using generative AI within their own organization, they must pay close attention to the information they’re feeding the model. They must also verify the integrity of any data purchased from vendors.
The model needs a valid set of facts to draw upon to generate the best results. Fortunately, every company sits on a trove of its own data: historical customer records, market trends, surveys, and financials. The key is to ensure the models are tapping into this fact base.
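As a hypothetical illustration, that vetting can be as simple as a filter applied before records ever reach the model. The field names, trusted-source list, and freshness rule below are assumptions made for the sketch, not a prescribed schema:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Record:
        source: str           # where the fact came from, e.g. "crm" or "vendor_feed"
        updated_at: datetime  # when the record was last refreshed
        text: str             # the content itself

    TRUSTED_SOURCES = {"crm", "finance_ledger", "customer_surveys"}  # assumed whitelist
    MAX_AGE = timedelta(days=365)  # assumed freshness rule

    def is_usable(record: Record, now: datetime) -> bool:
        """Keep only recent, non-empty records from sources the company trusts."""
        return (
            record.source in TRUSTED_SOURCES
            and now - record.updated_at <= MAX_AGE
            and bool(record.text.strip())
        )

    def build_fact_base(records: list[Record]) -> list[Record]:
        """Filter raw records down to the fact base the model is allowed to see."""
        now = datetime.now()
        return [r for r in records if is_usable(r, now)]

The specific rules will differ by business; the point is that provenance and freshness checks happen upstream of the model, not after it has already generated an answer.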
Just as important, companies need to be aware of how other businesses, partners, and vendors are sourcing their information and training their models. Are the people you’re doing business with doing their best to filter out misinformation and skewed results? If not, they’re a potential risk to your business.
Heightened liability
Attention to due diligence and liability will play a bigger role with the mass proliferation of AI. Copyright and intellectual property infringement will rise, and some companies may not even realize they’ve crossed a legal line. Spotify, for example, recently removed tens of thousands of AI-generated songs from its platform in a purge of fake streams after record labels flagged them.
Generative AI has also heightened cybersecurity risks. The global volume of cyberattacks had already reached an all-time high in the fourth quarter of last year – before the big surge in corporate AI spending in the first half of 2023.
Organizations need to pay attention because text-generating tools such as ChatGPT give bad actors the ability to produce polished, fluent text of nearly any length at the push of a button.
Just as concerning, users can instruct generative AI to mimic the style, tone, and language of a specific individual. As a result, the velocity of impersonations and deepfakes is set to spike. What’s more, the barriers to such schemes have dropped: a simple software plugin can generate chaos and confusion at scale.
If executives have written content publicly available on the internet – shareholder letters, media interviews, blog posts – that material becomes fodder for crafting believable phishing emails that appear to come from them.
It’s more important than ever for businesses to understand where their information comes from and how they’re filtering out bad or corrupted data. They must also consider how AI-generated content might be used to shift their opinions, their narratives, and their decisions – because malicious actors are standing at the door.
Navigating new waters
To guard against social engineering attacks, companies will require increased vigilance and security. A good start is making employees aware that these highly advanced new methods exist.
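Awareness can be backed by simple automated safeguards. As one hypothetical illustration, a mail filter might flag sender domains that closely resemble, but do not exactly match, the company’s own – a classic phishing tell. The domain list and similarity threshold below are assumptions made for the sketch:

    from difflib import SequenceMatcher

    LEGITIMATE_DOMAINS = {"example.com", "example.co.uk"}  # assumed company domains

    def looks_like_spoof(sender_domain: str, threshold: float = 0.8) -> bool:
        """Flag domains that closely resemble, but do not exactly match, a real one."""
        if sender_domain in LEGITIMATE_DOMAINS:
            return False
        return any(
            SequenceMatcher(None, sender_domain, legit).ratio() >= threshold
            for legit in LEGITIMATE_DOMAINS
        )

    print(looks_like_spoof("examp1e.com"))    # True  - one-character lookalike
    print(looks_like_spoof("example.com"))    # False - exact match
    print(looks_like_spoof("unrelated.org"))  # False - not similar enough

Checks like this won’t catch a well-crafted AI-written message on their own, but they raise the cost of the easiest attacks while employees are trained to spot the rest.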
While generative AI is undeniably a powerful and disruptive technology for companies able to harness it, it also irrevocably changes the calculus of risk management.
Since the genie isn’t going back into the bottle, companies will need to use every tool at their disposal to make sure they’re successfully managing the risks that surround it.