
Cascading artificial intelligence advancements present an array of security traps for unsuspecting companies. The consequences of a successful attack are likely to be far more severe than those of an old-style phishing or hacking attack.

Gone are the days of an easily identifiable scam featuring a blurry eBay JPEG logo asking you to update your credit card information. The immediate future will bring sophisticated schemes to steal the most critical data your company has. Unlike previous hacking ploys, attacks that leverage AI will be almost undetectable to those without robust, almost ironclad, protection.

Scams leveraging AI are so threatening because AI itself is so sophisticated. For example, leading generative AI models currently produce “original” work of a sort. Known in the industry as “hallucinations,” these completely false or invented images, sounds, and other types of content seem genuine but are, in fact, anything but. Because they are so realistic, they present a significant hazard to unsuspecting parties.

Worse than traditional hacking

Hackers leveraging generative AI are coming for your proprietary information – the intellectual property of your products, your processes, your customer insights. The AI hacking danger can, and will, go well beyond ransomware. These are existential threats to a business.

While there’s a divergence of opinion about the size and timing of the AI wave’s impact, the wave itself is real. There may be debate about when foreign and domestic AI hackers will appear in numbers, but be certain: they will indeed arrive.

As AI becomes woven into the fabric of everyday processes, it is imperative that all partners in this data revolution be thoroughly vetted and transparent about the inner workings of their tools – regardless of type – so that users can detect, evaluate and repel vulnerabilities and attacks.

The reason it’s critical: Traditional software tools consisted of defined, human-created code. AI tools can begin to create new code on their own – generally with no audit trail and no attribution. That lack of transparency is a major security issue.

Your security team must step up

These flaws in the AI models may get their own fix someday – but for now, the use of AI tools mandates an extensive quality assurance and verification process. Your security team must require transparency in the algorithms your business is going to rely on.

The AI phishing problem offers a good illustration. Tech tools have long been used to weed out spam, and spam filters have usually been sophisticated enough to determine a message’s legitimacy. Generative AI tools today, however, produce text with such fluency that those filters will need to be enhanced to remain effective. Even that won’t be enough: human training will be required as well to fully weed out threats.

For example, an AI-generated text could look like, and be written in the same “tone” as, a message from a trusted colleague or business acquaintance, yet be completely illegitimate. The same concerns apply to phishing phone calls. These, of course, have proliferated for years, ever since cell phones and social media ushered in the digital age. With the onset of generative AI, however, these threats go well beyond what has long been discounted as obviously bogus. Today, generative AI can create such a realistic replica of someone’s voice that phone scams are far more likely to be taken as genuine.

Once captured through nefarious means, that information can then be published on the web, where it can be scraped and collected into training data and ultimately corrupt the very models that have been built to predict security threats.

The reliability threat

Let’s remember that the ultimate goal of using these AI models is to increase human productivity. That, however, requires humans to be able to rely on the information the models produce.

The public-facing AI platforms were rushed to market long before the hallucination problem was solved, and in some cases even before the technology was sophisticated enough to be producing those hallucinations. That means these major platforms, trained on open and unauditable data sets, have ingested both gold and garbage, and at the moment they often can’t distinguish between the two.

That in turn means any business relying purely on AI outputs runs a substantial risk of being wrong, and of incurring liability or losing customers as a result. Once again, it seems the machines will require significant human oversight and intervention for some time to come.

As treacherous as it may be, government regulation may also play an important role in attempting to quell tampering. Whether government has the collective skill to execute that role is certainly arguable. It’s not that some sectors of the government – the military, security and intelligence agencies like the NSA and CIA, and others – lack remarkable security expertise. It’s that the government wasn’t built to ensure cybersecurity across every private network and enterprise in the nation.

In this coming period of technological tumult, AI tools should be assessed in much the same way as the earlier moves from slide rule to calculator, then to the personal computer, and then to the cloud. Technological tools have finite capabilities, and even with AI they still require significant human intervention and guidance.

The prospective threats generative AI poses to your enterprise are huge. There’s no way to avoid them, and the technology will be necessary to compete in the coming decades. Wrestling with the threats now will pay big dividends later.

Jim Brooks is chief executive officer at Seerist.

