The Threat of Generative AI Running Afoul of Data Protection Laws

The headlong rush into generative Artificial Intelligence (AI) comes with an undeniable risk of privacy intrusion that threatens to upend the public’s embrace of this powerful productivity tool.

Failing to vigorously address privacy concerns in the near term is certain to bring about a patchwork of privacy laws across the nation - perhaps the world - that will create nothing but constant headaches for those attempting to roll out generative AI in their enterprises.

Looking across the current landscape, we already see some best practices evolving from the legislative perspective. From California to the European Union, regulations affecting privacy are developing at a rapid clip as the AI revolution takes hold.

Attention needs to be paid to some of these bold legislative privacy initiatives. Ignore them at your own expense.

Certainly, there’s no stopping AI from drastically altering the future. But there will be hurdles to overcome, and those hurdles could be high if we find AI develops into a privacy invasion apocalypse.

While chatbot technology has been a boon, companies run a real risk of running afoul of state data protection laws, particularly if their technology leads to the unintentional acquisition or storage of personal data. Currently, 10 states have introduced data protection laws, but that number is expected to grow as the convergence of data protection and AI continues at a rapid pace.

The issues are complex. Companies need to be mindful of whether the personal data they collect is shared with the generative AI that sits underneath their chatbots. If they operate in a state whose data protection laws cover chatbots, are they following those laws? They also need to consider whether the chatbot owner bears a statutory responsibility for the data it collects.

Scenarios for chatbot privacy intrusion are countless. Companies now must navigate the thorny issue of what happens when somebody uses a chatbot to schedule, for example, a medical appointment. For a chatbot to do this, it must have some fundamental information, including who that person is and who their doctor is. If that doctor treats cancer, opioid addiction, sexually transmitted diseases or mental health issues, the bot will have that information, too. The questions then become: Does the bot save all that data in case the person needs to make a future appointment? Should the chatbot use that information to make the person aware of new drugs and treatments that might be coming out? Can the underlying intelligence of that chatbot share its information with a third party? And what if there is a data breach and information is released on the web or the dark web, where it cannot be erased?

Data protection laws need to be enhanced overall, and they must be created specifically to deal with AI. Very few data protection laws governing AI exist anywhere in the world, although a great deal of proposed legislation is currently in the works. The fundamental lack of data protection laws in this arena exists because nobody anticipated generative AI would be upon us so quickly. It’s why, on March 29, more than 1,000 tech leaders and researchers signed an open letter calling for a six-month halt in developing powerful AI technologies, saying they pose “profound risks to society and humanity.”

That’s why it’s not enough for states to have individual data protection laws. There needs to be one law that governs the country. As AI continues to grow, more states will implement their own data privacy laws to the point where all these different state laws will become unmanageable. The ideal situation to work toward is one heightened law addressing all concerns.

No matter how far generative AI takes us or how much it simplifies people’s lives, the goal must always be to safeguard people’s fundamental right to privacy as a cornerstone of individual freedom, human dignity and the basis of a functioning democratic society.

It’s why security assurance and privacy rights need to be integrated into the core of any AI system, so that governments and businesses can still collect and use personal data and safeguard against identity theft, fraud and other forms of cybercrime while maintaining a person’s individual right to privacy. For that security to be watertight, the United States should look beyond state rules and develop a robust federal data protection act, along the lines of those in Europe or Switzerland. Switzerland recently revised its Federal Act on Data Protection (FADP), which goes into effect in September 2023.

A regulation hodgepodge is bound to emerge without a coordinated effort. That scenario will lead to a chaotic regulatory environment that will serve no useful purpose other than to confuse. At the end of the day, it’s essential that there is a comprehensive federal data privacy law that incorporates very specific laws about AI that ensure control of data stored while respecting people’s privacy and personal data. Without this, the country risks losing one of its fundamental pillars of democracy.


Scott Allendevaux is a data protection and privacy law specialist and the senior practice lead at Allendevaux & Company in London, UK. 

