It’s the End of the World as We Know It (and I Feel Fine): The Prospects for (More) AI Regulation

Lewis Barr

Vice President Legal and General Counsel

AI regulation in 2023
AI Regulation & Ethics | Artificial Intelligence
Published 08/09/23
9 minute read

When I was younger, so much younger than today, my friends and I strained to understand what Michael Stipe was singing about on R.E.M.’s staggeringly good first album, suitably titled “Murmur,” released in the internet’s inaugural year, 1983. The music was energizing and infectious, but even if I could have figured out how to access the internet back then, no one was posting free lyrics to help me understand what Stipe was singing. Anyway, what would I have made of lines like “The pilgrimage has gained momentum/Take a turn, take a turn/Take our fortune, take our fortune”?

Now, as general counsel for a company that has been providing conversational AI solutions for more than a decade, I turn to publications from the IAPP, law firms, governments, and other organizations for help in keeping up with legal developments that may impact my company’s processing of customer data transferred via the internet and our use of generative AI to continually improve our services.

The Changing Landscape

Unlike in R.E.M.’s early days, when my friends and I could differ and laugh about Stipe’s puzzling pronouncements, the texts of enacted and proposed AI-related statutes and regulations are readily accessible, and their application is no laughing matter.

Stanford University’s 2023 AI Index shows that 37 AI-related bills were passed into law globally in 2022, and efforts to further regulate AI are gathering steam. Among U.S. states, there has been a flurry of recent legislative activity targeting AI technology. At the federal level, however, the U.S. has not passed significant AI-focused legislation or led the way in AI-specific regulation since the AI in Government Act of 2020, which established a government office to promote AI best practices and guide federal agencies’ deployment of AI.

2023 AI Index Report, Stanford University

Admittedly, I am getting ahead of myself on the topic of AI regulation. But that brings to mind one of my favorite R.E.M. songs: “It’s the End of the World as We Know It (And I Feel Fine).”

Tech Leaders’ Concern

For several years, the negative impacts of AI’s use in social media, including the promotion of ever more extreme content to boost ad revenue, have raised alarms and drawn justified criticism. But with public awareness of AI’s threats and benefits growing alongside ChatGPT’s phenomenal user adoption rate, the clamor about all things AI has led to a role reversal of sorts.

On the one hand, in their open letter circulated back in March, big tech figures and leading AI developers called for a pause of at least six months on further development of generative AI models more powerful than GPT-4. These experts seemed to be warning that AI, if left unchecked, would indeed lead to the end of the world as we know it: the eventual replacement of us all by “nonhuman minds.” Darn scary stuff.

Sam Altman, OpenAI’s CEO and one of the letter’s signatories, followed up with testimony before the Senate Judiciary Committee, where he recommended that the U.S. government consider a combination of registration and licensing requirements for the development and release of AI models above an unspecified threshold of capabilities, along with incentives for compliance. Mr. Altman even advocated for the establishment of a global body to regulate AI, an approach supported by U.N. Secretary-General António Guterres, who recently announced an advisory board to recommend global standards for AI use by year-end.

The U.S. Government’s Approach

The Biden administration and U.S. government agencies, on the other hand, seem to be taking a cooler view of the commotion and media frenzy generated by the open letter and AI developments to date, as though viewing it all from behind a pair of Ray-Bans with an attitude closer to Stipe’s unconvincing refrain, “And I Feel Fine.”

It’s not that the Biden team is oblivious to the dangers of AI. Quite the contrary. Last October, the White House released its Blueprint for an AI Bill of Rights (Blueprint), which recognizes the dangers AI could pose to the rights of American citizens. The Blueprint recommends the application of five principles to the use of AI “[to] guide the design, use, and deployment of automated systems to protect the American public.”

The 5 principles from the White House’s Blueprint for an AI Bill of Rights

Well aware of the threat to democracy posed by deepfakes and AI-generated agitprop, the White House is advocating for risk-based controls wherever the use of AI may pose a risk to the rights of Americans, rather than seeking legislation targeting AI specifically.

In January of this year, NIST followed up Team Biden’s Blueprint with the release of its AI Risk Management Framework and companion playbook along with other tools that offer detailed guidance for American companies seeking to use AI responsibly and minimize their risk profile in doing so. The framework is designed to enable organizations that deploy it “to improve [their] ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”

That the White House currently is not pushing for any AI-specific legislation is understandable in light of Congress’s failure last year to pass a comprehensive privacy law – the American Data Privacy and Protection Act – despite significant bipartisan support for its passage. In addition, the Supreme Court punted on considering the plaintiffs’ and the Biden administration’s contention in Gonzalez v. Google LLC that Section 230(c)(1) of the Communications Decency Act of 1996 should not shield a company from legal claims based on the company’s own conduct in designing and implementing its recommendation algorithms.

Since OpenAI’s release of ChatGPT, the Federal Trade Commission (FTC) and other federal regulators have been making the case that AI, as just another technology used by businesses and other organizations, including the government, enjoys no exceptions to the laws already on the books.

Earlier this year at the IAPP Global Privacy Summit, FTC Commissioner Alvaro Bedoya spoke about the troubling characteristics of generative AI, noting that generative AI applications often surprise their developers by surfacing emergent behaviors that are not only unpredictable but can be inexplicable. He reminded the audience that AI is already regulated under unfair and deceptive trade practice law, civil rights law, tort and product liability law, and the common law as well. Perhaps in reference to the industry leaders seeking to frame the regulatory debate, Commissioner Bedoya emphasized that the law is concerned with the impact of AI on regular people, not the opinion of AI experts. Finally, he called on users of AI systems to take reasonable measures to prevent foreseeable risks and to be transparent about their use of AI, explaining its use and enabling it to be tested by others.

Not long after Commissioner Bedoya’s speech to the IAPP, the FTC, the EEOC, the Department of Justice Civil Rights Division, and the Consumer Financial Protection Bureau issued a joint statement emphasizing that they would apply their existing authority and law to protect Americans from uses of AI that negatively impact their rights. For its part, the FTC emphasized that it is paying particular attention to uses of AI resulting in illegal discriminatory impacts, to companies making unsubstantiated claims about their use of AI, and to whether companies are assessing and mitigating risk before deploying AI. The FTC has put American businesses on notice that they are accountable for their algorithms’ performance. And in three cases since 2019 in which personal data was wrongfully used in AI applications, the FTC has obtained the defendants’ agreement to destroy the offending algorithms as a condition of settlement.

In late July, the White House announced it had persuaded Amazon, Google, Meta, Microsoft, and other leading AI developers to follow a number of AI safeguards in line with the Blueprint it promoted earlier.

This voluntary approach is the opposite of what the EU has in mind.

The European Union Law

Like the U.S., the EU is starting to apply its existing law, the GDPR, to AI usage. This spring, Italy’s data protection authority, the Garante, temporarily halted the use of ChatGPT in Italy out of concern that its data collection practices violated the GDPR and Italian law. As a result, OpenAI agreed to modify some of its practices.

With some exceptions, GDPR Article 22 prohibits solely automated decision-making (including with the use of AI) that produces legal or similarly significant effects for an EU resident. This calls for humans to be at the end of the AI loop, ensuring that human (not machine) judgment is applied in making the consequential decisions within the law’s scope. In addition, concerns have been raised that some AI usage may violate an EU resident’s right to be forgotten under the GDPR.

Unlike the U.S., the EU has been working on a comparatively comprehensive AI law, the Artificial Intelligence Act (the “AI Act”), since 2021. EU lawmakers are reportedly in the final stages of negotiating the revised draft of the law, which may be enacted by early 2024, although it will likely be at least a year after enactment before it is fully implemented. So, while the AI Act train has certainly left the station, there are still significant points to be negotiated before it is done and dusted.

Like the GDPR, the AI Act will apply not only to organizations in the EU but also to those “placing on the market or putting into service AI systems in the Union.” AI Act, Art. 2(1)(a).

Also like the GDPR, the AI Act will authorize the imposition of significant fines for significant violations. Similar to the Blueprint, the AI Act incorporates a risk-based approach, imposing guardrails on higher-risk applications. It will ban certain AI applications outright while imposing greater obligations on providers of systems the AI Act classifies as high-risk and lesser obligations on providers of lower-risk systems. Obligations for providers of high-risk systems will include those related to AI risk management, documentation, security, transparency, and human oversight.

The risk-based approach in the current AI Act draft will no doubt remain part of the enacted law, as will its transparency requirements, which apply to all business providers of AI, no matter the level of risk involved. With limited exceptions, the AI Act will require AI providers to “ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.” AI Act, Art. 52(1).

The AI Regulation Bottom Line

As R.E.M. aptly observed in “Talk About the Passion,” “Not everyone can carry the weight of the world.”

While the EU can’t carry the weight of significant AI regulation for the world due to jurisdictional restrictions, it may once again (following its GDPR example) set the global standard for high tech regulation — this time on the provisioning of AI.

Here in the U.S., however, it seems that we’ll continue with “shaking through, opportune” for a while.
