
Specific Terms for Specific Risks: The Need for Accurate Definitions of AI Systems in Policymaking

By: Kara Williams (EPIC) and Ben Winters (CFA)

Cross-posted on both websites

[Image: recent news headlines about “AI”]

These headlines likely conjure up a picture of some super-intelligent, all-powerful robot capable of solving society’s problems, thinking like a human, and replacing all our existing technology in one fell swoop. But if that sounds too good to be true, it is. Hayden Field, senior AI reporter at The Verge, summed up the problem with calling everything “AI”: “If we had a better vocabulary for talking about AI, it would actually be really really helpful for combatting some of the misconceptions about AI that are out there right now.” When we call everything “AI,” it becomes nearly impossible to identify the risks and harms of certain kinds of AI-based systems or to fix those harms.

Despite our cultural tendency to refer to a huge variety of different technologies as “AI,” it is essential to be specific about what we mean, both in common conversation and in legislators’ efforts to regulate technology. If everything is “AI,” lawmakers and their constituents cannot meaningfully engage with what a given piece of regulation would actually do. This is a problem because, of all the “AI bills” from this legislative session, none regulates all of the technologies commonly referred to as “AI.” Without being explicit about precisely what we mean when we talk about “AI,” we risk continuing to let the companies creating or implementing the technology write their own rules—or worse, ensuring that no meaningful rules around technology are written at all.

AI proponents regularly argue that regulations pose a massive threat to AI development and adoption and that any meaningful regulation would be too burdensome and confusing for companies trying to innovate. This rhetoric was most visible recently when House Republicans pushed for a moratorium on all state regulation and enforcement of AI, a proposal that failed after massive backlash from lawmakers, public interest organizations, and the public itself. These arguments are also the constant refrain of tech companies and AI advocates whenever a state considers any AI regulation. In reality, these claims rely on a series of incorrect representations, including that all new technologies fall into the bucket of AI, that regulation stifles innovation, and that tech companies’ “self-regulation” is sufficient to ensure technology is safe and fair. 

When Everything Is Called “AI,” the Term Loses Any Real Meaning 

“AI” is often used as a catch-all term encompassing a wide variety of technologies, ranging from the simplest algorithms to the most complex systems and everything in between. Each of the technologies that commonly fall under the “AI” umbrella has distinct abilities, uses, and harms, and categorizing them all as “AI” is a marketing ploy, not an assessment of the technologies themselves. Lumping dozens of different technologies and algorithms together under the single title of “AI” makes effectively regulating any algorithm-based system nearly impossible.

This nonexhaustive list of examples illustrates the vast range of technologies that people may be talking about when they say “AI”: 

  • Generative AI 
    • Text generation systems built on large language models like ChatGPT
    • Image and video generation tools like Google’s Veo or OpenAI’s Sora
    • Voice generation systems like ElevenLabs
    • Chatbots, including companion chatbots like Character.ai, mental health chatbots like Woebot, and many other subsets 
    • Transcription tools like Otter.ai 
    • Agentic AI systems, both business-facing and consumer-facing, including tools that automatically set individualized prices based on information collected from different sources, like location, browsing history, stock information, purchase patterns, and more
  • Automated decision or recommendation systems
    • Hiring systems like Workday’s resume scanner and HireVue’s interview insights
    • Predictive policing systems like ShotSpotter 
    • Health care coverage algorithms like UnitedHealth’s nH Predict 
    • Public benefits eligibility systems like Deloitte-contracted state systems 
    • Fraud detection systems like Pondera
    • Criminal justice-related “risk assessment” predictive algorithms like COMPAS 
    • Rental pricing algorithms like RealPage
    • Surveillance or “personalized” pricing algorithms used by retailers like Kroger
    • Targeted ad delivery systems like those used to place online advertisements  
    • Algorithmic content moderation systems like those used by social media platforms 
    • Algorithmic content recommendation systems like those used to order social media feeds 
    • Automatic license plate readers like those offered by Flock Safety 
    • Background automated tools on our devices, including spam email filtering
    • Navigation technology including Google Maps 
  • Biometric identification and recognition systems
    • Fingerprint recognition systems like those on iPhones
    • Iris scanning systems like Worldcoin
    • Facial recognition systems like PimEyes and Clearview AI
    • Emotion recognition/sentiment analysis systems like Hume AI’s Expression Measurement 

Many of these technologies share common elements, including data collection, some form of training, data use or processing, and an output. This larger category is better described under the umbrella term “automated systems” than as “AI,” a marketing term intended to imply a futuristic technology with superior “intelligence.” Despite these commonalities, the technologies differ in the outputs they produce, the audiences they serve, and the purposes and contexts in which they are used. An automated system’s output might be a suggested price for an item, the text of an email to a client, a recommended employment action, or a deepfake image or video, and its users range from children to law enforcement officials to CEOs.

While the insistence that there is no single thing called “AI” may seem like a technicality, it has important real-world impacts. Recognizing that the broad umbrella of “AI” covers many different categories of systems that produce different kinds of outputs and are used for different purposes is essential for ensuring that the harms and risks of each type of automated system are regulated appropriately.

For Effective Policy, Solutions Must Be Mapped onto Accurately Described Harms

The idea that a chatbot that interacts with children should be regulated differently than an algorithm used to set prices in grocery stores is intuitive to most people. Just as we do not put the same rules on children’s bicycles that we do on commercial airplanes despite both being vehicles, we should not attempt to regulate discrete uses of “AI” the same way just because they are all “computers doing computer stuff.” Proponents’ lofty claims of AI’s potential to cure cancer or fix social problems do not justify ignoring the existing and future harms the technology causes, particularly when the proposed safeguards could significantly reduce those harms while still allowing for positive innovation. To this end, lawmakers should take thoughtful, people-centered approaches to regulating the technologies that fall under the “AI” umbrella.

Grouping new technologies that have unknown risks together with systems that have been used for years and have well-documented histories of specific harms muddies the issue for legislators, making every bill seem like a referendum on technological progress as a concept. This oversimplification also scares policymakers into believing they are not equipped to address any AI-based harms without unintentionally preventing progress or “stifling innovation.” The narrative that a bill putting specific guardrails in place will somehow prevent the development of artificial intelligence that could deliver a cancer cure poses unnecessary challenges to lawmakers trying to pass laws that address real harms their constituents are actively facing. This rhetorical confusion directly benefits AI developers and deployers by mucking up the regulatory process and preventing laws from passing, allowing them to continue business as usual without having to comply with any targeted technology regulations. By convincing policymakers that self-regulation is the only plausible way to avoid stifling innovation, technology companies and the venture capitalists who fund them secure a green light to release untested, unproven technologies into the marketplace. This lack of meaningful regulation also means that when these systems cause harm, the victims often do not have adequate recourse to hold those responsible accountable.

Despite the hyperbolic claims of those profiting off the widespread adoption of AI, there are many commonsense guardrails that can be put in place to reduce ongoing harms without impeding technological development. While there is no easy solution to the problems inherent in many AI technologies, there are numerous simple steps policymakers could take now to greatly reduce the harms individuals face from flawed technologies. 

For example, one clear AI-based harm comes from generative AI being used to create nonconsensual deepfake intimate images. A law that criminalizes the creation or distribution of this content, a commonsense measure that many states have already adopted, would mitigate this harm. Another set of harms stems from automated decision systems that are inaccurate or biased. A simple yet effective response would be a law requiring the developers of automated decision systems to test their systems for accuracy, quality, and bias before they are used or sold. The use of chatbots in mental health care presents yet another set of harms. Banning chatbots from conducting therapy or performing other tasks that require a professional license to practice safely is a clear bright-line rule that would greatly reduce these harms.

These examples are just a few of the numerous safeguards that legislators could put in place to make their constituents safer and more knowledgeable about emerging technologies. When commonsense regulations like these are proposed, it is critical to take the proposal at face value and not get swept up in abstract rhetoric about how one regulation could put the United States behind in AI development or stifle innovation. 

Regulating AI and other new technologies does not require legislators to become experts on the technology — good tech policy merely requires lawmakers to accurately describe what technology they are regulating and what the regulation will do to prevent harm. CFA and EPIC urge policymakers to focus on concrete, people-centered safeguards — identifying a specific type of technology, a particular setting in which technology is used, or an ongoing harm or set of harms — and advocate for legislation with clear messaging and carefully defined terms.

If you would like assistance with educating lawmakers or the public about these issues or developing legislation through direct assistance, testimony, coalition building, and more, don’t hesitate to contact the authors at williams@epic.org and bwinters@consumerfed.org.