You know that something is scary when the businesses that created it ask Congress to regulate it, as they did in a recent Senate subcommittee hearing on artificial intelligence (AI). AI refers to the ability of computers to perform tasks using intellectual processes characteristic of humans. One subset, “generative AI,” consists of systems that, after being trained on large data sets, can generate text, images, videos, or other outputs in response to a prompt.
A fellow consumer advocate recently showed me how he could use an AI program on his laptop to draft privacy legislation. The result was instantaneous and quite impressive. I don’t know what sources the program used, but it must have drawn on legislative language that someone advocating on behalf of consumers would be likely to use, and it stitched together a decent legal framework for privacy protection. It was a good example of how AI can research, reason, and produce useful results. There are many applications of AI that can benefit consumers, from improving automated responses to their questions and complaints to enhancing their shopping experiences.
But the hearing began with another, more sobering example, provided by Senator Richard Blumenthal. He played a recording that sounded like his voice, explaining concerns about AI, including that it could be used to impersonate someone. And indeed, it was impersonating him! It was not really him speaking, and he did not write what was said. Instead, his staff used an AI program to mimic his voice, and the remarks were based on things he had previously said about AI. And here is another crucial point: because AI can learn, it might be able to accurately guess what he would have said even if he had never voiced those concerns at all.
Or, in the hands of a malicious actor, AI could be used to impersonate Senator Blumenthal and say things that do not represent his views at all. How would people know it wasn’t him? Would it be sufficient to simply label things as AI-generated? I don’t think that would be particularly meaningful to people. Do we need a new federal agency to regulate the use of AI, as Senator Michael Bennet has proposed? We already have the Federal Trade Commission, which could regulate the commercial use of AI, though that wouldn’t necessarily address issues such as the potential for politicians to use AI in misleading ways. The White House is getting involved: the President’s Council of Advisors on Science and Technology has created a working group on generative AI. That is a welcome step, but regrettably the group has no representatives from consumer organizations.
It’s already difficult for consumers to be sure who they’re dealing with when they answer the phone or see an offer online. Last year, imposter scams, in which consumers were approached by someone falsely claiming to be a trusted business, government agency, or other entity or individual, were the second most common subject of complaints reported to the Federal Trade Commission. Even legitimate companies and organizations may be tempted to use this technology in ways that are unfair and abusive. Plus, the potential to mislead and manipulate consumers politically, as we saw during the last presidential election, is even greater with AI.
I predict that AI will present consumers, consumer advocates, and law enforcement agencies with huge challenges. The solution is not yet clear, but it is obvious that we need to explore legislative and regulatory approaches to this issue. This would be a good subject for an FTC “town hall,” perhaps pulling in other relevant agencies that deal with consumer issues and, of course, consumer organizations. The use of artificial intelligence is growing, and consumer advocates, Congress, regulatory agencies, and businesses must be prepared to address the issues that will inevitably arise from its many uses.