Privacy

CFA State AI & Privacy Update #1 – Jan 15, 2025

Good morning!

Welcome to the first installment of a newsletter from the Consumer Federation of America (CFA) tracking the latest news about how AI and personal data are used, abused, and regulated. The newsletter will focus on the state level, where it’s nearly guaranteed there will be a lot of legislation and regulation on these topics, which are hard to track!

Every two weeks, I’ll update you on the latest regulatory developments and highlight some stories of how privacy violations and irresponsible AI use are causing harm.

For those of you I haven’t met yet, my name’s Ben Winters – I’ve been the Director of AI and Privacy at CFA since October. Before CFA, I was at the Electronic Privacy Information Center (EPIC) leading their work on AI and human rights, and I did a short stint at the Civil Rights Division of the Department of Justice, supporting its work related to the October 2023 AI Executive Order and helping the division address the newer ways data and automated systems are used to carry out civil rights violations.

The policy priorities I’m focusing on this year are restricting data brokers, passing strong privacy laws with data minimization that would restrict surveillance advertising, getting clear disclosures and controls around the environmental impacts of AI, addressing insufficient laws on non-consensual images, videos, and audio of others, and establishing meaningful transparency requirements for all automated systems in use. More to come soon on what, more specifically, I think statehouses should prioritize and how.

Anyway, I promise this introduction won’t always be so long, but if you’re not familiar with CFA, please visit our website https://consumerfed.org/ to learn more. If you work at an advocacy organization or agency focused on our issues, consider becoming a member. We have over 250 members nationwide from small local orgs to big national ones. There are very modest dues for excellent benefits including tickets to our conferences, exclusive member dialogues, and more. Learn more and apply here!

Alright – let’s get into it:

STATE AI AND PRIVACY POLICIES

  • State of play: There’s already quite a bit of tech-specific regulation in the states, but it’s a muddy patchwork of varying quality.
    • Twenty states have passed comprehensive consumer privacy laws (laws that cover all data collection and use – not just in one sector or context like health or education). The devil is in the details, though, with many being industry-friendly rather than prioritizing consumers.
    • On AI, there is only one comprehensive law – the Colorado AI Act (SB205), and there are expectations it will be altered in this upcoming session. However, there have been many targeted bills introduced and some passed last year in the states – with many focusing on deepfakes, transparency, and discrimination.
      • Dive Deeper: Consumer Reports published an extremely helpful chart about the State AI legislation from last year. Check it out.
    • It’s just the very beginning of the legislative session, but we’re already seeing some legislative action across party lines and all types of states. Here are some highlights:
      • In New York, Sen. Kristen Gonzalez [D] introduced the New York AI Act, which is a pretty strong starting point for a state that has no comprehensive privacy or AI law. Sen. Gonzalez sponsored the LOADinG Act, a strong bill putting accountability measures on how government agencies use AI tools; the NY AI Act would regulate the commercial sector.
      • In Illinois, Sen. Sue Rezin [R] introduced the Privacy Rights Act, which would create a data protection agency akin to the California Privacy Protection Agency and establish some good consumer rights.
      • In Virginia, Delegate Michelle Maldonado [D] introduced the Consumer Data Protection Act and the AI Training Data Transparency Act together, which in part try to amend Virginia’s notoriously weak comprehensive privacy law.
      • In Texas, Rep. Giovanni Capriglione [R] introduced the Texas Responsible AI Governance Act, which he explains in a LinkedIn post.
      • Trend to watch: It seems clear that a coalition of states will introduce AI laws similar to the Connecticut AI bill from last year (with updates and lots of constant movement), which stalled after the Governor threatened a veto. Here’s an op-ed a bipartisan coalition of lawmakers from around the country wrote on AI legislation.

RELEVANT NEWS

  • Enforcement of existing laws (both privacy and general consumer protection law) is ramping up, and it’s critical. Some highlights:
    • Texas sued Allstate and a subsidiary for illegally collecting and selling location data under Texas’ privacy law. (The Record)
    • California announced a slate of settlements with data brokers that failed to register with the state as required. (California Privacy Protection Agency)
    • California AG published guidance on how California privacy law applies to AI use. (CA AG)
    • New Jersey Division of Civil Rights in the AG’s office published guidance on how their existing Law Against Discrimination applies when automated systems/algorithms are used. (NJ DCR)
  • OpenAI published a list of asks for the federal government. It paints a very bold and bleak picture of how the company would like the federal government to bend over backward to support its business with taxpayer dollars, while not regulating it in any meaningful way. (Sherwood News)
  • A lawsuit alleges that Mark Zuckerberg (who has been in the news for irresponsible content moderation and privacy choices) gave the explicit OK to train the company’s generative AI models on copyrighted materials. (TechCrunch)

Apologies for the particularly long first issue, and buckle up! If you know anyone who might like this, encourage them to sign up here.