Privacy

CFA State AI & Privacy Update #5

Welcome to the fifth installment of a newsletter from the Consumer Federation of America (CFA) tracking the latest news about how AI and personal data are used, abused, and regulated.

Tomorrow at 1:30 PM ET, CFA is hosting a conversation with former FTC and CFPB officials about how efforts to hamper those agencies affect people’s privacy and security. I’m really excited about it and hope you can watch – REGISTER HERE.

If you’re reading this and not subscribed to the list, head to https://cfa.simplelists.com/ and select the second list from the top (“ai-privacy-updates (CFA State AI and Privacy Updates)”).

Alright – let’s get into it:

STATE AI AND PRIVACY POLICIES

  • Virginia Gov. Glenn Youngkin vetoed a bill that would’ve made Virginia the second state in the country to regulate algorithmic discrimination. While the industry concerns that got through to Youngkin are misguided, there is a silver lining: the vetoed bill was an extremely weak version of this type of legislation and would have set a dangerous standard going forward. (Virginia Leg System for Veto Explanation)
    • More about this bill in a blog post from Kara Williams at EPIC.
  • California is getting active, and CFA is supporting the following bills:
  • Oregon SB 722 would ban algorithmic price fixing for rent pricing. Companies like RealPage collect non-public rental pricing and term data from landlords and use it to recommend rates above competitive market rates. CFA testified in strong support of this bill. (Support letter)
  • Massachusetts has several strong privacy bills pending (and a few bad ones, because of course!) – CFA will support the strong ones at an April 9th hearing.
  • A good LinkedIn Post talks a bit about some of the tech industry’s lobbying practices – which are constant and exhausting! (LinkedIn)

RELEVANT NEWS

  • The firing of FTC Commissioners is not only illegal, but absolutely terrible for consumer protection in the tech and privacy space. The FTC is the primary tech enforcer, with recent efforts including successfully suing data brokers that sold the most sensitive precise location data about where people pray or get medical care; cracking down on Rite Aid for collecting biometric information from everyone coming into its stores and then using erroneous determinations to harass shoppers of color; taking action against General Motors for sharing precise location and driving behavior data without consent from drivers just trying to live their lives; going after scam websites that try to trick you into signing up for a service or manipulate your “consent” to sharing your data; taking appropriate action when companies let data breaches happen; and tackling the corporate consolidation fueling many data abuses by Big Tech. (Reuters | CFA Statement)
  • Sora, the product from OpenAI that allows users to generate video, has some unsurprising but very real “sexist, racist, and ableist” biases, as documented by WIRED. I recommend reading through the examples, and also recommend RestOfWorld’s piece last year on AI image generation and stereotypes. (WIRED)
  • The fast-changing dynamics around data centers – increasing funding, more projects being started around the country by Big Tech, but also increasing pushback from advocates, communities, and lawmakers – were really well chronicled this week. (Tech Policy Press)
  • 23andMe, the company that gives you insights about your genes and family based on DNA you send through the mail, has gone bankrupt and is going to be sold. It’s a cautionary tale about what can happen to your most sensitive data when a company goes under. (Washington Post | Reddit post with advice on deleting your data if you used it)
  • A guide from WIRED on how to navigate the increasing digital privacy risks of traveling to the U.S. (WIRED)
  • A really interesting piece in 404Media about the impact of AI-generated content on the experience of being on social media. (404Media)
  • OpenAI and Google, in response to a request for information from the Trump Administration, are begging for get-out-of-jail-free cards for the copyright violations required to build their ubiquitous AI tools. It really is quite bold, would be very damaging, and is incompatible with the law. (Forbes)
    • 60 daily newspapers spoke out against it, as did hundreds of celebrities. To highlight a quote from the editorial, responding to the argument that China-US competition for innovation justifies this “exemption”: “That iron-clad commitment to protecting the rights of owners of work they themselves created is precisely what distinguishes the United States from communist China, not the reverse.” (Chicago Tribune for newspaper editorial; Verge for celebrity letter)
    • Relatedly, The Atlantic made a search engine to see what books and articles were used to train certain AI systems. It chronicles the stunning breadth of copyrighted material used to create these tools. (The Atlantic)
  • Some news from tech, culture, and broader US politics:
    • The Secretary of Defense was strategizing attacks in a group chat that accidentally included the editor-in-chief of The Atlantic. (The Atlantic)
    • A clip of JD Vance talking badly about Elon Musk went really viral and is…fake and AI-generated. (404Media)
    • California Gov. Gavin Newsom, the world’s newest podcaster, apparently sent burner phones directly to tech CEOs. This is a really discouraging story about an already industry-friendly player. (Politico)

If you know anyone who might like this, encourage them to sign up here.