From CFA’s Eye on AI Substack by Ben Winters
Several months after Trump’s Executive Order that sought to scare states away from passing regulations to protect their constituents from AI-driven harm, the Administration published a “comprehensive national legislative framework” (4 pages) this morning that addresses “the most pressing policy topics that AI presents.”
The framework is light on substance and holds few surprises — the throughline remains protecting Big Tech bottom lines over everyday people. It's encouraging to see some stated desire to protect people from AI-generated scams and from data abuse of minors, but it's not enough, and it's outweighed by the framework's pro-AI, anti-person stance on preemption and enforcement. The Administration needs to put money where its mouth is on the protections — more money for consumer protection agencies at both the federal and state levels. So far, it has done nothing but cut and hamstring them.
They say they are planning to work with legislators to turn it into legislative text (perhaps something like Blackburn's 291-page rollercoaster named after Trump), but it remains to be seen what that looks like. Presumably, if Blackburn's effort codified their desires, they would embrace it publicly rather than gesture vaguely toward the future.
Some quick reactions to the framework are below — to be clear, there is not significant detail in any section. It consists of high-level sentences that describe an outcome without much information about the hard parts of making it work. With any of the ideas in the framework, the devil is in the details, and there are almost none here.
Reactions to the substance of the framework — it is not good…
- There are some positive points to take away, if taken at face value, such as the call to "Augment existing law enforcement efforts to combat AI-enabled impersonation scams and fraud" — this should mean increased investment in enforcement resources at the DOJ, FTC, and CFPB, and for state AGs. The Administration's other actions don't show a willingness to do that, but it's one of the only meaningful ways to deliver on this promise.
- It’s also positive to see they endorse “limits on data collection for model training and targeted advertising” for minors
- This is a pillar of our model chatbot bill (although we want it to apply to all people): https://consumerfed.org/testimonial/the-people-first-chatbot-bill/
- Some of the language on kids' protections is really worrisome — they want to "reduce the risks of sexual exploitation and self-harm to minors," which is a squishy standard to start with, one that would maintain the status quo of Big Tech choosing when and how to protect people and then not actually doing anything meaningful.
- Preemption: Unfortunately but unsurprisingly, the framework echoes statements from Trump himself and his tech-industry advisors/donors calling to prohibit states from regulating AI themselves. It states that there should be some exemptions for "protecting children," and specifically for AI-generated child sexual abuse material, but even those should not be believed. They have said from the start that they wouldn't interfere with or have a problem with laws protecting kids, for example, and there are already leaked stories of White House advisor David Sacks directly calling lawmakers in Utah and Florida to try to kill kids' safety AI bills. So far, any claims of meaningful carveouts from preemption are either written in a tricky way or a flat-out lie.
- Re-upping my piece with EPIC’s Kara Williams mythbusting some preemption claims: https://www.techpolicy.press/debunking-myths-about-ai-laws-and-the-proposed-moratorium-on-state-ai-regulation/
- Story: Utah’s AI bill is everything David Sacks asked for. He still wants it dead.
- Fighting federal preemption of state AI legislation is perhaps the most critical tech-related fight right now. Any member of Congress who wants to protect people should be unequivocally against all forms of AI moratoria.
- A standard of prohibiting regulation because of “undue burden” is unworkable and ridiculous.
- "States should not be permitted to penalize AI developers for a third party's unlawful conduct involving their models" is a MASSIVE red flag — penalizing developers is one of the only ways enforcement against scams, deepfakes, and other AI-driven crimes would actually happen.
- Data centers: At a high level, the first bullet point — to "ensure that residential ratepayers do not experience increased electricity costs as a result of new AI data center construction and operation" — is good. It's not surprising, given the cost-of-living crisis, as well as polling and actions all around the country showing how pissed off people are about their bills going up for Mark Zuckerberg's next project.
- However, they also double down on streamlining data center production — I'm not holding my breath. Their efforts thus far favor data center deployment and operators, not ratepayers.
- The parts on copyright protections are all over the place — and do not adequately reflect the massive theft of IP and information that the generative AI industry represents.
More to come…

