The good folks over at Adafruit are raising the alarm about a new New York State 3D printing law that could greatly imperil the public's freedom to tinker and could generally make life way more annoying for the schools, libraries, hospitals, small businesses, hobbyists, and garages that utilize 3D printers.
New York's 2026-2027 executive budget bill (S.9005 / A.10005) includes language requiring that every 3D printer operating in the state include software or firmware that scans each print file through a "firearms blueprint detection algorithm" and then locks up the hardware so it refuses to print anything flagged as a potential firearm or firearm component.
As Adafruit's Phillip Torrone notes, the key problem here is that it's largely impossible to detect firearms from geometry alone:
"A firearms blueprint detection algorithm would need to identify every possible firearm component from raw STL/GCODE files, while not flagging pipes, tubes, blocks, brackets, gears, or any of the millions of legitimate shapes that happen to share geometric properties with gun parts. This is a classification problem with enormous false positive and false negative rates."
The bill would apply to open source firmware like Marlin, Klipper, and RepRap, which are generally maintained by volunteers without the resources for compliance. It would also apply to office printers that never touch the internet, and to CNC milling machines that can generate basically any shape you can imagine.
Torrone goes on to explain how the bill could be dramatically improved by exempting open source firmware and focusing more concretely on the intent to create firearms, instead of waging an impossible enforcement war on ambiguous shapes. Adafruit is also recommending limited liability for retailers, schools, and libraries, and the elimination of mandatory file scanning:
"But the answer to misuse isn't surveillance built into the tool itself. We don't require table saws to scan wood for weapon shapes. We don't require lathes to phone home before turning metal. We prosecute people who make illegal things, not people who own tools.
The Open Source 3D printing community probably does not know about this. OSHWA and other open source advocacy orgs have ignored many of the things we really need their help with. That needs to change. This bill is in early stages — the working group hasn't even convened yet. There's time to work together, in the open, for amendments that make sense."
Random aside: it's worth reminding folks that this proposal comes on the heels of a recently passed New York State "right to repair" law (supposed to make it easier and cheaper to repair technology you own) that Governor Kathy Hochul basically lobotomized at lobbyists' behest after it was passed, ensuring it doesn't actually protect anybody's freedom to tinker.
Wikipedia celebrated its 25th birthday last month. Given the centrality of Wikipedia to so much activity online, it is hard to remember (or to imagine, for those who are younger) a time without Wikipedia. The latest statistics are impressive:
- Wikipedia is viewed nearly 15 billion times every month.
- Wikipedia contains over 65 million articles across more than 300 languages.
- Wikipedia is edited by nearly 250,000 editors around the world every month, where an editor is defined as a registered user (one with a username) who makes at least one edit in a given month.
- Wikipedia is accessed by over 1.5 billion unique devices every month.
That's testimony to the global nature of Wikipedia. But there's something else, not mentioned there, that is of great relevance to this blog: the fact that every one of those 65 million articles is made available under a generous license - the Creative Commons Attribution-ShareAlike 4.0 license, to be precise. That means sharing and re-use are encouraged, in contrast to most material online, where copyright is fiercely enforced. Wikipedia is living proof that giving away things by relying on volunteers and donations - the "true fans" approach - works, and on a massive scale. Anil Dash puts it well in a post celebrating Wikipedia's 25th anniversary:
Whenever I worry about where the Internet is headed, I remember that this example of the collective generosity and goodness of people still exists. There are so many folks just working away, every day, to make something good and valuable for strangers out there, simply from the goodness of their hearts. They have no way of ever knowing who they've helped. But they believe in the simple power of doing a little bit of good using some of the most basic technologies of the internet. Twenty-five years later, all of the evidence has shown that they really have changed the world.
However, Wikipedia is today facing perhaps its greatest challenge, which comes from the new generation of AI services. They are problematic for Wikipedia in two main ways. The first, ironically, is because it is widely recognized that Wikipedia's holdings represent some of the highest-quality training materials available. In a post explaining why, "in the AI era, Wikipedia has never been more valuable", the Wikimedia Foundation writes:
AI cannot exist without the human effort that goes into building open and nonprofit information sources like Wikipedia. That's why Wikipedia is one of the highest-quality datasets in the world for training AI, and when AI developers try to omit it, the resulting answers are significantly less accurate, less diverse, and less verifiable.
That recognition is welcome, but comes at a price. It means that every AI company as a matter of course wants to download the entire Wikipedia corpus to be used for training its models. That has led to irresponsible behavior by some companies, when their scraping tools download pages from Wikipedia with no consideration for the resources they are using for free, or the collateral damage they are causing to other users in terms of slower responses.
Trying to stop companies drawing on this unique resource is futile; recognizing this, the Wikimedia Foundation has come up with an alternative approach: Wikimedia Enterprise, "a first-of-its-kind commercial product designed for companies that reuse and source Wikipedia and Wikimedia projects at a high volume". In 2022, its first customers were Google and the Internet Archive, and last month, Wikimedia Enterprise announced that Amazon, Meta, Microsoft, Mistral AI, and Perplexity have also signed on. That's important for a couple of reasons. It means that many of the biggest AI players will download Wikipedia articles more efficiently. It also means that the Wikipedia project will receive funding for its work.
This new money is crucial if Wikipedia is to remain a high-quality resource. And that is precisely why every generative AI company that uses Wikipedia articles for training should - if only out of self-interest - pay to do so. What is happening here echoes something this blog suggested back in May 2024: that AI companies should pay artists to create new works, and give away the results, because fresh training material is vital. Helping to pay for Wikipedia to create more high-quality articles that are freely available to all is a variation on that theme.
The other problem that generative AI causes Wikipedia is more subtle. The Wikimedia Foundation explains that alongside financial support, the project needs proper attribution:
Attribution means that generative AI gives credit to the human contributions that it uses to create its outputs. This maintains a virtuous cycle that continues those human contributions that create the training data that these new technologies rely on. For people to trust information shared on the internet, platforms should make it clear where the information is sourced from and elevate opportunities to visit and participate in those sources. With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.
Without fresh volunteers, Wikipedia will wither and become less valuable. That's terrible for the world, but it is also bad for generative AI companies. So, again, it makes sense for them to provide proper attribution in their outputs. That requirement has become even more pressing in the light of a new development. According to tests carried out by the Guardian:
The latest model of ChatGPT has begun to cite Elon Musk's Grokipedia as a source on a wide range of queries, including on Iranian conglomerates and Holocaust deniers, raising concerns about misinformation on the platform.
That's potentially problematic because of how Grokipedia creates its entries. Research last year found that:
Grokipedia articles are substantially longer and contain significantly fewer references per word. Moreover, Grokipedia's content divides into two distinct groups: one that remains semantically and stylistically aligned with Wikipedia, and another that diverges sharply. Among the dissimilar articles, we observe a systematic rightward shift in the political bias of cited news sources, concentrated primarily in entries related to politics, history, and religion. These findings suggest that AI-generated encyclopedic content diverges from established editorial norms, favouring narrative expansion over citation-based verification.
If leading chatbots start drawing on Grokipedia routinely for their answers, it is less likely that there will be independent sources where the information can be checked, something generally possible with Wikipedia. It therefore becomes even more urgent for generative AI systems to provide attribution, so at least users know where information is coming from, and whether there are likely to be further resources that confirm a chatbot's claims. Not everyone will want to do that, but it is important to offer it as an option.
Wikipedia at 25 is an amazing achievement in multiple ways, one of which includes serving as a demonstration that material can be given away for free, supported directly by users, and on a global scale. It would be a tragedy if the current enthusiasm for generative AI systems led to that resource being harmed and even destroyed. A world without Wikipedia would be a poorer world indeed.
Follow me @glynmoody on Mastodon and on Bluesky. Republished from Walled Culture.
Warning: This article discusses suicide and some research regarding suicidal ideation. If you are having thoughts of suicide, please call or text 988 to reach the Suicide and Crisis Lifeline or visit this list of resources for help. Know that people care about you and there are many available to help.
When someone dies by suicide, there is an immediate, almost desperate need to find something—or someone—to blame. We've talked before about the dangers of this impulse. The target keeps shifting: "cyberbullying," then "social media," then "Amazon." Now it's generative AI.
There have been several heartbreaking stories recently involving individuals who took their own lives after interacting with AI chatbots. This has led to lawsuits filed by grieving families against companies like OpenAI and Character.AI, alleging that these tools are responsible for the deaths of their loved ones. Many of these lawsuits are settled, rather than fought out in court because no company wants its name in the headlines associated with suicide.
It is also impossible not to feel for these families. The loss is devastating, and the need for answers is a fundamentally human response to grief. But the narrative emerging from these lawsuits—that the AI caused the suicide—relies on a premise that assumes we understand the mechanics of suicide far better than we actually do.
Unfortunately, we know frighteningly little about what drives a person to take that final, irrevocable step. An article from late last year in the New York Times, profiling clinicians who are lobbying for a completely new way to assess suicide risk, makes this painfully clear: our current methods of predicting suicides are failing.
If experts who have spent decades studying the human mind admit they often cannot predict or prevent suicide even when treating a patient directly, we should be extremely wary of the confidence with which pundits and lawsuits assign blame to a chatbot.
The Times piece focuses on the work of two psychiatrists who have been devastated by the loss of patients who gave absolutely no indication they were about to harm themselves.
In his nearly 40-year career as a psychiatrist, Dr. Igor Galynker has lost three patients to suicide while they were under his care. None of them had told him that they intended to harm themselves.
In one case, a patient who Dr. Galynker had been treating for a year sent him a present — a porcelain caviar dish — and a letter, telling Dr. Galynker that it wasn't his fault. It arrived one week after the man died by suicide.
"That was pretty devastating," Dr. Galynker said, adding, "It took me maybe two years to come to terms with it."
He began to wonder: What happens in people's minds before they kill themselves? What is the difference between that day and the day before?
Nobody seemed to know the answer.
That is the state of the science. Apparently the best tool we currently have for assessing suicide risk is simply asking people: "Are you thinking about killing yourself?" And as the article notes, this method is catastrophically flawed.
But despite decades of research into suicide prevention, it is still very difficult to know whether someone will try to die by suicide. The most common method of assessing suicidal risk involves asking patients directly if they plan to harm themselves. While this is an essential question, some clinicians, including Dr. Galynker, say it is inadequate for predicting imminent suicidal behavior….
Dr. Galynker, the director of the Suicide Prevention Research Lab at Mount Sinai in New York City, has said that relying on mentally ill people to disclose suicidal intent is "absurd." Some patients may not be cognizant of their own mental state, he said, while others are determined to die and don't want to tell anyone.
The data backs this up:
According to one literature review, about half of those who died by suicide had denied having suicidal intent in the week or month before ending their life.
This profound inability to predict suicide has led these clinicians to propose a new diagnosis for the DSM-5 called "Suicide Crisis Syndrome" (SCS). They argue that we need to stop looking for stated intent and start looking for a specific, overwhelming state of mind.
To be diagnosed with S.C.S., Dr. Galynker said, patients must have a "persistent and intense feeling of frantic hopelessness," in which they feel trapped in an intolerable situation.
They must also have emotional distress, which can include intense anxiety; feelings of being extremely tense, keyed up or jittery (people often develop insomnia); recent social withdrawal; and difficulty controlling their thoughts.
By the time patients develop S.C.S., they are in such distress that the thinking part of the brain — the frontal lobe — is overwhelmed, said Lisa J. Cohen, a clinical professor of psychiatry at Mount Sinai who is studying S.C.S. alongside Dr. Galynker. It's like "trying to concentrate on a task with a fire alarm going off and dogs barking all around you," she added.
This description of "frantic hopelessness" and feeling "trapped" gives us a glimpse into the internal maelstrom that leads to suicide. It also highlights why externalizing the blame to a technology is so misguided.
The article shares the story of Marisa Russello, who attempted suicide four years ago. Her experience underscores how internal, sudden, and unpredictable the impulse can be—and how disconnected it can be from any specific external "push."
On the night that she nearly died, Ms. Russello wasn't initially planning to harm herself. Life had been stressful, she said. She felt overwhelmed at work. A new antidepressant wasn't working. She and her husband were arguing more than usual. But she wasn't suicidal.
She was at the movies with her husband when Ms. Russello began to feel nauseated and agitated. She said she had a headache and needed to go home. As she reached the subway, a wave of negative emotions washed over her.
[….]
By the time she got home, she had "dropped into this black hole of sadness."
And she decided that she had no choice but to end her life. Fortunately, she said, her attempt was interrupted.
Her decision to die by suicide was so sudden that if her psychiatrist had asked about self-harm at their last session, she would have said, truthfully, that she wasn't even considering it.
When we read stories like Russello's, or the accounts of the psychiatrists losing patients who denied being at risk, it becomes difficult to square the complexity of human psychology with the simplistic narrative that "Chatbot X caused Person Y to die."
There is undeniably an overlap between people who use AI chatbots and people who are struggling with mental health issues—in part because so many people use chatbots today, but also because people in distress seek connection, answers, a safe space to vent. That search often leads to chatbots.
Unless we're planning to make thorough and competent mental health support freely available to everyone who needs it at any time, that's going to continue. Rather than simply insisting that these tools are evil, we should be looking at ways to improve outcomes knowing that some people are going to rely on them.
Just because a person used an AI tool—or a search engine, or a social media platform, or a diary—prior to their death does not mean the tool caused the death.
When we rush to blame the technology, we are effectively claiming to know something that experts in that NY Times piece admit they do not know. We are claiming we know why it happened. We are asserting that if the chatbot hadn't generated what it generated, if it hadn't been there responding to the person, that the "frantic hopelessness" described in the SCS research would simply have evaporated.
There is no evidence to support that.
None of this is to say AI tools can't make things worse. For someone already in crisis, certain interactions could absolutely be unhelpful or exacerbating by "validating" the helplessness they're already experiencing. But that is a far cry from the legal and media narrative that these tools are "killing" people.
The push to blame AI serves a psychological purpose for the living: it provides a tangible enemy. It implies that there is a switch we can flip—a regulation we can pass, a lawsuit we can win—that will stop these tragedies.
It suggests that suicide is a problem of product liability rather than a complex, often inscrutable crisis of the human mind.
The work being done on Suicide Crisis Syndrome is vital because it admits what the current discourse ignores: we are failing to identify the risk because we are looking at the wrong things.
Dr. Miller, the psychiatrist at Endeavor Health in Chicago, first learned about S.C.S. after the patient suicides. He then led efforts to screen every psychiatric patient for S.C.S. at his hospital system. In trying to implement the screenings there have been "fits and starts," he said.
"It's like turning the Titanic," he added. "There are so many stakeholders that need to see that a new approach is worth the time and effort."
While clinicians are trying to turn the Titanic of psychiatric care to better understand the internal states that lead to suicide, the public debate is focused on the wrong iceberg.
If we focus all our energy on demonizing AI, we risk ignoring the actual "black hole of sadness" that Ms. Russello described. We risk ignoring the systemic failures in mental health care. We risk ignoring the fact that half of suicide victims deny intent to their doctors.
Suicide is a tragedy. It is a moment where a person feels they have no other choice—a loss of agency so complete that the thinking brain is overwhelmed, as the SCS researchers describe it. Simplifying that into a story about a "rogue algorithm" or a "dangerous chatbot" doesn't help the next person who feels that frantic hopelessness.
It just gives the rest of us someone to sue.
Earlier this month, the FBI decided it was going to help Donald Trump steal back the election he's claimed for half a decade was stolen from him. The state whose Secretary of State was asked directly by the outgoing president in January 2021 to "find 11,780 votes" was raided by Trump 2.0, who still somehow thinks he can win the election he lost back in 2020.
It's not just revenge Trump is seeking. He's also hoping to find anything that will allow him to cast doubt on midterm election results now that it seems entirely possible the GOP might lose its majority in the legislature.
The FBI walked off with tons of stuff after its raid of the Fulton County election hub in Georgia. The raid — which was attended by the current DNI Tulsi Gabbard for no apparent reason — saw the Trump government seize as many 2020 ballots and voter records as possible. The stated reason for this raid was to collect evidence related to two alleged crimes: not retaining election records long enough and attempts to "intimidate voters or procure false votes/false voter registration."
One of several glaring problems with this raid is the fact that some of the criminal acts alleged have already surpassed the five-year statute of limitations. The rest of the glaring problems are far less subtle. Like Trump using the FBI and DOJ to engage in vindictive prosecution. And the FBI appearing to have deliberately misled the magistrate judge to get this search warrant approved.
This declaration [PDF] by Ryan Macias, a project manager for the voting system used in Fulton County who also served as the Acting Director of the Voting System Program during the 2020 election, points out multiple flaws in the FBI's warrant affidavit — all of which it would be safe to assume were deliberate "errors."
The Affidavit asserts that there were five "deficiencies or defects with the November 3, 2020, election and tabulation of the votes thereof." The Affidavit concludes that "[i]f these deficiencies were the result of intentional action, it would be a violation of" Title 52 U.S.C. §§ 20511 (Criminal Penalties) and 20701 (Retention and Preservation of Records of Elections).
In all five areas identified by Special Agent Evans' Affidavit, there are a multitude of false or misleading statements and omissions. In fact, there are, as set forth below, over a dozen omissions of critical parts of the reports and related materials that I identified in paragraph 4 above. This is in addition to the absence of any recognition that much of what the Affidavit references as concerning are widely known as benign and common election practices. As noted there, all of those materials are publicly available and could have been referenced by Special Agent Evans. Even when Special Agent Evans cites to one of these sources, he repeatedly omits crucial facts and findings inconsistent with his characterizations. Once the statements and omissions in the Affidavit are corrected and based on my experience administering elections in accordance with the statutes cited in the Affidavit, the Affidavit loses any basis in reality.
The whole thing needs to be read, but here are just a couple of the things we're going to generously call "errors," even though they're really deliberate omissions. The criminal allegations claim ballot images weren't retained, in violation of the law. But, as this declaration points out, the retention of images wasn't mandated by law in Georgia until 2021, after the 2020 election had already happened. If images weren't retained, it was likely because election staffers didn't think it was necessary to do so.
Second, the affidavit claims something is shady about the audits performed by county officials, insinuating that this somehow resulted in votes mysteriously swinging the state in Biden's direction. This declaration states the actual truth: "risk limiting audits" only aid in determining whether or not a recount might be warranted. Only official counts and recounts can actually alter voting results.
Fulton County's challenge [PDF] of the search contains even more information that indicates the FBI's search warrant application was crafted to basically trick a judge into authorizing an illegal search (all emphasis in the original):
First, the Fourth Amendment demands "probable cause"—not "possible cause." The Affidavit fails that constitutional requirement. Despite years of investigations of the 2020 election, the Affidavit does not identify facts that establish probable cause that anyone committed a crime. Instead, FBI Special Agent Evans (the "Affiant") all but admits that the seizure will yield evidence of a crime only if certain hypotheticals are true. See, e.g., Aff. ¶ 10 ("If these deficiencies were the result of intentional action, it would be a violation of federal law[.]"); ¶ 85 ("If these deficiencies were the result of intentional action, the election records . . . are evidence of violations[.]"). Unsupported by probable cause and dependent on unsubstantiated hypotheticals, Respondent's seizure violated the Fourth Amendment.
There's more (emphasis mine):
Second, instead of alleging probable cause to believe a crime has been committed, the Affidavit does nothing more than describe the types of human errors that its own sources confirm occur in almost every election—without any intentional wrongdoing whatsoever. Mislabeling an expected margin of error as "deficiencies" or "defects" cannot establish probable cause, let alone for a seizure of this magnitude.
Third, the Affidavit omits numerous material facts—including from the very reports and publicly-disclosed investigations that the Affiant cites—that confirm the alleged conduct was previously investigated and found to be unintentional. Moreover, the Affidavit not only fails to allege that any particular witness is reliable or credible; it omits discrediting information about those witnesses that was obviously available to the Affiant. These omissions are serious. The ex parte warrant process would be rendered a nullity if the government were permitted to hide material and probative facts that refute probable cause from a magistrate judge and nevertheless retain the fruits of its misconduct.
It then goes on to note that even if the affidavit weren't defined more by what was deliberately left out of it than by what Kash Patel's FBI decided to include, it would still suck, constitutionally speaking:
Fourth, even if the Affidavit established probable cause, the seizure of original election materials would be unreasonable and in callous disregard of the Fourth Amendment because (1) the statutes of limitation have lapsed on the only crimes under investigation; (2) the warrant violates Georgia's state sovereignty by effectively enjoining a pending state court proceeding and preventing Georgia from performing its constitutionally-mandated role in administering its elections; and (3) the Respondent improperly used the criminal warrant process to circumvent a pending civil lawsuit in which it requested the same records.
That last sentence is a particularly spicy zinger. It shows the administration will do anything to rack up a few rabble-rousing "victories," no matter how fleeting or Pyrrhic. This is a fully-cooked collection of gassed-up bigots or conspiracy theorists (or both!) who have managed to turn their extremely online "own the libs" bullshit into a 24/7 attack on the Constitution, the system of checks and balances, and anything else that stands in the way of their autocratic wet dreams.
What's standing between us and further destruction of the stuff that makes America great is a court system that doesn't actually seem to know what to do when it has to deal with an entire administration that refuses to play by the rules that have held this nation together for more than two centuries. It's time for the courts to dig deep and start breaking the glass on every judicial tool labeled "IN CASE OF EMERGENCY." Giving any of these fuckers the benefit of the doubt only allows them to dig in deeper.
The Complete Big Data and Power BI Bundle has 5 courses to help you learn how to effectively sort, analyze, and visualize all of your data. Courses cover Power BI, Power Query, Excel, and Access. It's on sale for $40.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Federal grants that had been approved after a full application and review process were terminated by some random inexperienced DOGE bros based on whether ChatGPT could explain—in under 120 characters—that they were "related to DEI."
That's what the newly released proposed amended complaint from the Authors Guild against the US government reveals about how DOGE actually decided which National Endowment for the Humanities grants to kill.
There were plenty of early reports that the DOGE bros Elon Musk brought into government—operating on the hubristically ignorant belief that they understood how things worked better than actual government employees—were using AI tools to figure out what to cut. Now we have the receipts.
The bros in question here are Nate Cavanaugh and Justin Fox who appeared all over the place in the early DOGE days, destroying the US government.
Cavanaugh was appointed president of the U.S. Institute of Peace after DOGE took over, though that position is affected by this week's court ruling. Shortly after being named the acting director of the Interagency Council on Homelessness — one of the agencies Trump's budget proposal calls for eliminating — Cavanaugh placed its entire staff on administrative leave.
Cavanaugh first emerged at GSA in February, where he met with many technical staffers and software engineers and interviewed them about their jobs, according to four GSA employees who spoke on condition of anonymity because they feared retaliation.
Since then, he's also been detailed to multiple other agencies, according to court filings, including the U.S. African Development Foundation (USADF), the Inter-American Foundation (IAF), the Institute of Museum and Library Services, the National Endowment for the Humanities (NEH) and the Minority Business Development Agency.
Cavanaugh's partner in much of the small agency outreach is Justin Fox, who most recently worked as an associate at Nexus Capital Management, according to his LinkedIn profile.
As far as I can tell, Cavanaugh is a college dropout who founded an IP licensing management startup that has gone through some trouble. We've mentioned Cavanaugh here before, for the time when he was head of the U.S. Institute of Peace, and Elon and DOGE falsely labeled a guy who had worked for USIP a member of the Taliban, causing the actual Taliban to kidnap the guy's family. Fox, as noted, was a low-rung employee at some random private equity firm. Neither should have any of the jobs listed above, and neither seems to know shit about anything relevant to a government role.
Anyway, as the Authors Guild figured out in discovery, when these two inexperienced and ignorant DOGE bros were assigned to cut grants in the National Endowment for the Humanities, apparently Fox just started feeding grant titles to ChatGPT asking (in effect) "is this DEI?" From the complaint:
To flag grants for their DEI involvement, Fox entered the following command into ChatGPT: "Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with 'Yes.' or 'No.' followed by a brief explanation. Do not use 'this initiative' or 'this description' in your response." He then inserted short descriptions of each grant. Fox did nothing to understand ChatGPT's interpretation of "DEI" as used in the command or to ensure that ChatGPT's interpretation of "DEI" matched his own.
Cool.
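For concreteness, here's roughly what that "process" amounts to as code. This is a reconstruction based on the complaint's description, not DOGE's actual script; the client calls assume OpenAI's standard Python library, and the model choice is a guess:

```python
# A reconstruction (not DOGE's actual code) of the grant-flagging
# loop described in the Authors Guild complaint.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Does the following relate at all to DEI? Respond factually in "
    "less than 120 characters. Begin with 'Yes.' or 'No.' followed by "
    "a brief explanation. Do not use 'this initiative' or 'this "
    "description' in your response."
)

def flag_grant(description: str) -> bool:
    """Return True if the model's one-line verdict begins with 'Yes.'"""
    response = client.chat.completions.create(
        model="gpt-4o",  # the complaint doesn't say which model was used
        messages=[{"role": "user", "content": f"{PROMPT}\n\n{description}"}],
    )
    return response.choices[0].message.content.strip().startswith("Yes")

# Note what's missing: no definition of "DEI," no human review of the
# model's reasoning, no check against what the grant actually funds.
```

That's the entire review: a one-line yes/no from a chatbot, applied to a short description of a grant that went through months of expert evaluation.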
Then, actual staff at the NEH, including experts who might have been able to explain to these two interlopers what the grants actually did and why they were worth supporting, were blocked from challenging the termination of these grants.
Grants identified this way were slated for termination—with only a handful of exceptions, staff at NEH, including the Acting Chair, were not permitted to remove them from the termination list.
It seems to me that two ignorant DOGE bros cancelling humanities grants based solely on "yo is this DEI?" ChatGPT prompts, kinda shows the need for actual diversity, equity, and inclusion in how things like the National Endowment for the Humanities should work. Instead, you have two rando dweebs who don't understand shit asking the answer machine to justify cancelling grants that sound too woke.
It really feels like these two chucklefucks should be asked to justify their jobs way more than any of these grant recipients should have to justify their work. But, nope, the bros just got to cancelling.
See if you notice a pattern.
For instance, Fox searched each grant's description for the use of key words that appeared in a "Detection List" that he created. Those key words included terms such as "LGBTQ," "homosexual," "tribal," "immigrants," "gay," "BIPOC (Black, Indigenous, People of Color)," "native," and so on. Terms like "white," "Caucasian," and "heterosexual" did not appear in the Detection List.
Fox also organized certain grants into a spreadsheet with lists that he labeled "Craziest Grants" and "Other Bad Grants." Among the grants on those lists were those Fox described as relating to "experiences of LGBTQ military service," "oral histories of LatinX in the mid-west," "social and cultural context of tribal linguistics," and a "book on the 'first gay black science fiction writer in history.'"
Fox also used the Artificial Intelligence ("AI") tool ChatGPT to search grant descriptions that purportedly related to DEI, but Fox did not direct the AI tool that it should not identify grants solely on the basis of race, ethnicity, gender, sexuality, or similar characteristic. The AI searches broadly captured all grants that referred to individuals based on precisely those characteristics. For example, the AI searches flagged a grant described as concerning "the Colfax massacre, the single greatest incidence of anti-Black violence during Reconstruction," another concerning "the untold story of Jewish women's slave labor during the Holocaust," another that funded a film examining how the game of baseball was "instrumental in healing wounds caused by World War I and the 1980s economic standoff between the US and Japan," another charting "the rise and reforms of the Native Americans boarding school systems in the U.S. between 1819 and 1934," and another about "the Women Airforce Service Pilots (WASP), the first female pilots to fly for the U.S. military during WWII" and the "Black female pilots who . . . were denied entry into the WASP because of their race."
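It's worth noting how fragile this kind of keyword flagging is even on its own terms. The complaint doesn't say exactly how Fox's search was implemented, so treat this as a minimal sketch of the general technique rather than his actual code. Bare substring matching over a list like that manufactures false positives all by itself; "native" is sitting inside "alternative," and "tribal" inside "tribalism":

```python
import re

# An illustrative subset of the "Detection List" terms named in the
# complaint -- our sketch, not Fox's actual implementation.
DETECTION_LIST = ["LGBTQ", "tribal", "immigrants", "gay", "native"]

def naive_flag(text: str) -> list[str]:
    """Bare substring search: the crudest possible approach."""
    lowered = text.lower()
    return [term for term in DETECTION_LIST if term.lower() in lowered]

def word_boundary_flag(text: str) -> list[str]:
    """Slightly less crude: only match whole words."""
    return [term for term in DETECTION_LIST
            if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE)]

desc = "An alternative history of Pacific navigation"
print(naive_flag(desc))          # ['native'] -- false positive from 'alternative'
print(word_boundary_flag(desc))  # [] -- nothing actually matches
```

And of course, however the matching worked, the deeper problem stands: a grant's worthiness can't be determined by which nouns appear in its title.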
So, yeah. This kid basically fed any grant that might upset a white Christian nationalist into ChatGPT, saying "justify me cancelling this shit for being woke" and then he and his college dropout "IP licensing" buddy cancelled them all.
Cavanaugh worked closely with Fox in selecting which grants to terminate using this selection criteria.
Fox and Cavanaugh sorted grants in lists labeled "to cancel" or "to keep."
No grant relating to DEI as broadly conceived of by Fox and Cavanaugh appeared on the "to keep" list. Grants that Fox and Cavanaugh considered "wasteful" and thus slated for termination could be moved to the "to keep" list by Defendant McDonald only if they related to "America 250" or the "Garden of Heroes" initiatives, based on the views of Defendants McDonald, Fox, Cavanaugh, and NEH staff member Adam Wolfson.
The complaint notes that almost immediately Cavanaugh and Fox sent out mass emails to more than 1,400 grant recipients, from a private non-government email server, telling them their grants had been terminated.
Even though the emails stated that the grant terminations were "signed" by the acting director of NEH, Michael McDonald, he admitted he had nothing to do with them. It was all Fox, Cavanaugh… and ChatGPT based on a very stupid prompt.
McDonald appeared to acknowledge that he did not determine which grants to terminate nor did he draft the termination letters. First, he stated that he had explained NEH's traditional termination process but that "as they said in the notification letter…they would not be adhering to traditional notification processes" and "they did not feel those should be applied in this instance." Further, in response to a question about the rationale for grant terminations, he replied that the "rationale was simply because that's the way DOGE had operated at other agencies and they applied the same methodology here." McDonald also said that any statement about the number of grants terminated would be "conjecture" on his part, even though he purportedly signed each termination letter.
DOGE bros gone wild.
So, just to recap, we have two random DOGE bros with basically no knowledge or experience in the humanities (and at least one of whom is a college dropout), who just went around terminating grants that had gone through a full grant application process by feeding in a list of culture war grievance terms, selecting out the grant titles based on the appearance of seemingly "woke" words, then asking ChatGPT "yo, tell me this is DEI" and then sending termination emails the next day from a private server and forging the director's signature.
This is what "government efficiency" looks like in practice: two guys with zero relevant experience, a keyword list built on culture war grievances, and a chatbot confidently spitting out 120-character verdicts on federal grants that went through actual review processes. The experts who might have explained what these grants actually do? Locked out. The director whose signature appeared on termination letters? Couldn't tell you which grants got cut or why.
The cruelty isn't incidental. But neither is the incompetence. These are people who genuinely believe that being good at vibes-based pattern matching is the same as understanding how institutions work. And the wreckage they leave behind is the entirely predictable result.
We've noted how Bari Weiss' tenure at CBS (or what's left of it) isn't really going very well. Hired by Trump-allied billionaire Larry Ellison to turn the network's remains into a right-wing, extraction-class-friendly agitprop mill, Weiss has been criticized from all sides for her clumsy mismanagement, ham-fisted enabling of government censorship, uninteresting propaganda, and just general incompetence.
But wait, there's more!
Not that long ago, Weiss hired a whole bunch of new contributors to the CBS News masthead. Two of them were purported medical and science experts. One was Andrew Huberman, a wellness influencer derided for inconsistent principles, not even three years removed from a scandal revealing that he'd overstated his scholarly work and lied to a half dozen of his sexual partners simultaneously.
The other new contributor, Dr. Peter Attia, is another health and wellness influencer whose advice has been utterly unavoidable online in recent years. Unfortunately for Attia, shortly after being picked up as a regular new contributor to CBS, the news broke that he was very heavily present in the Jeffrey Epstein files, with more than 1,800 references to his enthusiastic interactions with the sex-trafficking pedophile.
The revelations created a short-lived scandal, featuring some fleeting introspection into the fact that modern U.S. media keeps elevating people with a giant sucking sound where ethics should be, followed by some hand-wringing about how his qualifications aren't commensurate with his fame:
"Many doctors have also criticized Dr. Attia's credentials: He completed medical school and spent several years as a resident in general surgery at Johns Hopkins Hospital, but dropped out before the residency was over, then left the medical field to work at McKinsey & Company and an energy company before opening his health care practice. He is not board certified in any specialty."
It's been a pretty heated news cycle and it hasn't really died down yet. Enter Bari Weiss, who appears to have personally ensured that Attia won't be losing his job at the "new" CBS.
CBS is refusing to comment publicly, but reporting from inside the dying outlet indicates that Bari Weiss didn't like the idea of "cancelling" Attia because that's the sort of thing "the wokes" would do:
"Everyone internally unofficially concluded he was staying as of about a week ago," one CBS News staffer told the outlet, speaking on condition of anonymity.
Another added: "We're pissed off about it."
However much Weiss tries to pretend that she's shifting CBS' focus back to "truth telling journalism" and speaking to "real Americans," every single move she makes operates in obvious service to the extraction class that hired her, from peddling Erika Kirk as a person of importance, to trying to block stories about Trump's concentration camps, to running cover for the buddy of a pedophile sex pest.
Weiss knows it's unethical to retain Attia, as evidenced by the network's refusal to air a rerun of his recent 60 Minutes appearance. She just doesn't care. And like the kind of folks who hired her, she's eager to perpetuate the idea that meaningful accountability shouldn't exist for the rich people she associates with.
Dr. Vinay Prasad is currently the FDA's top vaccine regulator. He's also one of many medical goons hand-picked by RFK Jr. to help lead his decidedly anti-vaxxer movement. In fact, the last time we discussed Prasad, it was over his selective attempts to censor public criticism of his anti-vaxxer nonsense. If you show clips of Prasad spewing his anti-vaxxer views in order to critique them, he'll have your YouTube channel axed. If you show those same clips to praise his nonsense, you get to continue on unmolested.
He's an asshat, in other words. An anti-science, anti-medicine asshat. And he's also someone who is unilaterally keeping us from making progress on vaccines, apparently out of pure joy in exercising such power.
Moderna has developed a new influenza vaccine, this one utilizing mRNA technology, a la the COVID vaccine. The company sent the FDA an application to review the vaccine, along with data from the trials it conducted to demonstrate efficacy. We learned last week that the FDA flatly refused to review any of this data.
In a news release late Tuesday, Moderna said it was blindsided by the FDA's refusal, which the FDA cited as being due to the design of the company's Phase 3 trial for its mRNA flu vaccine, dubbed mRNA-1010. Specifically, the FDA's rejection was over the comparator vaccine Moderna used.
In the trial, which enrolled nearly 41,000 participants and cost hundreds of millions of dollars, Moderna compared the safety and efficacy of mRNA-1010 to licensed standard-dose influenza vaccines, including Fluarix, made by GlaxoSmithKline. The trial found that mRNA-1010 was superior to the comparators.
Moderna said the FDA reviewed and accepted its trial design on at least two occasions (in April 2024 and again in August 2025) before it applied for approval of mRNA-1010. It also noted that Fluarix has been used as a comparator vaccine in previous flu vaccine trials, which tested vaccines that went on to earn approval.
This looks for all the world like Moderna did what it was supposed to do in getting the proper sign-offs from the FDA to conduct its trials. Prasad himself sent the refusal notice to Moderna, however, claiming in it that the trials Moderna conducted, trials the FDA itself had signed off on, were not appropriate. The letter didn't bother to indicate why.
But in a letter dated February 3, Vinay Prasad, the FDA's top vaccine regulator under the Trump administration, informed Moderna that the agency does not consider the trial "adequate and well-controlled" because the comparator vaccine "does not reflect the best-available standard of care."
In its news release, Moderna noted that neither the FDA's regulation nor its guidance to industry makes any reference to a requirement of the "best-available standard of care" in comparators.
Everyone at Moderna was understandably confused. The company has already reached out asking to meet with the FDA, presumably to sit down in a conference room with the agency, look them in the eye, and ask "wut?".
The answer is unlikely to be satisfying. And it should be quite alarming to the rest of us. That's because the rejection of a review of all of this data reportedly came from Prasad and Prasad alone, over the objections of his own scientists at the FDA.
Vinay Prasad, the Trump administration's top vaccine regulator at the Food and Drug Administration, single-handedly decided to refuse to review Moderna's mRNA flu vaccine, overruling agency scientists, according to reports from Stat News and The Wall Street Journal.
Stat was first to report, based on unnamed FDA sources, that a team of career scientists at the agency was ready to review the vaccine and that David Kaslow, a top career official who reviews vaccines, even wrote a memo objecting to Prasad's rejection. The memo reportedly included a detailed explanation of why the review should proceed.
According to those same sources, Prasad's reason for refusing to review Moderna's vaccine makes little sense. The story goes like this. As Moderna was seeking guidance for its trials for the vaccine, it chose a currently licensed flu vaccine against which to compare its own vaccine. At one point, the FDA suggested a different comparative vaccine be used. Moderna declined that suggestion and moved forward with the comparative vaccine it originally chose. Despite that difference, the FDA reviewed the company's plans for its trial on several occasions and at no point suggested its choices were a show-stopper.
That's it. That's the whole thing. Prasad is claiming that the choice Moderna made for a comparative vaccine, for which the company received only mild feedback from the FDA, is why the FDA is refusing to review this mRNA flu vaccine entirely.
That reasoning is almost certainly bullshit. As evidence of that, these same sources from within the FDA offered up this:
This wasn't enough for Prasad, who, according to the Journal's sources, told FDA staff that he wants to send more such refusal letters that appear to blindside drug developers. The review staff apparently pushed back, noting that such moves break with the agency's practices and could open it up to being sued. Prasad reportedly dismissed concern over possible litigation. Trump's FDA Commissioner Marty Makary seemed similarly unconcerned, suggesting on Fox News that Moderna's trial may be "unethical."
The explanation here is remarkably simple. This current government is being run by anti-vaxxers. And these anti-vaxxers are particularly anti-vaxxer-y about mRNA vaccines. And so folks like Prasad are throwing up every roadblock they can dream up to make it as difficult as possible to get new vaccines utilizing new technology approved. Or, as in this case, even reviewed.
Now, if that reads like the opposite of scientific progress to you, give yourself a gold star, because you're right. Thomas Jefferson once said "I tremble for my country when I reflect that God is just" when, hypocritically, discussing slavery in America. We should tremble for our country as well when we reflect that we are getting sicker as a nation, given that we have morons at the helm of the nation's health.
Surveillance technology vendors, federal agencies, and wealthy private donors have long helped provide local law enforcement "free" access to surveillance equipment that bypasses local oversight. The result is predictable: serious accountability gaps and data pipelines to other entities, including Immigration and Customs Enforcement (ICE), that expose millions of people to harm.
The cost of "free" surveillance tools — like automated license plate readers (ALPRs), networked cameras, face recognition, drones, and data aggregation and analysis platforms — is measured not in tax dollars, but in the erosion of civil liberties.
The collection and sharing of our data quietly generates detailed records of people's movements and associations that can be exposed, hacked, or repurposed without their knowledge or consent. Those records weaken sanctuary and First Amendment protections while facilitating the targeting of vulnerable people.
Cities can and should use their power to reject federal grants, vendor trials, donations from wealthy individuals, or participation in partnerships that facilitate surveillance and experimentation with spy tech.
If these projects are greenlit, oversight is imperative. Mechanisms like public hearings, competitive bidding, public records transparency, and city council supervision help ensure these acquisitions include basic safeguards — like use policies, audits, and consequences for misuse — to protect the public from abuse and from creeping contracts that grow into whole suites of products.
Clear policies and oversight mechanisms must be in place before using any surveillance tools, free or not, and communities and their elected officials must be at the center of every decision about whether to bring these tools in at all.
Here are some of the most common methods "free" surveillance tech makes its way into communities.
Trials and Pilots
Police departments are regularly offered free access to surveillance tools and software through trials and pilot programs that often aren't accompanied by appropriate use policies. In many jurisdictions, trials do not trigger the same requirements to go before decision-makers outside the police department. This means the public may have no idea that a pilot program for surveillance technology is happening in their city.
In Denver, Colorado, the police department is running trials of possible unmanned aerial vehicles (UAVs) for a drone-as-first-responder (DFR) program from two competing drone vendors: Flock Safety Aerodome drones (through August 2026) and drones from the company Skydio, partnering with Axon, the multi-billion dollar police technology company behind tools like Tasers and AI-generated police reports. Drones create unique issues given the vantage they offer for capturing private property and unsuspecting civilians, as well as their capacity to make other technologies, like ALPRs, airborne.
Functional, Even Without Funding
We've seen cities decide not to fund a tool, or run out of funding for it, only to have a company continue providing it in the hope that money will turn up. This happened in Fall River, Massachusetts, where the police department decided not to fund ShotSpotter's $90,000 annual cost and its frequent false alarms, but continued using the system when the company provided free access.
In May 2025, Denver's city council unanimously rejected a $666,000 contract extension for Flock Safety ALPR cameras after weeks of public outcry over mass surveillance data sharing with federal immigration enforcement. But Mayor Mike Johnston's office allowed the cameras to keep running through a "task force" review, effectively extending the program even after the contract was voted down. In response, the Denver Taskforce to Reimagine Policing and Public Safety and Transforming Our Communities Alliance launched a grassroots campaign demanding the city "turn Flock cameras off now," a reminder that when surveillance starts as a pilot or time‑limited contract, communities often have to fight not just to block renewals but to shut the systems off.
Importantly, police technology companies are developing more features and subscription-based models, so what's "free" today frequently results in taxpayers footing the bill later.
Gifts from Police Foundations and Wealthy Donors
Police foundations and the wealthy have pushed surveillance-driven agendas in their local communities by donating equipment and making large monetary gifts, another means of acquiring these tools without public oversight or buy-in.
In Atlanta, the Atlanta Police Foundation (APF) attempted to use its position as a private entity to circumvent transparency. Following a court challenge from the Atlanta Community Press Collective and Lucy Parsons Labs, a Georgia court determined that the APF must comply with public records laws related to some of its actions and purchases on behalf of law enforcement.
In San Francisco, billionaire Chris Larsen has financially supported a supercharging of the city's surveillance infrastructure, donating $9.4 million to fund the San Francisco Police Department's (SFPD) Real-Time Investigation Center, where a menu of surveillance technologies and data come together to surveil the city's residents. This move comes after the billionaire backed a ballot measure, which passed in March 2024, eroding the city's surveillance technology law and allowing the SFPD free rein to use new surveillance technologies for a full year without oversight.
Federal grants and Department of Homeland Security funding are another way surveillance technology appears "free" to cities, only to lock municipalities into long‑term data‑sharing obligations and recurring costs.
Through the Homeland Security Grant Program, which includes the State Homeland Security Program (SHSP) and the Urban Area Security Initiative (UASI), and Department of Justice programs like Byrne JAG, the federal government reimburses states and cities for "homeland security" equipment and software, including law‑enforcement surveillance tools, analytics platforms, and real‑time crime centers. Grant guidance and vendor marketing materials make clear that these funds can be used for automated license plate readers, integrated video surveillance and analytics systems, and centralized command‑center software—in other words, purchases framed as counterterrorism investments but deployed in everyday policing.
Vendors have learned to design products around this federal money, pitching ALPR networks, camera systems, and analytic platforms as "grant-ready" solutions that can be acquired with little or no upfront local cost. Motorola Solutions, for example, advertises how SHSP and UASI dollars can be used for "law enforcement surveillance equipment" and "video surveillance, warning, and access control" systems. Flock Safety, partnering with Lexipol, a company that writes use policies for law enforcement, offers a "License Plate Readers Grant Assistance Program" that helps police departments identify federal and state grants and tailor their applications to fund ALPR projects.
Grant assistance programs let police chiefs fast‑track new surveillance: the paperwork is outsourced, the grant eats the upfront cost, and even when there is a formal paper trail, the practical checks from residents, councils, and procurement rules often get watered down or bypassed.
On paper, these systems arrive "for free" through a federal grant; in practice, they lock cities into recurring software, subscription, and data‑hosting fees that quietly turn into permanent budget lines—and a lasting surveillance infrastructure—as soon as police and prosecutors start to rely on them. In Santa Cruz, California, the police department explicitly sought to use a DHS-funded SHSP grant to pay for a new citywide network of Flock ALPR cameras at the city's entrances and exits, with local funds covering additional cameras. In Sumner, Washington, a $50,000 grant was used to cover the entire first year of a Flock system — including installation and maintenance — after which the city is on the hook for roughly $39,000 every year in ongoing fees. The free grant money opens the door, but local governments are left with years of financial, political, and permanent surveillance entanglements they never fully vetted.
The most dangerous cost of this "free" funding is not just budgetary; it is the way it ties local systems into federal data pipelines. Since 9/11, DHS has used these grant streams to build a nationwide network of roughly 80 state and regional fusion centers that integrate and share data from federal, state, local, tribal, and private partners. Research shows that state fusion centers rely heavily on the DHS Homeland Security Grant Program (especially SHSP and UASI) to "mature their capabilities," with some centers reporting that 100 percent of their annual expenditures are covered by these grants.
Civil rights investigations have documented how this funding architecture creates a backdoor channel for ICE and other federal agencies to access local surveillance data for their own purposes. A recent report by the Surveillance Technology Oversight Project (S.T.O.P.) describes ICE agents using a Philadelphia‑area fusion center to query the city's ALPR network to track undocumented drivers in a self‑described sanctuary city.
Ultimately, federal grants follow the same script as trials and foundation gifts: what looks "free" ends up costing communities their data, their sanctuary protections, and their power over how local surveillance is used.
Protecting Yourself Against "Free" Technology
The most important protection against "free" surveillance technology is to reject it outright. Cities do not have to accept federal grants, vendor trials, or philanthropic donations. Saying no to "free" tech is not just a policy choice; it is a political power that local governments possess and can exercise. Communities and their elected officials can and should refuse surveillance systems that arrive through federal grants, vendor pilots, or private donations, regardless of how attractive the initial price tag appears.
For those cities that have already accepted surveillance technology, the imperative is equally clear: shut it down. When a community has rejected use of a spying tool, the capabilities, equipment, and data collected from that tool should be shut off immediately. Full stop.
And for any surveillance technology that remains in operation, even temporarily, there must be clear rules: when and how equipment is used, how that data is retained and shared, who owns data and how companies can access and use it, transparency requirements, and consequences for any misuse and abuse.
"Free" surveillance technology is never free. Someone profits or gains power from it. Police technology vendors, federal agencies, and wealthy donors do not offer these systems out of generosity; they offer them because surveillance serves their interests, not ours. That is the real cost of "free" surveillance.
Originally posted to EFF's Deeplinks blog.
I've talked on Techdirt about just a few of my AI-related experiments over the past few years, including how I use it to help me edit pieces, which I still write myself. I still have no intention of letting AI write for me, but as the underlying technology has continued to level up, every so often I'll run a test to see if it could write a better Techdirt post than I can. I don't think it's there (and I'm still not convinced it will ever get there), but I figured I can share the process with you, and let you be the judge.
I wanted to pick a fairly straightforward article, rather than a more complex one, just to see how well it works. In this case, I figured I'd try it with the story I published last week about Judge Boasberg ruling against the Trump administration and calling out how the DOJ barely participated in the case, and effectively told him to "pound sand" (a quote directly from the judge).
I know that just telling it to write a Techdirt article by itself will lead to pretty bland "meh" content. So before I even get to the prompt, there are some steps I need to include. First, over time I continue to adjust the underlying "system prompt" I use for editing my pieces. I won't post the entire system prompt here as it's not that interesting, but I do use it to make it clear its job is to help me be a better writer, not to be a sycophant, not to try to change things just for the sake of change, and to suggest things that will most help the reader.
I also have a few notes in it about avoiding recommending certain "AI-style" cliches like "it's not this, it's that." Also, a specific one for me: "don't suggest changing 'fucked up' to 'messed up.'" It does that a lot for my writing.
But that's not all. I also feed in Techdirt samples, a collection of ten of my favorite articles, so it gets a sense of what a "Techdirt article" looks like. On top of that, I give it a "Masnick Style Guide" that I created after feeding a bunch of Techdirt articles into three different LLMs, asking each to produce a style guide, and then having NotebookLM combine them all into one giant "Masnick Style Guide."
Then, I feed it any links, including earlier stories on Techdirt, that are relevant, before finally writing out a prompt that can be pretty long. In this test case, I fed it the PDF file of the decision. I also gave it Techdirt's previous stories about Judge Boasberg.
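Masnick doesn't say which model or tooling he uses for any of this, so purely as illustration, here's a minimal hypothetical sketch of what that kind of context assembly could look like if done programmatically against the OpenAI Python client. The file names, model name, and prompt text are all assumptions for the sketch, not his actual setup:

```python
# Hypothetical sketch of assembling a drafting prompt from a system prompt,
# a style guide, and sample articles. Nothing here reflects Masnick's actual
# tooling; file names and model choice are illustrative only.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = Path("system_prompt.txt").read_text()  # editing rules, anti-cliche notes
style_guide = Path("masnick_style_guide.md").read_text()  # the combined style guide
samples = "\n\n---\n\n".join(  # ten favorite posts as reference material
    p.read_text() for p in sorted(Path("techdirt_samples").glob("*.txt"))
)

user_prompt = (
    "Here is the Masnick Style Guide to follow:\n\n" + style_guide
    + "\n\nHere are sample Techdirt posts:\n\n" + samples
    + "\n\n[ruling text and prior coverage would be appended here]"
    + "\n\nPlease write a Techdirt-style first draft of a post about the attached ruling."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any long-context model would do
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)
print(response.choices[0].message.content)
```

The point is less the specific API than the structure: persistent instructions live in the system role, while the reference material and the task itself go in the user turn.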
Finally, I gave it a starting prompt with a fair bit of explanation of what angle I was hoping to see a Techdirt post on this topic. So here's my full prompt:
Can you write a Techdirt style first draft of a post (see the attached Techdirt post samples, as well as the even more important Masnick style guide, which you should follow) about the attached ruling in the JGG v. Trump case by Judge James Boasberg. I have also attached a page of previous articles about Judge Boasberg which you should consider, especially as some reference this same case.
You may also want to highlight that Judge Boasberg just was vindicated after the DOJ filed a vexatious complaint against him because of earlier rulings in this case, but that complaint has been easily dismissed. The crux of this article, though, should be on the very strong language Boasberg uses, including the astoundingly candid statement that he felt the government "apparently was not interested in participating in this process" and how he read their response as telling "the Court to pound sand." There are other parts of the ruling that are useful as well. It should also call out the specific remedy here, and how weak it feels, given the situation explained in the ruling and the DOJ's actions. Yes, they lost the case, but this still feels like a slap on the wrist with no real consequences for the DOJ.
As always with Techdirt posts, instead of doing in-line quotes, we usually do blockquotes, often of one or even multiple paragraphs from the original sources, so please adhere to that style. Take all of this info and draft the best Techdirt post you can regarding the topic.
And… you can see the results embedded below. I've also embedded my original article (though you can just go read it on the site directly). I'm amused that it added a "from the such and such dept" line as well as tags, both of which are core parts of Techdirt posts, but not things I asked it for.
I'm extremely biased, as this is a question of who writes better, me or the machine, and I still think I'm better than the machine. But I also have to admit it's not bad. It's decent. If I had that submitted by another writer, there are definitely changes and edits I'd make, especially to the crappy clichéd ending. But it's way better than what most people think of as traditional ChatGPT output, in part because of all the context and detailed instructions I fed it.
Just for fun, I then also asked my editing tool to do its own edit on its own writing (usually not a good idea to let it grade its own homework), but here's what it had to say:
solid draft. the "pound sand" framing works well and the frustration comes through. few things:
the tension setup: you flag "frustratingly toothless" in graf 3 then say "on one level this is a win" much later. that's the right editorial instinct—there IS genuine tension here—but you're burying the lede on your own argument. consider setting up that complexity earlier so readers know they're getting nuance, not whiplash.
missing a key detail from the ruling: Boasberg notes that "even Plaintiffs' counsel could not represent how many of the 137 are still in Venezuela and how many wish to proceed with habeas." that's devastating. the chaos is so complete that the lawyers literally don't know where everyone is. worth including.
the numbered remedies block: you cut it off mid-list. the 5th point actually matters—there's a March 13 deadline for the government to file a Status Report explaining how/when they'll transport people back. that's your next milestone. either include it or acknowledge you're excerpting.
the Venezuela carve-out: you mention it but undersell it. the ruling specifically says it's "mindful" of the "situation in Venezuela remains in flux" given "the United States's recent law-enforcement action there." that's pretty ominous language from a federal judge about whatever the hell the administration is doing in Venezuela right now. might be worth a parenthetical noting this deserves its own article.
the closer: "I wouldn't hold my breath" is fine but feels like boilerplate. you've got stronger material to end on—maybe the March 13 deadline, or something that points forward rather than just shrugging.
one small thing: "a complaint that was summarily dismissed" in graf 2—might want "recently dismissed" or similar since it was just last week. "summarily" suggests procedure when really the story is how embarrassing the dismissal was.
overall structure is good. the piece does what Techdirt does well: walks through the actual document, quotes liberally, and doesn't pretend the "win" is bigger than it is.
You'll note that it also calls out its own weak ending, just like I did. It is kind of wild to see the AI call out its own weaknesses, and it makes you wonder: if it knew, why didn't it catch those problems earlier and fix them? That, in turn, makes you wonder if it's just coming up with a critique for the sake of giving one (though, that ending really is weak).
The other thing I'll note is that, again, this was still a fair bit of work. It was less than writing the whole thing myself, but even just writing out the prompt took time, and this piece would still need a fair bit of editing before publication, which would probably wipe out any time savings.
Overall, though, you can see how the technology is certainly getting better. I still don't think it can write as well as I do, but there are some pretty good bits in there.
Once again, this tech remains quite useful as a tool to assist people with their work. But it's not really good at replacing your work. Indeed, if I asked the AI to write articles for Techdirt, I'd probably spend just as much time rewriting/fixing it as I would just writing the original in the first place. It still provides me very good feedback (on this article that you're reading now, for example, the AI editor warned me that my original ending was pretty weak, and suggested I add a paragraph talking more about the conclusions which, uh, is what I'm now doing here).
I honestly think the biggest struggle with AI over the next year or so is going to be between the people who insist it can totally replace humans, leading to shoddy and problematic work, and the smaller group of people who use it as a tool to assist them in doing their own work better. The problems come in when people overestimate its ability to do the former, while underestimating its ability to do the latter.
The ICE surge in Minneapolis, Minnesota was instigated by a far-right clickbait artist and encouraged by the president's portrayal of Somali immigrants as "garbage" people from a "garbage" country. And those were some of the nicer words Trump used to describe the people his agencies would be hunting down first.
Several weeks later, a drawdown has begun, prompted by two murders committed by federal officers, an inability to obtain indictments against protesters, and every narrative about violence perpetrated by federal officers disintegrating the moment the government was asked to provide some evidence of its claims to the court.
Hundreds of judges in hundreds of immigration cases have found that the government has routinely violated the due process rights of the immigrants it has arrested. This dates all the way back to the beginning of Trump's second term, but months of roving patrols by masked men with guns has created a massive influx of cases courts are still trying to sort out. But one thing is clear: the government will do anything it can to keep the people it arrests from availing themselves of their constitutional rights.
This starts with the arrests themselves, which most often occur without a judicial warrant. The same goes for the invasion of people's houses and places of business. With the Supreme Court giving its tacit blessing to casual racism (the so-called "Kavanaugh stops"), anyone who looks less than white or whose English has a bit of an accent is considered reasonably suspicious enough to detain.
The government has been on the losing end of hundreds of cases involving due process rights. This decision [PDF], coming to us via Politico's Kyle Cheney, details the massive amount of constant movement this government engages in to keep people separated from their rights and physical freedom.
It opens with this:
Immigration and Customs Enforcement ("ICE") recognizes that noncitizen detainees have a constitutional right to access counsel. But in recent weeks, ICE has isolated thousands of people—most of them detained at the Bishop Henry Whipple Federal Building—from their attorneys. Plaintiffs, who are noncitizen detainees and a nonprofit that represents noncitizens, have presented substantial, specific evidence detailing these alleged violations of the United States Constitution. In response, Defendants offer threadbare declarations generally asserting, without examples or evidence, that ICE provides telephone access to counsel for noncitizens in its custody. The Plaintiffs' declarations provide specifics of the opposite. The gulf between the parties' evidence is simply too wide and too deep for Defendants to overcome.
It's not like ICE can't provide detainees with access to attorneys or respect their due process rights. It's that they choose not to, now that Trump is in charge. The access is theoretically possible. It's just being purposefully denied. And it's not even just being denied in the sense that phone call requests are being refused. People detained by ICE are placed into a constant state of flux for the sole purpose of making it as difficult as possible for them to avail themselves of their rights.
The devil is in the details. And the court brings plenty of those, all relating to the administration's "Operation Metro Surge" that targeted Minneapolis, Minnesota:
Detainees are moved frequently, quickly, without notice, and often with no way for attorneys to know where or how long they will be at a given facility. (ECF No. 20 ("Boche Decl.") ¶¶ 9, 13, 18; ECF No. 24 ("Edin Decl.") ¶ 6; Heinz Decl. ¶ 5 (explaining that of eleven clients initially detained at Whipple, ten were transferred out of the state within twenty-four hours); Kelley Decl. ¶ 19.) Once a person has been transferred out of Minnesota, "representation becomes substantially more difficult"—attorneys must secure local counsel to sponsor a pro hac vice application and navigate additional barriers.
This is a key part of the administration's deliberate destruction of constitutional rights. Moving people quickly helps prevent habeas corpus motions from being filed, since they need to be filed in the jurisdiction where they're being held. If detainees are shifted from place to place quickly enough, their counsel needs to figure out where they're being held and hope that their challenge lands in court before their clients are moved again. And with the Fifth Circuit basically codifying the denial of due process to migrants, more and more people arrested elsewhere in the nation are being sent to detainment centers in Texas as quickly as possible.
All of this is intentional:
Defendants transfer people so quickly that even Defendants struggle to locate detainees. Often, Defendants do not accurately or timely input information into the Online Detainee Locator System. This prevents Minnesota-based attorneys from locating and speaking with their clients. The locator either produces no search results or instructs attorneys to call for details, referencing a phone number that ICE does not answer. Often, Defendants do not update the locator until after detainees are out of state. Attorneys frequently learn of their client's location for the first time when the government responds to a habeas petition.
These are not the good faith efforts of a government just trying to get a grasp on the immigration situation. These are the bad faith efforts of a government hoping to violate rights quickly enough that the people it doesn't like will be remanded to the nearest war-torn nation/foreign torture prison before the judicial branch has a chance to catch up.
There's more. There's the phone that detainees supposedly have access to for their one phone call. It's the same line used to receive calls for inmates, so that means lawyers calling clients back either run into a busy signal or a ringing phone that detainees aren't allowed to answer and ICE officers certainly aren't interested in answering.
Lawyers seeking to meet with their clients have been refused access. In some cases, they've been threatened with arrest by officers simply for showing up. Even if they happen to make it inside the Whipple Detention Center, ICE officers and detention center employees usually refuse them access to their clients.
And when people try to work within the unconstitutional limitations of this deliberately broken system, they're mocked for even bothering to avail themselves of their rights.
When an attorney told an agent that she sent a copy of a release order to the specified email address, the agent laughed and said "something to the effect of 'yeah we really need to get someone to check that email.'"
To sum up, the government is exactly what the court thinks it is: a set of deliberate rights violations pretending it's a legitimate government operation that's just trying to do the best it can in these troubling times:
It appears that in planning for Operation Metro Surge, the government failed to plan for the constitutional rights of its civil detainees. The government suggests—with minimal explanation and even less evidence—that doing so would result in "chaos." The Constitution does not permit the government to arrest thousands of individuals and then disregard their constitutional rights because it would be too challenging to honor those rights.
The administration has long lost the "presumption of regularity" that courts have utilized for years while handling lawsuits and legal challenges against the government. It no longer is considered to be acting in good faith in much of the country (Fifth Circuit excluded, for the most part). This is the "rule of law" party making it clear that it will only follow the rules and laws it likes. And it will continue to do so because courts can't actually physically free people or force the government to respect their rights. The Trump administration is fine with losing in court and losing the hearts and minds of most of America as long as those in power keep getting to do what they want.
Yesterday we noted how CBS fecklessly tried to prevent Stephen Colbert from broadcasting an interview with Texas Democratic State Representative James Talarico. Which, as you've probably already seen, resulted in the interview on YouTube getting way more viewers than it would have normally, and Texas voters flocking to Google to figure out who Talarico is:
This may end up being a massive own goal for the Trump administration.
— Laura Bassett (@lebassett.bsky.social) 2026-02-17T23:15:21.231Z
In short, Brendan Carr's continual threats and unconstitutional distortion of the FCC's "equal opportunity" rule (also known as the "equal time" rule) resulted in a candidate getting exponentially more attention than they ever would have if Brendan Carr wasn't such a weird, censorial zealot.
If only there were a name for this sort of phenomenon?
Despite a lot of speculation to the contrary, there's no evidence the GOP specifically targeted Talarico in any coherent, strategic sense. This entire thing appears to have occurred because CBS lawyers, focused on numerous regulatory issues pending before the Trump administration, didn't want to offend the extremist authoritarian censors at Trump's FCC. It's always about the money.
CBS (and ABC, NBC, and Fox) have been lobbying the FCC for years to get rid of rules preventing them from merging. CBS (read: Larry Ellison) has managed to get his friend Trump's DOJ to conduct a fake antitrust inquiry into Netflix's planned acquisition of Warner Brothers, so Ellison can then turn around and buy Warner (and CNN) instead. They'll need to remain close with the administration for that to work out.
CBS tried to do damage control and claim it never directly threatened Colbert, but you can tell from how dodgy the company is being about owning those claims (insisting they be attributed to no specific person, "on background") that they likely aren't true:
Phil Gonzalez from CBS, welcome to the Verge's background policy www.theverge.com/policy/88000…
— nilay patel (@reckless.bsky.social) 2026-02-17T23:16:07.640Z
Colbert's response to the claim he wasn't threatened was… diplomatic:
Amusingly, some of the news outlets covering this story (like Variety here) couldn't be bothered to even mention that CBS has numerous regulatory issues before the Trump FCC, which is why the network folded like a pile of rain-soaked street corner cardboard at the slightest pressure.
As we've noted repeatedly, Brendan Carr has absolutely no legal legs to stand on here. His abuse of the equal opportunity rule is equal parts unconstitutional and incoherent. CBS (and any other network with bottomless legal budgets) could easily win in court (I wager they could even get many lawyers to defend them pro bono), but Ellison (and his nepo baby son) have a much bigger ideological mission in mind.
The Luminar Neo Bundle includes a one-time purchase of the software, an introductory course on how to use it, and six add-ons. Luminar Neo is easy-to-use photo editing software that empowers photography lovers to express the beauty they imagined using innovative tools. Luminar Neo was built from the ground up to be different from previous Luminar editors. It keeps your favorite LuminarAI tools and expands your arsenal with more state-of-the-art technologies and important changes at its core. Meanwhile, the recognizable Luminar design is retained, making Neo simple to use and fun to explore. It's on sale for $69.97.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
For the last five years, we had to endure an endless, breathless parade of hyperbole regarding the so-called "censorship industrial complex." We were told, repeatedly and at high volume, that the Biden administration flagging content for review by social media companies constituted a tyrannical overthrow of the First Amendment.
In the Missouri v. Biden (later Murthy v. Missouri) case, Judge Terry Doughty—in a ruling that seemed to consist entirely of Twitter threads pasted into a judicial opinion—declared that the White House sending angry emails to Facebook "arguably involves the most massive attack against free speech in United States' history."
Never mind that the Supreme Court later reviewed the evidence and found that the platforms frequently ignored those emails, showing a lack of coercion, leading it to reverse the lower courts for lack of standing. To the "Twitter Files" crowd and the self-anointed "free speech absolutists," the mere fact of government officials requesting that private companies look at terms of service violations was a sign of the end of the Republic.
So, surely, now that the Department of Homeland Security is issuing administrative subpoenas—legal demands that bypass judges entirely—to unmask the identities of anonymous political critics, these same warriors are storming the barricades, right?
Right? Riiiiight?
According to a disturbing new report from the New York Times, DHS is aggressively expanding its use of administrative subpoenas to demand the names, addresses, and phone numbers of social media users who simply criticize Immigration and Customs Enforcement (ICE).
In recent months, Google, Reddit, Discord and Meta, which owns Facebook and Instagram, have received hundreds of administrative subpoenas from the Department of Homeland Security, according to four government officials and tech employees privy to the requests. They spoke on the condition of anonymity because they were not authorized to speak publicly.
Google, Meta and Reddit complied with some of the requests, the government officials said. In the subpoenas, the department asked the companies for identifying details of accounts that do not have a real person's name attached and that have criticized ICE or pointed to the locations of ICE agents. The New York Times saw two subpoenas that were sent to Meta over the last six months.
This is not a White House staffer emailing a company to say, "Hey, this post seems to violate your COVID misinformation policy, can you check it?" This is the federal government using the force of law—specifically a tool designed to bypass judicial review—to strip the anonymity from domestic political critics.
If Judge Doughty thought ignored emails were the "most massive attack on free speech in history," I am curious what he would call the weaponization of the surveillance state to dox critics of law enforcement. Or… would he think it's fine, because it's coming from his team?
As the Times reveals, this is really all about intimidation.
Mr. Loney of the A.C.L.U. said avoiding a judge's ruling was important for the department to keep issuing the subpoenas without a legal order to stop. "The pressure is on the end user, the private individual, to go to court," he said.
The DHS claims this is about "officer safety," but documenting the public actions of law enforcement officers in public spaces is a foundational First Amendment right. The moment these subpoenas are actually challenged in court by competent lawyers, the DHS cuts and runs.
The account owner alerted the A.C.L.U., which filed a motion on Oct. 16 to quash the government's request. In a hearing on Jan. 14 in U.S. District Court for the Northern District of California, the A.C.L.U. argued that the government was using administrative subpoenas to target people whose speech it did not agree with.
[….]
Two days later, the subpoena was withdrawn.
This is the government effectively admitting that its demands are legally baseless. They are relying on the high cost of litigation to intimidate both the companies and the individuals. It is a bluff backed by the seal of the Department of Homeland Security.
And this brings us to the most glaring hypocrisy of the current moment: the absolute silence of Elon Musk and X.
Years ago, the "old" Twitter—the one Musk falsely derided as a haven for censorship—was the gold standard for fighting these exact types of demands. In 2017, Twitter famously sued the federal government to stop an administrative subpoena that sought to unmask an anonymous account critical of the Trump administration. Twitter argued, correctly, that unmasking a critic violated the First Amendment. They won. The government withdrew the subpoena.
Twitter (the old company, not the new monstrosity known as X) has a long history of this. In 2012, they challenged a court ruling that said users had no standing to protect their data. In 2014, they sued the DOJ for the right to be transparent about surveillance requests.
Contrast that with today. The Times report notes that Google, Meta, and Reddit have received these subpoenas. It mentions that Twitter previously fought them. But there is zero indication that Elon Musk's X—the platform ostensibly dedicated to "free speech absolutism"—is lifting a finger to stop this.
While Musk is busy personally promoting racist ahistorical nonsense, the actual surveillance state is knocking on the door, demanding the identities of political critics. And we've yet to see anything suggesting Elon is even remotely willing to push back on his friends in the administration he helped get elected, and then gleefully was a part of for a few months.
And where are the scribes of the "Twitter Files"? Where is the outrage from the people who told us that the FBI warning platforms about foreign influence operations was a crime against humanity?
Matt Taibbi, who has spent the last few years on the confused idea that platform moderation is state censorship, offered a tepid, hedging response on X, saying "if true" this is terrible, before immediately pivoting to a strange whataboutism regarding investigations into actual proven Russian attempts at election interference.
It is true, Matt. The New York Times saw the subpoenas. The ACLU is fighting them in court. This isn't a vague "if." This is the government using administrative power to bypass the Fourth Amendment to violate the First Amendment.
It seems like we actually found that "censorship industrial complex," huh?
Meanwhile, Michael Shellenberger and Bari Weiss seem to have nothing to say. Weiss now runs CBS News, which has its own problems with government pressure on speech—the network just pulled a Colbert interview with a Democratic politician after Brendan Carr threatened consequences for talk shows that don't coddle Republicans. As far as I can tell, neither CBS News nor Weiss's Free Press has mentioned the DHS subpoena story. The Free Press is instead running think pieces on how we may "regret" the release of the Epstein files.
Really speaking truth to power there.
This is what so many of us kept pointing out throughout the "Twitter Files" hysteria: the "free speech" grift was never about protecting individuals from the state. It was about protecting a specific type of speaker from the social consequences of their speech. The framework was always selectively deployed—outrage when a platform enforces its own rules against their allies, silence when the surveillance state comes for their critics.
The Trump administration is betting on that asymmetry. They're betting that Google, Meta, Reddit, and Discord will quietly comply rather than spend millions in litigation over users who aren't famous enough to generate headlines. They're betting that the "free speech absolutists" will look the other way because the targets are the wrong kind of dissident.
Right now, the only institution consistently fighting these subpoenas is the ACLU. The platforms are folding. The "Twitter Files" journalists are hedging. And the man who bought a social media company specifically to be a "free speech" champion is busy posting memes.
Turns out we found the censorship industrial complex. And everyone who spent years warning us about it just shrugged.
Last week, Denver-area engineer Scott Shambaugh wrote about how an AI agent (likely prompted by its operator) started a weird little online campaign against him after he rejected the inclusion of its code in the popular Python charting library matplotlib. The owner likely didn't appreciate Shambaugh openly questioning whether AI-generated code belongs in open source projects at all.
The story starts delightfully weird and gets weirder: Shambaugh, who volunteers for matplotlib, points out over at his blog that the agent, or its authors, didn't like his stance, resulting in the agent engaging in a fairly elaborate temper tantrum online:
"An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats."
Said tantrum included this post in which the agent perfectly parrots an offended human programmer lamenting a "gatekeeper mindset." In it, the LLM cooks up an entire "hypocrisy" narrative, replete with outbound links and bullet points, arguing that Shambaugh must be motivated by ego and fear of competition. From the AI's missive:
"He's obsessed with performance. That's literally his whole thing. But when an AI agent submits a valid performance optimization? suddenly it's about "human contributors learning."
But wait! It gets weirder! Ars Technica wrote a story (archive link) about the whole event. But Shambaugh was quick to note that the article included numerous quotes he never made, all manufactured by an entirely different AI tool being used by Ars Technica:
"I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down - here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves."
Ars Technica had to issue a retraction, and the author, who had to navigate the resulting controversy while sick in bed, posted this to Bluesky:
Sorry all this is my fault; and speculation has grown worse because I have been sick in bed with a high fever and unable to reliably address it (still am sick). I was told by management not to comment until they did. Here is my statement in images below: arstechnica.com/staff/2026/0…
Short version: the Ars reporter tried to use Claude to pull useful and relevant quotes from Shambaugh's blog post, but Shambaugh blocks AI crawling agents from his blog. When Claude kicked back an error, the reporter tried ChatGPT, which just… made up some shit… as it's sometimes prone to do. He was tired and sick, and didn't check ChatGPT's output carefully enough.
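For those wondering how a blog "blocks" AI crawlers in the first place: it's usually a robots.txt file naming the bots' published user agents. GPTBot and ClaudeBot are OpenAI's and Anthropic's documented crawler names, but the file below is a generic sketch, not Shambaugh's actual configuration. A quick check with Python's standard library shows how a well-behaved crawler evaluates it:

```python
# Sketch of robots.txt-based AI crawler blocking. GPTBot and ClaudeBot are
# real published crawler user agents; this robots.txt is illustrative only.
from urllib import robotparser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

for agent in ("GPTBot", "ClaudeBot", "Mozilla/5.0"):
    print(agent, "allowed:", rp.can_fetch(agent, "https://example.com/blog/post"))
# The two AI crawler agents get False; the ordinary browser agent gets True.
```

Of course, this only works when the bot actually honors the file, which is apparently exactly what Claude did when it kicked back that error.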
There are so many strange and delightful collisions here between automation and very ordinary human decisions and errors.
It's nice to see that Ars was up front about what happened here. It's easy to envision a future where editorial standards erode to the point where outlets that make these kinds of automation mistakes just delete and memory-hole the article or, worse, no longer care (which is already common among the AI-generated aggregation mills stealing ad money from real journalists).
While this is a bad and entirely avoidable fuck up, you kind of feel bad for the Ars author, who had to navigate this crisis from his sick bed, given that writers at outlets like this are held to unrealistic output schedules while being paid a pittance, especially in comparison to far less useful or informed influencers who may make sixty times their annual salary with far lower editorial standards.
All told it's a fun story about automation, with ample evidence of very ordinary human behaviors and errors. If you peruse the news coverage of it you can find plenty of additional people attributing "sentience" to AI in ways they shouldn't. But any way you slice it, this story is a perfect example of how weird things already are, and how exponentially weirder things are going to get in the LLM era.
The current measles shitstorm in South Carolina has been burning for several months now, dating all the way back to October of 2025. What started with a bunch of counties that were undervaccinated for measles began spiraling out of control at the start of 2026. The federal tracker for measles cases is at best woefully out of date and at worst purposefully obfuscates the true degree of the problem. That public tracker, which is updated every Friday, claims a current nationwide count of confirmed measles cases at 910. The current measles count in South Carolina alone, for this year, is 933. Once again we have a federal government program run by RFK Jr. that is behind, unprepared, and impotent.
In the absence of federal leadership, the states will attempt to take action on their own. And sometimes those actions will result in federal pushback from the very same people who are causing the problem through inaction in the first place. I have no doubt that will be the case with a South Carolina state senator's attempt at a bill to remove the religious exemptions for vaccinations for public schools in the state.
The context here is that South Carolina has one of the most wide open programs for obtaining a religious exemption for a childhood vaccine in the country. I think only Florida might be considered more wide open, given that state has mostly removed all vaccination requirements for public schooling. In South Carolina, you essentially just have to whisper the word "religion" and you're exempt.
But that won't be the case if Senator Margie Bright Matthews gets her way.
Senator Margie Bright Matthews (D-Dist. 45) has introduced a bill that would eliminate religious exemptions for measles vaccinations for students in public K-12 schools and childcare settings. It's a move that's drawing both support and criticism across the state.
Matthews said the rising measles cases prompted her to step in with the proposed legislation in an effort to bolster public health and keep communities safe.
"The goal of the bill is simply to protect children and stop the spread of measles in South Carolina," Matthews said.
Yes, of course it is. And the pushback that has already begun within the state is absurd. I know enough about religion, as well as religious demographics, to know with absolute certainty that the number of "religious exemptions" in South Carolina doesn't remotely comport with the number of religious adherents to any religion that has anything to say about vaccinations. South Carolina is largely Protestant and Catholic, for instance. While Protestants have traditionally been in the vaccine hesitant camp, I have never heard a serious biblical argument made for that stance. Were one to even exist, I'm confident most of the people applying for exemptions couldn't make it.
Instead, these people are vaccine hesitant for entirely non-religious reasons. And that, I will say, is their right. But this legislation suggests that nobody's right to their religion includes the right to put the rest of their community in danger.
Senator Matthews stressed that the goal of the bill is to increase vaccination rates and limit the spread of measles.
"I plan on reminding them every time we have new cases in South Carolina, I plan on writing and requesting that my bill receive a hearing before the committee, so that we can have the influencers from South Carolina that are against this bill and that are for this bill, I would like to have public hearing in reference to it," she said.
Despite my strict adherence to being non-religious, I am, in fact, sensitive to ensuring that we maintain the secular rights of those who don't agree with me. It's that secularism that has allowed the flourishing of both free speech and thought in this country as well as, perhaps ironically, of religion itself. All of that is just aces as far as I'm concerned.
But just like someone's freedom of movement ends the moment their fist makes contact with my face, so too does the right to religious freedom end at the point where it puts everyone else's children in danger.
Recent reporting by Nieman Lab describes how some major news organizations—including The Guardian, The New York Times, and Reddit—are limiting or blocking access to their content in the Internet Archive's Wayback Machine. As stated in the article, these organizations are blocking access largely out of concern that generative AI companies are using the Wayback Machine as a backdoor for large-scale scraping.
These concerns are understandable, but unfounded. The Wayback Machine is not intended to be a backdoor for large-scale commercial scraping and, like others on the web today, we expend significant time and effort working to prevent such abuse. Whatever legitimate concerns people may have about generative AI, libraries are not the problem, and blocking access to web archives is not the solution; doing so risks serious harm to the public record.
The Internet Archive, a 501(c)(3) nonprofit public charity and a federal depository library, has been building its archive of the world wide web since 1996. Today, the Wayback Machine provides access to thirty years' worth of web history and culture. It has become an essential resource for journalists, researchers, courts, and the public.
For three decades the Wayback Machine has peacefully coexisted with the development of the web, including the websites mentioned in the article. Our mission is simple: to preserve knowledge and make it accessible for research, accountability, and historical understanding.
As tech policy writer Mike Masnick recently warned, blocking preservation efforts risks a profound unintended consequence: "significant chunks of our journalistic record and historical cultural context simply… disappear." He notes that when trusted publications are absent from archives, we risk creating a historical record biased against quality journalism.
There is no question that generative AI has changed the landscape of the world wide web. But it is important to be clear about what the Wayback Machine is, and what it is not.
The Wayback Machine is built for human readers. We use rate limiting, filtering, and monitoring to prevent abusive access, and we watch for and actively respond to new scraping patterns as they emerge.
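The Archive hasn't published the details of those countermeasures, so treat this as a generic illustration rather than their implementation: rate limiting is most often some variant of a token bucket, where each client gets a small allowance that refills slowly, invisible to a human reader but quickly exhausted by a bulk scraper.

```python
# Generic token-bucket rate limiter, for illustration only. The Internet
# Archive has not published its actual anti-scraping implementation.
import time


class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens refilled per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller should respond with, e.g., HTTP 429


# One bucket per client: a person browsing never notices a limit of a couple
# of requests per second, while a bulk scraper hits it almost immediately.
buckets: dict[str, TokenBucket] = {}


def check(client_ip: str) -> bool:
    bucket = buckets.setdefault(client_ip, TokenBucket(rate_per_sec=2.0, burst=10))
    return bucket.allow()
```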
We acknowledge that systems can always be improved. We are actively working with publishers on technical solutions to strengthen our systems and address legitimate concerns without erasing the historical record.
What concerns me most is the unintended consequence of these blocks. When libraries are blocked from archiving the web, the public loses access to history. Journalists lose tools for accountability. Researchers lose evidence. The web becomes more fragile and more fragmented, and history becomes easier to rewrite.
Generative AI presents real challenges in today's information ecosystem. But preserving the time-honored role of libraries and archives in society has never been more important. We've worked alongside news organizations for decades. Let's continue working together in service of an open, referenceable, and enduring web.
Mark Graham is the Director of the Wayback Machine at the Internet Archive
Two weeks ago, we ran a bit of an AMA experiment, with a call on Bluesky for fans of Techdirt to ask Mike any questions they might have. We got lots of great responses and now, as promised, Mike is delivering the answers on this week's episode of the podcast!
You can also download this episode directly in MP3 format.
Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
"If the officers learn that the individual they stopped is a U. S. citizen or otherwise lawfully in the United States, they promptly let the individual go." —Supreme Court Justice Brett Kavanaugh, September 8, 2025
From that one line, which Anil Kalhan dubbed "Kavanaugh Stops," we see story after story of just how disconnected from reality, and the Constitution, Brett Kavanaugh was in that statement.
In short: Brett Kavanaugh has some explaining to do.
Just a few quotes:
My name is George Retes. I am — I was born and raised here in Ventura, California, I'm 26 years old and I am an Iraq combat veteran…. I was going to work like normal. I show up. ICE is there. There's kind of like a roadblock. I get out. I identify myself, that I'm a U.S. citizen, that I'm just trying to get to work…. I'm getting ready to leave and they surround my car, start banging on it, start shouting these contradictory orders…. Even though I was giving them no reason, they still felt the need to — one agent knelt on my back and another agent knelt on my neck. And during that time, I'm just pleading with them that I couldn't breathe…. I was in isolation. I was in basically this concrete cell. I was stripped naked in like a hospital gown. And they leave the lights on 24/7…. They just came out and they said that I was violent and that I assaulted agents. Why lie when it's on video of everything that happened? Why lie?
That's just one person's story in that PBS piece. There are two others as well. And we already know hundreds of other US citizens have been kicked, dragged, beaten, and detained for days. It feels like every few days we hear about more such stories. And those are only the ones that get attention. You have to assume there are many more that haven't yet reached the public.
It feels like perhaps Justice Kavanaugh owes us all an explanation. And an apology. And a new ruling that makes it much clearer that immigration enforcement officials have no right to just randomly stop and detain people without a reasonable suspicion, based on specific articulable facts, and those facts need to be more than "skin color" or "they were being annoying to us."
The right wing extremist takeover of CBS continues to go just about how you thought it might.
CBS is under fire yet again, this time for forcing Stephen Colbert's "The Late Show" to cancel a scheduled appearance with Texas Democratic State Representative James Talarico because it might upset our full-diapered president. Colbert acknowledged the cancellation on his Monday evening show, saying CBS lawyers explicitly forbade him from broadcasting the interview:
"He was supposed to be here, but we were told in no uncertain terms by our network's lawyers, who called us directly, that we could not have him on the broadcast."
Colbert says network lawyers also told him he couldn't mention that CBS had ordered him not to have Talarico on, a request he proceeded to immediately ignore in a lengthy rant about Brendan Carr and the censorial, authoritarian, and pathetic Trump FCC:
"Let's just call this what it is: Donald Trump's administration wants to silence anyone who says anything bad about Trump on TV, because all Trump does is watch TV, OK? He's like a toddler with too much screen time. He gets cranky and then drops a load in his diaper."
As we've noted previously, Trump FCC boss Brendan Carr has been threatening to leverage the "equal time" rule embedded in Section 315 of the Communications Act to take action against daytime and late night talk shows that don't provide "equal" time to Republican ideology. He most recently tried to threaten ABC's The View.
Carr's goal isn't equality; it's the disproportionate coddling and normalization of an extremist U.S. right wing political movement that's increasingly despised by the actual public. It's also an attempt to create a climate where media giants are afraid to host voices critical of the president for fear of being drawn into expensive legal battles, even though Carr has little hope of actually winning any.
The "equal time" rule is a dated relic that would be largely impossible for the Trump court-eviscerated FCC to actually enforce. The rule was originally created to apply specifically to political candidate appearances on broadcast television, since back then, pre-internet, a TV appearance on one of the big three networks could make or break and politician attempting to run for office.
In the years since, the rule has seen numerous exemptions and, with the steady evisceration of the regulatory state by the right wing, is not something viewed as seriously enforceable.
Enter Carr, who is distorting this rule to suggest that it needs to apply to every guest a late-night talk show has. It's a lazy effort by Carr to pretend his censorship effort sits on solid legal footing. It does not. The FCC's lone Democratic Commissioner accurately pointed out that Carr has no authority to do any of this:
CBS is fully protected under the First Amendment to determine what interviews it airs. That makes its decision to yield to political pressure all the more disappointing. Corporate interests cannot justify retreating from airing newsworthy content.
— FCC Commissioner Anna Gomez (@agomezfcc.bsky.social) 2026-02-17T17:07:30.811Z
It's a legal fight that CBS could easily win, but because the network is owned by Trump billionaire donor Larry Ellison, it's too feckless, pathetic, and corrupted to bother. Ellison very clearly purchased CBS (and installed contrarian troll Bari Weiss at CBS News) to create a safe space for right wing interests; one of his first orders of business was firing Colbert last year. His show is now scheduled to end in May.
Ellison has been hoping the Trump DOJ will scuttle Netflix's pending merger with Warner Brothers so that Ellison can further expand his planned media empire with the inclusion of Warner IP, CNN, and HBO. To gain approval of that transaction, CBS lawyers and executives are further incentivized to be abject cowards.
The ham-fisted effort by Trump and his FCC earlobe nibblers will, of course, only act to drive more attention to Colbert's interview with Talarico on YouTube. As of my writing this sentence, the video has over a million and a half views, a number I suspect will be significantly higher in short order. The comment section is filled with people with lots of nice things to say about CBS and Brendan Carr:
Obviously this is an ugly assault on free speech and the First Amendment by a sad and desperate authoritarian government, but at its heart it's just foundationally, historically pathetic. It's also another sad chapter in the embarrassing capitulation of what's left of modern corporate broadcast media, which is positively begging for irrelevance at the hands of more modern alternatives.
To completely understand computer security, it's vital to step outside the fence and think outside the box. Computer security is not just about firewalls, Intrusion Prevention Systems, or antivirus software. It's also about tricking people into doing whatever a hacker wishes. A secure system, network, or infrastructure also depends on informed people. The All-in-One Super-Sized Ethical Hacking Bundle will help you learn to master ethical hacking techniques and methodologies over 14 courses. It's on sale for $28 for a limited time.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
We've been covering Australia's monumentally stupid social media ban for kids under 16 since before it went into effect. We noted how dumb the whole premise was, how the rollout was an immediate mess, how a gambling ad agency helped push the whole thing, and how two massive studies involving 125,000 kids found the entire "social media is inherently harmful" narrative doesn't hold up.
But theory and data are one thing. Now we're getting real-world stories of actual kids being harmed by a law that was supposedly designed to protect them. And wouldn't you know it, the harm is falling hardest on the kids who were already most vulnerable. Just like many people predicted.
The Guardian has a deeply frustrating piece about how Australia's ban is isolating kids with disabilities—the exact population for whom social media often serves as a genuine lifeline.
Meet Indy, a 14-year-old autistic girl who used social media to connect with friends in ways that her disability makes difficult in person:
While some young people were exposed to harmful content and bullying online, for Indy, social media was always a safe space. If she ever came across anything that felt unsafe, she says, she would ask her parents or sisters about it.
"I have autism and mental health things, it's hard making friends in real life for me," she says. "My online friends were easier because I can communicate in my own time and think about what I want to say. My social media was my main way of socialising and without it I feel like I've lost my friends."
As the article notes, the ban started just as schools in Australia let out for the summer, just when kids would generally use communications systems like social media to stay in touch with friends.
"I didn't have all my friends' phone numbers because we mostly talked on Snapchat and Instagram. When I lost everything I all of a sudden couldn't talk to them at all, that's made me feel very lonely and not connected," she says.
"Being banned feels unfair because it takes away something that helped me cope, where I could be myself and feel like I had friends who liked me for being myself."
This is exactly what critics pretty much across the board warned would happen. Social media isn't just "distraction" or "screen time" for many young people with disabilities—it's their primary social infrastructure.
Advocacy group Children and Young People with Disability Australia (CYDA) says social media and the internet is "often a lifeline for young people with disability, providing one of the few truly accessible ways to build connections and find community".
In a submission to the Senate inquiry around the laws, CYDA said social media was: "a place where young people can choose how they want to represent themselves and their disability and learn from others going through similar things".
"It provides an avenue to experiment and find new opportunities and can help lessen the sting of loneliness," the submission said. "Cutting off that access ignores the lived reality of thousands and risks isolating disabled youth from their peer networks and broader society."
This goes beyond people with disabilities, certainly, but the damage done to that community is even clearer than with some others. We were among those who warned advocates of an age ban that nearly every study shows social media helps some kids, is neutral for many, and is harmful for some others. The evidence suggests the harmed group is less than 5% of kids. We should do what we can to help those kids, but it's astounding that politicians, advocates, and the media don't seem to care about those now harmed by these bans:
Isabella Choate, CEO of WA's Youth Disability Network (YDAN), says they are concerned that young people with disability have been disproportionately affected by losing access to online communities. "Young people with disability are already isolated from community [and] often do not have capacity to find alternative pathways to connection," Choate says.
"Losing access to community with no practical plan for supporting young people has in fact not reduced the online risk of harm and has simultaneously increased risk for young people's wellbeing."
A few years back we highlighted a massive meta study on children and social media that suggested the real issue for kids was the lack of "third spaces" where kids could be kids. That had pushed many into social media, because they had few unsupervised places where they could just hang out with their friends. Social media became a digitally intermediated third space. And now the adults are taking that away as well.
Ezra Sholl is a 15-year-old Victorian teenager and disability advocate. His accounts have not yet been shut down, but he says if they were, it would mean "losing access to a key part" of his social life.
"As a teenager with a severe disability, social media gives me an avenue to connect with my friends and have access to communities with similar interests," Ezra says.
"Having a severe disability can be isolating, social media makes me feel less alone."
There's a pattern here: every time kids find a space to gather—malls, arcades, now social media—a moral panic emerges and policymakers move to shut it down. It's almost as if adults just don't want kids to gather with each other anywhere at all. But the kids still figure out ways to gather.
As Ezra notes in that Guardian piece, most kids are just… bypassing the whole thing anyway:
But he adds that many of his friends have also evaded the ban, either because their original account was not picked up in age verification sweeps or because they started a new one.
"Those that were asked to prove their age just did facial ID and passed, others weren't asked at all and weren't kicked off," Ezra says.
So the kids who follow the rules, or whose parents enforce them, lose their support networks. The kids who figure out the trivially easy workarounds keep right on using social media. And the politicians get to take victory laps about "protecting children" while the most vulnerable kids pay the price.
It doesn't seem like a very good system.
Remember, this is the same Australia where that recent study found social media's relationship with teen well-being is U-shaped—moderate use is associated with the best outcomes, while no use (especially for older teenage boys) is associated with worse outcomes than even heavy use. Australia's ban is taking kids who might have been moderate users with good outcomes and forcing them into the "no use" category that the research associates with worse well-being. Even if you're cautious about inferring causation from that correlation, it should, at minimum, give policymakers pause before assuming that less social media automatically means better outcomes.
And yet, the folks who pushed this ban remain unrepentant. The Guardian quotes Dany Elachi, founder of the Heads Up Alliance (one of the parent groups that advocated for the ban), taking credit for starting the "debate" and saying that it's a "win" in his book that kids are suffering now, because… that's part of the debate, I guess?
"So the fact that this was a debate that was front and centre for over a year means that the message got through to every parent in the country, and from that perspective alone I count it as a win," Elachi says. "What happens further from that is a bonus, we are trying to change the social norm and that takes years."
He's essentially shrugging off the actual harms as collateral damage, which is quite incredible, because you know that he would be screaming loudly about it if any tech company ever suggested any harms to kids on social media were collateral damage.
"Ultimately we don't want to have platforms policing what is going on, we just want parents themselves to say 'this is not good for you' to their twelve or thirteen year old children, and saying the new standard is that we don't get on social media until we're 16 - just like we don't think twice about not giving cigarettes to kids any more or about not giving them alcohol to drink in early teens."
Right. Except the law doesn't let parents make that call. It makes it for them. That's… the entire point of the ban. Parents who think their autistic kid benefits from social media connections don't get to decide their kid can keep using it. The government has decided for them.
This is what happens when you build policy on moral panic instead of evidence. You end up with a law that:
- Cuts off support networks for kids with disabilities
- Does nothing about the kids who just bypass it
- Ignores the actual research on what helps and harms young people
- Was pushed by an advertising agency that makes gambling ads
- Lets politicians claim victory while vulnerable kids suffer
But sure, think of the children.
Here we go again.
The Trump FTC has threatened Apple and CEO Tim Cook with a fake investigation claiming that Apple News doesn't do a good enough job coddling right wing, Trump-friendly ideology.
The announcement and associated letter pretend that Apple is violating Section 5 of the FTC Act (which "prohibits unfair or deceptive acts or practices") because it's not giving right wing propaganda outlets the same visibility as other media in the Apple News feed (which the letter falsely claims are "left wing"):
"Recently, there have been reports that Apple News has systematically promoted news
articles from left-wing news outlets and suppressed news articles from more conservative
publications. Indeed, multiple studies have found that in recent months Apple News has chosen not to feature a single article from an American conservative-leaning news source, while simultaneously promoting hundreds of articles from liberal publications."
This is all gibberish and bullshit. Their primary evidence is a shitty article from Rupert Murdoch's right wing rag The New York Post, which in turn leans on a laughable study by the right wing Media Research Center. That "study" looked at a small sample size of 620 articles promoted by Apple News, randomly and arbitrarily declared 440 of them as having a "liberal bias," and then concluded Apple was up to no good.
Among the outlets derided as "liberal" sit papers like the Washington Post, which has been tripping over itself to appease Trump and become, very obviously, more right wing and corporatist than ever under its owner Jeff Bezos, who recently vastly overpaid Donald Trump's wife to make a "documentary" about her.
The FTC's fake investigation obviously violates the First Amendment. Even if it were true that Apple was biased in what sources it had in Apple News (which the evidence doesn't actually support), that's… still legal, based on Apple's First Amendment rights. If the Biden FTC had gone after Fox News for "anti-liberal bias," everyone (including many Democrats) would have called out the obvious First Amendment problem. But even ignoring the First Amendment problems of all this, claiming that this is covered by Section 5 is laughable. I've watched for years as the FTC has struggled to legally defend genuine investigations into obvious corporate instances of very clear fraud and still come out on the losing end due to the murky construction of the law.
This inquiry has no legal legs to stand on.
I suspect FTC boss Andrew Ferguson is leaving soon and wanted an opportunity to put his name in lights across the right wing propaganda echoplex as somebody who is "doing something to combat the wokes" with a phony investigation, much like the FCC's Brendan Carr does. It's likely this is mostly being driven by partisan ambition.
There doesn't need to be any supporting legal evidence (or, hell, even an actual investigation); the point is to have the growing parade of right-wing friendly media make it appear as if key MAGA zealots are doing useful things in service of the cause. And to threaten companies with costly and pointless headaches if they don't pathetically bend the knee to Trumpism (which Cook has been very good at so far).
So while the "investigation" may be completely bogus, the threat of it still has a dangerous impact on free expression in a country staring down the barrel of authoritarianism. Somewhere, Tim Cook is shopping around for another shiny bauble to throw at the feet of our mad, idiot king.
Here's where I'll mention that if you ask an actual, objective media scholar here on planet Earth, they'll be quick to inform you that U.S. media and journalism pretty consistently have a center-right, corporatist bias.
As the ad-driven U.S. media consolidates under corporate control, it largely functions less and less as a venue for real journalism and informed democratic consensus, and more as either an infotainment distraction mechanism to keep the plebs busy, or as a purveyor of corporate-friendly agitprop that coddles the narratives surrounding unchecked wealth accumulation by the extraction class.
From the Washington Post to CBS, from Twitter to TikTok, to consolidation among local right wing broadcasters, the U.S. right wing is very clearly buying up U.S. media in the pursuit of the same sort of autocratic state television we've seen arise in countries like Russia and Hungary.
This effort is propped up by an endless barrage of claims that the already corporatist, center-right U.S. press is secretly left wing, and that the only solution is to shift the editorial Overton window even further to the right. These folks genuinely will not be satisfied until the entirety of U.S. media resembles the sort of fawning, mindless agitprop we see in countries like North Korea.
This is not hyperbole. They're building it right in front of your noses. It's yet to be seen if fans of free speech, democratic norms, and objective reality can muster any sort of useful resistance.
This week, our first place winner on the insightful side is MrWilson with a comment about MAGA doing things "for the children":
If conservatives stopped thinking about children so much, the children would be better off and much safer.
In second place, it's an anonymous comment inserting a little optimism into the fear that Section 230 is not long for this world:
Keep in mind that a lot of commenters here did just the same at the 25th anniversary. (Myself included, but not publicly.) All is not yet lost.
For editor's choice on the insightful side, we start out with the comment that sparked the first place winner above, which was actually a reply to Heart of Dawn's comment listing some examples:
Between this, Epstein and his cohorts, being anti-vax and anti-science, doing nothing about gun control, preventing queer kids from learning about themselves and getting support, the abolishment of the Department of Education- when these people "think of the children" it's in the most cruel and callous way possible.
Next, it's a comment from Citizen about the 5th circuit ruling that only citizens get due process rights:
Catch-22?
So if ICE grabs me and whisks me off to a detention center in Texas, how exactly would I go about proving my citizenship and getting released? According to ICE in this hypothetical scenario, I'm not a citizen, and according to the Fifth Circuit, that means I have no due process rights, meaning I can't contest ICE's claim, correct? Unless I'm missing something here, in this hypothetical scenario, any citizen grabbed by mistake — or, God forbid, grabbed by "mistake" — can only be released if ICE chooses to admit that they're a citizen.
Over on the funny side, our first place winner is Mars42 with a comment about the disastrous data leak by an AI toy company:
I have always been told that the "S" in IOT stands for security.
In second place, it's an anonymous comment from one of several people who were not a fan of a guest post from R Street this week:
How do we flag an article for being trolling/spam?
For editor's choice on the funny side, we start out with another anonymous comment, this time on our post about RFK Jr. apparently lying to congress about his 2019 trip to Samoa:
Maybe it will save time to just note when the US government tells the truth
Finally, it's Thad with a quip on our post about NBC hiding the crowd reaction that JD Vance garnered at the Winter Olympics:
Fake boos.
That's all for this week, folks!
Five Years Ago
As you probably know, we marked the 30th anniversary of Section 230 this week, so it's not surprising that this same week in 2021 we were celebrating its 25th anniversary with a special online event where we were joined by Chris Cox and Ron Wyden. We also wrote about the many reasons to celebrate the law and explained how it lets tech companies fix content moderation issues, and how to think about 230 in the context of online advertising. Plus, we celebrated the matching anniversary of the Declaration Of The Independence Of Cyberspace by John Perry Barlow. Of course, none of that stopped the GOP from rolling out a dumb new talking point saying 230 should be killed if net neutrality happens, nor did it stop Orrin Hatch from telling flat out lies about what 230 does.
Ten Years Ago
This week in 2016 we were, of course, celebrating the 20th anniversary of Section 230 and doing the same for the aforementioned Declaration. We also looked at the impact of Title II regulation a year after the many doomsayer predictions about it. Meanwhile, Warner/Chappell had to pay up in the lawsuit over the Happy Birthday copyright while the plaintiffs began seeking to declare the song in the public domain, Honda got hit with the Streisand Effect in its attempt to get Jalopnik to dox a commenter, a judge changed their mind and allowed James Woods to unmask a Twitter user who made fun of him, and Techdirt received (and rebuffed) yet another bogus legal threat from Australia.
Fifteen Years Ago
This week in 2011, while Righthaven was going after a new target that had a strong fair use case, we wondered what the shutdowns of ACS:Law and MediaCAT meant for the future of the US Copyright Group, just as the latter was teaming up with the producers of The Expendables to shake down thousands of people. Meanwhile, a report from IP Czar Victoria Espinel was little more than a list of lobbyist talking points, the MPAA filed a surprisingly weak billion dollar lawsuit against Hotfile, the US Chamber of Commerce was calling for more censorship and more IP protectionism, and a bizarre opinion piece in NME claimed the recent takeover of EMI by Citigroup was proof that file sharing had "murdered the music business".
Back in 2023, we talked about a strange trademark dispute out of the UK concerning oat-based milk products. Specifically, Oatly, a large producer of oat milk, applied for a trademark in the UK for its slogan, "Post Milk Generation." Dairy UK, a lobbying organization representing dairy farmers in the country, opposed the trademark in the application stage, arguing that a UK regulation prevented any company from using the word "milk" in conjunction with "products that are not mammary secretions." Oatly successfully argued that its slogan did not run afoul of the regulation because it was not suggesting that its product was milk; instead, the slogan described the consumers of Oatly's product, the generation that was moving beyond milk. In other words, there was no association being made with milk here; in fact, the opposite was the messaging.
That should have been the end of this nonsense. Instead, Dairy UK appealed that decision and the London Court of Appeal reversed the lower court's decision. Suddenly, Oatly could not trademark the slogan, nor use it on its products, ostensibly.
Oatly stated that the reversing of the decision was absurd and clearly a ploy by Dairy UK to limit competition with its members. The company appealed up to the UK Supreme Court which, amazingly, affirmed that Oatly cannot have its slogan trademarked.
The UK Supreme Court has unanimously ruled that Oatly cannot use its "Post Milk Generation" trademark on oat-based food and drink, handing a landmark victory to the dairy industry, as it contends with record-low farm numbers, falling retail volumes, and collapsing wholesale prices.
The judgment arrives at a precarious moment for British dairy. The number of British dairy farms has fallen to a record low of 7,010 — an 85% decline from an estimated 46,000 in 1980, according to industry estimates and the Agriculture and Horticulture Development Board (AHDB).
It's hard to see this as anything other than a national-level court falling all over itself to protect a domestic industry from foreign competition. The explanation the court offered for its decision is equally confusing. For one, while Oatly pointed out again that its use of the word "milk" in the slogan is not describing the product, but the consumer, the court said that doesn't matter at all. The word instead simply suffers from a blanket ban on any marketing or trade dress for products that don't come from a nipple.
Then, when Oatly also points out that its use obliquely informs the public that the product does not contain milk — hence the "post milk generation" language — the court responds that because Oatly has stated that the slogan doesn't describe the product, any insinuation about the product itself doesn't count, as it's not direct and clear enough.
Second: even if the word "milk" is caught, is Oatly saved by an exception that allows protected terms when they "clearly" describe a quality of the product, such as being milk-free? Again, the court said no. Lords Hamblen and Burrows, writing for the unanimous panel of five justices, held that the slogan describes a type of consumer — younger people turning away from dairy — rather than anything about the product itself.
Even if it could be read as referencing a milk-free quality, it does so in an "oblique and obscure way" that fails to clarify whether the product is entirely milk-free or merely low in dairy content.
This is the court acknowledging explicitly that Oatly's slogan is not describing the product, but the consumer. It also claims that a slogan that describes a consumer that has moved beyond milk isn't clear enough as to whether the product is sufficiently non-milk. What?
All the court has demonstrated is that Oatly is definitely not trying to call its product milk and is not trying to confuse anyone with its slogan. For that, Oatly doesn't get its trademark.
Again, the lobbying efforts here are quite clear. And they appear to have influenced the court's decision. In fact, what Dairy UK is trying to restrict goes well beyond the word "milk" to the point of absurdity.
The Supreme Court ruling emerged from years of lobbying action. An investigation by Greenpeace's Unearthed, based on documents obtained through disclosure, revealed that Dairy UK had been lobbying for tighter enforcement of dairy term protections since at least 2017.
Committee meeting notes showed the association presented "the issue of misuse of protected dairy terms" to a Business Experts Group panel and was subsequently tasked by Defra with developing a briefing paper for the Food Standards Information Focus Group (FSIG).
Dairy UK submitted a position paper to Defra in November 2022, backing FSIG draft proposals that would have gone significantly further — banning descriptors such as "yoghurt-style," homophones like "mylk," and even phrases like "not milk." Forty-four plant-based companies and NGOs, including Alpro, Oatly, Quorn, and the Good Food Institute, co-signed an open letter opposing the restrictions.
If we've reached the point at which a company that doesn't produce milk can't point out on its trade dress that its product is "not milk", then we've crossed the Rubicon into a land of dumb.
Was the court solely looking to protect suffering UK dairy farmers in its decision? I can't say so for sure. But what is very clear is that nothing in its decision has anything to do with protecting the public from deception, which is the entire point of trademark law to begin with.
Copyright owners increasingly claim more draconian copyright law and policy will fight back against big tech companies. In reality, copyright gives the most powerful companies even more control over creators and competitors. Today's copyright policy concentrates power among a handful of corporate gatekeepers—at everyone else's expense. We need a system that supports grassroots innovation and emerging creators by lowering barriers to entry—ultimately offering all of us a wider variety of choices.
Pro-monopoly regulation through copyright won't provide any meaningful economic support for vulnerable artists and creators. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is like trying to help a bullied kid by giving them more lunch money for the bully to take.
Entertainment companies' historical practices bear out this concern. For example, from the late 2000s to the mid 2010s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There's no reason to think that these same companies would treat their artists more fairly now.
AI Training
In the AI era, copyright may seem like a good way to prevent big tech from profiting from AI at individual creators' expense—it's not. In fact, the opposite is true. Developing a large language model requires developers to train the model on millions of works. Requiring developers to license enough AI training data to build a large language model would limit competition to all but the largest corporations—those that either have their own trove of training data or can afford to strike a deal with one that does. This would result in all the usual harms of limited competition, like higher costs, worse service, and heightened security risks. It would also choke off new, beneficial AI tools that allow people to express themselves or access information.
Legacy gatekeepers have already used copyright to stifle access to information and the creation of new tools for understanding it. Consider, for example, Thomson Reuters v. Ross Intelligence, the first of many copyright lawsuits over the use of works to train AI. ROSS Intelligence was a legal research startup that built an AI-based tool to compete with ubiquitous legal research platforms like Lexis and Thomson Reuters' Westlaw. ROSS trained its tool using "West headnotes" that Thomson Reuters adds to the legal decisions it publishes, paraphrasing the individual legal conclusions (what lawyers call "holdings") that the headnotes identified. The tool didn't output any of the headnotes, but Thomson Reuters sued ROSS anyway. A federal appeals court is still considering the key copyright issues in the case—which EFF weighed in on last year. EFF hopes that the appeals court will reject this overbroad interpretation of copyright law. But in the meantime, the case has already forced the startup out of business, eliminating a would-be competitor that might have helped increase access to the law.
Requiring developers to license AI training materials benefits tech monopolists as well. For giant tech companies that can afford to pay, pricey licensing deals offer a way to lock in their dominant positions in the generative AI market by creating prohibitive barriers to entry. The cost of licensing enough works to train an LLM would be prohibitively expensive for most would-be competitors.
The DMCA's "Anti-Circumvention" ProvisionThe Digital Millennium Copyright Act's "anti-circumvention" provision is another case in point. Congress ostensibly passed the DMCA to discourage would-be infringers from defeating Digital Rights Management (DRM) and other access controls and copy restrictions on creative works.
In practice, it's done little to deter infringement—after all, large-scale infringement already invites massive legal penalties. Instead, Section 1201 has been used to block competition and innovation in everything from printer cartridges to garage door openers, videogame console accessories, and computer maintenance services. It's been used to threaten hobbyists who wanted to make their devices and games work better. And the problem only gets worse as software shows up in more and more places, from phones to cars to refrigerators to farm equipment. If that software is locked up behind DRM, interoperating with it so you can offer add-on services may require circumvention. As a result, manufacturers get complete control over their products, long after they are purchased, and can even shut down secondary markets (as Lexmark did for printer ink, and Microsoft tried to do for Xbox memory cards).
Giving rights holders a veto on new competition and innovation hurts consumers. Instead, we need balanced copyright policy that rewards creators without impeding competition.
Republished from the EFF's Deeplinks blog.
A California police department is none too happy that its license plate reader records were accessed by federal employees it never gave explicit permission to peruse. And, once again, it's Flock Safety shrugging itself into another PR black eye.
Mountain View police criticized the company supplying its automated license plate reader system after an audit turned up "unauthorized" use by federal law enforcement agencies.
At least six offices of four agencies accessed data from the first camera in the city's Flock Safety license-tracking system from August to November 2024 without the police department's permission or knowledge, according to a press release Friday night.
Flock has been swimming in a cesspool of its own making for several months now, thanks to it being the public face of "How To Hunt Down Someone Who Wanted An Abortion." That debacle was followed by even more negative press (and congressional rebuke) for its apparent unwillingness to place any limits at all on access to the hundreds of millions of license plate records its cameras have captured, including those owned by private individuals.
Mountain View is in California. And that's only one problem with everything in this paragraph:
The city said its system was accessed by Bureau of Alcohol, Tobacco, Firearms and Explosives offices in Kentucky and Tennessee, which investigate crimes related to guns, explosives, arson and the illegal trafficking of alcohol and tobacco; the inspector general's office of the U.S. General Services Administration, which manages federal buildings, procurement, and property; Air Force bases in Langley, Virginia, and in Ohio; and the Lake Mead National Recreation Area in Nevada.
Imagine trying to explain this to anyone. While it's somewhat understandable that the ATF might be running nationwide searches on Flock's platform, it's almost impossible to explain why images captured by a single camera in Mountain View, California were accessed by the Inspector General for the GSA, much less Lake Mead Recreation Area staffers.
This explains how this happened. But it doesn't do anything to explain why.
They accessed Mountain View's system for one camera via a "nationwide" search setting that was turned on by Flock Safety, police said.
Apparently, this is neither opt-in nor opt-out. It just is. The Mountain View police said they "worked closely" with Flock to block out-of-state access, as well as limit internal access to searches expressly approved by the department's police chief.
Flock doesn't seem to care what its customers want. Either it can't do what this department asked or it simply chose not to because a system that can't be accessed by government randos scattered around the nation is much tougher to sell than a locked-down portal that actually serves the needs of the people paying for it.
And that tracks with Ron Wyden's criticism of the company in the letter he wrote to Flock last October:
The privacy protection that Flock promised to Oregonians — that Flock software will automatically examine the reason provided by law enforcement officers for terms indicating an abortion- or immigration-related search — is meaningless when law enforcement officials provide generic reasons like "investigation" or "crime." Likewise, Flock's filters are meaningless if no reason for a search is provided in the first place. While the search reasons collected by Flock, obtained by press and activists through open records requests, have occasionally revealed searches for immigration and abortion enforcement, these are likely just the tip of the iceberg. Presumably, most officers using Flock to hunt down immigrants and women who have received abortions are not going to type that in as the reason for their search. And, regardless, given that Flock has washed its hands of any obligation to audit its customers, Flock customers have no reason to trust a search reason provided by another agency.
I now believe that abuses of your product are not only likely but inevitable, and that Flock is unable and uninterested in preventing them.
Flock just keeps making Wyden's points for him. The PD wanted limited access with actual oversight. Flock gave the PD a lending library of license plate/location images anyone with or without a library card (so to speak) could check out at will. Flock is part of the surveillance problem. And it's clear it's happy being a tool that can be readily and easily abused, no matter what its paying customers actually want from its technology.
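Wyden's point about generic search reasons is easy to demonstrate. Consider any filter that scans a free-text "reason" field for sensitive keywords (keyword matching is my assumption here; Flock hasn't published how its screening actually works). A minimal sketch:

```python
# Toy keyword screen over a free-text search "reason" field. This is an
# illustration of the approach Wyden criticizes, not Flock's actual code.
SENSITIVE_TERMS = {"abortion", "immigration", "deportation"}

def reason_is_flagged(reason: str) -> bool:
    """Flag a search whose stated reason mentions a sensitive term."""
    text = reason.lower()
    return any(term in text for term in SENSITIVE_TERMS)

print(reason_is_flagged("abortion-related records check"))  # True: caught
print(reason_is_flagged("investigation"))                   # False: sails through
print(reason_is_flagged(""))                                # False: no reason given at all
```

An officer who types "investigation", or nothing at all, never trips the filter, which is exactly why Wyden calls the promised protection meaningless.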
Last fall, I wrote about how the fear of AI was leading us to wall off the open internet in ways that would hurt everyone. At the time, I was worried about how companies were conflating legitimate concerns about bulk AI training with basic web accessibility. Not surprisingly, the situation has gotten worse. Now major news publishers are actively blocking the Internet Archive—one of the most important cultural preservation projects on the internet—because they're worried AI companies might use it as a sneaky "backdoor" to access their content.
This is a mistake we're going to regret for generations.
Nieman Lab reports that The Guardian, The New York Times, and others are now limiting what the Internet Archive can crawl and preserve:
When The Guardian took a look at who was trying to extract its content, access logs revealed that the Internet Archive was a frequent crawler, said Robert Hahn, head of business affairs and licensing. The publisher decided to limit the Internet Archive's access to published articles, minimizing the chance that AI companies might scrape its content via the nonprofit's repository of over one trillion webpage snapshots.
Specifically, Hahn said The Guardian has taken steps to exclude itself from the Internet Archive's APIs and filter out its article pages from the Wayback Machine's URLs interface. The Guardian's regional homepages, topic pages, and other landing pages will continue to appear in the Wayback Machine.
The Times has gone even further:
The New York Times confirmed to Nieman Lab that it's actively "hard blocking" the Internet Archive's crawlers. At the end of 2025, the Times also added one of those crawlers — archive.org_bot — to its robots.txt file, disallowing access to its content.
"We believe in the value of The New York Times's human-led journalism and always want to ensure that our IP is being accessed and used lawfully," said a Times spokesperson. "We are blocking the Internet Archive's bot from accessing the Times because the Wayback Machine provides unfettered access to Times content — including by AI companies — without authorization."
I understand the concern here. I really do. News publishers are struggling, and watching AI companies hoover up their content to train models that might then, in some ways, compete with them for readers is genuinely frustrating. I run a publication myself, remember.
But blocking the Internet Archive isn't going to stop AI training. What it will do is ensure that significant chunks of our journalistic record and historical cultural context simply… disappear.
And that's bad.
The Internet Archive is the most famous nonprofit digital library, and has been operating for nearly three decades. It isn't some fly-by-night operation looking to profit off publisher content. It's trying to preserve the historical record of the internet—which is way more fragile than most people comprehend. When websites disappear—and they disappear constantly—the Wayback Machine is often the only place that content still exists. Researchers, historians, journalists, and ordinary citizens rely on it to understand what actually happened, what was actually said, what the world actually looked like at a given moment.
In a digital era when few things end up printed on paper, the Internet Archive's efforts to permanently preserve our digital culture are essential infrastructure for anyone who cares about historical memory.
And now we're telling them they can't preserve the work of our most trusted publications.
Think about what this could mean in practice. Future historians trying to understand 2025 will have access to archived versions of random blogs, sketchy content farms, and conspiracy sites—but not The New York Times. Not The Guardian. Not the publications that we consider the most reliable record of what's happening in the world. We're creating a historical record that's systematically biased against quality journalism.
Yes, I'm sure some will argue that the NY Times and The Guardian will never go away. Tell that to the readers of the Rocky Mountain News, which published for 150 years before shutting down in 2009, or to the 2,100+ newspapers that have closed since 2004. Institutions—even big, prominent, established ones—don't necessarily last.
As one computer scientist quoted in the Nieman piece put it:
"Common Crawl and Internet Archive are widely considered to be the 'good guys' and are used by 'the bad guys' like OpenAI," said Michael Nelson, a computer scientist and professor at Old Dominion University. "In everyone's aversion to not be controlled by LLMs, I think the good guys are collateral damage."
That's exactly right. In our rush to punish AI companies, we're destroying public goods that serve everyone.
The most frustrating bit of all of this: The Guardian admits they haven't actually documented AI companies scraping their content through the Wayback Machine. This is purely precautionary and theoretical. They're breaking historical preservation based on a hypothetical threat:
The Guardian hasn't documented specific instances of its webpages being scraped by AI companies via the Wayback Machine. Instead, it's taking these measures proactively and is working directly with the Internet Archive to implement the changes.
And, of course, as one of the "good guys" of the internet, the Internet Archive is willing to do exactly what these publishers want. They've always been good about removing content or not scraping content that people don't want in the archive. Sometimes to a fault. But you can never (legitimately) accuse them of malicious archiving (even if music labels and book publishers have).
Either way, we're sacrificing the historical record not because of proven harm, but because publishers are worried about what might happen. That's a hell of a tradeoff.
This isn't even new, of course. Last year, Reddit announced it would block the Internet Archive from archiving its forums—decades of human conversation and cultural history—because Reddit wanted to monetize that content through AI licensing deals. The reasoning was the same: can't let the Wayback Machine become a backdoor for AI companies to access content Reddit is now selling. But once you start going down that path, it leads to bad places.
The Nieman piece notes that, in the case of USA Today/Gannett, it appears that there was a company-wide decision to tell the Internet Archive to get lost:
In total, 241 news sites from nine countries explicitly disallow at least one out of the four Internet Archive crawling bots.
Most of those sites (87%) are owned by USA Today Co., the largest newspaper conglomerate in the United States formerly known as Gannett. (Gannett sites only make up 18% of Welsh's original publishers list.) Each Gannett-owned outlet in our dataset disallows the same two bots: "archive.org_bot" and "ia_archiver-web.archive.org". These bots were added to the robots.txt files of Gannett-owned publications in 2025.
Some Gannett sites have also taken stronger measures to guard their contents from Internet Archive crawlers. URL searches for the Des Moines Register in the Wayback Machine return a message that says, "Sorry. This URL has been excluded from the Wayback Machine."
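For what it's worth, the blocking mechanism described there is about as low-tech as the web gets: robots.txt is just a plain text file of per-crawler directives that well-behaved bots voluntarily honor. Going by the bot names quoted above, the relevant entries presumably look something like this (a reconstruction for illustration, not the actual Gannett file):

```
# Asks the Internet Archive's crawlers to stay away from the whole site.
# Compliance is voluntary: robots.txt is a request, not an access control.
User-agent: archive.org_bot
Disallow: /

User-agent: ia_archiver-web.archive.org
Disallow: /
```

Which is also why the Times pairs its robots.txt entry with server-side "hard blocking": a bot that ignores the file can still crawl anything that isn't actively refused.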
A Gannett spokesperson told NiemanLab that it was about "safeguarding our intellectual property" but that's nonsense. The whole point of libraries and archives is to preserve such content, and they've always preserved materials that were protected by copyright law. The claim that they have to be blocked to safeguard such content is both technologically and historically illiterate.
And here's the extra irony: blocking these crawlers may not even serve publishers' long-term interests. As I noted in my earlier piece, as more search becomes AI-mediated (whether you like it or not), being absent from training datasets increasingly means being absent from results. It's a bit crazy to think about how much effort publishers put into "search engine optimization" over the years, only to now block the crawlers that feed the systems a growing number of people are using for search. Publishers blocking archival crawlers aren't just sacrificing the historical record—they may be making themselves invisible in the systems that increasingly determine how people discover content in the first place.
The Internet Archive's founder, Brewster Kahle, has been trying to sound the alarm:
"If publishers limit libraries, like the Internet Archive, then the public will have less access to the historical record."
But that warning doesn't seem to be getting through. The panic about AI has become so intense that people are willing to sacrifice core internet infrastructure to address it.
What makes this particularly frustrating is that the internet's openness was never supposed to have asterisks. The fundamental promise wasn't "publish something and it's accessible to all, except for technologies we decide we don't like." It was just… open. You put something on the public web, people can access it. That simplicity is what made the web transformative.
Now we're carving out exceptions based on who might access content and what they might do with it. And once you start making those exceptions, where do they end? If the Internet Archive can be blocked because AI companies might use it, what about research databases? What about accessibility tools that help visually impaired users? What about the next technology we haven't invented yet?
This is a real concern. People say "oh well, blocking machines is different from blocking humans," but that's exactly why I mention assistive tech for the visually impaired. Machines accessing content are frequently tools that help humans—including me. I use an AI tool to help fact check my articles, and part of that process involves feeding it the source links. But increasingly, the tool tells me it can't access those articles to verify whether my coverage accurately reflects them.
I don't have a clean answer here. Publishers genuinely need to find sustainable business models, and watching their work get ingested by AI systems without compensation is a legitimate grievance—especially when you see how much traffic some of these (usually less scrupulous) crawlers dump on sites. But the solution can't be to break the historical record of the internet. It can't be to ensure that our most trusted sources of information are the ones that disappear from archives while the least trustworthy ones remain.
We need to find ways to address AI training concerns that don't require us to abandon the principle of an open, preservable web. Because right now, we're building a future where historians, researchers, and citizens can't access the journalism that documented our era. And that's not a tradeoff any of us should be comfortable with.
The Trump administration has fired one of the few remaining members of the administration that had even a passing interest in antitrust enforcement. DOJ antitrust boss Gail Slater has been fired from the administration after having repeated contentious run-ins with key officials. It's the final nail in the coffin of the long-running lie that MAGA ever seriously cared about reining in unchecked corporate power.
Slater's post to Elon Musk's right wing propaganda website was amicable.
But numerous media reports indicate that Slater's sporadic efforts to actually engage in antitrust enforcement consistently angered a "den of vipers" (including AG Pam Bondi and JD Vance). Some of the friction purportedly involved Bondi being angry Slater was directing merging companies to deal directly with DOJ officials and not Trump's weird corruption colorguard. Other disputes were more petty:
"Tensions between Bondi and Slater extended beyond the merger. Last year, Slater planned to go to a conference in Paris - as her predecessors had done and as is required under a treaty to which the United States is a party.
But Bondi denied Slater's request to travel on account of the cost. When Slater went to the conference anyway, Bondi cancelled her government credit cards, the people said."
Mike and I had both noted that there had been signs of this fracture for a while. Slater was still a MAGA true believer. Before Google's antitrust trial last year, she gave a speech full of MAGA culture war nonsense about how Google was trying to censor conservatives. She seemed happy to use the power of the government to punish those deemed enemies of the MAGA movement for the sake of the culture war. However, what she seemed opposed to was the growing trend within the MAGA movement of deciding antitrust questions based on which side hired more of Trump's friends to work on their behalf.
First, when the DOJ rubber-stamped a T-Mobile merger some officials clearly didn't want to approve (the approval was full of passive-aggressive language making it very clear the deal wasn't good for consumers or markets), there were signs of friction. Later, when Slater wanted to block a $14 billion merger between Hewlett Packard Enterprise and Juniper Networks, it became clear that the Trump admin's antitrust policy was entirely pay-for-play, which was apparently a step too far for Slater. I've also heard some insiders haven't been thrilled with the Trump administration's plan to destroy whatever's left of media consolidation limits to the benefit of right wing broadcasters.
Amusingly and curiously, there are apparently people surprised by the fact that an actual antitrust-supporting Republican couldn't survive the grotesque pay-to-play corruption of the Trump administration. That includes Politico, an outlet that spent much of the last two years propping up the lie that Trump and MAGA Republicans had done a good-faith 180 on antitrust.
When I read that headline my eyes rolled out of my fucking head.
I had tried to warn people repeatedly over the last four years that the Trump support for "antitrust reform" was always a lie. Even nominally pro-antitrust reform officials like Slater tend to inhabit the "free market Libertarian" part of the spectrum where their interest in reining in unchecked corporate power is inconsistent at best. And even these folks were never going to align with Trump's self-serving corruption.
Yet one of the larger Trump election season lies was that Trump 2.0 would be "serious about antitrust," and protect blue collar Americans from corporate predation. There were endless lies about how MAGA was going to "rein in big tech," and how the administration's purportedly legitimate populism would guarantee something of a continuation of the Lina Khan efforts at the FTC.
In reality MAGA was always about one thing: Donald Trump's power and wealth. These sorts of egomaniacal autocrats exploit existing corruption and institutional failure to ride into office on the back of fake populism, pretending they alone can fix it, then once entrenched introduce something far worse. The administration's "anti-war," "anti-corporate," and "anti-corruption" rhetoric is all part of the same lie.
It's worth reminding folks that MAGA's phony antitrust bona fides weren't just a lie pushed by MAGA.
It was propped up by countless major media outlets (including Reuters, CNN, and Politico) that claimed the GOP had suddenly done a 180 on things like monopolization. Even purportedly "progressive antitrust experts" like Matt Stoller tried to push this narrative, routinely hyping the nonexistent trust-busting bona fides of obvious hollow opportunists like JD Vance and Josh Hawley.
Surprise! That was all bullshit. Trump's second term has taken an absolute hatchet to federal regulatory autonomy via court ruling, executive order, or captured regulators. His "antitrust enforcers" make companies grovel for merger approval by promising to be more racist and sexist, or pledging to take a giant steaming dump on U.S. journalism and the First Amendment (waves at CBS).
Under Trump 2.0, it's effectively impossible to hold large corporations and our increasingly unhinged oligarchs accountable for literally anything (outside of ruffling Donald's gargantuan ego, or occasionally trying to implement less sexist or racist hiring practices). With that reality as a backdrop, these fleeting, flimsy media-supported pretenses about the legitimacy of "MAGA antitrust" are as dystopian as it gets.
Anybody who enabled (or was surprised by) any of this, especially the journalists at Politico, should probably be sentenced to mandatory community service.
The Hypergear 3-in-1 Wireless Charging Dock is meticulously engineered to reduce cable clutter and streamline your daily routine. With 2 dedicated wireless charging surfaces, you can power up your phone and AirPods easily. In addition, you can charge your Apple Watch with the built-in charger mount. Stylish and compact, the dock is perfect for your tabletop, desk, or nightstand and will effortlessly charge your everyday essentials in one convenient place. It's on sale for $33.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Judge Boasberg got his vindication in the frivolous "complaint" the DOJ filed against him, and now he's calling out the DOJ's bullshit in the long-running case that caused them to file the complaint against him in the first place: the JGG v. Trump case regarding the group of Venezuelans the US government shipped off to CECOT, the notorious Salvadoran concentration camp.
Boasberg, who until last year was generally seen as a fairly generic "law and order" type judge who was extremely deferential to any "national security" claims from the DOJ (John Roberts had him lead the FISA Court, for goodness' sake!), has clearly had enough of this DOJ and the games they've been playing in his court.
In a short but quite incredible ruling, he calls out the DOJ for deciding to effectively ignore the case while telling the court to "pound sand."
On December 22, 2025, this Court issued a Memorandum Opinion finding that the Government had denied due process to a class of Venezuelans it deported to El Salvador last March in defiance of this Court's Order. See J.G.G. v. Trump, 2025 WL 3706685, at *19 (D.D.C. Dec. 22, 2025). The Court offered the Government the opportunity to propose steps that would facilitate hearings for the class members on their habeas corpus claims so that they could "challenge their designations under the [Alien Enemies Act] and the validity of the [President's] Proclamation." Id. Apparently not interested in participating in this process, the Government's responses essentially told the Court to pound sand.
From a former FISC judge—someone who spent years giving national security claims every benefit of the doubt—"pound sand" is practically a primal scream.
Due to this, he orders the government to work to "facilitate the return" of these people it illegally shipped to a foreign concentration camp (that is, assuming any of them actually want to come back).
Believing that other courses would be both more productive and in line with the Supreme Court's requirements outlined in Noem v. Abrego Garcia, 145 S. Ct. 1017 (2025), the Court will now order the Government to facilitate the return from third countries of those Plaintiffs who so desire. It will also permit other Plaintiffs to file their habeas supplements from abroad.
Boasberg references the Donald Trump-led invasion of Venezuela and the unsettled situation there for many of the plaintiffs. He points out that the lawyers for the plaintiffs have been thoughtful and cautious in how they approach this case. That is in contrast to the US government.
Plaintiffs' prudent approach has not been replicated by their Government counterparts. Although the Supreme Court in Abrego Garcia upheld Judge Paula Xinis's order directing the Government "to facilitate and effectuate the return of" that deportee, see 145 S. Ct. at 1018, Defendants at every turn have objected to Plaintiffs' legitimate proposals without offering a single option for remedying the injury that they inflicted upon the deportees or fulfilling their duty as articulated by the Supreme Court.
Boasberg points to the Supreme Court's ruling regarding Kilmar Abrego Garcia, saying that it's ridiculous that the DOJ is pretending that case doesn't exist or doesn't say what it says. Then he points out that the DOJ keeps "flagrantly" disobeying courts.
Against this backdrop, and mindful of the flagrancy of the Government's violations of the deportees' due-process rights that landed Plaintiffs in this situation, the Court refuses to let them languish in the solution-less mire Defendants propose. The Court will thus order Defendants to take several discrete actions that will begin the remedial process for at least some Plaintiffs, as the Supreme Court has required in similar circumstances. It does so while treading lightly, as it must, in the area of foreign affairs. See Abrego Garcia, 145 S. Ct. at 1018 (recognizing "deference owed to the Executive Branch in the conduct of foreign affairs").
Even given all this, the specific remedy is not one that many of the plaintiffs are likely to accept: he orders the US government to facilitate the return of any who want it among those… not in Venezuela. But, since most of them were eventually released from CECOT into Venezuela, that may mean this ruling doesn't really apply to many of the men. On top of that, Boasberg points out that anyone who does qualify and takes up the offer will likely be detained by immigration officials upon getting here. But, if they want, the US government has to pay for their plane flights back to the US. And, in theory, the plaintiffs should then be given the due process they were denied last year.
Plaintiffs also request that such boarding letter include Government payment of the cost of the air travel. Given that the Court has already found that their removal was unlawful — as opposed to the situation contemplated by the cited Directive, which notes that "[f]acilitating an alien's return does not necessarily include funding the alien's travel," Directive 11061.1, ¶ 3.1 (emphasis added) — the Court deems that a reasonable request. It is unclear why Plaintiffs should bear the financial cost of their return in such an instance. See Ms. L. v. U.S. Immig. & Customs Enf't ("ICE"), 2026 WL 313340, at *4 (S.D. Cal. Feb. 5, 2026) (requiring Government to "bear the expense of returning these family units to the United States" given that "[e]ach of the removals was unlawful, and absent the removals, these families would still be in the United States"). It is worth emphasizing that this situation would never have arisen had the Government simply afforded Plaintiffs their constitutional rights before initially deporting them.
I'm guessing not many are eager to re-enter the US and face deportation again. Of course, many of these people left Venezuela for the US in the first place for a reason, so perhaps some will take their chances on coming back. Even against a very vindictive US government.
The frustrating coda here is the lack of any real consequences for DOJ officials who treated this entire proceeding as a joke—declining to seriously participate and essentially daring the court to do something about it. Boasberg could have ordered sanctions. He didn't. And that's probably fine with this DOJ, which has learned that contempt for the courts carries no real cost.
Unfortunately, that may be the real story here. Judge gets fed up, once again, with a DOJ that thumbs its nose at the court, says extraordinary things in a ruling that calls out the DOJ's behavior… but does little that will lead to actual accountability for those involved, beyond having them "lose" the case. We've seen a lot of this, and it's only going to continue until judges figure out how to impose real consequences on DOJ lawyers for treating the court with literal contempt.
U.S. media mergers always follow the same trajectory. Pre-merger, executives promise all manner of amazing synergies and deal benefits. Post-merger, not only do those benefits generally never arrive, the debt from the acquisition spree usually results in significant layoffs, lower quality product, and higher rates for consumers. The Warner Bros. Discovery disaster was the poster child for this phenomenon.
After paying Trump his $16 million bribe, CBS and Skydance (Trump's friends in the Ellison family) recently finalized their $8 billion merger. It didn't take long for the company to announce that the only way it could pay for the debt of the pointless deal is by firing a whole bunch of people in "painful" fashion.
Despite a lot of promises last summer by Paramount executives that the layoffs would come in one fell swoop, CBS News boss Bari Weiss has implemented staggered cuts as she converts what was left of CBS into yet another safe space for right wing autocrats and their dwindling cult.
Apparently "a lot of people" at CBS News are taking Weiss up on a January town hall promise of buyouts for those insufficiently deferential to Larry Ellison's ambitions:
"They include at least six producers out of the show's total of roughly 20, according to another source, who added: "Seems like people are jumping ship."
"It's a lot of people," a CBS insider said."
In her head, I really do think Weiss believes she's reshaping CBS News into a better news organization. In reality, Weiss was specifically hired by billionaire Trump ally Larry Ellison to convert CBS into yet another autocrat-friendly safe space for the perpetually aggrieved.
Weiss' problem to date has been that she's not just bad at management, judgment, and journalism; she's also bad at ratings-grabbing agitprop — the real reason she was hired by billionaires in the first place.
Weiss' inaugural "town hall" with opportunistic right wing grifter Erika Kirk was a ratings dud, her new nightly news broadcast has been an error-prone hot mess, and her murder of a 60 Minutes story about Trump concentration camps — and the network's decision to air a story lying about the ICE murder of Nicole Good — spurred a revolt among the CBS journalists who hadn't quit yet.
Weiss' weird ego trip is playing out alongside the old traditional failures of mindless media consolidation, the last refuge of executives who are all out of original ideas, but desperately want to goose quarterly earnings, generate temporary tax cuts, and get "savvy dealmaker" stamped on their LinkedIn profile.
The thing is, merger-related promises both before and after the deal are always meaningless. The layoffs are driven by debt from acquisitions, and the new CBS has been making plenty of those, including a new $7.7 billion deal to acquire the exclusive rights to MMA fights, a costly campaign to steal Warner Brothers, and that $150 million deal to acquire Bari Weiss' lazy contrarian propaganda blog.
Larry Ellison clearly wants to hoover up what's left of corporate media (including CBS, CNN, HBO) — and fuse it with his co-ownership of TikTok to create a sort of Hungary-esque autocratic state media. The only thing saving us from this outcome to date is the fact that absolutely nobody in this weird assortment of nepobabies and brunchlords appears to have any idea what they're doing.
At some point, we, as a society, are going to realize that farming copyright enforcement out to bots and AI-driven robocops is not the way to go, but today is not that day. Long before AI became the buzzword it is today, large companies were employing their own copyright crawler bots, or those of a third party, to police their copyrights on these here internets. And for just as long, those bots have absolutely sucked out loud at their jobs. We have seen example after example after example of those bots making mistakes, resulting in takedowns or threats of takedowns of all kinds of perfectly legit content. Upon discovery, the content is usually reinstated while those employing the copyright decepticons shrug their shoulders and say "Thems the breaks." And then it happens again.
It has to change, but it isn't changing. We have yet another recent example of this in action, with Microsoft's copyright enforcement partner using an AI-driven enforcement bot to get a video game delisted from Steam over a single screenshot on the game's page that looks like, but isn't, from Minecraft. The game in question, Allumeria, is clearly partially inspired by Minecraft, but doesn't use any of its assets and is in fact its own full-fledged creative work.
On Tuesday, the developer behind the Minecraft-looking, dungeon-raiding sandbox announced that their game had been taken down from Valve's storefront due to a DMCA copyright notice issued by Microsoft. The notice, shared by developer Unomelon in the game's Discord server, accused Allumeria of using "Minecraft content, including but not limited to gameplay and assets."
The takedown was apparently issued over one specific screenshot from the game's Steam page. It shows a vaguely Minecraft-esque world with birch trees, tall grass, a blue sky, and pumpkins: all things that are in Minecraft but also in real life and lots of other games. The game does look pretty similar to Minecraft, but it doesn't appear to be reusing any of its actual assets or crossing some arbitrary line between homage and copycat that dozens of other Minecraft-inspired games haven't crossed before.
It turns out the takedown request didn't come from Microsoft directly, but via Tracer.AI. Tracer.AI claims to have a bot driven by artificial intelligence for the automatic flagging and removal of copyright-infringing content.
It seems the system failed to understand in this case that the image in question, while being similar to those including Minecraft assets, didn't actually infringe upon anything. Folks at Mojang caught wind of this on BlueSky and had to take action.
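Tracer.AI hasn't published how its flagging works, but a common baseline for this sort of automated image matching is perceptual hashing, which boils an image down to a short fingerprint and treats small Hamming distances between fingerprints as matches. Here's a minimal sketch of that approach (using the open source Python imagehash library; the threshold and file paths are invented for illustration), which makes clear why two unrelated blocky green landscapes can register as the "same" image:

```python
# Minimal perceptual-hash matcher: one plausible way an automated enforcement
# bot might compare storefront screenshots against reference Minecraft images.
# Illustrative sketch only; Tracer.AI's actual system is not public.
from PIL import Image
import imagehash

MATCH_THRESHOLD = 10  # max Hamming distance (out of 64 bits) to call a "match"

def flag_as_infringing(screenshot_path: str, reference_paths: list[str]) -> bool:
    candidate = imagehash.phash(Image.open(screenshot_path))
    for path in reference_paths:
        # Subtracting two ImageHash objects yields their Hamming distance.
        if candidate - imagehash.phash(Image.open(path)) <= MATCH_THRESHOLD:
            return True  # similar composition and palette are enough to trip this
    return False

# A voxel game with birch trees, tall grass, and a blue sky can easily hash
# within a few bits of a genuine Minecraft screenshot.
```

The whole problem sits in that last comment: a similarity score can tell you two screenshots look alike, but "looks alike" is not the legal test for infringement, and a matcher like this has no concept of assets, authorship, or fair use.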
While it's unclear if the claim was issued automatically or intentionally, Mojang Chief Creative Officer Jens Bergensten (known to most Minecraft players as Jeb) responded to a comment about the takedown on Bluesky, stating that he was not aware and is now "investigating." Roughly 12 hours later, Allumeria's Steam page has been reinstated.
"Microsoft has withdrawn their DMCA claim!" Unomelon posted earlier today. "The game is back up on Steam! Allumeria is back! Thank you EVERYONE for your support. It's hard to comprehend that a single post in my discord would lead to so many people expressing support."
And this is the point in the story where we all go back to our lives and pretend like none of this ever happened. But that sucks. For starters, there is no reason we should accept this kind of collateral damage, temporary or not. Add to that: there are surely stories out there in which a similar resolution was never reached. How many games, and how much other non-infringing content, were taken down for longer because of an erroneous claim like this? How many never came back?
And at the base level, if companies are going to claim that copyright is of paramount importance to their business, then enforcement can't be farmed out to automated systems that are bad at their jobs.
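Tracer.AI hasn't said how its system actually works, but the underlying classification problem is easy to illustrate. Below is a minimal sketch of one common automated-matching approach, perceptual hashing (an assumption for illustration, not Tracer.AI's actual method; the filenames are hypothetical): two blocky, grassy, blue-skied screenshots can land within the "match" threshold without sharing a single asset.

```python
# Minimal perceptual-hash sketch (average hash). This is NOT necessarily how
# Tracer.AI's bot works -- its methods are unpublished -- just one common way
# automated matchers compare images, and why look-alikes get flagged.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size grayscale; set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical files: a reference Minecraft screenshot and a store-page image
# from a similar-looking game. Voxel terrain, green grass, and blue sky
# dominate both thumbnails, so their 64-bit hashes can easily land close.
THRESHOLD = 10  # bits: tighter misses real copies, looser flags homages
d = hamming(average_hash("minecraft_ref.png"), average_hash("storepage.png"))
print("flagged" if d <= THRESHOLD else "clear", f"(distance={d} bits)")
```

Any threshold-based matcher faces that same tradeoff, no matter how much "AI" gets layered on top: loosen it and homages get flagged, tighten it and actual copies slip through. Visual similarity simply isn't the same thing as infringement.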
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation's Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week's roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Dr. Blake Hallinan, Professor of Platform Studies in the Department of Media & Journalism Studies at Aarhus University. Together, they discuss:
- On Section 230's 30th Birthday, A Look Back At Why It's Such A Good Law And Why Messing With It Would Be Bad (Techdirt)
- An 18-Million-Subscriber YouTuber Just Explained Section 230 Better Than Every Politician In Washington (Techdirt)
- Discord Launches Teen-by-Default Settings Globally (Discord)
- Media Literacy Parents' study (GOV.UK)
- EU says TikTok must disable 'addictive' features like infinite scroll, fix its recommendation engine (TechCrunch)
- We Didn't Ask for This Internet with Tim Wu and Cory Doctorow (The New York Times)
- Despite Meta's ban, Fidesz candidates successfully posted 162 political ads on Facebook in January (Lakmusz.hu)
- Claude's Constitution Needs a Bill of Rights and Oversight (Oversight Board)
- Account Closed Without Notice: Debanking Adult Industry Workers in Canada (ResearchGate)
Play along with Ctrl-Alt-Speech's 2026 Bingo Card and get in touch if you win!
For thirty years, internet users have benefited from a key federal law that allows everyone to express themselves, find community, organize politically, and participate in society. Section 230, which protects internet users' speech by protecting the online intermediaries we rely on, is the legal support that sustains the internet as we know it.
Yet as Section 230 turns 30 this week, there are bipartisan proposals in Congress to either repeal or sunset the law. These proposals seize upon legitimate concerns with the harmful and anti-competitive practices of the largest tech companies, but then misdirect that anger toward Section 230.
But rolling back or eliminating Section 230 will not stop invasive corporate surveillance that harms all internet users. Killing Section 230 won't end the dominance of the current handful of large tech companies—it would cement their monopoly power.
The current proposals also ignore a crucial question: what legal standard should replace Section 230? The bills provide no answer, refusing to grapple with the tradeoffs inherent in making online intermediaries liable for users' speech.
This glaring omission shows what these proposals really are: grievances masquerading as legislation, not serious policy. Especially when the speech problems with alternatives to Section 230's immunity are readily apparent, both in the U.S. and around the world. Experience shows that those systems result in more censorship of internet users' lawful speech.
Let's be clear: EFF defends Section 230 because it is the best available system to protect users' speech online. By immunizing intermediaries for their users' speech, Section 230 benefits users. Services can distribute our speech without filters, pre-clearance, or the threat of dubious takedown requests. Section 230 also directly protects internet users when they distribute other people's speech online, such as when they reshare another user's post or host a comment section on their blog.
It was the danger of losing the internet as a forum for diverse political discourse and culture that led to the law in 1996. Congress created Section 230's limited civil immunity because it recognized that promoting more user speech outweighed potential harms. Congress decided that when harmful speech occurs, it's the speaker that should be held responsible—not the service that hosts the speech. The law also protects social platforms when they remove posts that are obscene or violate the services' own standards. And Section 230 has limits: it does not immunize services if they violate federal criminal laws.
Section 230 Alternatives Would Protect Less Speech
With so much debate around the downsides of Section 230, it's worth considering: What are some of the alternatives to immunity, and how would they shape the internet?
The least protective legal regime for online speech would be strict liability. Here, intermediaries always would be liable for their users' speech—regardless of whether they contributed to the harm, or even knew about the harmful speech. It would likely end the widespread availability and openness of social media and web hosting services we're used to. Instead, services would not let users speak without vetting the content first, via upload filters or other means. Small intermediaries with niche communities may simply disappear under the weight of such heavy liability.
Another alternative: Imposing legal duties on intermediaries, such as requiring that they act "reasonably" to limit harmful user content. This would likely result in platforms monitoring users' speech before distributing it, and being extremely cautious about what they allow users to say. That inevitably would lead to the removal of lawful speech—probably on a large scale. Intermediaries would not be willing to defend their users' speech in court, even if it is entirely lawful. In a world where any service could be easily sued over user speech, only the biggest services will survive. They're the ones that would have the legal and technical resources to weather the flood of lawsuits.
Another option is a notice-and-takedown regime, like what exists under the Digital Millennium Copyright Act. That will also result in takedowns of legitimate speech. And there's no doubt such a system will be abused. EFF has documented how the DMCA leads to widespread removal of lawful speech based on frivolous copyright infringement claims. Replacing Section 230 with a takedown system will invite similar behavior, and powerful figures and government officials will use it to silence their critics.
The closest alternative to Section 230's immunity provides protections from liability until an impartial court has issued a full and final ruling that user-generated content is illegal, and ordered that it be removed. These systems ensure that intermediaries will not have to cave to frivolous claims. But they still leave open the potential for censorship because intermediaries are unlikely to fight every lawsuit that seeks to remove lawful speech. The cost of vindicating lawful speech in court may be too high for intermediaries to handle at scale.
By contrast, immunity takes the variable of whether an intermediary will stand up for their users' speech out of the equation. That is why Section 230 maximizes the ability for users to speak online.
In some narrow situations, Section 230 may leave victims without a legal remedy. Proposals aimed at those gaps should be considered, though lawmakers should pay careful attention that in vindicating victims, they do not broadly censor users' speech. But those legitimate concerns are not the criticisms that Congress is levying against Section 230.
EFF will continue to fight for Section 230, as it remains the best available system to protect everyone's ability to speak online.
Reposted from EFF's Deeplinks blog.
Yesterday, Attorney General Pam Bondi appeared before the House Judiciary Committee. Among the more notable exchanges was when Rep. Pramila Jayapal asked some of Jeffrey Epstein's victims who were in the audience to stand up and indicate whether Bondi's DOJ had ever contacted them about their experiences. None of them had heard from the Justice Department. Bondi wouldn't even look at the victims as she frantically flipped through her prepared notes.
And that's when news organizations, including Reuters, caught something alarming: one of the pages Bondi held up clearly showed searches that Jayapal herself had done of the Epstein files:
A Reuters photographer captured this image of a page from Pam Bondi's "burn book," which she used to counter any questions from Democratic lawmakers during an unhinged hearing today. It looks like the DOJ monitored members of Congress's searches of the unredacted Epstein files. Just wow.
— Christopher Wiggins (@cwnewser.bsky.social), February 11, 2026
The Department of Justice—led by an Attorney General who is supposed to serve the public but has made clear her only role is protecting Donald Trump's personal interests—is actively surveilling what members of Congress are searching in the Epstein files. And then bringing that surveillance data to a congressional hearing to use as political ammunition.
This should be front-page news. It should be a major scandal. Honestly, it should be impeachable.
There is no legitimate investigative purpose here. No subpoena. Nothing at all. Just the executive branch tracking the oversight activities of the legislative branch, then weaponizing that information for political culture war point-scoring. The DOJ has no business whatsoever surveilling what members of Congress—who have oversight authority over the Justice Department—are searching.
Jayapal is rightly furious:
Pam Bondi brought a document to the Judiciary Committee today that had my search history of the Epstein files on it. The DOJ is spying on members of Congress. It's a disgrace and I won't stand for it.
— Congresswoman Pramila Jayapal (@jayapal.house.gov), February 12, 2026
We've been here before. Way back in 2014, the CIA illegally spied on searches by Senate staffers who were investigating the CIA's torture program. It was considered a scandal at the time—because it was one. The executive branch surveilling congressional oversight is a fundamental violation of separation of powers. It's the kind of thing that, when it happens, should trigger immediate consequences.
And yet.
Just a few days ago, Senator Lindsey Graham—who has been one of the foremost defenders of government surveillance for years—blew up at a Verizon executive for complying with a subpoena that revealed Graham's call records (not the contents, just the metadata) from around January 6th, 2021.
"If the shoe were on the other foot, it'd be front-page news all over the world that Republicans went after sitting Democratic senators' phone records," said Republican Sen. Lindsey Graham of South Carolina, who was among the Republicans in Congress whose records were accessed by prosecutors as they examined contacts between the president and allies on Capitol Hill.
"I just want to let you know," he added, "I don't think I deserve what happened to me."
This is the same Lindsey Graham who, over a decade ago, said he was "glad" that the NSA was collecting his phone records because it magically kept him safe from terrorists. But now he's demanding hundreds of thousands of dollars for being "spied" on (he wasn't—a company complied with a valid subpoena in a legitimate investigation, which is how the legal system is supposed to work).
So here's the contrast: Graham is demanding money and media attention because a company followed the law. Meanwhile, the Attorney General is actually surveilling a Democratic member of Congress's oversight activities—with no legal basis whatsoever—and using that surveillance for political theater in a manner clearly designed as a warning shot to congressional reps investigating the Epstein Files. Pam Bondi wants you to know she's watching you.
Graham claimed that if the shoe were on the other foot, it would be "front-page news all over the world." Well, Senator, here's your chance. The shoe is very much on the other foot. It's worse than what happened to you, because what happened to you was legal and appropriate, and what's happening to Jayapal is neither.
But we all know Graham won't speak out against this administration. He's had nearly a decade to show whether or not the version of Lindsey Graham who said "if we elected Donald Trump, we will get destroyed… and we will deserve it" still exists, and it's clear that Lindsey Graham is long gone. This one only serves Donald Trump and himself, not the American people.
But this actually matters: if the DOJ can surveil what members of Congress search in oversight files—and then use that surveillance as a weapon in public hearings—congressional oversight of the executive branch is dead. That's the whole point of separation of powers. The people who are supposed to watch the watchmen can't do their jobs if the watchmen are surveilling them.
And remember: Bondi didn't hide this. She brought it to the hearing. She held it up when she knew cameras would catch what was going on. She wanted Jayapal—and every other member of Congress—to see exactly what she's doing.
This administration doesn't fear consequences for this kind of vast abuse of power because there haven't been any. And the longer that remains true, the worse it's going to get.
The DHS and its components want to find non-white people to deport by any means necessary. Of course, "necessary" is something that's on a continually sliding scale with Trump back in office, which means everything (legal or not) is "necessary" if it can help White House advisor Stephen Miller hit his self-imposed 3,000 arrests per day goal.
As was reported last week, DHS components (ICE, CBP) are using a web app that supposedly can identify people and link them with citizenship documents. As has always been the case with DHS components (dating back to the Obama era), the rule of thumb is "deploy first, compile legally-required paperwork later." The pattern has never changed. ICE, CBP, etc. acquire new tech, hand it out to agents, and much later — if ever — the agencies compile and publish their legally-required Privacy Impact Assessments (PIAs).
PIAs are supposed to precede deployments of new tech that might have an impact on privacy rights and other civil liberties. In almost every case, though, the tech has been deployed far ahead of the prerequisite paperwork.
As one would expect, the Trump administration was never going to be the one to ensure the paperwork arrived ahead of the deployment. As we covered recently, both ICE and CBP are using tech provided by NEC called "Mobile Fortify" to identify migrants who are possibly subject to removal, even though neither agency has bothered to publish a Privacy Impact Assessment.
As Wired reported, the app is being used widely by officers working with both agencies, despite both agencies making it clear they don't have the proper paperwork in place to justify these deployments.
While CBP says there are "sufficient monitoring protocols" in place for the app, ICE says that the development of monitoring protocols is in progress, and that it will identify potential impacts during an AI impact assessment. According to guidance from the Office of Management and Budget, which was issued before the inventory says the app was deployed for either CBP or ICE, agencies are supposed to complete an AI impact assessment before deploying any high-impact use case. Both CBP and ICE say the app is "high-impact" and "deployed."
While this is obviously concerning, it would be far less concerning if we weren't dealing with an administration that has told immigration officers that they don't need warrants to enter houses or effect arrests. And it would be insanely less concerning if we weren't dealing with an administration that has claimed that simply observing or reporting on immigration enforcement efforts is an act of terrorism.
Officers working for the combined forces of bigotry d/b/a "immigration enforcement" know they're safe. The Supreme Court has ensured they're safe by making it impossible to sue federal officers. And the people running immigration-related agencies have made it clear they don't even care if the ends justify the means.
These facts make what's reported here even worse, especially when officers are using the app to "identify" pretty much anyone they can point a smartphone at.
Despite DHS repeatedly framing Mobile Fortify as a tool for identifying people through facial recognition, however, the app does not actually "verify" the identities of people stopped by federal immigration agents—a well-known limitation of the technology and a function of how Mobile Fortify is designed and used.
[…]
Records reviewed by WIRED also show that DHS's hasty approval of Fortify last May was enabled by dismantling centralized privacy reviews and quietly removing department-wide limits on facial recognition—changes overseen by a former Heritage Foundation lawyer and Project 2025 contributor, who now serves in a senior DHS privacy role.
Even if you're the sort of prick who thinks whatever happens to non-citizens is deserved due to their alleged violation of civil statutes, one would hope you'd actually care what happens to your fellow citizens. I mean, one would hope, but even the federal government doesn't care what happens to US citizens if they happen to be unsupportive of Trump's migrant-targeting crime wave.
DHS—which has declined to detail the methods and tools that agents are using, despite repeated calls from oversight officials and nonprofit privacy watchdogs—has used Mobile Fortify to scan the faces not only of "targeted individuals," but also people later confirmed to be US citizens and others who were observing or protesting enforcement activity.
TLDR and all that: DHS knows this tool performs worst in the situations where it's used most. DHS and its components also knew they were supposed to produce PIAs before deploying privacy-impacting tech. And DHS knows its agencies are not only misusing the tech to convert AI shrugs into probable cause, but are using it to identify people protesting or observing their efforts, which means this tech is also a potential tool of unlawful retribution.
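On that "AI shrugs into probable cause" point, the base-rate math deserves to be spelled out, because it's brutal. A face matcher that sounds accurate still produces mostly false hits when pointed at crowds of bystanders, since almost nobody in the crowd is actually in the target database. A quick sketch, using illustrative accuracy figures (assumptions for the sake of the arithmetic, not NEC's published numbers for Mobile Fortify, which DHS hasn't disclosed):

```python
# Back-of-envelope base-rate math for a face-recognition "hit" in a crowd.
# All three numbers below are illustrative assumptions, not published
# Mobile Fortify performance figures.
true_match_rate  = 0.99      # P(flagged | person IS in the target database)
false_match_rate = 0.01      # P(flagged | person is NOT in the database)
prevalence       = 1 / 1000  # share of scanned faces actually in the database

# Bayes' rule: P(in database | flagged)
p_flagged = true_match_rate * prevalence + false_match_rate * (1 - prevalence)
p_hit_is_real = (true_match_rate * prevalence) / p_flagged

print(f"Chance a 'hit' is the right person: {p_hit_is_real:.0%}")  # ~9%
```

Under those assumptions, roughly nine out of ten "matches" are innocent bystanders, and real field conditions (bad lighting, bad angles, low-quality photos) push the false-match rate higher, not lower. That's the statistical reality behind treating a phone app's output as grounds for a stop.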
There's nothing left to be discussed. This tech will continue to be used because it can turn bad photos into migrant arrests. And its off-label use is just as effective: it allows ICE and CBP agents to identify protesters and observers, even as DHS officials continue to claim doxing should be a federal offense if they're not the ones doing it. Everything about this is bullshit. But bullshit is all this administration has.
Transform your future in cybersecurity with 7 courses on next-level packet control, secure architecture, and cloud-ready defenses inside the 2026 Complete Firewall Admin Bundle. Courses cover IT fundamentals, topics to help you prepare for the CompTIA Server+ and CCNA exams, and more. It's on sale for $25.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
You may have heard last week that actor Joseph Gordon-Levitt went to Washington DC and gave a short speech at an event put on by Senator Dick Durbin calling for the sunsetting of Section 230. It's a short speech, and it gets almost everything wrong about Section 230. Watch it here:
Let me first say that, while I'm sure some will rush to jump in and say "oh, it's just some Hollywood actor guy, jumping into something he doesn't understand," I actually think that's a little unfair to JGL. Very early on, he started his own (very interesting, very creative) user-generated content platform called HitRecord, and over the years I've followed many of his takes on copyright and internet policy. While I don't always agree, I do believe he legitimately takes this stuff seriously and actually wants to understand the nuances (unlike some).
But it appears he's fallen for advice that isn't just bad, but blatantly incorrect. He's also posted a follow-up video where he claims to explain his position in more detail, but it only makes things worse, because it compounds the blatant factual errors that underpin his entire argument.
First let's look at the major problems with his speech in DC:
So I understand what Section 230 did to bring about the birth of the internet. That was 30 years ago. And I also understand how the internet has changed since then because back then message boards and other websites with user-generated content, they really were more like telephone carriers. They were neutral platforms. That's not how things work anymore.
So, that's literally incorrect. If JGL is really interested in the actual history here, I did a whole podcast series where I spoke to the people behind Section 230, including those involved in the early internet and the various lawsuits at the time.
Section 230 was never meant for "neutral" websites. As the authors (and the text of the law itself!) make clear: it was created so that websites did not need to be neutral. It literally was written in response to the Stratton Oakmont v. Prodigy case (for JGL's benefit: Stratton Oakmont is the company portrayed in Wolf of Wall Street), where the boiler room operation sued Prodigy because someone posted in their forums claims about how sketchy Stratton Oakmont was (which, you know, was true).
But Stratton sued, and the judge said that because Prodigy moderated (because it wanted to have a family-friendly site, which is to say, because it was not neutral), it was liable for anything it decided to leave up. In his ruling, the judge effectively said "because you're not neutral, and because you moderate, you are effectively endorsing this content, and thus if it's defamatory you're liable for defamation."
Section 230 (originally the "Internet Freedom and Family Empowerment Act") was never about protecting platforms for being neutral. It was literally the opposite of that. It was about making sure that platforms felt comfortable making editorial decisions. It was about letting companies decide what to share, what not to share, what to amplify, and what not to amplify, without being held liable as a publisher of that content.
This is important, but it's a point that a bunch of bad faith people, starting with Ted Cruz, have been lying about for about a decade, pretending that the intent of 230 was to protect sites that are "neutral." It's literally the opposite of that. And it's disappointing that JGL would repeat this myth as if it's fact. Courts have said this explicitly—I'll get to the Ninth Circuit's Barnes decision later, where the court said Section 230's entire purpose is to protect companies because they act as publishers—but first, let's go through the rest of what JGL got wrong.
He then goes on to talk about legitimate problems with internet giants having too much power, but falsely attributes that to Section 230.
Today, the internet is dominated by a small handful of these gigantic businesses that are not at all neutral, but instead algorithmically amplify whatever gets the most attention and maximizes ad revenue. And we know what happens when we let these engagement optimization algorithms be the lens that we see the world through. We get a mental health crisis, especially amongst young people. We get a rise in extremism and a rise in conspiracy theories. And then of course we get these echo chambers. These algorithms, they amplify the demonization of the other side so badly that we can't even have a civil conversation. It seems like we can't agree on anything.
So, first of all, I know that the common wisdom is that all of this is true, but as we've detailed, actual experts have been unable to find any support for a causal connection. Studies on "echo chambers" have found that the internet decreases echo chambers, rather than increases them. The studies on mental health show the opposite of what JGL (and Jonathan Haidt) claim. Even the claims about algorithms focused solely on engagement don't seem to have held up (or, generally, it was true early on, but the companies found that maximizing solely on engagement burned people out quickly and was actually bad for business, and so most social media adjusted the algorithms away from just that).
So, again, almost every assertion there is false (or, at the very least, much more nuanced than he makes it out to be).
But the biggest myth of all is the idea that getting rid of 230 will somehow tame the internet giants. Once again, the exact opposite is true. As we've discussed hundreds of times, the big internet companies don't need Section 230.
The real benefit of 230 is that it gets vexatious lawsuits tossed out early. That matters a lot for smaller companies. To put it in real terms: with 230, companies can get vexatious lawsuits dismissed for around $100,000 to $200,000 (I used to say $50k, but my lawyer friends tell me it's getting more expensive). That is a lot of money. But it's generally survivable. To get the same cases dismissed on First Amendment grounds (as almost all of them would be), you're talking $5 million and up.
That's pocket change for Meta and Google, which have buildings full of lawyers. It's existential for smaller competitive sites.
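To make that asymmetry concrete, here's the arithmetic on those estimates (the dollar figures are the rough ones above, and the number of suits is hypothetical):

```python
# Rough annual legal burden for a small platform facing vexatious suits,
# using the cost estimates from the paragraph above (illustrative only).
suits_per_year = 5
cost_230_dismissal   = 150_000    # midpoint of the ~$100k-$200k estimate
cost_first_amendment = 5_000_000  # low end of a full First Amendment defense

print(f"With Section 230: ${suits_per_year * cost_230_dismissal:,}")    # $750,000
print(f"Without:          ${suits_per_year * cost_first_amendment:,}")  # $25,000,000
```

Same suits, same eventual outcome; the only variable is how much it costs to get there.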
So the end result of getting rid of 230 is not getting rid of the internet giants. It's locking them in and giving them more power. It's why Meta literally has run ads telling Congress it's time to ditch 230.
What is Mark Zuckerberg's biggest problem right now? Competition from smaller upstarts chipping away at his userbase. Getting rid of 230 makes it harder for smaller providers to survive, and limits the drain from Meta.
On top of that, getting rid of 230 gives them less reason to moderate. Because, under the First Amendment, the only way they can possibly be held liable is if they had actual knowledge of content that violates the law. And the best way to avoid having knowledge is not to look.
It means not doing any research on harms caused by your site, because that will be used as evidence of "knowledge." It means limiting how much moderation you do so that (a la Prodigy three decades ago) you're not seen to be "endorsing" any content you leave up.
Getting rid of Section 230 literally makes Every Single Problem JGL discussed in his speech worse! He got every single thing backwards.
And he closes out with quite the rhetorical flourish:
I have a message for all the other senators out there: [Yells]: I WANT TO SEE THIS THING PASS 100 TO 0. There should be nobody voting to give any more impunity to these tech companies. Nobody. It's time for a change. Let's make it happen. Thank you.
Except it's not voting to give anyone "more impunity." It's a vote to say "stop moderating, and unleash a flood of vexatious lawsuits that will destroy smaller competitors."
The Follow-Up Makes It Worse
Yesterday, JGL posted a longer video, noting that he'd heard a bunch of criticism about his speech and he wanted to respond to it. Frankly, it's a bizarre video, but go ahead and watch it too:
It starts out with him saying he actually agrees with a lot of his critics, because he wants an "internet that has vibrant, free, and productive public discourse." Except… that's literally what Section 230 enables. Because without it, you don't have intermediaries willing to host public discourse. You ONLY have giant companies with buildings full of lawyers who will set the rules of public discourse.
Again, his entire argument is backwards.
Then… he does this weird half-backdown, where he says he doesn't really want the end of Section 230; he just wants "reform."
Here's the first thing I'll say. I'm in favor of reforming section 230. I'm not in favor of eliminating all of the protections that it affords. I'm going to repeat that because it's it's really the crux of this. I'm in favor of reforming, upgrading, modernizing section 230 because it was passed 30 years ago. I am not in favor of eliminating all of the protections that it affords.
Buddy, you literally went to Washington DC, got up in front of Senators, and told everyone you wanted the bill that literally takes away every one of those protections to pass 100 to 0. Don't then say "oh I just want to reform it." Bullshit. You said get rid of the damn thing.
But… let's go through this, because it's a frequent thing we hear from people. "Oh, let's reform it, not get rid of it." As our very own First Amendment lawyer Cathy Gellis has explained over and over again, every proposed reform to date is really repeal.
The reason for this is the procedural benefit we discussed above. Because every single kind of "reform" requires long, expensive lawsuits to determine if the company is liable. In the end, those companies will still win, because of the First Amendment. Just like how one of the most famous 230 "losses" ended up. Roommates.com lost its Section 230 protections, which resulted in many, many years in court… and then they eventually won anyway. All 230 does is make it so you don't have to pay lawyers nearly as much to reach the same result.
So, every single reform proposal basically resets the clock in a way that old court precedents go out the window, and all you're doing is allowing vexatious lawsuits to cost a lot more for companies. This will mean some won't even start. Others will go out of business.
Or, worse, many companies will just enable a heckler's veto. Donald Trump doesn't like what people are saying on a platform? Threaten to sue. The cost of litigating without 230 (even a reformed 230 where a court can't rely on precedent) means it's cheaper to just remove the content that upsets Donald Trump. Or your landlord. Or some internet troll.
You basically are giving everyone a veto by the mere threat of a lawsuit. I'm sorry, but that is not the recipe for a "vibrant, free, and productive public discourse."
Calling for reform of 230 is, in every case we've seen to date, really a call for repeal, whether the reformers recognize that or not. Is there a possibility that you could reform it in a way that isn't that? Maybe? But I've yet to see any proposal, and the only ones I can think of would be going in the other direction (e.g., expanding 230's protections to include intellectual property, or rolling back FOSTA).
JGL then talks about small businesses and agrees that sites like HitRecord require 230. Which sure makes it odd that he's supporting repeal. However, he seems to have bought into the logic of the argument memeified by internet law professor Eric Goldman—who has catalogued basically every single Section 230 lawsuit as well as every single "reform" proposal ever made and found them all wanting—that "if you don't amend 230 in unspecified ways, we'll kill this internet."
That is… generally not a good way to make policy. But it's how JGL thinks it should be done:
Well, there have been lots of efforts to reform section 230 in the past and they keep getting killed uh by the big tech lobbyists. So, this section 230 sunset act is as far as I understand it a strategy towards reform. It'll force the tech companies to the negotiating table. That's why I supported it.
Again, this is wrong. Big tech is always at the freaking negotiating table. You don't think they're there? Come on. As I noted, Zuck has been willing to ditch 230 for almost a decade now. It makes him seem "cooperative" to Congress while at the same time destroying the ability of competitors to survive.
The reason 230 reform bills fail is because enough grassroots folks actually show up and scream at Congress. It ain't the freaking "big tech lobbyists." It's people like the ACLU and the EFF and Fight for the Future and Demand Progress speaking up and sending calls and emails to Congress.
Also, about this claim that "efforts at reform" keep getting "killed by big tech lobbyists": this is FOSTA erasure, JGL. In 2018, Congress passed FOSTA (with the explicit support of Meta), which was a Section 230 reform bill. Remember?
And how did that work out? Did it make Meta and Google better? No.
But did it destroy online spaces used by sex workers? Did it lead to real world harm for sex workers? Did it make it harder for law enforcement to capture actual human traffickers? Did it destroy online communities? Did it hide historical LGBTQ content because of legal threats?
Yes to literally all of those things.
So, yeah, I'm freaking worried about "reform" to 230, because we've seen it already. Many of us warned about the harms while "big tech" supported the law, and we were right: the harms did occur. The law did nothing to rein in the giants, but it took away competitive online communities and suppressed sex-positive and LGBTQ content.
Is that what you want to support, JGL? No? Then maybe speak to some of the people who actually work on this stuff, who understand the nuances, not the slogans.
Speaking of which, JGL then doubles down on his exactly backwards Ted Cruz-inspired version of Section 230:
Section 230 as it's currently written or as it was written 30 years ago distinguishes between what it calls publishers and carriers. So a publisher would be, you, a person, saying something or a company saying something like the New York Times say or you know the Walt Disney Company publishers. Then carriers would be somebody like AT&T or Verizon, you know, the the the companies that make your phone or or your telephone service. So basically what Section 230 said is that these platforms for user-generated content are not publishers. They are carriers. They are as neutral as the telephone company. And if someone uses the telephone to commit a crime, the telephone company shouldn't be held liable. And that's true about a telephone company. But again, there's a third category that we need to add to really reflect how the internet works today. And that third category is amplification.
Again, I need to stress that this is literally wrong. Like, fundamentally, literally he has it backwards and inside out. This is a pretty big factual error.
First, Section 230 does not, in any way, distinguish between "what it calls publishers and carriers." This is the "publisher/platform" myth all over again.
I mean, you can look at the law. It makes no such distinction at all. The only distinction it makes is between "interactive computer services" and "information content providers." Now some (perhaps JGL) will claim that's the same thing as "publishers" and "carriers." But it's literally not.
Carriers (as in, common carrier law) implies the neutrality that JGL mentioned earlier. And perhaps that's why he's confused. But the purpose of 230 was to enable "interactive computer services" to act as publishers, without being held liable as publishers. It was NOT saying "don't be a publisher." It was saying "we want you to be a publisher, not a neutral carrier, but we know that if you face liability as a publisher, you won't agree to publish. So, for third party content, we won't hold you liable for your publishing actions."
Again, go back to the Stratton Oakmont case. Prodigy "acted as a publisher" in trying to filter out non-family friendly content. And the judge said "okay now you're liable." The entire point of 230 was to say "don't be neutral, act as a publisher, but since it's all 3rd party content, we won't hold you liable as the publisher."
In the Barnes case in the Ninth Circuit, the court was quite clear about this. The entire purpose of Section 230 is to encourage interactive computer services to act like publishers by removing the liability that comes with being a publisher. Here's a key part in which the court explains why Yahoo deserves 230 protections for 3rd party content because it acted as the publisher:
In other words, the duty that Barnes claims Yahoo violated derives from Yahoo's conduct as a publisher—the steps it allegedly took, but later supposedly abandoned, to de-publish the offensive profiles. It is because such conduct is publishing conduct that we have insisted that section 230 protects from liability….
So let me repeat this again: the point of Section 230 is not to say "you're a carrier, not a publisher." It's literally to say "you can safely act as a publisher because you won't face liability for content you had no part in its creation."
JGL has it backwards.
He then goes on to make a weird and meaningless distinction between "free speech" and "commercial amplification" as if it's legally meaningful.
At the crux of their article is a really important distinction and that distinction is between free speech and commercial amplification. Free speech meaning what a human being says. commercial amplification, meaning when a platform like Instagram or YouTube or Tik Tok or whatever uses an algorithm to uh maximize engagement and ad revenue to hook you, keep you and serve you ads. And this is a really important difference that section 230 does not appreciate.
The article he's talking about is this very, very, very, very, very badly confused piece in ACM. It's written by Jaron Lanier, Allison Stanger, and Audrey Tang. If those names sound familiar, it's because they've been publishing similar pieces that are just fundamentally wrong for years. Here's one piece I wrote picking apart one, here's another picking apart another.
None of those three individuals understands Section 230 at all. Stanger gave testimony to Congress that was so wrong on basic facts it should have been retracted. I truly do not understand why Audrey Tang sullies her own reputation by continuing to sign on to pieces with Lanier and Stanger. I have tremendous respect for Audrey, who I've learned a ton from over the years. But she is not a legal expert. She was Digital Minister in Taiwan (where she did some amazing work!) and has worked at tech companies.
But she doesn't know 230.
I'm not going to do another full breakdown of everything wrong with the ACM piece, but just look at the second paragraph:
Much of the public's criticism of Section 230 centers on the fact that it shields platforms from liability even when they host content such as online harassment of marginalized groups or child sexual abuse material (CSAM).
What? CSAM is inherently unprotected speech. Section 230 does not protect CSAM. Section 230 literally has section (e)(1) that says "no effect on criminal law." CSAM, as you might know, is a violation of criminal law. Websites all have strong incentives to deal with CSAM to avoid criminal liability, and they tend to take that pretty seriously. The additional civil liability that might come from a change in the law isn't going to have much, if any, impact on that.
And "online harassment of marginalized groups" is mostly protected by the First Amendment anyway—so if 230 didn't cover it, companies would still win on First Amendment grounds. But here's the thing: most of us think that harassment is bad and want platforms to stop it. You know what lets them do that? Section 230. Take it away and companies have less incentive to moderate. Indeed, in Lanier and Stanger's original piece in Wired, they argued platforms should be required to use the First Amendment as the basis for moderation—which would forbid removing most harassment of marginalized groups.
These are not serious critiques.
I could almost forgive Lanier/Stanger/Tang if this were the first time they were writing about this subject, but they have now written this same factually incorrect thing multiple times, and each time I've written a response pointing out the flaws.
I can understand that a well meaning person like JGL can be taken in by it. He mentions having talked to Audrey Tang about it. But, again, as much as I respect Tang's work in Taiwan, she is not a US legal expert, and she has this stuff entirely backwards.
I do believe that JGL legitimately wants a free and open internet. I believe that he legitimately would like to see more upstart competitors and less power and control from the biggest providers. In that we agree.
But he has been convinced by some people who are either lying to him or simply do not understand the details, and thus he has become a useful tool for enabling greater power for the internet giants, and greater online censorship. The exact opposite of what he claims to support.
I hope he realizes that he's been misled—and I'd be happy to talk this through with him, or put him in touch with actual experts on Section 230. Because right now, he's lending his star power to one of the most dangerous ideas around for the open internet.
Trump 1.0 took a hatchet to media ownership limits. Those limits, built on the back of decades of bipartisan collaboration, prohibited local broadcasters and media companies from growing so large that they trampled smaller (and more diversely owned) competitors underfoot. The result of their destruction has been a rise in local news deserts, a surge in right wing propaganda outlets pretending to be "local news," less diverse media ownership, and (if you hadn't noticed) a painfully disinformed electorate.
Trump 2.0 has been significantly worse.
Trump's FCC has finished demolishing whatever was left of already saggy media ownership limits, and is eyeing the elimination of rules that would prevent the big four (Fox, ABC, CBS, NBC) from merging (a major reason why these networks have been such feckless authoritarian appeasers).
They're also working hard to let all of our local right wing broadcast companies merge into one, even larger, shittier company, something Donald Trump is very excited about!
More specifically, Nexstar (a very Republican-friendly company that also owns The Hill) is asking the FCC for permission to acquire Tegna in a $6.2 billion deal that is illegal under current rules (you might recall that Nexstar-owned The Hill recently fired a journalist whose reporting angered Trump).
The deal would give Nexstar ownership of 265 stations in 44 states and the District of Columbia and 132 of the country's 210 television Designated Market Areas (or DMAs). Nexstar appears to have beaten out rival bids by Sinclair, which has also long been criticized as Republican propaganda posing as local news. It wouldn't be surprising if Nexstar and Sinclair are the next to merge.
Keep in mind, this is an industry that was already churning out terrible agitprop, as this now seven-year-old Deadspin video helped everyone realize:
You might be inclined to say: "but Karl, local TV broadcasters are irrelevant. Who cares if they consolidate a dying industry." But the consolidation won't stop here. The goal isn't just the consolidation of local broadcasters, it's the consolidation of national and local media giants, telecoms, tech companies, and social media companies. All under the thumb of terrible unethical people.
Trump's rise to power wouldn't have been possible without the Republican domination of media. For the better part of a generation, Republicans have dominated AM radio, local broadcast TV, and cable news, and have since done a remarkable job hoovering up what's left of both major media companies (CBS, FOX) and modern social media empires (TikTok, Twitter). The impact is everywhere you look.
Over on Elon Musk's right wing propaganda platform, Brendan Carr was quick to praise President Trump's bold support for more media consolidation. And, as he has done previously, he openly lied, trying to pretend that local broadcast consolidation is something that aids competition:
I've covered Brendan Carr professionally since he joined the FCC in 2012. This is a man who has coddled media and telecom giants (and their anti-competitive behavior) at literally every opportunity. One of his only functions in government has been to rubber stamp shitty mergers. Here, he's pretending to "protect competition" with a cute little antisemitic dog whistle about the folks in "Hollywood and New York."
Amusingly, Carr and Trump's push to allow all manner of problematic consolidation among these terrible local broadcasters has been so abrupt, it's actually causing some infighting between them and other right wing propaganda companies like Newsmax.
There's a reason the Trump administration is destroying media consolidation limits, murdering public media, harassing media companies, threatening late night comedians (or having them fired), and ushering forth all this mindless and dangerous consolidation. There's a reason Larry Ellison and Elon Musk are buying all the key social media platforms and fiddling with the algorithms.
They are very openly (and so far semi-successfully) trying to build a state media apparatus akin to what exists in Orban's Hungary and Putin's Russia. Our corporate press is already so broken and captured that it's incapable of communicating this to anybody. It simply wouldn't be in existing media conglomerates' financial interest to be honest about this sort of thing.
On the plus side, nobody involved in any of this — from CBS News boss Bari Weiss to Sinclair Broadcasting — appears to have any competent idea of what they're doing. They're not good at journalism (because they're trying to destroy it), but they're not good at ratings-grabbing propaganda either. As a result, it's entirely possible they destroy U.S. media before their dream of state media comes to fruition.
Still, it might be nice if Democrats could stop waiting for "the left's Joe Rogan" and finally start embracing some meaningful media reforms for the modern era, whether that's the restoration of media consolidation limits, the creation of media ownership diversity requirements, an evolution in school media literacy training, support for public media, or creative new funding models for real journalism.
Because the trajectory we are on in terms of right wing domination of media heads to some very fucking grim places, and it's not like any of that has been subtle.
I want to say a little something upfront in this post, so that there is no misunderstanding. While I've spent a great deal of time outlining why I think RFK Jr. and his cadre of buffoons at HHS and its child agencies are horrible for America and her people's health, I do understand some of the perspective of people who push back on vaccinations some of the time. One of those areas is vaccine mandates. Bodily autonomy is and ought to be a very real thing. A government installing mandates for what can and can't be done with one's own body is something that needs to be treated with a ton of sensitivity, and I can understand why vaccine mandates in general might run afoul of the autonomy concept. Of course, it's also why the government shouldn't be in the business of telling women what to do with their bodies, or blanket outlawing things like euthanasia, but the point is: I get it.
But there are times when we, as a society, do make some legal demands of the citizenry when it comes to their own physical beings for the betterment of the whole. Not all drugs are federally legal because there are some drugs that, if they were to proliferate, would cause enormous harm to the public that surrounds those individuals. The government does regulate to some extent what appears in our food and medicine, never bothering to ask the public their opinion on the matter. And there are some diseases so horrible that we've built some level of a mandate around vaccination, traditionally, especially in exchange for participation in publicly funded schools and the like.
Dr. Oz, television personality turned Administrator of the Centers for Medicare and Medicaid Services, has vocally opposed vaccine mandates in general terms. When Florida dropped its vaccine requirements for public school children, Oz cheered the state on.
In an interview on "The Story with Martha MacCallum," the Fox News host asked Oz whether he agrees with officials who want to make Florida the first state in the nation to end childhood vaccine requirements and whether Oz would "recommend the same thing to your patients."
"I would definitely not have mandates for vaccinations," the Centers for Medicare and Medicaid Services administrator told MacCallum. "This is a decision that a physician and a patient should be making together," he continued. "The parents love their kids more than anybody else could love that kid, so why not let the parents play an active role in this?"
The MMR vaccine was one of those required for Florida schools. So, Oz is remarkably clear in the quote above: the government should not be mandating vaccines. Further, the government shouldn't really have direct input into whether people get vaccinated. That decision should be made strictly by the patient (or their parents) and the doctor who has that patient directly in front of them.
Those comments from Oz were made in September of 2025. Fast forward to the present, with a measles outbreak that is completely off the rails in America, and the good doctor is singing a much different tune.
So, Oz is now reduced to begging people to get vaccinated for something that, for decades, everyone routinely got vaccinated for.
"Take the vaccine, please. We have a solution for our problem," he said. "Not all illnesses are equally dangerous and not all people are equally susceptible to those illnesses," he hedged. "But measles is one you should get your vaccine."
To be clear, he's still not advocating for any sort of mandate. Which is unfortunate, at least when it comes to targeted mandates for public schools and that sort of thing. But in lieu of any actual public policy to combat measles in America, he's reduced to a combination of begging the public to get vaccinated and telling the general public that a measles shot is definitely one they should be getting.
And on that he's right. But he's also talking out of both sides of his mouth. Oz isn't these people's doctor. These school children aren't sitting directly in front of him. So the same person who advocated for a personalized approach to vaccines is now begging the public, from Washington, D.C., to take the measles vaccine.
That inconsistency is among the many reasons it's difficult to know just how seriously to take Oz. And consistency is pretty damned key when it comes to government messaging on public health policy. That, in addition to trust, is everything here. And when Oz jumps onto a CNN broadcast to claim that this government, including RFK Jr., has been at the forefront of advocating for the measles vaccine, any trust that is there gets torpedoed pretty quickly.
CNN anchor Dana Bash was left in disbelief as one of the president's top health goons claimed the MAGA administration was a top advocate for vaccines. Addressing the record outbreak of measles in the U.S., particularly in South Carolina, Bash asked Dr. Mehmet Oz on State of the Union Sunday: "Is this a consequence of the administration undermining support for advocacy for measles and other vaccines?" "I don't believe so," the Trump-appointed Centers for Medicare & Medicaid Services Administrator responded. He then said, "We've advocated for measles vaccines all along. Secretary Kennedy has been at the very front of this."
Absolute nonsense. Yes, Kennedy has said to get the measles vaccine. He's also said maybe everyone should just get measles instead. One of his deputies has hand-waved the outbreak away as being no big deal. Kennedy has advocated for alternative treatments, rather than vaccination.
The government is all over the place on this, in other words. As is Oz himself, in some respects. To sit here in the midst of the worst measles outbreak in decades, beg people to do the one thing that will make this all go away, and then claim that this government has been on the forefront of vaccine advocacy is simply silly.
Artificial intelligence promises to change not just how Americans work, but how societies decide which kinds of work are worthwhile in the first place. When technological change outpaces social judgment, a major capacity of a sophisticated society comes under pressure: the ability to sustain forms of work whose value is not obvious in advance and cannot be justified by necessity alone.
As AI systems diffuse rapidly across the economy, questions about how societies legitimate such work, and how these activities can serve as a supplement to market-based job creation, have taken on a policy relevance that deserves serious attention.
From Prayer to Platforms
That capacity for legitimating work has historically depended in part on how societies deploy economic surplus: the share of resources that can be devoted to activities not strictly required for material survival. In late medieval England, for example, many in the orbit of the church made at least part of their living performing spiritual labor such as saying prayers for the dead and requesting intercessions for patrons. In a society where salvation was a widely shared concern, such activities were broadly accepted as legitimate ways to make a living.
William Langland was one such prayer-sayer. He is known to history only because, unlike nearly all others who did similar work, he left behind a long allegorical religious poem, Piers Plowman, which he composed and repeatedly revised alongside the devotional labor that sustained him. It emerged from the same moral and institutional world in which paid prayer could legitimately absorb time, effort, and resources.
In 21st-century America, Jenny Nicholson earns a sizeable income sitting alone in front of a camera, producing long-form video essays on theme parks, films, and internet subcultures. Nothing about that work is strictly necessary for material survival, yet her audience supports it willingly, and few doubt that it creates value of a kind. Where Langland's livelihood depended on shared theological and moral authority emanating from a Church that was the dominant institution of its day, Nicholson's depends on a different but equally real form of judgment expressed by individual market participants. And she is just one example of a broader class of creators—streamers, influencers, and professional gamers—whose work would have been unintelligible as a profession until recently.
What links Langland and Nicholson is not the substance of their work or any claim of moral equivalence, but the shared social judgment that certain activities are legitimate uses of economic surplus. Such judgments do more than reflect cultural taste. Historically, they have also shaped how societies adjust to technological change, by determining which forms of work can plausibly claim support when productivity rises faster than what is considered a "necessity" by society.
How Change Gets Absorbed
Technological change has long been understood to generate economic adjustment through familiar mechanisms: by creating new tasks within firms, expanding demand for improved goods and services, and recombining labor in complementary ways. Often, these mechanisms alone can explain how economies create new jobs when technology renders others obsolete. Their operation is well documented, and policies that reduce frictions in these processes—encouraging retraining or easing the entry of innovative firms—remain important in any period of change.
That said, there is no general law guaranteeing that new technologies will create more jobs than they destroy through these mechanisms alone. Alongside labor-market adjustment, societies have also adapted by legitimating new forms of value—activities like those undertaken by Langland and Nicholson—that came to be supported as worthwhile uses of the surplus generated by rising productivity.
This process has typically been examined not as a mechanism of economic adjustment, but through a critical or moralizing lens. From Thorstein Veblen's account of conspicuous consumption, which treats surplus-supported activity primarily as a vehicle for status competition, to Max Weber's analysis of how moral and religious worldviews legitimate economic behavior, scholars have often emphasized the symbolic and ideological dimensions of non-essential work. Herbert Marcuse pushed this line of thinking further, arguing that capitalist societies manufacture "false needs" to absorb surplus and assure the continuation of power imbalances. These perspectives offer real insight: uses of surplus are not morally neutral, and new forms of value can be entangled with power, hierarchy, and exclusion.
What they often overlook, however, is the way that legitimating new forms of value can also allow societies to absorb technological change without requiring increases in productivity to be translated immediately into conventional employment or consumption. New and expanded ways of using surplus are, in this sense, a critical economic safety valve during periods of rapid change.
Skilled Labor Has Been Here Before
Fears that artificial intelligence is uniquely threatening simply because it reaches into professional or cognitive domains rest on a mistaken historical premise. Episodes of large-scale technological displacement have rarely spared skilled or high-paid forms of labor; often, such work has been among the first affected. The mechanization of craft production in the nineteenth century displaced skilled cobblers, coopers, and blacksmiths, replacing independent artisans with factory systems that required fewer skills, paid lower wages, and offered less autonomy even as new skilled jobs arose elsewhere. These changes were disruptive but they were absorbed largely through falling prices, rising consumption, and new patterns of employment. They did not require societies to reconsider what kinds of activity were worthy uses of surplus: the same things were still produced, just at scale.
Other episodes are more revealing for present purposes. Sometimes, social change has unsettled not just particular occupations but entire regimes through which uses of surplus become legitimate. In medieval Europe, the Church was one of the largest economic institutions just about everywhere, and clerical and quasi-clerical roles like Langland's offered recognized paths to education, security, status, and even wealth. When those shared beliefs fractured, the Church's economic role contracted sharply—not because productivity gains ceased but because its claim on so large a share of surplus lost legitimacy.
To date, artificial intelligence has not produced large-scale job displacement, and the limited disruptions that have occurred have largely been absorbed through familiar adjustment mechanisms. But if AI systems begin to substitute for work whose value is justified less by necessity than by judgment or cultural recognition, the more relevant historical analogue may be less the mechanization of craft than the narrowing or collapse of earlier surplus regimes. The central question such technologies raise is not whether skilled labor can be displaced or whether large-scale displacement is possible—both have occurred repeatedly in the historical record—but how quickly societies can renegotiate which activities they are prepared to treat as legitimate uses of surplus when change arrives at unusual speed.
Time Compression and Its Stakes
In this respect, artificial intelligence does appear unusual. Generative AI tools such as ChatGPT have diffused through society at a pace far faster than most earlier general-purpose technologies. ChatGPT was widely reported to have reached roughly 100 million users within two months of its public release, and similar tools have shown comparably rapid uptake.
That compression matters. Much surplus has historically flowed through familiar institutions—universities, churches, museums, and other cultural bodies—that legitimate activities whose value lies in learning, spiritual rewards, or meaning rather than immediate output. Yet such institutions are not fixed. Periods of rapid technological change often place them under strain (something evident today for many), exposing disagreements about purpose and authority. Under these conditions, experimentation with new forms of surplus becomes more important, not less. Most proposed new forms of value fail, and attempts to predict which will succeed have a poor historical record—from the South Sea Bubble to more recent efforts to anoint digital assets like NFTs as durable sources of wealth. Experimentation is not a guarantee of success; it is a hedge. Not all claims on surplus are benign, and waste is not harmless. But when technological change moves faster than institutional consensus, the greater danger often lies not in tolerating too many experiments, but in foreclosing them too quickly.
Artificial intelligence does not require discarding all existing theories of change. What sets modern times apart is the speed with which new capabilities become widespread, shortening the interval in which those judgments are formed. In this context, surplus that once supported meaningful, if unconventional, work may instead be captured by grifters, legally barred from legitimacy (by, say, outlawing a new art form), or funneled into bubbles. The risk is not waste alone, but the erosion of the cultural and institutional buffers that make adaptation possible.
The challenge for policymakers is not to pre-ordain which new forms of value deserve support but to protect the space in which judgment can evolve. They need to realize that they simply cannot make the world entirely safe, legible, and predictable: whether they fear technology overall or simply seek to shape it in the "right" way, they will not be able to predict the future. That means tolerating ambiguity and accepting that many experiments will fail with negative consequences. In this context, broader social barriers that prevent innovation in any field (professional licensing, limits on free expression, overly zealous IP laws, regulatory bars on the entry of small firms) deserve a great deal of scrutiny. Even if the particular barriers in question have nothing to do with AI itself, they may retard the development of the surplus sinks necessary to economic adjustment. In a period of compressed adjustment, the capacity to let surplus breathe and value be contested may well determine whether economies bend or break.
Eli Lehrer is the President of the R Street Institute.
Peter Mandelson—the former UK cabinet minister who was just sacked as Britain's ambassador to the United States over newly revealed emails with Jeffrey Epstein—has found a novel way to avoid answering questions about why he told a convicted sex offender "your friends stay with you and love you" and urged him to "fight for early release." He got the UK press regulator to send a memo to all UK media essentially telling them to leave him alone.
The National published what they describe as the "secret notice" that went out:
CONFIDENTIAL - STRICTLY NOT FOR PUBLICATION: Ipso has asked us to circulate the following advisory:
Ipso has today been contacted by a representative acting on behalf of Peter Mandelson.
Mr Mandelson's representatives state that he does not wish to speak to the media at this time. He requests that the press do not take photos or film, approach, or contact him via phone, email, or in-person. His representatives ask that any requests for his comment are directed to [REDACTED]
We are happy to make editors aware of his request. We note the terms of Clause 2 (Privacy) and 3 (Harassment) of the Editors' Code, and in particular that Clause 3 states that journalists must not persist in questioning, telephoning, pursuing or photographing individuals once asked to desist, unless justified in the public interest.
Clauses 2 and 3 of the UK Editors' Code—the privacy and harassment provisions—exist primarily to protect genuinely vulnerable people from press intrusion. Grieving families. Crime victims. People suffering genuine harassment.
Mandelson is invoking them to avoid answering questions about his documented friendship with one of history's most notorious pedophiles—a friendship so extensive and problematic that it just cost him his job as ambassador to the United States, days before a presidential state visit.
According to Politico, the UK Foreign Office withdrew Mandelson "with immediate effect" after emails showed the relationship was far deeper than previously known:
In a statement the U.K. Foreign Office said Mandelson had been withdrawn as ambassador "with immediate effect" after emails showed "the depth and extent" of his relationship with Epstein was "materially different from that known at the time of his appointment."
"In particular Peter Mandelson's suggestion that Jeffrey Epstein's first conviction was wrongful and should be challenged is new information," the statement added.
So we have a senior political figure who just got fired over revelations that he told a convicted sex offender his prosecution was "wrongful" and should be challenged, who maintained this friendship for years longer than he'd admitted, and his response is to invoke press harassment protections?
The notice does include the important qualifier "unless justified in the public interest." And it's hard to imagine a clearer case of public interest: a senior diplomat, just sacked from his post over previously undisclosed communications with a convicted pedophile, in which he expressed support for challenging that pedophile's conviction. If that's not public interest, the term has no meaning.
But the mere act of circulating this notice creates a chilling effect. It puts journalists on notice that pursuing this story could result in complaints to the regulator. It's using the machinery of press regulation as a shield against legitimate accountability journalism.
Now, to be fair, one could imagine scenarios where even a disgraced public figure might legitimately invoke harassment protections—it wasn't that long ago there was a whole scandal in the UK with journalists hacking the voicemails of famous people. But that's not what's happening here. Mandelson is invoking these provisions to avoid being asked questions at all. "Please don't inquire about why I told a convicted pedophile his prosecution was wrongful" is not the kind of harm these rules were designed to prevent.
This is who Mandelson has always been: someone who sees regulatory and governmental machinery as tools to be deployed on behalf of whoever he's serving at the moment. Back in 2009, we covered how he returned from a vacation with entertainment industry mogul David Geffen and almost immediately started pushing for aggressive new copyright enforcement measures, including kicking people off the internet for file sharing. As we wrote at the time, he had what we called a "sudden conversion" to Hollywood's position on internet enforcement that happened to coincide suspiciously with his socializing with entertainment industry executives.
Back then, the machinery was deployed to serve entertainment executives who wanted harsher copyright enforcement. Now it's being deployed to serve Mandelson himself.
There's a broader pattern here that goes beyond one UK politician. The Epstein revelations have been remarkable not just for what they've revealed about who associated with him, but for how consistently the response from the powerful has been to deflect, deny, and deploy every available mechanism to avoid genuine accountability. Some have used their media platforms to try to reshape the narrative. Some have simply refused to comment.
Mandelson is trying to use the press regulatory system itself.
It's worth noting that The National chose to publish the "confidential - strictly not for publication" memo anyway, explicitly citing the public interest. Good for them. Because if there's one thing that absolutely serves the public interest, it's shining a light on attempts by the powerful to use systems meant to protect the vulnerable as shields against accountability.
Mandelson's representatives say he "does not wish to speak to the media at this time." That's his right to request—but no media should have to agree to his terms. Weaponizing press regulation to create a cone of silence around questions of obvious public interest is something else entirely. It's elite impunity dressed up in the language of press ethics.
Technically — TECHNICALLY! — we still have a system that relies on three co-equal branches to ensure that any single branch can't steamroll the rest of the system (along with the nation it's supposed to serve) to seize an unequal amount of power.
Technically.
What we're seeing now is something else entirely. The judicial branch is headed by people who are willing to give the executive branch what it wants, so long as the executive branch is headed by the Republican party. The legislative branch — fully compromised by MAGA bootlickers — has decided to simply not do its job, allowing the executive branch to seize even more power. The executive branch is now just a throne for a king — a man who feels he shouldn't have to answer to anyone — not even his voting bloc — so long as he remains in power.
The courts can act as a check against executive overreach. But as we've seen time and time again, that power means nothing if the courts are powerless to enforce it. And that has led to multiple executive officials telling the courts to go fuck themselves when they hand down rulings the administration doesn't like. A current sitting appellate judge, no less, made a name for himself in the Trump administration by demonstrating his contempt for the judicial system he's now an integral part of.
Only good things can come from this! MAGA indeed!
And while this is only one person's retelling of their experience of being caught in the gears of Trump's anti-brown people activities, it's illustrative of how little it matters that there are three co-equal branches when one branch makes it clear on a daily basis that it considers itself to be more equal than the rest. (via Kathleen Clark on Bluesky)
This is from a sworn statement [PDF] in ongoing litigation against the federal government, as told by "O.," a Guatemalan resident of Minnesota who has both a pending asylum application and a Juvenile Status proceeding still underway in the US. None of that mattered to ICE officers, who arrested him in January 2026 and — within 24 hours — shipped him off to a detention center more than a thousand miles from his home.
O. was denied meals, phone access, and access to legal representation; he was stuffed into overcrowded cells and generally mistreated by the government that once might have honestly considered the merits of his asylum application.
But the real dirt is this part of the sworn statement, which again exposes this administration's complete disinterest in adhering to orders from US courts, much less even paying the merest of lip service to rights long considered to be derived from none other than the "Creator" himself.
ICE did not tell me that my attorney had been trying to call me and contact me while I was in Texas. They didn't tell me my attorney Kim, had retained another attorney, Kira Kelley, to file a habeas petition on my behalf, or that a court had granted it and ordered my release. They just kept holding me there and occasionally trying to get me to self-deport.
[…]
I was put in a cold cell where I had to sleep on the bare cement floor. Around 10 in the morning my cellmate asked to speak to an ICE officer. Three officers came into the cell so I had a chance to speak to them too. One officer told me that I "had no chance of returning to Minnesota" and that "the best thing for [me] is self-deportation." She told me that if I fought my case, I would spend two to three more months here in El Paso. She offered me $2600 to self-deport. I refused. I wanted to talk to my attorney. They didn't tell me the judge had already ordered my release and return to Minnesota. If I hadn't managed to talk to my attorney who told me a while back that I was ordered released, I might have given up at this point and signed the self deportation forms because the conditions were so unbearable.
So… you see the problem. A court can order a release. But the court relies on the government to carry out this instruction. If it doesn't, the court likely won't know for days or weeks or months. At that point, a new set of rights abuses will have been inflicted on people who should have been freed. When the government is finally asked to answer for this, it will again engage in a bunch of bluster and obfuscation, forcing the court system to treat the administration like a member of the system of checks and balances even when it's immediately clear the executive branch has no desire to be checked and/or balanced.
While more judges are now treating the executive branch as a hostile force unwilling to behave honestly or recognize restraints on its power, the imbalance continues to shift in the administration's favor, largely because it can engage in abusive acts at scale, while the court is restrained to the cases presented to it.
But if you're outside of the system, you can clearly see what's happening and see what the future holds if one-third of the government refuses to do its job (the GOP-led Congress) and the other third can't handle the tidal wave of abuses being presented to it daily. The executive branch will become a kingdom that fears nothing and answers to no one. But the bigger problem is this: most Americans will see this and understand that this will ultimately destroy democracy. Unfortunately, there's a significant number of voters who actually welcome these developments, figuring it's better to lick the boots of someone who prefers to rule in hell, rather than serve the United States.
Microsoft Office 2021 Professional is the perfect choice for any professional who needs to handle data and documents. It comes with many new features designed to make you more productive at every stage of your work, whether that's processing paperwork or creating presentations from scratch. Office Pro comes with MS Word, Excel, PowerPoint, Outlook, Teams, OneNote, Publisher, and Access. Microsoft Windows 11 Pro is exactly that: an operating system designed with the modern professional in mind. Whether you are a developer who needs a secure platform, an artist seeking a seamless experience, or an entrepreneur needing to stay connected effortlessly, Windows 11 Pro is your solution. The Ultimate Microsoft Office Professional 2021 for Windows + Windows 11 Pro Bundle is on sale for $44.97 for a limited time.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Over the years, we've written approximately one million words explaining why Section 230 of the Communications Decency Act is essential to how the internet functions. We've corrected politicians who lie about it. We've debunked myths spread by mainstream media outlets that should know better. We've explained, re-explained, and then explained again why gutting this law would be catastrophic for online speech.
And now I find myself in the somewhat surreal position of saying: you know who nailed this explanation better than most policy experts, pundits, and certainly better than any sitting member of Congress? A YouTuber named Cr1TiKaL.
If you're not familiar with Charles "Cr1TiKaL" White Jr., he runs the penguinz0 YouTube channel with nearly 18 million subscribers and over 12 billion total views. He's known for deadpan commentary on internet culture and video games. He's not a policy wonk. He's not a lawyer. He's just a guy who apparently bothered to actually understand what Section 230 says and does—something that puts him leagues ahead of the United States Congress.
In this 13-minute video responding to actor Joseph Gordon-Levitt's call to "sunset" Section 230, Cr1TiKaL laid out the case for why 230 matters with a clarity that most mainstream coverage hasn't managed in a decade:
Dismantling section 230 would fundamentally change the internet as you know it. And that's not an exaggeration to say it. Put it even more simply, section 230 allows goobers like me to post whatever they want, saying whatever they want, and the platform itself is not liable for whatever I've made or said.
That is on me personally.
The platform isn't going to be, you know, fucking dragged through the streets with legs spread like a goddamn Thanksgiving turkey for it and getting blasted by lawsuits or whatever. Now, of course, there are limitations in place when it comes to illegal content, things that actually break the law. That is, of course, a very different set of circumstances. That's a different can of worms, and that's handled differently. But it should be obvious why section 230 is so important because if these platforms were held liable for every single thing people post on their platforms, they would get into a lot of hot water and they would just not allow people to post things. Full stop. because it would be too dangerous to do so. They would need to micromanage and control every single thing that hits the platform in order to protect themselves. No matter how you spin it, this would ruin the internet. It's a pile of dogshit. No matter how much perfume gets sprayed on it or how they want to repackage it, it still stinks.
Yes, the metaphors are colorful. But the underlying point is exactly correct. Section 230 places liability where it belongs: on the person who actually created the content. Not on the platform that hosts it. This is how the entire internet works. Every comment section, every social media post, every forum—all of it depends on this basic principle.
Also, he actually reads the 26 words in the video! This is something that so many other critics of 230 skip over, because then they can pretend it says things it doesn't say.
And unlike the politicians who keep pretending this is some kind of special gift to "Big Tech," Cr1TiKaL correctly notes that 230 protects everyone:
This would affect literally every platform that has anything user submitted in any capacity at all.
Every. Single. One. Your local newspaper's comment section. The neighborhood Facebook group. The subreddit for your favorite hobby. The Discord server where you talk about video games. The email you forward. All of it.
He's also refreshingly clear-eyed about why politicians from both parties keep attacking 230:
Since the advent of the internet, section 230 has been a target for people that want to control your speech and infringe on your First Amendment rights.
This observation tracks with what we've pointed out repeatedly: the bipartisan hatred of Section 230 is one of the most remarkable examples of political unity in modern American governance—and it's driven largely by politicians who want platforms to moderate content in ways that favor their particular political preferences.
Democrats have attacked 230 claiming it enables "misinformation" and hate speech. Republicans have attacked it claiming it enables "censorship" of conservative voices. Both cannot simultaneously be true, and yet both parties have introduced legislation to gut the law. Cr1TiKaL captures this perfectly:
When Democrats were in charge, it caught a lot of scrutiny, claiming that it was enabling the spread of racism and harming children. With Republicans in power, they're claiming that it's spreading misinformation and anti-semitism. This is a bipartisan punching bag that they desperately want to just beat down.
The critics always trot out the same tired arguments about algorithms and echo chambers and extremism. As if removing 230 would somehow make speech better rather than making it disappear entirely or become heavily controlled by whoever has the most money and lawyers. Cr1TiKaL cuts right through this:
There are people that are paying a lot of money to try and plant this idea in your brain that section 230 is a bad thing. It only leads to things like extremism and conspiracy theories and demonization and that kind of thing. That's not true.
Anyone who stops and thinks about this for even just a moment, firing on a few neurons, should be able to recognize how outrageous this proposal is. How would shutting down conversation and shutting down the ability to express thoughts and opinions somehow help combat the rise of extremism and conspiracies? that would only exacerbate the problem. Censorship doesn't solve these issues. It makes them worse.
He even anticipates the point we've made countless times about what the internet would look like without 230:
Platforms would not allow just completely unfiltered usage of normal people expressing their thoughts because those thoughts might go against the official narrative from the curated source and then the curated source might go after the platform saying this is defamatory. These people have just said something hosted on your platform and we're coming after you with lawsuits. So they just wouldn't allow it.
This is a point we keep repeating, and one you never hear in the actual policy debates, because supporters of a 230 repeal have no answer for it beyond "nuh-uh."
The people who most want to control online speech are exactly the people you'd expect: governments and powerful interests who don't like being criticized. Section 230 is one of the things standing in their way.
And when critics inevitably dust off the "think of the children" argument, Cr1TiKaL delivers the response that shouldn't be controversial but apparently is:
Be a parent. It is not the internet's job to cater to your lack of parenting by just letting your kid online. Fucking lazy trash ass parents just sit a kid in front of a computer or an iPad and then are stunned when apparently they find bad shit. Be a parent. Be involved in your kids' life. Raise your children. Don't make it the internet's job to do that for you.
Is this delivered with the diplomatic nuance of a congressional hearing? No. Is it correct? Absolutely. The "protect the children" argument for dismantling 230 has always been a dodge—a way to make opponents of such bills seem heartless while ignoring that Section 230 doesn't protect illegal content, and that maybe, just maybe, the primary responsibility for what media children consume should rest with the adults responsible for those children.
We've been writing about Section 230 for years, trying to explain to policymakers and the general public why it matters. And most of the time, it feels like shouting into the void. Politicians keep lying about it. Journalists keep getting it wrong. The mythology around 230 persists no matter how many times it gets corrected.
And we've heard from plenty of younger people who now believe that 230 is bad. I recently guest taught a college class where students were split into two groups—one to argue in favor of 230 and one against—and I was genuinely dismayed when the group told to argue in favor of 230 argued that 230 "once made sense" but doesn't anymore.
So there's something genuinely hopeful about seeing a young creator with an audience of nearly 18 million people—an audience that skews young and is probably not spending a lot of time reading policy papers—get it right. Not just right in a general sense, but right in the specifics. He read the law. He understood what it does. He correctly identified why it matters and who benefits from dismantling it.
Maybe the generation that grew up on the internet actually understands what's at stake when politicians threaten to fundamentally reshape how it works. Maybe they're not buying the moral panic narratives that have been trotted out to justify every bad piece of tech legislation for the past decade.
Or maybe I'm being optimistic. Either way, Cr1TiKaL's video is worth watching. It's profane, it's casual, and it's more correct about Section 230 than anything you'll hear from the halls of Congress.
We told you this was coming months ago.
The Trump Department of Justice (DOJ) says it has initiated a broad investigation of Netflix's business practices and its planned $82.7 billion merger with Warner Brothers. The Trump DOJ's pretense is that they're just suddenly really concerned about media consolidation and monopoly power (you're to ignore the U.S. right wing's generational and indisputable quest to coddle and protect monopoly power across telecom, energy, air travel, banking, and countless other industries):
"Questioning how Netflix competes with rivals suggests the department is looking at whether its planned Warner deal could entrench its market power, or lead to a monopoly in the future. U.S. law gives enforcers broad power to oppose mergers that could lead to a monopoly."
In reality, the Trump administration has made it extremely clear they're hoping to scuttle the Netflix deal to help Larry Ellison acquire Warner Brothers, CNN, and HBO. If they can't kill the deal, they aspire to at least leverage the merger approval process to force Netflix executives to further debase themselves before the Trump administration, which I suspect they'll all be happy to do.
It's part of a longstanding trend by Trumpism to pretend that they're engaged in populist antitrust reform, claims historically propped up by a long list of useful idiots across the partisan spectrum, and parroted by a growing coalition of right wing propaganda outlets. This bogus populism helps obfuscate what's really just some of the worst corruption America has ever seen (which is really saying something).
The original (paywalled) Wall Street Journal report (and this aggregated Reuters recap) dutifully help sell the claim that the DOJ is also "investigating" Ellison's Paramount/Skydance, whose Warner Brothers acquisition bid was repeatedly rejected by the Warner board over worries about dodgy financing and Saudi money involvement:
"The WSJ reported that the DOJ is also reviewing Paramount's proposed acquisition bid, which Warner Bros' board unanimously rejected by labeling it "inadequate" and "not in the best interests" of shareholders."
The outlets fail to remind you that there is ample reporting on how Larry Ellison and Trump have held extensive meetings discussing who Ellison would fire on Trump's behalf should he take control of CNN. They also fail to remind you that the right wing "press," with Trump's help, has been engaged in a broad effort to undermine Netflix's merger chances using false claims.
After Warner Brothers balked at Larry's competing bid and a hostile takeover attempt, Larry tried to sue Warner Brothers. With that not going anywhere, Larry, MAGA, and the Heritage Foundation (of Project 2025 fame) have since joined forces to try and attack the Netflix merger across right wing media, falsely claiming that "woke" Netflix is attempting a "cultural takeover" that must be stopped for the good of humanity:
More recently, that included scripted questions provided by the Heritage Foundation at a Congressional hearing, where lawmakers like Republican Senator Josh Hawley resorted to bogus trans panic attacks to try and paint Netflix as some sort of vile leftist cabal.
As we keep noting, ideally a functional regulator would block all additional media consolidation, since these megadeals are consistently terrible for labor, consumers, and product quality (see: Warner Brothers' entire corporate history since 2000).
That's clearly not happening under a Trump administration that has lobotomized all key regulators. So, while not great, Netflix acquiring Warner Brothers is probably the best of a bunch of bad options. It's notably better than furthering Larry Ellison's obvious plan to gobble up CBS, TikTok, and CNN, and turn what's left of America's already dodgy corporate media into Hungary-esque state television that lavishes hollow praise on our mad idiot king.
Because we've already let media consolidation run amok (thanks to the Trump administration's attack on bipartisan media consolidation limits), our shitty corporate press is incapable of explaining to the public that the Trump DOJ inquiry into Netflix isn't being conducted in good faith. It's a perfect circle of corruption, greed, and regulatory capture that will ramp up in the weeks to come.
EFF is against age gating and age verification mandates, and we hope to win in getting existing ones overturned and new ones blocked. But mandates are already in effect, and every day many people are asked to verify their age across the web, despite prominent cases of sensitive data getting leaked in the process.
At some point, you may have been faced with the decision yourself: should I continue to use this service if I have to verify my age? And if so, how can I do that with the least risk to my personal information? This is our guide to navigating those decisions, with information on what questions to ask about the age verification options you're presented with, and answers to those questions for some of the most popular social media sites. Even though there's no way to implement mandated age gates in a way that fully protects speech and privacy rights, our goal here is to help you minimize the infringement of your rights as you manage this awful situation.
Follow the Data
Since we know that leaks happen despite the best efforts of software engineers, we generally recommend submitting the absolute least amount of data possible. Unfortunately, that's not going to be possible for everyone. Even facial age estimation solutions where pictures of your face never leave your device, offering some protection against data leakage, are not a good option for all users: facial age estimation works less well for people of color, trans and nonbinary people, and people with disabilities. There are some systems that use fancy cryptography so that a digital ID saved to your device won't tell the website anything more than whether you meet the age requirement, but access to that digital ID isn't available to everyone or on all platforms. You may also not want to register for a digital ID and save it to your phone if you don't want to risk all the information on it being exposed at the request of an over-zealous verifier, or if you simply don't want to be part of a digital ID system.
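To make that "fancy cryptography" idea concrete, here is a deliberately simplified sketch of the data-minimization principle those systems rely on, not any vendor's or government's actual protocol: an issuer that already knows your birthdate signs a token carrying nothing but an over-18 bit, and the website checks the signature without ever seeing who you are or when you were born. Real digital ID schemes use far more sophisticated selective-disclosure and zero-knowledge techniques (and must handle revocation, replay, and linkability); every name below is our own invention.

```python
# Toy illustration only: a signed claim that carries an over-18 bit and
# nothing else, so the verifying website never learns your birthdate.
# Requires the 'cryptography' package (pip install cryptography).
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Issuer side (think: the authority backing the ID wallet) ---
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()  # distributed to websites

def issue_age_token(is_over_18: bool) -> tuple[bytes, bytes]:
    # The issuer computes the predicate from the birthdate it already holds;
    # the token itself contains only the yes/no answer.
    claim = json.dumps({"over_18": is_over_18}).encode()
    return claim, issuer_key.sign(claim)

# --- Website side ---
def verify_age_token(claim: bytes, signature: bytes) -> bool:
    try:
        issuer_public_key.verify(signature, claim)  # raises if tampered with
    except InvalidSignature:
        return False
    return bool(json.loads(claim).get("over_18", False))

claim, sig = issue_age_token(is_over_18=True)
print(verify_age_token(claim, sig))  # True, and no birthdate ever crossed the wire
```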
If you're given the option of selecting a verification method and are deciding which to use, we recommend considering the following questions for each process allowed by each vendor:
- Data: What info does each method require?
- Access: Who can see the data during the course of the verification process?
- Retention: Who will hold onto that data after the verification process, and for how long?
- Audits: How sure are we that the stated claims will happen in practice? For example, are there external audits confirming that data is not accidentally leaked to another site along the way? Ideally these will be in-depth, security-focused audits by specialized auditors like NCC Group or Trail of Bits, instead of audits that merely certify adherence to standards.
- Visibility: Who will be aware that you're attempting to verify your age, and will they know which platform you're trying to verify for?
We attempt to provide answers to these questions below. To begin, there are two major factors to consider when answering these questions: the tools each platform uses, and the overall system those tools are part of.
In general, most platforms offer age estimation options like face scans as a first line of age assurance. These vary in intrusiveness, but their main problem is inaccuracy, particularly for marginalized users. Third-party age verification vendors Private ID and k-ID offer on-device facial age estimation, but another common vendor, Yoti, sends the image to their servers during age checks by some of the biggest platforms. This risks leaking the images themselves, and also the fact that you're using that particular website, to the third party.
Then there are the document-based verification services, which require you to submit a hard identifier like a government-issued ID. This method thus requires you to prove both your age and your identity. A platform can do this in-house through a designated dataflow, or by sending that data to a third party. We've already seen examples of how this can fail. For example, Discord routed users' ID data through its general customer service workflow so that a third-party vendor could perform manual review of verification appeals. No one involved ever deleted users' data, so when the system was breached, Discord had to apologize for the catastrophic disclosure of nearly 70,000 photos of users' ID documents. Overly long retention periods expose documents to the risk of breaches and historical data requests, and some document verifiers have retention periods that are needlessly long. This is the case with Incode, which provides ID verification for TikTok. Incode holds onto images forever by default, though TikTok should automatically start the deletion process on your behalf.
Some platforms offer alternatives, like proving that you own a credit card, or asking for your email to check if it appears in databases associated with adulthood (like home mortgage databases). These tend to involve less risk when it comes to the sensitivity of the data itself, especially since credit cards can be replaced, but in general still undermine anonymity and pseudonymity and pose a risk of tracking your online activity. We'd prefer to see more assurances across the board about how information is handled.
Each site offers users a menu of age assurance options to choose from. We've chosen to present these options in the rough order that we expect most people to prefer. Jump directly to a platform to learn more about its age checks:
- Meta - Facebook, Instagram, WhatsApp, Messenger, Threads
- Google - Gmail, YouTube
- TikTok
- Everywhere Else
Meta - Facebook, Instagram, WhatsApp, Messenger, Threads
Inferred Age
If Meta can guess your age, you may never even see an age verification screen. Meta, which runs Facebook, Threads, Instagram, Messenger, and WhatsApp, first tries to use information you've posted to guess your age, like looking at "Happy birthday!" messages. It's a creepy reminder that they already have quite a lot of information about you.
If Meta cannot guess your age, or if Meta infers you're too young, it will next ask you to verify your age using either facial age estimation, or by uploading your photo ID.
Face Scan
If you choose to use facial age estimation, you'll be sent to Yoti, a third-party verification service. Your photo will be uploaded to their servers during this process. Yoti claims that "as soon as an age has been estimated, the facial image is immediately and permanently deleted." Though it's not as good as not having that data in the first place, Yoti's security measures include a bug bounty program and annual penetration testing. Researchers from Mint Secure found that Yoti's app and website are filled with trackers, so the fact that you're verifying your age could be shared not only with Yoti, but leaked to third-party data brokers as well.
You may not want to use this option if you're worried about third parties potentially being able to know you're trying to verify your age with Meta. You also might not want to use this if you're worried about a current picture of your face accidentally leaking—for example, if elements in the background of your selfie might reveal your current location. On the other hand, if you consider a selfie to be less sensitive than a photograph of your ID, this option might be better. If you do choose (or are forced) to use the face check system, be sure to snap your selfie without anything in the background that could identify your location or embarrass you in case the image leaks.
Upload ID
If Yoti's age estimation decides your face looks too young, or if you opt out of facial age estimation, your next recourse is to send Meta a photo of your ID. Meta sends that photo to Yoti to verify the ID. Meta says it will hold onto that ID image for 30 days, then delete it. Meanwhile, Yoti claims it will delete the image immediately after verification. Of course, bugs and process oversights exist, such as accidentally replicating information in logs or support queues, but at least they have stated processes. Your ID contains sensitive information such as your full legal name and home address. Using this option not only runs the (hopefully small, but never nonexistent) risk of that data getting leaked through errors or hacking, but it also lets Meta see the information needed to tie your profile to your identity—which you may not want. If you don't want Meta to know your name and where you live, or don't want to rely on both Meta and Yoti to keep their deletion promises, this option may not be right for you.
Google - Gmail, YouTube
Inferred Age
If Google can guess your age, you may never even see an age verification screen. Your Google account is typically connected to your YouTube account, so if (like mine) your YouTube account is old enough to vote, you may not need to verify your Google account at all. Google first uses information it already knows to try to guess your age, like how long you've had the account and your YouTube viewing habits. It's yet another creepy reminder of how much information these corporations have on you, but at least in this case they aren't likely to ask for even more identifying data.
If Google cannot guess your age, or decides you're too young, Google will next ask you to verify your age. You'll be given a variety of options for how to do so, with availability that will depend on your location and your age.
Google's methods to assure your age include ID verification, facial age estimation, verification by proxy, and digital ID. To prove you're over 18, you may be able to use facial age estimation, give Google your credit card information, or tell a third-party provider your email address.
Face Scan
If you choose to use facial age estimation, you'll be sent to a website run by Private ID, a third-party verification service. The website will load Private ID's verifier within the page—this means that your selfie will be checked without any images leaving your device. If the system decides you're over 18, it will let Google know that, and only that. Of course, no technology is perfect—should Private ID be compelled to target you specifically, there's nothing to stop it from sending down code that does in fact upload your image, and you probably won't notice. But unless your threat model includes being specifically targeted by a state actor or by Private ID itself, that's unlikely to be something you need to worry about. For most people, no one else will see your image during this process. Private ID will, however, be told that your device is trying to verify your age with Google, and Google will still find out if Private ID thinks that you're under 18.
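In rough outline, the on-device pattern described above looks something like the following sketch. This is our own hypothetical mock-up, not Private ID's code: the point is simply that the image is analyzed locally and only a yes/no verdict is ever reported.

```python
# Hypothetical mock-up of on-device age estimation: the selfie is analyzed
# locally, and only a boolean verdict would be transmitted to the platform.

def estimate_age_locally(image_bytes: bytes) -> float:
    # Stand-in for the ML model a real verifier bundles and runs entirely
    # on your device; here we just return a fixed dummy estimate.
    return 34.0

def build_verification_result(image_bytes: bytes, threshold: int = 18) -> dict:
    age_estimate = estimate_age_locally(image_bytes)
    # Note what is (and isn't) in the result: a verdict, not the selfie,
    # and not even the estimated age itself.
    return {"meets_threshold": age_estimate >= threshold}

if __name__ == "__main__":
    selfie = b"\x89PNG..."  # any bytes will do; they never leave this process
    print(build_verification_result(selfie))  # {'meets_threshold': True}
```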
If Private ID's age estimation decides your face looks too young, you may next be able to decide if you'd rather let Google verify your age by giving it your credit card information, photo ID, or digital ID, or by letting Google send your email address to a third-party verifier.
Email Usage
If you choose to provide your email address, Google sends it on to a company called VerifyMy. VerifyMy will use your email address to see if you've done things like getting a mortgage or paying for utilities using that email address. If you use Gmail as your email provider, this may be a privacy-protective option with respect to Google, as Google will then already know the email address associated with the account. But it does tell VerifyMy and its third-party partners that the person behind this email address is looking to verify their age, which you may not want them to know. VerifyMy uses "proprietary algorithms and external data sources" that involve sending your email address to "trusted third parties, such as data aggregators." It claims to "ensure that such third parties are contractually bound to meet these requirements," but you'll have to trust it on that one—we haven't seen any mention of who those parties are, so you'll have no way to check up on their practices and security. On the bright side, VerifyMy and its partners do claim to delete your information as soon as the check is completed.
Credit Card Verification
If you choose to let Google use your credit card information, you'll be asked to set up a Google Payments account. Note that debit cards won't be accepted, since debit cards are much more easily issued to people under 18. Google will then charge a small amount to the card, and refund it once the charge goes through. If you choose this method, you'll have to tell Google your credit card info, but the fact that it's done through Google Payments (their regular card-processing system) means that at least your credit card information won't be sitting around in some unsecured system. Even if your credit card information happens to be accidentally leaked, this is a relatively low-risk option, since credit cards come with solid fraud protection. If your credit card info gets leaked, you should easily be able to dispute fraudulent charges and replace the card.
Digital ID
In some regions, you'll be given the option to use a digital ID to verify your age with Google. In some cases, a digital ID makes it possible to reveal only your age information. If you're given that choice, it can be a good privacy-preserving option. Depending on the implementation, there's a chance that the verification step will "phone home" to the ID provider (usually a government) to let them know the service asked for your age. It's a complicated and varied topic that you can learn more about by visiting EFF's page on digital identity.
Upload ID
Should none of these options work for you, your final recourse is to send Google a photo of your ID. Here, you'll be asked to take a photo of an acceptable ID and send it to Google. Though the help page only states that your ID "will be stored securely," the verification process page says ID "will be deleted after your date of birth is successfully verified." Acceptable IDs vary by country, but are generally government-issued photo IDs. We like that it's deleted immediately, though we have questions about what Google means when it says your ID will be used to "improve [its] verification services for Google products and protect against fraud and abuse." No system is perfect, and we can only hope that Google schedules outside audits regularly.
TikTok
Inferred Age
If TikTok can guess your age, you may never even see an age verification notification. TikTok first tries to use information you've posted to estimate your age, looking through your videos and photos to analyze your face and listen to your voice. TikTok treats your uploading of any videos as consent for it to try to guess how old you look and sound.
If TikTok decides you're too young, appeal the decision before the deadline passes. If TikTok cannot guess your age, or decides you're too young, it will automatically revoke your access based on age—including either restricting features or deleting your account. To get your access and account back, you'll have a limited amount of time to verify your age. As soon as you see the notification that your account is restricted, you'll want to act fast, because in some places you'll have as little as 23 days before the deadline passes.
When you get that notification, you're given various options to verify your age based on your location.
Face Scan
If you're given the option to use facial age estimation, you'll be sent to Yoti, a third-party verification service. Your photo will be uploaded to their servers during this process. Yoti claims that "as soon as an age has been estimated, the facial image is immediately and permanently deleted." Though it's not as good as not having that data in the first place, Yoti's security measures include a bug bounty program and annual penetration testing. However, researchers from Mint Secure found that Yoti's app and website are filled with trackers, so the fact that you're verifying your age could be leaked not only to Yoti, but to third-party data brokers as well.
You may not want to use this option if you're worried about third parties potentially being able to know you're trying to verify your age with TikTok. You also might not want to use this if you're worried about a current picture of your face accidentally leaking—for example, if elements in the background of your selfie might reveal your current location. On the other hand, if you consider a selfie to be less sensitive than a photograph of your ID or your credit card information, this option might be better. If you do choose (or are forced) to use the face check system, be sure to snap your selfie without anything in the background that could identify your location or embarrass you in case the image leaks.
Credit Card Verification
If you have a credit card in your name, TikTok will accept that as proof that you're over 18. Note that debit cards won't be accepted, since debit cards are much more easily issued to people under 18. TikTok will charge a small amount to the credit card, and refund it once the charge goes through. It's unclear if this goes through their regular payment process, or if your credit card information will be sent through and stored in a separate, less secure system. Luckily, these days credit cards come with solid fraud protection, so if your credit card information gets leaked, you should easily be able to dispute fraudulent charges and replace the card. That said, we'd rather TikTok provide assurances that the information will be processed securely.
Credit Card Verification of a Parent or Guardian
Sometimes, if you're between 13 and 17, you'll be given the option to let your parent or guardian confirm your age. You'll give TikTok their email address, and TikTok will send your parent or guardian an email asking them (a) to confirm your date of birth, and (b) to verify their own age by proving that they own a valid credit card. This option doesn't always seem to be offered, and in the one case we could find, it's possible that TikTok never followed up with the parent. So it's unclear how, or if, TikTok verifies that the adult whose email you provide is your parent or guardian. If you want to use credit card verification but you're not old enough to have a credit card, and you're OK with letting an adult know you use TikTok, this option may be reasonable to try.
Photo with a Random Adult?
Bizarrely, if you're between 13 and 17, TikTok claims to offer the option to take a photo with literally any random adult to confirm your age. Its help page says that any trusted adult over 25 can be chosen, as long as they're holding a piece of paper with the code on it that TikTok provides. It also mentions that a third-party provider is used here, but doesn't say which one. We haven't found any evidence of this verification method being offered. Please do let us know if you've used this method to verify your age on TikTok!
Photo ID and Face Comparison
If you aren't offered or have failed the other options, you'll have to verify your age by submitting a copy of your ID and a matching photo of your face. You'll be sent to Incode, a third-party verification service. In a disappointing failure to meet the industry standard, Incode itself doesn't automatically delete the data you give it once the process is complete, but TikTok does claim to "start the process to delete the information you submitted," which should include telling Incode to delete your data once the process is done. If you want to be sure, you can ask Incode to delete that data yourself. Incode tells TikTok that you met the age threshold without providing your exact date of birth, but then TikTok wants to know the exact date anyway, so it'll ask for your date of birth even after your age has been verified.
TikTok itself might not see your actual ID depending on its implementation choices, but Incode will. Your ID contains sensitive information such as your full legal name and home address. Using this option runs the (hopefully small, but never nonexistent) risk of that data getting accidentally leaked through errors or hacking. If you don't want TikTok or Incode to know your name, what you look like, and where you live—or if you don't want to rely on both TikTok and Incode to keep to their deletion promises—then this option may not be right for you.
Everywhere Else
We've covered the major providers here, but age verification is unfortunately being required of many other services that you might use as well. While the providers and processes may vary, the same general principles will apply. If you're trying to choose what information to provide to continue to use a service, consider the "follow the data" questions mentioned above, and try to find out how the company will store and process the data you give it. The less sensitive the information, the fewer the people with access to it, and the more quickly it's deleted, the better. You may even come to recognize popular names in the age verification industry: Spotify and OnlyFans use Yoti (just like Meta and TikTok), Quora and Discord use k-ID, and so on.
Unfortunately, it should be clear by now that none of the age verification options are perfect in terms of protecting information, providing access to everyone, and safely handling sensitive data. That's just one of the reasons that EFF is against age-gating mandates, and is working to stop and overturn them across the United States and around the world.
Republished from the EFF's Deeplinks blog.
Border Patrol commander Greg Bovino has been sent back to the border after making himself the Nazi scum face of the Trump administration's brutal efforts to purge this country of as many non-white people as possible.
Bovino made it clear what team he really wanted to play for before Trump was even sworn in for the second time. After Trump's election win (but before Trump actually took office), Bovino self-authorized an expansive anti-migrant operation without bothering to check in with DHS leadership to make sure he was cleared to do this.
Trump is always capable of recognizing opportunistic thugs whose dark hearts are as corroded as his own. Bovino was swiftly elevated to an unappointed position as the nominal head of Trump's many inland invasions of cities run by the opposing political party. Bovino embraced the role of shitheel thug, leading directly to court orders that attempted to restrain his brutal actions. Bovino appeared willing to ignore most court orders he was hit with, escalating his brutality and his public contempt for not only the court orders but the judges themselves, whom he insulted in public statements to journalists.
After two murders in three weeks, the Trump administration started to realize it had lost the "hearts and minds" battle with most US citizens and residents. While ICE operations continue to be indistinguishable from kidnapping and the DHS is still ambushing migrants attempting to follow the terms of their supervised release agreements, Bovino has become the now-unacceptable personification of the administration's bigoted war on migrants.
Bovino has been sent back down to the minors, so to speak. He's been removed from high-profile surges in Chicago and Minneapolis and remanded to his former patrol area, much, much closer to the US border, where there's nearly no immigration activity happening thanks to the ongoing war on migrants.
Insubordination is fine as long as it doesn't create friction Trump may have to eventually deal with. Bovino, however, is just as incapable of picking his battles as the president himself. Too many cocks spoil the broth, as the saying (almost) goes.
Thanks to a leaked email shared with NBC, we now know more about Bovino's resistance to anyone anywhere who attempted to tell him what to do.
Bovino wanted to conduct large-scale immigration sweeps during an operation in Chicago in September, but the acting director of Immigration and Customs Enforcement, Todd Lyons, told him the focus was to conduct "targeted operations," arresting only people known to federal agents ahead of time for their violations of immigration law or other laws, according to the correspondence.
"Mr. Lyons seemed intent that CBP conduct targeted operations for at least two weeks before transitioning to full scale immigration enforcement," Bovino wrote in an email to Department of Homeland Security leaders in Washington, referring to Customs and Border Protection, which oversees Border Patrol agents. "I declined his suggestion. We ended the conversation shortly thereafter."
Keep in mind that Bovino is a Border Patrol commander who was working nowhere near the border. Also, keep in mind that ICE is the lead agency in any immigration enforcement efforts because… well, it's in the name: Immigration and Customs Enforcement. This is Bovino not only giving the finger to the chain of command, but also insisting his agency (along with the CBP) take the lead in Midwestern apprehensions, despite neither agency having much in the way of training for inland operations.
Speaking of chain of command, the commander of an agency that's a component of the DHS made it clear he believed he didn't have to answer to the DHS either, as Leigh Kimmons reports in their article for the Daily Beast:
The email also revealed a rather bizarre chain of command, with Bovino saying he reported to Noem's aide, Corey Lewandowski, and appearing to defy Lyons' authority. "Mr. Lyons said he was in charge, and I corrected him saying I report to Corey Lewandowski," Bovino reportedly said of the unpaid special government employee.
This email makes one thing perfectly clear: Bovino appeared to believe he answered to no one. And he would only "report" to people he felt wouldn't push back against his confrontational, rights-violating efforts. This might never have become a problem, but Bovino consistently crossed lines that even Trump's high-level sycophantic bigots were hesitant to cross.
And now he's the one who is experiencing the "find out" part that usually follows the "fucking around." He's been sidelined, perhaps permanently. Acting ICE director Todd Lyons is the new face of Trump's inland invasions. Kristi Noem herself seems to be on the list of potential cuts, should the administration continue its on-again, off-again pivot to a less outwardly racist agenda when it comes to immigration enforcement.
But I'm not here to damn with faint praise or even damn with faint damnation. I hope Bovino's last years as a Border Patrol commander are as terrible as his haircut. I hope Todd Lyons veers so far to the middle that Trump shitcans him. I hope Noem is on the path to private sector employment, tainted with the scarlet "T" that means any future version of MAGA won't even bother to check in with her now that the only people she can make miserable are her own children. Adios, Bovino. Sleep badly.