As a former legal aid attorney, Kevin De Liban knows President Donald Trump’s plan to double down on artificial intelligence comes with major risks. Over and over, De Liban has seen how automated decisions can ruin people’s lives.

Just before Christmas in 2022, for example, Robert Austin and his daughter were living in his car in El Paso, Texas. As a single father, he had a hard time finding a shelter that would take them both. 

He applied for food stamps and temporary aid, and tried to enroll his daughter in Medicaid so she’d get health insurance from the government. Though the family was eligible, the benefits were denied. Austin tried again; this time, the health and human services helpline said the paperwork he’d uploaded had been rejected and he needed to reapply. “They kept asking for the same forms, over and over again,” Austin says. Every time he’d call to try to get to the bottom of things, he ended up “full circle back where [he] began.”

Eventually, Austin turned to lawyers at Texas RioGrande Legal Aid, who learned that Texas’ automated verification system, developed by multinational consulting firm Deloitte, had made extensive and repeated errors, issuing incorrect notices, wrongfully denying benefits, and losing paperwork.

Robert Austin and his daughter. (Credit: Robert Austin)

For the next two years, Austin continued to reapply to Texas’ safety-net programs as he bounced in and out of temporary housing, eventually losing his car. While his daughter grew into a busy toddler, he turned to the unreliable kindness of strangers on the street. “I ended up begging people for money so I could give her pull-ups, or child care so I could take a [medical] appointment,” he says.

Though De Liban was not involved with Austin’s case, he has worked with scores of people trapped in similar situations — victims of algorithmic decisions gone wrong. These kinds of systemic harms are already impacting Americans in every phase of their lives, he says. “Our legal mechanisms are totally insufficient to deal with the scale and scope of harms these technologies can cause.” 

In Arkansas, De Liban won case after case for people who were denied medical care or other benefits because of artificial intelligence (AI) systems. But each win underscored the deeper problem: The sheer scale of government decisions made by machines mimicking human judgment, whether through simple code or machine learning, meant that individual legal victories weren’t sufficient.

That’s why De Liban recently started a nonprofit called TechTonic Justice to help people fight back. He’s building resources to help affected communities hold these faceless, impersonal systems accountable, spreading the word about the problem in publications like The Hill and on NPR. The goal is to provide training for lawyers, educate advocates, and help affected people, those denied benefits like health care or Social Security, participate in policy conversations.

The stakes for his work just got higher. On Trump’s first day back in office, the president removed existing federal safeguards for AI. 

After a $1 million donation from OpenAI CEO Sam Altman to Trump’s inaugural fund, the company announced it would create a version of its popular artificial intelligence platform ChatGPT for government agencies, including those handling highly sensitive information like nuclear weapons security. The contract followed the president’s announcement of a $500 billion joint venture by OpenAI, Oracle, and SoftBank to build new data centers.