This page is intended to provide readers with more information about potential pathways for the People’s AI Action plan. The content below has not been endorsed by the signatories.

What is a People’s AI Action Plan?

A People’s AI Action Plan is one that delivers on public well-being, shared prosperity, a sustainable future, and security for all. The concrete pathways for this vision will live in the collective work of the many organizations that endorse this effort, differing across issue areas and sectors—from labor to climate to children’s online safety to immigration—and united in ensuring a trajectory for AI that puts people first, rather than the interests of tech billionaires.

The signatories to this statement all have actionable ideas for an AI agenda that meets the needs of everyday people. Read more about them below:

What is the White House AI Action Plan?

Shortly after taking office, President Trump issued an Executive Order on artificial intelligence which amounts to yet another massive handout to the tech industry. The Executive Order requires the development of an “AI action plan” by July 23, 2025 designed to “enhance America’s AI dominance.”

The AI industry has mounted a full-court press on Washington to shape the “AI dominance” agenda in service of its interests. The industry has ramped up lobbying efforts to attack regulation, including a recent attempt in Congress to be granted immunity from state AI laws for ten years, which was shot down in a 99-1 vote after public outcry.

The AI industry has now submitted a wishlist of policy recommendations to the writers of the action plan—a group of tech leaders with deep connections to Silicon Valley, venture capital, and the defense tech industry. This lobbying is likely to pay off handsomely, as recent reporting indicates that the Trump Administration plans to use the July 23 release of the action plan to celebrate the government’s commitment to “expanding” the AI industry.

One thing is certain: an AI action plan written by and for tech billionaires cannot and will not serve the interests of the broader public. We cannot let Big Tech special interests buy Washington and write the rules for AI and our economy.

How is the Push for “AI Dominance” Already Affecting the Public?

  • Private tech companies are pushing unsafe AI technologies into the most sensitive spheres of our lives—including our schools, our workplaces, our hospitals, and our children’s screens. Overwhelmingly, AI systems are deployed in coercive and high-stakes settings like immigration and policing, where people’s rights are routinely violated. We need a plan where teachers, workers, healthcare providers, parents, and the communities most impacted by AI systems drive the decision-making process for where and how AI systems are implemented—including the decision that some AI technologies are never used at all.

  • Increasingly, algorithms are used to determine the prices of critical goods and services, from the food in our grocery stores to the rent we pay our landlords—even the price of an airline ticket when we fly to a funeral. Because these algorithms are designed to drive up (but never lower) prices, everyday Americans are forced to pay more for the same goods and services, while tech billionaires pocket the gains. We need a plan that addresses the affordability crisis, including the role that tech monopolies play in driving up prices.

  • The AI industry insists that in order to protect “innovation” it should be subject to minimal legal oversight—or none at all. Congress’s recent attempt to ban states from regulating AI shows just how far Washington is willing to go to capitulate to industry demands. Any attempt to prevent states from implementing and enforcing their own laws forces our lawmakers to turn their backs on their own constituents’ needs. We need a plan ensuring the government has the freedom to protect the public from AI’s potential harms to people, our economy, and national security.

  • We need an infrastructure plan that prioritizes the urgent and monumental task of transitioning to renewable energy—one that ensures affordable and reliable energy for people, protects our air and water, and guards against climate disaster. Without guardrails, the wealthiest technology corporations have put aside their renewable energy commitments and are pursuing AI data center expansion without making the infrastructure investments that are necessary to protect people and the planet. This irresponsible growth is expanding dirty energy production, polluting our air, and threatening to reverse hard-won climate progress at a time when we can no longer afford delays. Meanwhile, key players in our energy system, like investor-owned monopolies and private equity-owned fossil fuel producers, have been willing accomplices.

  • Employers’ unchecked use of AI is driving down wages, de-skilling jobs, and expanding surveillance and control within workplaces. Opaque algorithms and data-driven productivity tools have been used to fuel wage discrimination on gig platforms, push workers to exert themselves beyond safe limits, and undermine labor rights and protections. New forms of automation like generative AI intensify these exploitative practices, both for those working under AI systems and the global workforces who build and train AI behind the scenes. As tech companies commodify workers’ expertise by feeding their data into AI models, workers are largely shut out from any economic gains. Moreover, tech companies and employers often vastly underestimate the complexity of people’s jobs, poorly automating their roles in ways that merely conceal undercompensated human labor under an illusion of tech-driven “productivity” gains. Without workers’ expertise, oversight, and democratic power, many jobs will become increasingly precarious, poorly-paid, and stressful. AI will also continue to erode standards and accountability within critical social institutions like healthcare, education, government services, and journalism. To build an equitable economy, workers must have the ability to shape technology; this means countering the incentives that favor automation and concentrate power in the hands of a few major companies.

  • Current AI models are built on the wholesale theft of intellectual property of generations of creative workers. It’s become so egregious that these companies openly admit their large language models rely on unlicensed creative work and pirated libraries in their training data. And now, in an act of cynical desperation, they claim that legalizing their theft is the only way to ensure continued AI innovation and supposed profitability. Tech companies market AI as an all-powerful tool to automate creative labor, a tool that can be used by C-suite executives to maximize corporate profits and devalue creative labor and rights. All of this is incredibly shortsighted. AI is probabilistic: it can only tell us what has been, not what will be. It can only tell versions of stories already told by human creators, not find the vital story that has not yet been told. Without meaningful protections for creative work and workers, a flood of generative AI slop will lead to cultural paralysis, drain trillions of dollars from the creative economy, and destroy the pipeline of creative talent for generations to come.

  • At the hands of immigration enforcement, AI can exacerbate surveillance harms such as the discriminatory targeting of Black, brown, and immigrant communities. DHS’s acquisitions of AI technology—which may rely on biased algorithms—are helping to automate critical decisions in the immigration system, such as whether to detain, deport, or grant immigration relief or a visa to millions of individuals, with little-to-no meaningful oversight. Moreover, AI allows federal agencies to conduct immigration enforcement in ways that are profoundly and increasingly opaque, making it even more difficult for those wrongly caught up or falsely accused to extricate themselves. And while DHS has published a set of principles regarding its responsible use of AI, immigration advocates have found that the agency routinely skirts those obligations. The federal government has also invested significantly in facial-recognition technology, which is overwhelmingly used to surveil and track immigrants, asylum seekers, and activists despite having been found to be biased and error-prone. We need an AI plan that protects immigrant communities from abusive and invasive AI-driven surveillance.

  • Absent meaningful, comprehensive transparency, safety, and accountability measures, AI is already supercharging the profound and ongoing harms wrought by the algorithm-fueled social media era. Many AI products being embedded in kids’ lives are exploitative by design; these tools are fueled by surveillance, behavioral profiling, and addictive design to maximize engagement. Tools such as AI companions are marketed to children and teens as friends, therapists, and romantic partners—and have already caused severe, documented harms, from emotional manipulation to the encouragement of minors’ worst impulses. While state laws have begun stepping in where federal protections fall short, families could be left defenseless if the Administration and Congressional leaders succeed in blocking state efforts. We need a plan to ensure that child and adolescent safety is prioritized over industry profits.

  • In the past six months, DOGE swung an AI-powered chainsaw across federal regulatory agencies long targeted by free market ideologues, exposing the public to severe risks to their safety, financial security, and sensitive tax, medical, and banking information. The White House also let DOGE self-drive the elimination of thousands of public servants’ jobs, led by the dangerous and demeaning hijinks of hostile digital mercenaries. DOGE bulldozed public safety and consumer protections that the tech industry claimed were obstacles to growth, using AI to tank competitors’ contracts, target regulatory agencies with open investigations, and seize financial transaction data from the nerve centers of the federal government—the Treasury, the Office of Personnel Management, and the General Services Administration. The potential fallout from the consolidation of data across these nerve centers, immigration agencies, the IRS, the Social Security Administration, and many other agencies has yet to be fully mapped, but we can anticipate it will expand the federal government’s power to track and target private citizens using combined datasets. In addition to destroying the federal administrative state and channeling a stream of corporate handouts to itself, DOGE has also used AI to operationalize the White House’s authoritarian agenda by erasing the history, language, and representation of the United States’ diverse racial, ethnic, gender, disability, and religious communities.

  • Retirees, low-wage workers, disabled people, and caregivers—people who tend to live in or near poverty—are bolstered by modest social support programs that provide health insurance, food, and money for necessities. But use of AI by government officials is actively undermining access to these life-sustaining programs, including Social Security, Medicaid, Medicare, Unemployment Insurance, and SNAP. Eligible people are wrongly denied or terminated, meager benefit amounts get drastically cut, and vital medical treatments are refused. When people suffer these harms, they generally don’t have anywhere to turn for meaningful short-term help to understand or oppose the government’s AI-based decision. Meanwhile, companies often hide behind corporate secrecy laws to avoid accountability. We need a plan that would impose real accountability on government officials who use AI and the companies who sell it to them.

  • Biased and inaccurate AI systems undermine the basic tenets of fairness and justice that form the very fabric of our democracy. Communities of color are disproportionately targeted with disinformation, over-surveilled, and denied mortgage loans and other financial services under digital redlining. They also face higher barriers to accessing critical services and are charged higher prices by hotels, airlines, and online retailers. This administration has already rolled back the minimal protections we had, such as President Biden’s Executive Order on AI, and pushed for AI adoption without any guardrails. Now, they are going even further with plans to target companies for “woke” AI models, threatening efforts to ensure that AI is used responsibly and functions for everyone. We need a plan with bright-line rules and safeguards that ensure AI innovations work for people rather than automating and turbo-charging existing harms.

  • Many AI systems compromise our privacy and security: they are built on top of often sensitive information about us and are vulnerable to an array of cyberattacks that would compromise not only this information but also the systems in which they are deployed. In many instances, these flaws cannot be remedied. As AI is integrated into sensitive domains, from healthcare to education to the energy grid to our defense systems, it exposes us all to increased security risk. We need a plan that ensures our critical infrastructure is secure from these vulnerabilities and that the American people are safeguarded from harm.

  • AI is being rapidly integrated into healthcare settings in ways that undermine healthcare providers’ clinical judgment and threaten patient safety. Sensitive data from electronic health records is fed into a variety of clinical tools, from pain scores to acuity assessments to evaluations of critical conditions like sepsis. AI companies like Google have spent billions on failed health projects, but continue to bring tools to market that haven’t been adequately tested or validated. Moreover, the infrastructure scale-up has profound effects on the health of the communities in which data centers are built, particularly by increasing rates of respiratory illness. We need a plan that prioritizes the health of our communities and healthcare workers over profits.

If you are an individual looking to sign onto a petition boosting the People’s AI Action Plan, see: