A Stoic Says

March 24, 2026

As the US midterms approach, AI is going to emerge as a key issue concerning voters | Nathan E Sanders and Bruce Schneier

As the US midterms approach, artificial intelligence (AI) is poised to become a pivotal issue for voters, particularly in light of recent actions by the Trump administration that undermine state-level regulation. An executive order stripping states of regulatory power has scrambled existing political alignments, even as more than 70% of voters express support for increased regulation of AI. While opposition to AI datacenters remains local for now, this grassroots movement could evolve into a broader national one, challenging a political landscape dominated by corporate interests. Candidates from both parties have the opportunity to engage voters on the implications of AI, making it a critical topic in the upcoming elections.


Stoic Response

Politics & Governance · Technology & Media · Economy & Labor

Stoic Meditation for Dawn Practice

Author's Claim

Nathan E. Sanders and Bruce Schneier assert that the Trump administration's recent executive order on AI regulation not only undermines state-level governance but also scrambles the ideological alignments of American politics. They emphasize that more than 70% of voters support increased regulation of AI, suggesting a disconnect between political leadership and public sentiment.

Weighing Against Nature and Logos

In nature, balance is paramount; ecosystems thrive when all elements coexist harmoniously. Similarly, the application of logos—reason and rationality—demands that we consider the broader implications of AI. Prioritizing corporate interests over the public good disrupts this balance and invites societal harm. Sanders and Schneier note that "the political reverberations for AI accelerationism are hitting datacenter locales first," highlighting the immediate consequences of neglecting public welfare.

Actionable Reflections

  1. Embrace Awareness: As we rise with the dawn, take a moment to reflect on the power dynamics at play in your community. Consider how local decisions regarding AI impact your life and the lives of those around you.

  2. Engage in Dialogue: Foster conversations about AI with friends and family. Share insights from recent developments and encourage others to voice their opinions. Collective awareness can amplify the call for responsible regulation.

  3. Support Local Initiatives: Identify grassroots movements in your area opposing unregulated AI development. Engage with these groups, either through participation or by amplifying their messages on social media.

  4. Demand Accountability: Write to your local representatives about the importance of AI regulation. Express your concerns about the implications of unregulated AI on jobs, privacy, and community welfare.

  5. Cultivate Resilience: Understand that the political landscape is complex and often slow to change. Remain steadfast in your beliefs and continue advocating for a balanced approach that prioritizes public interest over corporate gain.

Summary

As dawn breaks, let us remember that the regulation of AI is not merely a political issue but a matter of public welfare. By engaging with our communities, supporting local initiatives, and demanding accountability from our leaders, we can strive for a future where technology serves humanity rather than undermines it.

Article Rewritten Through Stoic Lens

Journal Entry: Reflections on the Nature of Governance and AI

The Nature of Change

As the midterms approach, I observe the tumult surrounding the issue of artificial intelligence. Change, as with all things, is a constant in our lives. The recent actions of the Trump administration—an executive order that diminishes the power of states to regulate AI—serve as a reminder of the delicate balance between governance and the forces of industry. It is not my place to lament the actions of others but to contemplate the nature of such decisions and their implications for our society.

The Will of the People

In the face of overwhelming public sentiment favoring increased regulation of AI, we witness a divergence between the desires of the many and the actions of the few. Over 70% of voters express a longing for oversight, yet the political landscape seems to cater to the interests of corporate elites. Herein lies a lesson: the true measure of governance is not merely in the laws enacted but in the alignment with the collective will. We must accept that the machinations of power often stray from the path of virtue, yet this presents an opportunity for us to engage in dialogue and foster understanding.

The Human Condition

The discourse surrounding AI often frames the issue as one of humans versus machines. Indeed, the advancements in AI may threaten our dignity and livelihoods, as machines encroach upon tasks once reserved for human hands. Yet, rather than despair, let us view this as an invitation to cultivate resilience and adaptability. The challenge is not merely to resist change but to find ways to coexist with it, ensuring that our humanity remains intact amidst the rise of technology.

The Emergence of Opposition

In various states, a grassroots movement emerges, opposing the establishment of AI datacenters. This resistance, uniting individuals across the political spectrum, reflects a deeper yearning for community and environmental stewardship. It is a testament to the power of collective action and the potential for diverse voices to converge on a common cause. Here, we find a fertile ground for virtue—an opportunity to act in accordance with our principles and to advocate for the welfare of our communities.

The Role of Leadership

As candidates prepare for the upcoming elections, they stand at a crossroads. They may choose to align with the interests of the few or champion the needs of the many. The political landscape is ripe for leaders who will rise to the occasion, addressing the concerns surrounding AI with integrity and foresight. True leadership involves not merely responding to the prevailing winds but steering the ship toward a just and equitable future.

The Burden of Responsibility

The Trump administration's actions, while seemingly advantageous to certain industries, carry with them a burden of responsibility. The costs associated with AI—job displacement, environmental degradation, and the erosion of democratic norms—must be acknowledged and addressed. The path forward requires that those who benefit from AI's advancements also bear the weight of its consequences. This is not merely a political issue; it is a moral imperative.

Conclusion: Embracing the Challenge

In the face of uncertainty, let us embrace the challenge presented by AI and the political dynamics surrounding it. We must engage in thoughtful discourse, advocate for responsible governance, and remain vigilant in our pursuit of justice. The true measure of our character lies not in the avoidance of conflict but in our capacity to navigate it with wisdom and virtue. As we move forward, may we strive to align our actions with the greater good, fostering a society that honors both human dignity and the potential of technology.

Source Body Text

In December, the Trump administration signed an executive order that neutered states’ ability to regulate AI by ordering his administration to both sue and withhold funds from states that try to do so. This action pointedly supported industry lobbyists keen to avoid any constraints and consequences on their deployment of AI, while undermining the efforts of consumers, advocates, and industry associations concerned about AI’s harms who have spent years pushing for state regulation.

Trump’s actions have clarified the ideological alignments around AI within America’s electoral factions. They set down lines on a new playing field for the midterm elections, prompting members of his party, the opposition, and all of us to consider where we stand in the debate over how and where to let AI transform our lives.

In a May 2025 survey of likely voters nationwide, more than 70% favored state and federal regulators having a hand in AI policy. A December 2025 poll by Navigator Research found similar results, with a massive net +48% favorability for more AI regulation. Yet despite the overwhelming preference of both voters and his party’s elected leaders – Congress was essentially unanimous in defeating a previous state AI regulation moratorium – Trump has delivered on a key priority of the industry. The order explicitly challenges the will of voters across blue and red states, from California to South Dakota, scrambling political positions around the technology and setting up a new ideological battleground in the upcoming race for Congress.

There are a number of ways that candidates and parties may try to capitalize on this emerging wedge issue before the midterms.

In 2025, much of the popular debate around AI was cast in terms of humans versus machines. Advances in AI and the companies it is associated with, it is said, come at the expense of humans. A new model release with greater capabilities for writing, teaching, or coding means more people in those disciplines losing their jobs. This is a humanist debate. Making us talk to an AI customer-support agent is an affront to our dignity. Using AI to help generate media sacrifices authenticity. AI chatbots that persuade and manipulate assault our liberty. There is philosophical merit to these arguments, and yet they seem to have limited political salience.

Populism versus institutionalism is a better way to frame this debate in the context of US politics. The Maga movement is widely understood to be a realignment of American party politics to ally the Republican party with populism, and the Democratic party with defenders of traditional institutions of American government and their democratic norms. This frame is shattered by Trump’s AI order, which unabashedly serves economic elites at the expense of populist consumer protections. It is part of an ongoing courting process between Maga and big tech, where the Trump political project sacrifices the interests of consumers and its populist credentials as it cozies up to tech moguls.

We are starting to see populist resistance to this government/big tech alignment emerge on the local scale. People in Maryland, Arizona, North Carolina, Michigan and many other states are vigorously opposing AI datacenters in their communities, based on environmental and energy-affordability impacts. These centers of opposition are politically diverse; both progressives and Trump-supporting voters are turning out in force, influencing their local elected officials to resist datacenter development.

This opposition to the physical infrastructure of corporate AI is so far staying local, but it may yet translate into a national and politically aligned movement that could divide the Maga coalition. Datacenters are one of a dwindling few national issues not yet polarized. The December Navigator Research polling found that most voters have heard little or nothing about datacenter development. A February poll of voters found relatively little difference (less than 10 percentage points) between Harris and Trump voters on their likelihood to support or oppose datacenter development where they live.

The pace of datacenter investment is still accelerating dramatically: big tech AI spending is anticipated to reach nearly $700bn in 2026. The intensity of local response in the communities where datacenters have been proposed combined with this rapid expansion suggests fertile ground for activating and persuading voters around this issue – irrespective of political party.

So far, few political leaders have emerged to guide their parties towards a clear position on these concerns. Within the Republican party, the Florida governor, Ron DeSantis, seems to be positioning himself against the administration as the party’s chief AI skeptic. On the other side of the aisle, the progressive independent senator Bernie Sanders and Democratic House colleague congresswoman Rashida Tlaib proposed a moratorium on AI datacenter construction, while senator Amy Klobuchar has been a vocal opponent of the Trump order. Some local legislators in Georgia have passed such a moratorium in their jurisdictions.

While the political reverberations for AI accelerationism are hitting datacenter locales first, this issue should encompass far more than just construction. The energy and environmental costs associated with datacenters are just one of many costly harms that tech companies are trying to foist on the public. And the Trump administration’s frequent justification of its corporate AI boosterism as a national security priority in an arms race against China is hokum.

Any policy discussions about AI should include the individual harms associated with job loss, as employers seek to replace laborers with machines. It should also include the systemic economic risks associated with concentrated and supercharged AI investment, the democratic risks associated with the increased power in monopolistic and politically influential tech companies, and the degradation of civic functions like journalism and education by AI. In order for our free market to function in the public interest, the companies amassing wealth and profiting from AI must be forced to take ownership of, and internalize, these costs.

The political salience of AI will grow to meet the staggering scale of financial investment and societal impact it is already commanding. There is an opportunity for enterprising candidates, of either political party, to take the mantle of opposing AI-linked harms in the midterm elections.

Political solutions start with organizing, and broadening the base of political engagement around these issues beyond the locally salient topic of datacenters. Movement leaders and elected officials in states that have taken action on AI regulation should mobilize around the blatant industry capture, wealth extraction, and corporate favoritism reflected in the Trump executive order. AI is no longer just a policy issue for governments to discuss: it is a political issue that voters must decide on and demand accountability on.

Nathan E Sanders is a data scientist affiliated with the Berkman Klein center of Harvard University and co-author, with Bruce Schneier, of the book Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship. Bruce Schneier is a security technologist who teaches at the Harvard Kennedy school at Harvard University.