A new effort to strengthen US election resilience
The “first AI elections” in the United States will be held against the backdrop of unprecedented distrust in civic institutions, the political system, and traditional media. Rapid advancements in artificial intelligence (AI) mean that voters face a likely future of compelling deepfakes, highly targeted fraudulent messages, and compounding cybersecurity threats. While much about the 2024 elections is uncertain, we know bad actors will try to manipulate public opinion and to sway voter behavior at key moments before polls open. Our country needs concerted leadership and strategic focus at the intersection of AI, elections, and social trust.
Aspen Digital, a program of the Aspen Institute, is launching the AI Elections Initiative as an ambitious new effort to strengthen US election resilience in the face of generative AI. Election officials, policymakers, the private sector (including tech leaders and experts), and the news media must all do their part in securing this cornerstone of American democracy.
But these groups are not sufficiently communicating with each other. Operating in silos, they will be less effective. That’s why we believe it’s vital to bring experts together to better understand and learn from one another. In the coming weeks, we will begin posting details about our effort to convene essential parties and publish action-oriented resources. This work will be supported by an advisory council of cross-sectoral experts who will help inform and strengthen our work.
A sampling of the AI Elections Initiative’s events in the first quarter includes:
- January: Convening civil society partners in coordination with the Knight Foundation
- February: Briefing state election officials in coordination with the National Association of Secretaries of State
- March: Hosting a conference of global leaders in partnership with the Institute of Global Politics at the School of International and Public Affairs at Columbia University
We know election preparedness is a whole-of-society challenge. Our approach has been informed by interviews with more than 60 experts across the tech industry, election administration, media, civil society, and academia, all of whom identified critical risks and helped us chart a constructive path toward Inauguration Day and beyond.
AI capabilities are evolving quickly, and our expert interviews surfaced conditions that pose significant risk to voters this election cycle. Effective preparedness will require leaders across sectors to anticipate and mitigate threats from:
- Siloed Expertise: Incentives and expertise are misaligned across key groups that are not communicating well or often enough.
- Public Susceptibility: While some communities are likely to be duped by misleading AI content, others will increasingly distrust genuine information in a way that deepens the “liar’s dividend” and undermines trust in evidence and facts.
- Inadequate Platform Readiness: Certain major and mid-tier platforms have reduced “trust and safety” staffing and are inadequately prepared for the expected volume and velocity of AI-generated content. Closed messaging services may be major channels for AI content.
- Slow-Moving Policy: Comprehensive federal regulation is not generally expected before the election, and stop-gap regulations vary across states.
- High-Quality AI-Generated Media: Detection of so-called “deepfakes” is increasingly difficult, both for automated detection systems and for ordinary people trying to identify inauthentic content.
- Scaled Distribution at High Speed: AI systems can generate and distribute large quantities of content quickly and at low cost.
- Message Targeting & Hyperlocal Misinformation: AI tools can fine-tune messages for particular language groups, demographic communities, psychological profiles, hyperlocal geographies, and even distinct individuals.
- Automated Harassment: AI may be used to significantly increase the volume and specificity of harassing content targeting election officials or civic leaders.
- Cybersecurity of Election Infrastructure: Expected advances in AI code generation may exacerbate malware challenges for election systems.
All voters have a stake in election preparedness, especially those in communities historically targeted for election interference based on race. We expect bad actors will continue their long-standing practice of adapting new technologies to exploit rifts within American society and undermine confidence in democratic values. They will target voters in swing districts, individuals with low digital literacy, members of language minority communities, and people who already face institutional barriers at the voting booth. We will combat efforts to degrade our elections by empowering voters and thought leaders at this critical time.
We believe technology can enable a positive future, one where AI promotes civic engagement, reinforces democratic values, builds understanding, and helps governments effectively serve people. That future is more likely if cross-sector leaders work together at the outset to combat the misuse of AI in civic life. The AI Elections Initiative and the broader team at Aspen Digital are excited to support the community of leaders working to rebuild social trust and to ensure civic participation remains a touchstone of American democracy.
Want to stay current on Aspen Digital’s work on AI elections and more? Sign up for our email list.