The 2024 presidential election comes amid unprecedented distrust in civic institutions and information. To strengthen election resilience, Aspen Digital has launched the AI Elections Initiative. Through results-oriented convenings and resources, the initiative aims to ensure leaders from across sectors are equipped to meet the challenges on the horizon, through Inauguration Day and beyond.
In this interview, we asked Aspen Institute VP and Executive Director of Aspen Digital Vivian Schiller, their new Director of AI & Democracy Josh Lawson, and Google U.S. Public Policy Fellow Tom Latkowski to provide some context and expertise on artificial intelligence and how it’s changing elections at home and abroad.
“Unprecedented” is an overused word for surprising situations, but it really does feel like the best word for the current moment in AI and elections, and for the risks many of us are thinking about. How would you describe the current election landscape?
Vivian Schiller: These elections are being held at a time of record-low trust in civic institutions, the political system, and traditional media. Just 4% of Americans today say they have faith in our system, and voters are polarized in a way that makes it hard for us to have fact-based conversations about serious civic issues. Worse, partisan national media is still on the rise, while local news has collapsed over the last 15 years, meaning citizens have few places to turn for reliable, fact-based information at the community level.
Josh Lawson: Yes, and this was true before the public launch of powerful new tools like generative AI that make it cheap and easy to create believable content quickly––whether it’s images, video, text, or audio. People need to know these tools exist and that bad actors will try to use them to deceive or discourage voters. Voters will need to adapt over time, moving towards a future where deepfakes are treated like spam: around, annoying, but not alarming.
It will take time to get there. AI-generated content will be more compelling than familiar efforts to sway voters or spread false information. On the immediate horizon, we’re worried about increased “hyperlocal voter suppression,” where bad actors create persuasive messages targeting specific audiences in order to discourage or direct voting in particular geographies. We’re also concerned about non-English speakers receiving bespoke propaganda translated by AI into languages once out of reach for most bad actors. And we know there’s a high likelihood that propagandists will use AI to create decoy news sites (“astroturfing”) and to push large volumes of content that flood the zone during a crisis.
Is there any other time in US history that you know of when elections were this fraught, with such widespread concern for the health of our electoral process?
Tom Latkowski: Candidates and voters have hotly debated election integrity for years without spiraling into civil conflict. But today, the extreme tenor of American politics makes it easier for bad actors to exploit our divisions to discredit the democratic process we’ve built.
It’s normal––even healthy––for candidates to disagree deeply over the direction of our country. And our system is built to funnel those sincere differences into a deliberative process that ends at the ballot box. But voters need facts to make informed decisions, and that’s precisely the piece targeted by those trying to stoke conflict, whether they are foreign or domestic actors.
Josh Lawson: This is an all-hands-on-deck moment for American democracy that requires attention and coordination between policy experts, tech companies, the media, and civil society. It’s important to recognize the challenges and opportunities that AI presents as we push to build trust in our electoral process and to inspire confidence in democracy as a vital force in American life.
Elections function on trust, and AI has the potential to quickly undermine the public’s trust in elections and the systems we count on to keep them fair, and of course in politicians as well. What are some of the biggest causes for concern that Aspen Digital is paying attention to?
Josh Lawson: For the “big” races (like for president), we’re less concerned that deepfakes will go unchecked by journalists or campaign operatives interested in debunking faked audio or videos. But we are quite concerned that a general atmosphere of distrust will deepen a so-called “liar’s dividend,” where top candidates claim real incidents are fake and the public is too confused to decide what is real. When that happens, voters may feel as though they must make decisions based on gut instinct, rather than credible reporting. That kind of post-fact world weakens accountability and poses a real risk for democratic decision-making. And even the biggest contests are often decided by small changes in turnout across key geographies, where AI-localized misinformation sent directly to voters through channels that evade most detection (like text messages) could tip the outcome.
For “small” or local races, deepfaked video or audio may go unchecked by local journalists, who have lost ground in recent years. Where that’s the case, a deepfake is more likely to dupe voters into thinking a candidate did or said something they did not, and local candidates may struggle to correct the record before an election.
Other civic crises––like violent unrest or post-election disputes––are ripe targets for those planning to “flood the zone” with misleading or inflammatory AI content. The technology to spot manipulated media is still developing, and we’re unlikely to see reliable tools before the election. So it’s vital that civil society and social media redouble efforts to boost trusted community voices and factual reporting sources in a shared effort to build public resilience in the face of generative AI.
Also, is there an opportunity to use AI to build and reinforce trust in elections?
Tom Latkowski: Definitely. Those working to erode faith in democracies benefit from slow-moving institutions and inefficient public services. Advances in AI can help make government services more efficient and increase public accountability. That holds real promise for democracy in the long term, as we work to prove our system is able to govern in a changing world. But there are also near-term benefits, as AI can help reduce barriers to civic education and political participation. Promising opportunities are on the horizon as classrooms, civil society, government agencies, and even political campaigns use AI to build tools citizens can use to contribute effectively in our civic community.
Tell us how the goals of Aspen Digital’s AI Elections Initiative address these challenges.
Vivian Schiller: Aspen Digital’s new AI Elections Initiative fills a critical gap: we’re forging deeper connections both with and among the election officials, tech companies, media, and civil society most responsible for the impact AI will have on the election. Our job is to make sure that they’re all talking to one another and sharing effective solutions to the most important risks in the run-up to November. There’s limited time to respond to risks, and each sector needs the best information in front of them as they make critical decisions.
Josh Lawson: We’re filling a critical community need by bridging siloed communities of expertise through an Advisory Council (more on that later) and a series of strategic convenings that aim to support those on the front lines: elections officials, journalists, and tech decision-makers. There’s a real signal-to-noise problem when there’s no time to chase red herrings. So we’re driving focus by clarifying, then centering, the most critical risks. The solutions and best practices that emerge from that effort will benefit decision-makers across sectors and inform vital policy discussions.
According to a recent article in the New York Times, there are 83 elections around the world this year, the largest concentration of elections for at least the next 24 years, so obviously the stakes are high. Is there any country or place we can look to that is particularly well prepared for the threats AI poses to elections?
Vivian Schiller: That’s right. Elections this year will decide the leadership for more than 4 billion people. We’re watching closely as AI threats materialize. Weeks ago, for example, Taiwan held its presidential election, with one report finding that disinformation increased by 40% in the run-up to voting. Even so, Taiwan held a free and fair election that benefited from active efforts to counter influence operations from mainland China. But we’re definitely seeing AI being used in elections around the world to produce content that portrays a candidate doing or saying something they did not, or being invoked as an excuse to discount real (but embarrassing) material. This has happened recently in Slovakia, Argentina, and elsewhere.
What does election preparedness look like in 2024 for governments holding elections as well as for citizens who are voting? What should we voters be looking out for?
Josh Lawson: Voters should know that AI is out there and that people will try to use it to make a bad information environment worse. But we shouldn’t allow healthy skepticism to snowball into broad distrust of facts. We can and should promote credible information sources and encourage each other to verify claims about candidates and the voting process.
Elections officials need to continue investing in go-to resources for their voters, who may receive convincing misinformation about when, where, and how to vote. Strengthening election resilience also means that journalists must cover AI developments responsibly and avoid inadvertently platforming AI misuse by bad actors who are trying to weaken trust in truth overall.
On the government side, elections are run at the state, county, and local levels, so there isn’t one body that sets election policy. And on the tech side, there are constantly new companies affecting the social media and communication landscape — there isn’t one actor setting policy in that realm either. That’s what makes the work Aspen does so necessary — bringing people together to ensure that different actors are talking and sharing best practices.