The NYC mayoral election is becoming a high-stakes testbed for the future of technology in elections, where the digital battlefield is as critical as any candidate’s platform. While polls show Zohran Mamdani leading on a platform of affordability, the real story is the escalating arms race between technologies designed to protect democracy and those with the potential to shatter it. Beyond the usual noise of misinformation, a volatile mix of AI-driven deepfake detectors, state-level spyware, crypto prediction markets, and secure voting platforms is defining the new rules of engagement.
The New Election Arms Race
The dynamic between generative AI creators and those building defenses is becoming increasingly pronounced. A “digital defense” industry has rapidly developed, with companies like Reality Defender and Blackbird.AI creating sophisticated tools to identify AI-manipulated media and track coordinated influence campaigns. These defenses operate in an environment shaped by powerful generative platforms like ElevenLabs and Synthesia, whose voice-cloning and video-generation tools have legitimate commercial uses but also carry a clear risk of misuse.
Companies like Synthesia, however, are implementing strict governance models to counter potential abuse. Synthesia reports applying content moderation to every generated video, enforcing tight restrictions on political content to prevent fake candidate messages, and running consent checks on every voice clone and AI avatar to prevent non-consensual deepfakes. These protections were recently tested for resilience by the cyber unit at NIST, whose researchers were reportedly unsuccessful in dozens of attempts to abuse the platform. Despite these efforts, and pledges from major tech firms like Google and Meta to watermark AI content, the sheer speed and scale of generative AI present a formidable challenge.
This digital conflict is shadowed by the chilling potential of surveillance. The documented use of NSO Group’s Pegasus spyware to monitor political opponents in Poland’s 2019 election serves as a stark warning. The ability to covertly infiltrate a campaign’s devices, steal strategy, and intimidate journalists represents an existential threat to a fair electoral process, creating a chilling effect that undermines democratic trust itself.
Meanwhile, new platforms are reshaping voter engagement and perception. Decentralized prediction markets like Polymarket allow users to bet on election outcomes with crypto, offering a real-time sentiment gauge that some argue is more accurate than polling. But the line between prediction and influence is dangerously thin. Concerns are mounting that these markets can be manipulated by large wagers to create a false sense of a candidate’s momentum, a risk amplified by political-media ventures like Donald Trump’s “Truth Predict”. In stark contrast, companies like Sequent are trying to fortify the system from the inside out, developing end-to-end verifiable online voting platforms to enhance security and potentially boost turnout in cities like New York.
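The mechanics behind both the sentiment-gauge argument and the manipulation concern can be sketched with a toy model. Polymarket itself matches trades on an order book, so the constant-product market maker below is an assumption for illustration only; it shows how a binary contract’s price reads as an implied probability, and how a single large wager can shift that reading.

```python
class ToyBinaryMarket:
    """Constant-product (x * y = k) market for a YES/NO contract.

    Illustrative only: real platforms such as Polymarket use an
    order book, not this pricing curve. Shares pay $1 if the outcome
    occurs, so the quoted YES price reads as an implied probability.
    """

    def __init__(self, yes_pool: float, no_pool: float):
        self.yes = yes_pool          # YES shares held by the market maker
        self.no = no_pool            # NO shares held by the market maker
        self.k = yes_pool * no_pool  # invariant preserved by every trade

    def implied_probability(self) -> float:
        # YES is scarce (expensive) when the YES pool is small,
        # so its price is the NO pool's share of the total.
        return self.no / (self.yes + self.no)

    def buy_yes(self, spend: float) -> float:
        """Spend dollars on YES; returns the shares received.

        Each dollar mints one YES + one NO share; the NO shares stay
        in the pool, and YES shares are withdrawn until x * y = k holds.
        """
        self.no += spend
        new_yes = self.k / self.no
        shares_out = (self.yes + spend) - new_yes
        self.yes = new_yes
        return shares_out


market = ToyBinaryMarket(yes_pool=1000.0, no_pool=1000.0)
print(round(market.implied_probability(), 2))  # 0.5: an even race
market.buy_yes(500.0)                          # one large wager
print(round(market.implied_probability(), 2))  # 0.69: apparent "momentum"
```

In this toy setup, a single $500 purchase moves the displayed probability from 50% to roughly 69%, which is the manipulation scenario critics describe: the price shift looks like organic momentum to anyone reading the market as a poll.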
As New Yorkers consider their choices, the underlying technological forces at play are setting precedents for democracies everywhere. The 2025 mayoral race is no longer just a local contest; it’s a live-fire exercise in the ongoing war over technology’s role in our civic future.
Alethea Group
Founded by former national-security analysts, Alethea uses artificial intelligence and network analysis to detect and map online influence operations. The company tracks coordinated misinformation campaigns across social networks for governments, corporations, and media outlets. Its systems can fortify elections by exposing bot networks and synthetic narrative clusters, but the same analytic precision illustrates how social engineering could be automated to manipulate voter sentiment if turned inward.
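One basic signal such systems look for is copy-paste amplification: many accounts pushing identical text. The sketch below is a simplified illustration of that heuristic, not Alethea’s proprietary pipeline, and the `(account, text)` input format is an assumption; production systems also weigh posting times, follower graphs, and account metadata.

```python
from collections import defaultdict
from itertools import combinations


def coordinated_clusters(posts, min_shared=3):
    """Group accounts that share at least `min_shared` identical messages.

    `posts` is a list of (account, text) pairs. This copy-paste
    heuristic is illustrative only; it flags pairs of accounts that
    repeatedly post the same text, then links flagged pairs into clusters.
    """
    by_text = defaultdict(set)
    for account, text in posts:
        by_text[text].add(account)

    # Count how many identical messages each pair of accounts shares.
    pair_counts = defaultdict(int)
    for accounts in by_text.values():
        for a, b in combinations(sorted(accounts), 2):
            pair_counts[(a, b)] += 1

    # Link suspicious pairs into clusters with a breadth-first walk.
    adjacency = defaultdict(set)
    for (a, b), shared in pair_counts.items():
        if shared >= min_shared:
            adjacency[a].add(b)
            adjacency[b].add(a)

    clusters, seen = [], set()
    for node in adjacency:
        if node in seen:
            continue
        cluster, frontier = set(), [node]
        while frontier:
            current = frontier.pop()
            if current in cluster:
                continue
            cluster.add(current)
            frontier.extend(adjacency[current] - cluster)
        seen |= cluster
        clusters.append(cluster)
    return clusters


# Three bot accounts push the same talking points; organic users do not.
posts = [(bot, msg) for bot in ("bot1", "bot2", "bot3")
         for msg in ("Vote X!", "X will fix rents!", "Polls: X surging!")]
posts += [("alice", "Debate thoughts tonight"), ("bob", "Vote X!")]
print([sorted(c) for c in coordinated_clusters(posts)])
# [['bot1', 'bot2', 'bot3']]
```

Note that “bob” shares one message with the bots but stays below the threshold, which is the core design trade-off: raising `min_shared` reduces false positives on organic virality at the cost of missing looser coordination.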