The NYC mayor election is becoming a high-stakes testbed for the future of technology in elections, where the digital battlefield is as critical as any candidate’s platform. While polls show Zohran Mamdani leading on a platform of affordability, the real story is the escalating arms race between technologies designed to protect democracy and those with the potential to shatter it. Beyond the usual noise of misinformation, a volatile mix of AI-driven deepfake detectors, state-level spyware, crypto prediction markets, and secure voting platforms is defining the new rules of engagement.
The New Election Arms Race
The dynamic between generative AI creators and those building defenses is becoming increasingly pronounced. A “digital defense” industry has rapidly developed, with companies like Reality Defender and Blackbird.AI creating sophisticated tools to identify AI-manipulated media and track coordinated influence campaigns. This focus on defense operates in an environment shaped by powerful platforms like ElevenLabs and Synthesia, which provide valuable voice-cloning and video-generation tools that also carry a risk of misuse.
Companies like Synthesia, however, are implementing strict governance models to counter potential abuse. Synthesia reports applying content moderation on every generated video, enforcing tight restrictions on political content to prevent the creation of fake candidate messages, and using checks for every voice clone and AI avatar to prevent non-consensual deepfakes. These protections were recently tested for resilience by the cyber unit at NIST, whose researchers were reportedly unsuccessful in dozens of attempts to abuse the platform. Despite these efforts, and pledges from major tech firms like Google and Meta to watermark AI content, the sheer speed and scale of generative AI present a formidable challenge.
This digital conflict is shadowed by the chilling potential of surveillance. The documented use of NSO Group’s Pegasus spyware to monitor political opponents in Poland’s 2019 election serves as a stark warning. The ability to covertly infiltrate a campaign’s devices, steal strategy, and intimidate journalists represents an existential threat to a fair electoral process, creating a chilling effect that undermines democratic trust itself.
Meanwhile, new platforms are reshaping voter engagement and perception. Decentralized prediction markets like Polymarket allow users to bet on election outcomes with crypto, offering a real-time sentiment gauge that some argue is more accurate than polling. But the line between prediction and influence is dangerously thin. Concerns are mounting that these markets can be manipulated by large wagers to create a false sense of a candidate’s momentum, a risk amplified by political-media ventures like Donald Trump’s “Truth Predict”. In stark contrast, companies like Sequent are trying to fortify the system from the inside out, developing end-to-end verifiable online voting platforms to enhance security and potentially boost turnout in cities like New York.
As New Yorkers consider their choices, the underlying technological forces at play are setting precedents for democracies everywhere. The 2025 mayoral race is no longer just a local contest; it’s a live-fire exercise in the ongoing war over technology’s role in our civic future.
Alethea Group
Founded by former national-security analysts, Alethea uses artificial intelligence and network analysis to detect and map online influence operations. The company tracks coordinated misinformation campaigns across social networks for governments, corporations, and media outlets. Its systems can fortify elections by exposing bot networks and synthetic narrative clusters — but the same analytic precision illustrates how social engineering can be automated to manipulate voter sentiment if turned inward.
Cyabra
Cyabra, an Israeli-American startup headquartered in New York, builds software that identifies fake profiles and coordinated online campaigns in real time. The platform dissects social networks to reveal inauthentic amplification patterns. It is regularly used by journalists and election-security teams to spot disinformation waves before they go viral. Its insight into how bots evolve and adapt, however, also shows how bad actors could game algorithmic trust and engagement systems.
Logically
UK-based Logically combines natural-language processing with a global fact-checking network to monitor news and social media for false or misleading claims. Its Logically Intelligence platform is used by governments and newsrooms to flag coordinated misinformation and deepfakes. While primarily defensive, its large-scale text-generation and classification models could, in theory, be repurposed to generate hyper-targeted narratives — highlighting the double use of language technology.
Reality Defender
Reality Defender offers multimodal deepfake-detection APIs that scan audio, image, and video content for manipulation artifacts. Its tools integrate directly into social platforms and newsroom pipelines to authenticate media before publication. In democratic contexts, it serves as a bulwark against synthetic candidate videos; in hostile hands, however, deepfake-analysis models can be inverted to design forgeries that evade detection.
Blackbird.AI
Blackbird.AI, a New York–based “narrative intelligence” company, provides dashboards that visualize emerging misinformation trends and measure their reach. Used by government agencies and major corporations, its platform can help election commissions monitor coordinated attacks on credibility. The same granular narrative mapping can also inform micro-targeting or perception management strategies when exploited.
IdentifAI
Italian startup IdentifAI develops AI models to detect synthetic audio, video, and imagery, specializing in biometric-level forgery analysis. Its software is used by media organizations and law enforcement for authenticity verification. In a defensive role, it can protect journalists and voters from fabricated clips; conversely, research into deepfake detection often feeds adversarial learning cycles that make forgeries more convincing.
Synthesia
London-based Synthesia lets users create realistic talking-head videos from text. The company maintains moderation policies and watermarking, yet the mere existence of such easy-to-use synthetic-video generation tools demonstrates how persuasive visuals can be manufactured at scale.
ElevenLabs
ElevenLabs is a voice-synthesis company backed by Andreessen Horowitz and Sequoia. Its ultra-realistic text-to-speech engine can clone any voice from seconds of audio. For accessibility and localization, it’s revolutionary; for democracy, it’s perilous.
Hedra
Hedra builds generative-video models capable of producing lifelike digital characters and environments. Investors tout its creative potential for marketing and entertainment, but the same system could render hyper-real political deepfakes or composite crowds. It encapsulates the growing risk of “synthetic presence” — believable visual narratives detached from reality.
AdVerif.ai
Originally a Tel Aviv startup later acquired by Zefr, AdVerif.ai pioneered AI-driven ad-verification and misinformation filtering. Its technology classifies online content for truthfulness and brand safety. Election campaigns and social networks can deploy it to screen misleading political ads. Without strict governance, similar content-scoring models could also be used to suppress dissenting narratives under the guise of authenticity checks.
Semantic Visions
Based in Prague, Semantic Visions applies machine learning and linguistic forensics to monitor millions of news sources for emerging disinformation. It is used by NATO partners and intelligence analysts to forecast information operations. In elections, it acts as an early-warning radar for viral falsehoods, but it also highlights how predictive analytics can be weaponized for agenda shaping.
Loti
Seattle-based Loti protects public figures against impersonation by continuously scanning the web for AI-generated likenesses and deepfakes. Its tools give politicians and journalists an early alert when their image or voice is being used without consent. Defensively, it helps reclaim personal authenticity; offensively, the same likeness-modeling can be exploited to fabricate counterfeit personas.
Sentinel
Founded by former NATO cybersecurity experts in Estonia, Sentinel builds AI-powered software to detect and categorize manipulated media. Governments and platforms employ its database of millions of fake videos to train moderation systems. Its existence underscores how arms-race dynamics between generative and detection models evolve — each breakthrough in spotting fakes spurring new evasion methods.
Brinker AI
Brinker AI develops enterprise tools for real-time fake-news detection across internal and public communication channels. For media organizations, it flags manipulated or AI-written content before publication. Yet any system that grades “truth” algorithmically invites potential misuse for censorship or perception control if deployed without transparency.
RepScan
Spanish startup RepScan focuses on online reputation management, using AI to detect and request removal of defamatory or fake content. It empowers individuals and campaigns to defend themselves from smear operations, but large-scale deployment could just as easily be used to suppress legitimate criticism under the pretense of reputation repair.
Prompt Security
Prompt Security protects enterprises from data leakage and prompt-injection attacks in generative-AI systems. While not directly electoral, its infrastructure approach is crucial: as campaigns adopt chatbots and voter-engagement AI tools, vulnerabilities could expose sensitive data or allow manipulation of outputs — influencing voters through compromised systems.
Unbiased
Sweden-based Unbiased builds data-marketplace and AI-ethics tools to counter algorithmic bias and misinformation. It incentivizes users to verify content, promoting a more balanced information ecosystem. The same transparency frameworks that strengthen democratic discourse can also expose sensitive political trends if misused for surveillance analytics.
NewsGuard
An American startup best known for its browser extension rating the credibility of news outlets, NewsGuard partners with advertisers and governments to defund misinformation sources. It’s an explicit election-defense tool, though critics warn that its centralized ratings could be politicized, demonstrating again the tension between protection and control.
Graphika
One of the most established names in social-network forensics, Graphika maps digital influence networks and tracks coordinated disinformation from state actors. It has documented Russian, Iranian, and domestic campaigns alike. Its methodologies help democracies defend information space — yet they also reveal how influence can be quantified and potentially replicated by adversaries.


