The intensifying debate surrounding artificial intelligence regulation recently took center stage on CNBC's 'Squawk Box,' where New York Assemblyman Alex Bores, a key proponent of state-level AI legislation, offered a sharp counter-narrative to powerful industry lobbies. Bores, who holds a Master's in Computer Science from Georgia Tech and previously served as an engineer at Palantir, spoke with Emily and Andrew about the New York State Responsible AI Safety and Education (RAISE) Act, a bill he sponsored. His insights revealed a fundamental ideological clash between those advocating for unchecked AI development and those prioritizing public safety through measured governance.
The interview illuminated the stark opposition from a pro-AI Super PAC, "Leading the Future," which launched attacks against Bores, accusing the RAISE Act of "slowing American progress and opening the door for China to win the global race for AI leadership." This aggressive stance, however, is precisely what Bores challenged, arguing that such industry groups ultimately seek to avoid any regulation at all. His technical background lends significant weight to his arguments, positioning him not as an anti-tech Luddite, but as an informed advocate for responsible innovation.
A core insight Bores emphasized throughout the discussion is the inherent dual-use nature of advanced AI. He articulated a sobering truth about the technology's potential: "The same pathways that will allow it to potentially cure diseases could allow it to, say, build a bio-weapon." This perspective underscores that the immense benefits of AI are inextricably linked to profound risks, demanding a balanced approach to development and deployment. The challenge, as Bores sees it, is to manage these risks effectively while still harnessing AI's transformative power for good.
Bores' analysis of the RAISE Act itself revealed another crucial insight: the legislation is not an arbitrary imposition but a codification of principles many leading tech companies have already voluntarily committed to. He stated plainly, "All the RAISE Act does is put those commitments into law." These commitments, Bores noted, include developing safety plans, reporting critical safety incidents, and refraining from deploying models that pose an unreasonable risk. He pointed to international forums, such as the European Code of Practice, where major AI developers have already agreed to similar benchmarks, suggesting that the industry's public opposition to the RAISE Act in New York is inconsistent with its stated global responsibilities.
The Super PAC's vehement opposition, therefore, appears less about the specifics of the RAISE Act and more about a broader philosophical resistance to any form of governmental oversight. Bores cut directly to this point, asserting, "They don't want there to be any regulation whatsoever." This highlights a significant tension within the AI ecosystem: the desire for rapid, unencumbered innovation versus the imperative to protect society from potential catastrophic harm. For tech insiders, this reveals a political front where financial power is wielded to shape the regulatory environment, often against the very safeguards industry leaders publicly endorse.
On the matter of state versus federal regulation, Bores articulated a compelling case for states as "laboratories of democracy." He acknowledged the eventual need for a federal AI standard, stating, "I strongly agree with that." However, he argued that states can move with greater agility and speed than the federal government, which often lags in addressing emerging technological challenges. This decentralized approach allows for experimentation and refinement of regulatory frameworks, potentially informing a more robust and effective national policy down the line. The current debate, Bores observed, isn't about *whether* there should be a federal standard, but whether states should be permitted to act in its absence.
Perhaps the most striking detail Bores provided about the RAISE Act was its definition of "unreasonable risk." This isn't about minor glitches or inconveniences; it targets existential threats. He clarified that the standard applies to models that show "a substantial risk of killing 100 people or leading to a billion dollars in damage." This high threshold for intervention directly addresses concerns about over-regulation stifling innovation, focusing instead on catastrophic outcomes. Bores drew a stark historical parallel to the tobacco industry, which knowingly suppressed evidence of harm, arguing that similar accountability must be built into AI development. The goal, he concluded, is to establish a "backstop on people acting incredibly irresponsibly."
The interview presented a clear picture of the brewing battles over AI governance. Assemblyman Bores, armed with both technical expertise and legislative intent, is pushing for a framework that holds developers accountable for their creations' most severe potential impacts. His perspective challenges the notion that regulation inherently stifles progress, instead framing it as a necessary guardrail for responsible and sustainable innovation. The pushback from well-funded pro-AI groups reveals the intense lobbying efforts underway to shape the future of AI, a future that will undoubtedly be defined by the interplay of technological advancement and human oversight.