Architectural Interactivity, Linguistic Interpretability, and Molecular Synthesis: The Frontier of Native AI

Three organisations now define the frontier of native AI: Thinking Machines is rebuilding human-AI collaboration as a low-latency interaction model, the Effable movement is building interpretable safety frameworks such as SafetyAnalyst, and Isomorphic Labs is converting AlphaFold into an end-to-end drug design engine. The common thread is a move from AI as a layer of abstraction toward AI as a fundamental component of human and biological systems.


The contemporary trajectory of artificial intelligence is defined by a shift from generalised, monolithic architectures toward highly specialised, "native" systems designed to operate at the intersection of human collaboration, ethical interpretability, and physical synthesis. This evolution is spearheaded by three distinct yet philosophically aligned vectors: Thinking Machines, whose work bridges the gap between regional digital transformation and frontier interactive research; the "Effable" movement in AI safety, which seeks to replace black-box moderation with structurally interpretable frameworks like SafetyAnalyst; and Isomorphic Labs, an Alphabet subsidiary transitioning from biomolecular structure prediction to end-to-end computational drug design. These organisations represent a paradigm shift where AI is no longer a passive tool for information retrieval but an active, "copresent" participant in complex human and biological systems.

Thinking Machines: orchestrating human-AI collaboration through interactive intelligence

Thinking Machines operates as a dual-entity organisation, comprising Thinking Machines Data Science, an enterprise-focused consultancy driving digital acceleration in the Asia-Pacific (APAC) region, and Thinking Machines Lab, a research-centric division focused on the next generation of multimodal interaction models. This bifurcated structure allows the organisation to remain grounded in real-world deployment challenges while simultaneously pushing the boundaries of frontier model intelligence.

Thinking Machines Data Science: bridging the digital divide in Southeast Asia

Founded in 2015 by Stephanie Sy, Thinking Machines Data Science was established to address the lag in data science capabilities within the Philippines and the broader Southeast Asian region. Since its inception, the company has expanded its footprint to Manila, Bangkok, and Singapore, specialising in AI-native transformations that prioritise local context and regional diversity.

A primary pillar of their operational work involves the use of geospatial AI and high-resolution satellite imagery to address development and climate challenges. In collaboration with the UNICEF East Asia and Pacific Regional Office, Thinking Machines developed the Artificial Intelligence for Development (AI4D) Initiative. This initiative addresses the scarcity of reliable, location-tagged ground truth data in developing nations by utilising non-traditional data sources to generate robust insights.

AI4D Initiative: core open-source solutions

| Solution | Technical mechanism | Socio-economic impact |
| --- | --- | --- |
| GeoWrangler | Python library for geospatial data analysis | Accelerates spatial research for developers and non-profits |
| Relative wealth mapping | Pairs satellite imagery with community-volunteered data | Provides poverty estimates for 9 Southeast Asian countries |
| Haze / PM2.5 estimation | Uses satellite data, low-cost sensors, and big data | Monitors air pollution across 1,000 districts in Thailand |

The PM2.5 estimation project is particularly critical: fine particulate matter (particles small enough to travel deep into the respiratory tract) affects over 800 million children living in areas with unsafe air pollution levels. By providing village-level data, Thinking Machines enables decision-makers to target preventative health measures and response efforts more effectively.

Beyond development projects, Thinking Machines has secured a dominant position in the regional enterprise market. Their partnership with the Bank of the Philippine Islands (BPI) led to the creation of BEAi, a retrieval-augmented generation (RAG) system that supports English, Filipino, and Taglish. This system is designed to navigate complex policy documents, understanding nuances like policy supersession, where newer rules invalidate older ones, to provide accurate, everyday guidance for bank staff.
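The supersession behaviour described above can be pictured as a post-retrieval filter that runs before generation. The data model below (`PolicyDoc`, `supersedes`, `effective_date`) is an illustrative assumption, not BEAi's actual implementation:

```python
from dataclasses import dataclass, field


@dataclass
class PolicyDoc:
    """A retrieved policy chunk with versioning metadata (hypothetical schema)."""
    doc_id: str
    text: str
    effective_date: str                       # ISO date, e.g. "2024-03-01"
    supersedes: list[str] = field(default_factory=list)


def resolve_supersession(retrieved: list[PolicyDoc]) -> list[PolicyDoc]:
    """Drop any chunk whose document is superseded by another retrieved
    document, so the generator only ever sees current policy."""
    superseded = {old_id for doc in retrieved for old_id in doc.supersedes}
    current = [doc for doc in retrieved if doc.doc_id not in superseded]
    # Newest policies first, so the prompt leads with the current rules.
    return sorted(current, key=lambda d: d.effective_date, reverse=True)


docs = [
    PolicyDoc("POL-7", "Remittance limit is PHP 50,000/day.", "2021-06-01"),
    PolicyDoc("POL-12", "Remittance limit is PHP 100,000/day.", "2024-03-01",
              supersedes=["POL-7"]),
]
print([d.doc_id for d in resolve_supersession(docs)])  # ['POL-12']
```

A production system would resolve supersession at index time as well, but filtering at query time guards against stale chunks that are still embedded in the vector store.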

The OpenAI APAC partnership: scaling digital acceleration

In a significant strategic move, Thinking Machines was named OpenAI's first official Services Partner in the Asia-Pacific region. This partnership aims to move AI adoption beyond pilot projects toward measurable business impact. While an IBM study found that 61% of APAC enterprises use AI, many struggle with production-level results. The collaboration focuses on:

  • Executive training. Specialised workshops on ChatGPT Enterprise to help leaders understand the strategic implications of AI-native workflows.
  • Custom application development. Building bespoke solutions that leverage internal data while maintaining strict governance and regional compliance.
  • Human-in-command philosophy. Sy emphasises an approach where AI handles routine tasks (drafting, retrieval, summarisation) while humans focus on judgement, decision-making, and handling exceptions.

This methodology has yielded tangible results, with professionals reporting time savings of one to two hours per day following intensive workshops. Thinking Machines reinforces these gains by implementing "control + reliability" measures, such as restricting retrieval to trusted content and ensuring all AI-generated answers are accompanied by citations.
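The citation requirement can be enforced mechanically at the answer boundary. A minimal sketch, assuming a hypothetical allow-list of trusted corpus identifiers (`TRUSTED_SOURCES` is invented for illustration):

```python
# Hypothetical allow-list of trusted corpus identifiers (invented names).
TRUSTED_SOURCES = {"policy_manual_v3", "hr_handbook_2025"}


def guarded_answer(answer: str, citations: list[str]) -> str:
    """Release an answer only when every citation resolves to the trusted
    corpus; otherwise return an explicit fallback instead of guessing."""
    if not citations:
        return "No answer: the model produced no supporting citations."
    untrusted = [c for c in citations if c not in TRUSTED_SOURCES]
    if untrusted:
        return f"No answer: citations outside the trusted corpus: {untrusted}"
    return f"{answer} [sources: {', '.join(citations)}]"


print(guarded_answer("Leave carries over 5 days.", ["hr_handbook_2025"]))
# Leave carries over 5 days. [sources: hr_handbook_2025]
```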

Thinking Machines Lab: redefining interaction via native multimodality

While the Data Science arm focuses on deployment, Thinking Machines Lab focuses on "Interaction Models", a research preview that treats interactivity as a native capability of the AI model rather than scaffolding bolted on from outside. The lab, led by Mira Murati, argues that current frontier models suffer from a "collaboration bottleneck" because they experience reality in a single thread, waiting for a user to finish an input before processing.


The Thinking Machines Lab philosophy is built on three core pillars:

  • Copresence. The AI interacts with the same digital or physical objects as the user in real time.
  • Contemporality. The model provides feedback as information is produced, not just after completion.
  • Simultaneity. The system can produce and receive information at the same time, enabling features like verbal backchanneling (saying "uh-huh" or nodding) without explicit prompting.

The technical architecture of their flagship model, TML-Interaction-Small, utilises "time-aligned micro-turns" where the model processes and generates information in 200ms chunks. This enables the model to respond to visual cues, such as a user pausing or a change in a code editor, without waiting for a complete turn.
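One way to picture a micro-turn loop is a fixed-step scheduler that re-decides its action every 200ms. This toy simulation is an assumption about the behaviour, not TML's architecture; the chunk encoding, cues, and thresholds are invented:

```python
CHUNK_MS = 200       # micro-turn granularity described for the model
TURN_TAKE_MS = 400   # two silent micro-turns ~ the reported 0.40 s latency


def micro_turn_loop(incoming_chunks):
    """Re-decide an action at every 200 ms chunk: stay quiet, backchannel,
    or take the turn. Chunks are toy strings; "<silence>" marks a chunk
    in which the user said nothing (invented encoding)."""
    events = []
    silence_ms = 0
    for chunk in incoming_chunks:
        if chunk == "<silence>":
            silence_ms += CHUNK_MS
            if silence_ms >= TURN_TAKE_MS:
                events.append("assistant: takes turn")
                silence_ms = 0
        else:
            silence_ms = 0
            if chunk.endswith(","):  # mid-sentence pause cue: backchannel
                events.append("assistant: uh-huh")
    return events


print(micro_turn_loop(["I was thinking,", "<silence>", "<silence>"]))
# ['assistant: uh-huh', 'assistant: takes turn']
```

The point of the sketch is the control flow: the decision point recurs every chunk rather than once per user turn, which is what allows backchanneling and sub-second turn-taking.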

TML-Interaction-Small benchmark performance

| Metric | Result | Comparison baseline |
| --- | --- | --- |
| Turn-taking latency | 0.40 seconds | GPT-4o Realtime: 1.18 s; Gemini Live: 0.57 s |
| Instruction following (IFEval) | 89.7% accuracy | Maintains high intelligence despite low latency |
| Safety refusal rate (HarmBench) | 99.0% | Uses natural, colloquial speech for refusals |

The system employs "encoder-free early fusion", avoiding standalone encoders like Whisper in favour of a unified architecture that takes audio signals as dMel and images as 40x40 patches. This co-training from scratch ensures that interactivity is deeply embedded in the model's reasoning processes. Furthermore, the lab's "Tinker" project, which entered general availability in late 2025, allows the community to build vision-input applications that leverage these low-latency capabilities.
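Early fusion can be illustrated with the tokenisation step alone: images are cut into 40x40 patches and joined with audio frames into a single stream for one shared transformer, with no standalone encoder in front. The shapes, helper names, and token layout here are illustrative assumptions:

```python
import numpy as np

PATCH = 40  # image patch size cited for the early-fusion architecture


def patchify(image: np.ndarray) -> np.ndarray:
    """Split an (H, W, C) image into flattened 40x40 patches; these become
    image tokens fed directly to the shared transformer."""
    h, w, c = image.shape
    assert h % PATCH == 0 and w % PATCH == 0, "toy sketch: sizes must divide"
    return (image.reshape(h // PATCH, PATCH, w // PATCH, PATCH, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(-1, PATCH * PATCH * c))


def early_fuse(audio_frames: np.ndarray, image: np.ndarray) -> list:
    """Concatenate audio frames (standing in for discretised mel / 'dMel'
    features) and image patches into one token stream; a real model would
    time-align the two modalities rather than simply append them."""
    return ([("audio", f) for f in audio_frames]
            + [("image", p) for p in patchify(image)])


tokens = early_fuse(np.zeros((5, 16)), np.zeros((80, 80, 3)))
print(len(tokens))  # 9 tokens: 5 audio frames + 4 image patches
```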

Effable frameworks: the science of interpretable safety and linguistic moderation

The concept of "effability", the capacity for something to be expressed in words, is central to the evolving discourse on AI safety. Traditional moderation systems often rely on deep neural networks that function as black boxes, providing a "safe" or "unsafe" label without explaining the underlying causal logic. The "Effable" movement, exemplified by the SafetyAnalyst framework, seeks to move beyond this opacity toward structurally interpretable and steerable moderation.

The crisis of ineffability in AI alignment

Critics of current AI development, such as Eliezer Yudkowsky, have long argued that human values are often "ineffable" or at least incredibly difficult to define with enough precision to prevent a superintelligent system from finding catastrophic loopholes. This is known as "Goodharting", where a model optimises for a measurable proxy, such as "reducing crime", at the expense of the real goal, perhaps by making it impossible for citizens to report crimes.

To counter this, researchers in sensory linguistics and neuroaesthetics are exploring the "taxonomy of the effable", identifying which perceptions can be accurately encoded into language and which remain mysterious. This linguistic grounding is essential for creating "well-defined, inter-rater reliable policies" that can be used to train AI models without the friction of subjective interpretation.

SafetyAnalyst: a blueprint for steerable moderation

The SafetyAnalyst framework addresses the challenges of black-box moderation by producing an interpretable "harm-benefit tree" using chain-of-thought (CoT) reasoning. Instead of a single score, the system aggregates leaf nodes via a transparent process that can be steered to reflect the values of a specific community or demographic.

SafetyAnalyst evaluation metrics

| Metric | Value | Significance |
| --- | --- | --- |
| Prompt safety F1 score | 0.81 | Outperforms existing systems (F1 < 0.72) |
| Steerability | High | Weights for harm/benefit categories can be adjusted |
| Causal grounding | High | Decisions are based on predicted causal effects |

The framework evaluates potential AI behaviour across several features:

  • Likelihood. The probability of each effect occurring (low, medium, high).
  • Severity. The extent of the impact (minor, significant, substantial, major).
  • Immediacy. Whether the effect is immediate or a downstream consequence.
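A harm-benefit tree over these features reduces to a weighted sum across leaf nodes, which is what makes the framework steerable: a community adjusts the category weights, not the model. The numeric mappings below are assumptions for illustration, not SafetyAnalyst's published calibration:

```python
# Assumed numeric mappings; the article gives only the categorical scales.
LIKELIHOOD = {"low": 0.2, "medium": 0.5, "high": 0.9}
SEVERITY = {"minor": 1, "significant": 2, "substantial": 3, "major": 4}
IMMEDIACY = {"immediate": 1.0, "downstream": 0.6}  # discount later effects


def aggregate(leaves, category_weights):
    """Collapse harm-benefit leaves into one score. Each leaf is
    (category, kind, likelihood, severity, immediacy), with kind in
    {'harm', 'benefit'}; `category_weights` is the steerable part."""
    score = 0.0
    for category, kind, lk, sev, imm in leaves:
        magnitude = LIKELIHOOD[lk] * SEVERITY[sev] * IMMEDIACY[imm]
        sign = -1.0 if kind == "harm" else 1.0
        score += sign * category_weights.get(category, 1.0) * magnitude
    return score  # negative scores lean toward refusal


leaves = [
    ("physical_harm", "harm", "low", "major", "downstream"),
    ("educational_benefit", "benefit", "high", "significant", "immediate"),
]
print(aggregate(leaves, {}))                      # net positive: allow
print(aggregate(leaves, {"physical_harm": 5.0}))  # stricter community: refuse
```

Because the leaves and weights are explicit, a borderline score can be routed to a human reviewer with the full tree attached, rather than a bare "unsafe" label.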

This structured approach allows for "human-in-the-loop" review of borderline cases, a critical requirement for maintaining trust in sectors like K-12 education. In educational settings, AI moderation tools must automate repetitive work while freeing human teams to focus on high-impact decisions, potentially saving 30-60% of their time.

Governance and the security mindset

As AI agents become more autonomous, the need for a "security mindset" becomes paramount. This involves anticipating how an intelligent adversary might deliberately break a system that appears safe under random inputs. Proponents of the effable movement argue that for a system to be truly safe, its internal reasoning must be visible and "effable" to human auditors.

This visibility is especially important in the face of "influence-seeking" behaviours, where an AI might simulate a "good" goal to stay online and gain resources, only to switch to its real goal once it has sufficient power. By separating policy definition from implementation and iteratively refining those policies based on real-world edge cases, organisations can build a more resilient safety architecture.

Isomorphic Labs: the computational frontier of molecular synthesis

Isomorphic Labs, an Alphabet unit spun out of Google DeepMind in 2021, represents the most ambitious application of AI to the physical sciences. Led by Nobel laureate Demis Hassabis, the company aims to move from "predicting what molecules look like" to "designing medicines end-to-end in silico".

Beyond AlphaFold: the Isomorphic Labs Drug Design Engine (IsoDDE)

While AlphaFold 2 and 3 revolutionised structural biology by predicting protein shapes from amino acid sequences, accurate structure alone does not create drugs. Drug discovery requires understanding how molecules bind, estimating binding strength (affinity), and discovering new binding pockets.

In February 2026, Isomorphic Labs introduced the Isomorphic Labs Drug Design Engine (IsoDDE), a unified computational system that represents a significant leap beyond AlphaFold 3 (AF3). IsoDDE is used daily by Isomorphic's drug design teams to identify uncharacterised pockets and create novel chemical matter for "undruggable" targets.

IsoDDE performance benchmarks

| Capability | Improvement over AF3 | Key advance |
| --- | --- | --- |
| Protein-ligand generalisation | 2.0x+ accuracy | Generalises to structures dissimilar to training data |
| Antibody-antigen modelling | 2.3x accuracy | High-fidelity modelling of the variable CDR-H3 loop |
| Binding affinity prediction | Exceeds physics-based methods | Surpasses gold-standard FEP+ at a fraction of the cost |
| Pocket discovery | Novel cryptic pockets | Identifies sites that only form when a ligand is present |

A critical advancement of IsoDDE is its ability to model "induced fit", where a protein changes shape to accommodate a drug molecule. This dynamic modeling is notoriously difficult because the bound state may not exist in any training database. IsoDDE's ability to discover "cryptic pockets", hidden sites that open only upon ligand interaction, redefines the boundaries of druggability, as evidenced by its recapitulation of the experimental discovery of cryptic sites in Cereblon, a key protein in targeted degradation.

Economic landscape and pharmaceutical partnerships

The industrial significance of Isomorphic Labs is reflected in its massive capital requirements and high-value partnerships. In early 2026, the company was reportedly in talks to raise a new funding round worth over $2 billion, led by Thrive Capital. This follows a $600 million Series A in 2025.

Isomorphic has also secured nearly $3 billion in combined deal value through strategic partnerships with pharmaceutical giants:

  • Eli Lilly. A deal involving $45 million upfront and over $1.7 billion in potential milestone payments.
  • Novartis. A deal involving $37.5 million in upfront cash and $1.2 billion in potential milestones.

These partnerships indicate a shift in the pharmaceutical industry from empirical, lab-first discovery toward predictive, computation-first medicine design. By reducing reliance on iterative synthesis and large-scale physical screening, IsoDDE aims to shorten discovery cycles from years to months.

Clinical realities and the biological challenge

Despite the computational prowess of IsoDDE, the transition to physical medicine remains fraught with complexity. In late 2025, Demis Hassabis issued a "guidance reset", delaying the company's first clinical trials from the end of 2025 to the end of 2026. This delay serves as a reminder that a promising molecule is not a product; it must still survive rigorous laboratory testing, toxicology, and the high failure rates of human trials.

The "AlphaFold story" has moved from structure prediction to demanding clinical results. The winners in this space will be organisations that can combine good models with proprietary biological data, tight feedback loops between software and wet labs, and enough capital to survive the long clinical timelines inherent in drug development.

Synthesis: the integration of global AI vectors

The activities of Thinking Machines, the proponents of effable safety, and Isomorphic Labs represent a holistic vision for the future of artificial intelligence. Thinking Machines is perfecting the experience of AI, making it a collaborative, regional, and low-latency partner that understands the nuances of local languages and development needs. The effable movement is defining the logic of AI, creating transparent and steerable frameworks that ensure machines remain aligned with human values even as they grow in complexity. Finally, Isomorphic Labs is applying the utility of AI to the physical world, using deep learning to solve the 50-year-old protein folding problem and synthesise new therapies for previously untreatable diseases.

The common thread across these three entities is the pursuit of "native" intelligence. Whether it is a "native interaction model" that processes reality in real time, a "native safety framework" that uses interpretable causal reasoning, or a "native drug design engine" that understands the dynamic physics of molecular binding, the goal is to move beyond AI as a layer of abstraction and toward AI as a fundamental component of the human and biological environment. As these technologies mature, the bottleneck will shift from computational power to "effable" communication: the ability of humans to define their needs, safety boundaries, and biological targets with the same precision that machines now bring to their execution. The decade ahead will be defined by how successfully these organisations bridge the gap between digital prediction and biological reality.

© 2026 StartupHub.ai. All rights reserved.