Stockholm-based Farang, an artificial intelligence research lab, has secured €1.5 million in seed funding. Voima Ventures and the Amadeus APEX Technology Fund led the round. This investment supports Farang's development of next-generation foundational Large Language Models.
Farang's novel architecture is positioned as an alternative to the transformer models that underpin ChatGPT, Claude, and Gemini. Rather than generating text token by token, the technology formulates a complete response internally before translating those concepts into words. As a result, the company says it requires twenty-five times less computational power than current solutions.
The startup's approach performs internal reasoning through non-textual mechanisms, producing more coherent answers while sharply reducing computational demands. Farang aims to build specialized AI assistants that outperform current solutions in specific domains.
Initially, Farang targets niche applications where existing AI assistants often fall short, such as support for specific programming frameworks like React and for specialized medical fields. The architecture makes it cost-effective to develop models for medical domains that remain untapped.
Furthermore, Farang's technology allows organizations to deploy specialized AI models on-premises, ensuring full privacy control. This is crucial for handling sensitive data in sectors such as healthcare, law, and finance, where companies can train and run Farang models entirely in-house.
Advancing Foundational AI Architecture
The funding will primarily be used to scale Farang's proof-of-concept models and to invest in the computing power required to train and fine-tune models for specialized areas. Farang's long-term vision includes challenging OpenAI in the general-purpose AI space.
Prominent angel investors, including Tero Ojanperä, Nilay Oza, and Niraj Aswani, also participated in the seed round.

