Mozilla.ai has released any-llm-go, bringing its unified interface for interacting with large language models (LLMs) to the Go programming language. The release aims to give Go developers the same flexibility previously available to Python developers, enabling them to switch between cloud and local LLM providers without rewriting code.
The original any-llm v1.0, launched last year, addressed the need for a single API to manage diverse model providers. The Go port, detailed in an announcement from Mozilla.ai, extends that capability to a language prevalent in production infrastructure.
Unified Provider Interface
Any-llm-go normalizes the disparate behaviors of LLM providers, such as streaming nuances, error semantics, and feature support, under a consistent interface that adheres to OpenAI API standards. This ensures a predictable developer experience.
The library ships with out-of-the-box support for eight providers: Anthropic, DeepSeek, Gemini, Groq, Llamafile, Mistral, Ollama, and OpenAI. Support for core functionality (completion, streaming, tool use, reasoning, and embeddings) varies by provider, as the table below summarizes.
| Provider | Completion | Streaming | Tools | Reasoning | Embeddings |
|-----------|:---:|:---:|:---:|:---:|:---:|
| Anthropic | ✓ | ✓ | ✓ | ✓ | ✗ |
| DeepSeek | ✓ | ✓ | ✓ | ✓ | ✗ |
| Gemini | ✓ | ✓ | ✓ | ✓ | ✓ |
| Groq | ✓ | ✓ | ✓ | ✗ | ✗ |
| Llamafile | ✓ | ✓ | ✓ | ✗ | ✓ |
| Mistral | ✓ | ✓ | ✓ | ✓ | ✓ |
| Ollama | ✓ | ✓ | ✓ | ✓ | ✓ |
| OpenAI | ✓ | ✓ | ✓ | ✓ | ✓ |
Developers can write their application logic once and switch providers simply by changing an import statement and a model name; request shapes, streaming logic, and error handling remain consistent.
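To make the switch concrete, the sketch below selects a backend behind a single function. The import paths, the `anyllm.Provider` interface, and the constructors are assumptions about the package layout rather than the library's documented API; only the pattern is the point.

```go
package llmswitch

import (
	"fmt"

	// Assumed package layout; the real import paths may differ.
	anyllm "github.com/mozilla-ai/any-llm-go"
	"github.com/mozilla-ai/any-llm-go/providers/anthropic"
	"github.com/mozilla-ai/any-llm-go/providers/openai"
)

// NewProvider returns a backend and a default model name. Downstream code
// depends only on the (assumed) anyllm.Provider interface, so nothing else
// changes when the backend does.
func NewProvider(name string) (anyllm.Provider, string, error) {
	switch name {
	case "anthropic":
		p, err := anthropic.New() // hypothetical constructor
		return p, "claude-sonnet-4-5", err
	case "openai":
		p, err := openai.New() // hypothetical constructor
		return p, "gpt-4o-mini", err
	default:
		return nil, "", fmt.Errorf("unknown provider %q", name)
	}
}
```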
Designed for Go
Any-llm-go is engineered specifically for Go, leveraging the language's idiomatic features. Streaming responses utilize Go channels, making them compatible with `range` and `select` statements. Error handling is managed through typed sentinel errors like `ErrRateLimit` and `ErrAuthentication`, allowing for precise error checking with `errors.Is` and `errors.As`.
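A consumer of such a stream might look like the following sketch. The `StreamComplete` method, the chunk type, and the separate error channel are assumed shapes; the `select` loop, the `errors.Is` check, and the `ErrRateLimit` sentinel reflect the behavior described above.

```go
package llmstream

import (
	"context"
	"errors"
	"fmt"

	anyllm "github.com/mozilla-ai/any-llm-go" // assumed core package
)

// printStream drains a channel-based stream until it closes, an error
// arrives, or the context is cancelled.
func printStream(ctx context.Context, p anyllm.Provider, req anyllm.CompletionRequest) error {
	chunks, errs := p.StreamComplete(ctx, req) // hypothetical signature
	for {
		select {
		case chunk, ok := <-chunks:
			if !ok {
				return nil // channel closed: the stream is complete
			}
			fmt.Print(chunk.Text) // emit tokens as they arrive
		case err := <-errs:
			if errors.Is(err, anyllm.ErrRateLimit) {
				// Typed sentinel: a caller could back off and retry here.
			}
			return err
		case <-ctx.Done():
			return ctx.Err() // cancellation or deadline via the context
		}
	}
}
```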
Configuration is handled via functional options, providing type-safe and composable setup. Every function call includes `context.Context` for managing cancellation, timeouts, and tracing, aligning with standard Go practices.
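Configuration and per-call deadlines might then combine as follows. The option names `WithAPIKey` and `WithBaseURL` are illustrative assumptions; the functional-options pattern and `context.Context` plumbing are as the announcement describes.

```go
package llmconfig

import (
	"context"
	"os"
	"time"

	anyllm "github.com/mozilla-ai/any-llm-go"           // assumed core package
	"github.com/mozilla-ai/any-llm-go/providers/openai" // assumed provider package
)

// callWithTimeout builds a provider via functional options and bounds the
// request with a context deadline, as any Go service would.
func callWithTimeout(req anyllm.CompletionRequest) (string, error) {
	provider, err := openai.New(
		openai.WithAPIKey(os.Getenv("OPENAI_API_KEY")),  // hypothetical option
		openai.WithBaseURL("https://api.openai.com/v1"), // hypothetical option
	)
	if err != nil {
		return "", err
	}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	resp, err := provider.Complete(ctx, req)
	if err != nil {
		return "", err
	}
	return resp.Text, nil
}
```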
OpenAI-Compatible Base
For providers exposing OpenAI-compatible APIs, any-llm-go includes a shared base provider. This significantly simplifies adding new providers like Groq, DeepSeek, Mistral, and Llamafile, as the base handles common functionalities such as completions, streaming, tool calls, embeddings, and error conversion.
Developers only need to define configuration details like the API endpoint and relevant environment variables for API keys. This approach is designed to facilitate community contributions, allowing users to easily add support for providers that expose similar APIs.
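The announcement does not publish the base provider's internals, but a thin wrapper along these lines conveys the idea; the `openaibase` package, its `Config` fields, and the import path are all hypothetical.

```go
// Package exampleprov sketches a new OpenAI-compatible backend built on an
// assumed shared base; every name here is illustrative.
package exampleprov

import (
	openaibase "github.com/mozilla-ai/any-llm-go/providers/openaibase" // assumed path
)

// New supplies only the endpoint and key lookup; the shared base is assumed
// to handle completions, streaming, tool calls, embeddings, and error
// conversion on top of this configuration.
func New() (*openaibase.Provider, error) {
	return openaibase.New(openaibase.Config{
		BaseURL:   "https://api.example-llm.dev/v1", // OpenAI-compatible endpoint
		APIKeyEnv: "EXAMPLEPROV_API_KEY",            // env var holding the key
	})
}
```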
Contribution and Future Plans
Mozilla.ai encourages community contributions, providing a detailed guide for adding new providers. The roadmap includes support for additional providers like Cohere, Together AI, AWS Bedrock, and Azure OpenAI, alongside batch completion support and continued parity with the Python any-llm library.
The library also integrates with the beta any-llm managed platform, offering features like secure API key management, LLM performance monitoring, and budget controls for multi-provider environments.
Getting Started
Developers can integrate any-llm-go by running `go get github.com/mozilla-ai/any-llm-go`. Further details are available in the official documentation and examples repository.
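After installation, a first call might look like the minimal sketch below; as with the earlier snippets, the package paths and types are assumptions, and the documentation and examples repository hold the authoritative versions.

```go
package main

import (
	"context"
	"fmt"
	"log"

	anyllm "github.com/mozilla-ai/any-llm-go"           // assumed core package
	"github.com/mozilla-ai/any-llm-go/providers/ollama" // assumed provider package
)

func main() {
	// Hypothetical constructor; assumes a local Ollama daemon is running.
	provider, err := ollama.New()
	if err != nil {
		log.Fatal(err)
	}

	resp, err := provider.Complete(context.Background(), anyllm.CompletionRequest{
		Model:    "llama3.2",
		Messages: []anyllm.Message{{Role: "user", Content: "Say hello in Go."}},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Text)
}
```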