Thinking Machines Lab just made a significant move to challenge OpenAI's platform dominance. Their Tinker API is now generally available, but the real news is the introduction of an OpenAI API-compatible inference interface. This means developers can now drop Tinker models—including the new trillion-parameter Kimi K2 Thinking model—into existing infrastructure built for OpenAI with little to no code change.
This adoption of the de facto industry standard is a direct shot at vendor lock-in. By offering an OpenAI-compatible API, Thinking Machines drastically lowers the switching cost for developers looking for alternatives, whether for cost, performance, or specialized models. The new interface also supports quick sampling from any model checkpoint by specifying its path, even while the model is still actively training.
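To make the compatibility claim concrete, here is a minimal sketch of what a request to such an endpoint looks like. Because the interface follows the OpenAI chat-completions format, only the base URL and model identifier change; the base URL, model path, and API key below are placeholders, not real values—consult the Tinker documentation for the actual endpoint and path syntax.

```python
import json
import urllib.request

# Hypothetical values for illustration only -- not the real endpoint or path format.
BASE_URL = "https://example-tinker-endpoint/v1"   # assumption: OpenAI-style /v1 prefix
MODEL_PATH = "tinker://my-run/checkpoint-1234"    # assumption: a checkpoint path, even mid-training

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-style chat-completions request.

    The payload shape is the familiar chat-completions format; an
    OpenAI-compatible server accepts it unchanged.
    """
    payload = {
        "model": MODEL_PATH,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_API_KEY",  # placeholder credential
        },
        method="POST",
    )

req = build_chat_request("Summarize the Tinker API in one sentence.")
# Sending req with urllib.request.urlopen(req) would return an OpenAI-style
# chat-completion JSON response; omitted here since it needs real credentials.
```

In practice this is also why existing tooling works out of the box: any client built on the OpenAI SDK can typically be repointed by swapping its configured base URL and API key, with the model field set to a Tinker checkpoint path.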
