The Software Development Life Cycle has evolved dramatically over recent years, with DevOps emerging as a critical element. The DevOps market is estimated to reach $25 billion by 2028, and a recent survey by Grove Ventures suggests the practice commands 80% of engineering budgets.
Yet, despite its significance, the practice remains highly complex. Balancing speed with quality, orchestrating multi-layered deployments, and maintaining rigorous security standards are just a few of the hurdles teams encounter. Amidst these complexities, Generative AI is making a profound impact.
Within this dynamic landscape, Kubiya, an Israeli startup, has moved swiftly to capitalize on the potential of Generative AI for DevOps. Founded in 2022 by Amit Eyal Govrin and Shaked Askayo, and backed by millions in funding from VCs in Israel and Silicon Valley, Kubiya developed 'Kubi', an agent-based DevOps assistant powered by Large Language Models (LLMs). Kubi offers a comprehensive suite of engineering tools, enabling tasks such as IaC provisioning, triggering CI/CD jobs, managing Kubernetes namespaces, and handling Jira tickets with unprecedented ease.
In my ongoing series of interviews with pioneers in Generative AI, I recently sat down with Mr. Govrin. We discussed Kubiya's groundbreaking work with 'Agents', his early bets on LLMs in the context of Conversational AI, and the company's ambitions to push the envelope further into the realm of autonomous agents for operations.
How has Generative AI helped in the implementation of DevOps?
"Generative AI gave way to a more efficient exchange between users and their engineering tools. The traditional one-sided human to machine interaction, which lacked any form of feedback and required substantial user effort, has evolved into a reciprocal communication medium. It allows for more intuitive and interactive experiences, significantly enhancing the way humans interact with machines, and more importantly, significantly changes the discussion around the time it takes to automate end-to-end DevOps tasks."
"At Kubiya we built a full-stack LLM solution layered with a suite of purpose-built, agent-actors to address every aspect of the enterprise use-cases. A user may interact with our agents on a general purpose question around their knowledge base or docs. In this instance Kubiya fine-tunes the embedding and uses RAG to infer answers to the questions complete with a summary and links, all while maintaining complete control of the access and permission levels of the user. This will typically give them an appetite to challenge the system with more advanced use cases such as granting permission to create a new development stack using Terraform for instance or create an IAM policy and attaching it to an S3 bucket. Finally a user will feel the confidence to ask the system to create a complex chain of actions using multiple tools, data sources and policies."
"We have seen several of our customers create automated tasks that sweep their Jira 'to-do' queue on a scheduled basis, try to solve a myriad of complex use-cases including Kubernetes troubleshooting, elevated permissions, GitHub PR merge and Terraform code, and then notate each ticket once finished so that it’s solved and moved to 'completed'. Some have even gone as far as having the agent assign follow-on steps to admins and notify the requesters on a channel once complete."
"It’s hard to give this use-case a name as it covers so much ground, but it’s an end-to-be automation of DevOps tasks that otherwise would have required multiple tools, knowledge sources and tasks not to mention involve multiple humans in the loop. This is the genesis of AI agents - offloading work from humans to free them to focus on their best skills."
