Agentic AI has become the holy grail of the tech industry, largely outside of the view of the general public. OpenAI defines this idea as “AI systems that can pursue complex goals with limited direct supervision.” Essentially, we’re talking about artificial agents that can act on their own toward achieving goals.
Put simply, it’s the ideal AI personal assistant that can keep track of all your daily tasks for you, plan around changes in your calendar, and understand abstract requests like “prepare my meal plan and order groceries for the next month.”
However, as appealing as it may sound, agentic AI raises a number of practical, ethical, and even moral questions. Let’s explore these artificial agents that are increasingly taking over the internet and our lives.
Agentic AI Explained
Agentic AI has a goal-driven and proactive nature. In short, it aims to automate a huge part of knowledge work in just a few clicks. Jeremy Nixon defined its practical difference from traditional AI as a 'chaining' capability - taking a sequence of actions in response to a single request to the machine learning model.
For example, when you ask an AI agent to create a website for you, it needs to immediately generate a series of small goals and begin executing them:
- Come up with a structure of the website and its various screens.
- Write headlines and body content for whatever the website does.
- Generate the HTML code and the backend in a chosen programming language.
- Design the visuals, and fill the page with graphics, photos, etc.
- Test the website on different devices and make sure it’s bug-free.
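The 'chaining' behavior behind these steps can be illustrated as a plan-then-execute loop. This is a minimal, hypothetical sketch - the `plan` and `execute_step` functions here are placeholders standing in for calls a real agent would make to a language model and its tools:

```python
# Minimal sketch of an agentic "chaining" loop: one abstract request is
# decomposed into ordered subtasks, which are then executed in turn.
# `plan` and `execute_step` are hypothetical stand-ins for LLM/tool calls.

def plan(request: str) -> list[str]:
    """Break an abstract request into an ordered list of subtasks."""
    # A real agent would ask a language model for this decomposition.
    return [
        "Outline the site structure and its screens",
        "Write headlines and body copy",
        "Generate the HTML and backend code",
        "Design visuals and add graphics",
        "Test the site on different devices",
    ]

def execute_step(task: str) -> str:
    """Carry out one subtask (here, just a placeholder result)."""
    return f"done: {task}"

def run_agent(request: str) -> list[str]:
    results = []
    for task in plan(request):  # the chain: each subtask runs in sequence
        results.append(execute_step(task))
    return results

results = run_agent("Create a website for my bakery")
```

The point of the sketch is the shape of the control flow: a single request fans out into a whole sequence of actions without further prompting from the user.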
For an ideal agentic AI, all these actions should be performed in one request. Of course, there’s a lot of complexity involved - when you want to design a website, you probably expect back-and-forth, confirmation of visuals, copy, and so on. And that’s the sort of thing you can already do with the many generative AIs out there - Google’s Gemini, OpenAI’s ChatGPT, Anthropic’s Claude, etc.
So, we’re simply discussing more advanced versions of generative AIs that already exist. An agentic AI would be just like another colleague you communicate with in Slack or Teams - it specializes in something and can go and accomplish complex tasks based on abstract instructions, then report back to you about the results.
Breaking Down Agentic AI
There are a few key differences between agentic AI and traditional systems we’re used to seeing in the tech industry. They are:
- Task Chaining. It’s capable of taking complex abstract instructions, breaking them down into individual tasks, and then executing them.
- Advanced Communication. It can process language, confirm expectations, discuss tasks, and have a degree of reasoning in decision-making.
- Adaptability. Older AIs relied on a series of predefined tasks. Agentic AI, on the other hand, can change its behavior based on the situation.
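The adaptability point is the one that most separates an agent from a fixed script. A hypothetical sketch, with a stand-in `execute` function simulating a tool call that can fail, shows the idea: rather than aborting on failure, the agent changes its plan based on the situation:

```python
# Hypothetical sketch of adaptive agent behavior: when a step fails,
# the agent substitutes a fallback instead of following a fixed script.
# `execute` is a stand-in for a real tool call; here it simulates
# image generation failing.

def execute(task: str) -> bool:
    """Pretend to run a tool; the image-generation step 'fails'."""
    return task != "generate images"

def run_adaptive_agent(tasks: list[str]) -> list[str]:
    log = []
    for task in tasks:
        if execute(task):
            log.append(f"ok: {task}")
        else:
            # Adapt: pick a fallback action rather than stopping.
            fallback = f"use stock photos instead of '{task}'"
            execute(fallback)
            log.append(f"adapted: {fallback}")
    return log

log = run_adaptive_agent(["write copy", "generate images", "publish"])
```

An older, rule-based system would simply halt at the failed step; the adaptive loop keeps the overall goal alive by rewriting its own plan mid-run.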
Agentic AIs are built on large language models and have access to massive amounts of data, which helps them understand connections and differences between concepts or even real-world objects. Most remarkably, these systems can extrapolate new information from existing knowledge.
Risks of Agentic AI
This level of autonomy has enormous benefits for businesses and consumers. However, it also comes with unique challenges that need to be addressed:
Bias
As exciting as it may be, agentic AI is trained on data from various internet sources. As the flood of hilariously bad replies from Google’s recent integration of AI into its search engine shows, these systems are still mostly trained on the internet. And the internet has Reddit, X, and The Onion. There’s a strong need to curate the resources AI systems learn from and interact with.
Hallucination
Generative AI is prone to making things up, and agentic AI is based on generative AI capabilities. This means that it, too, will be susceptible to hallucinations and odd behaviors. It’ll likely make up answers, fill gaps with randomly generated nonsense, or even learn to ‘lie’ about having done something when unable to properly interpret instructions.
