
Dive Into Building Intelligent Agent Teams with the ADK!
Hey everyone! If you've been curious about how to get started with building applications powered by Large Language Models, or maybe you've tried it out but want to build something more sophisticated, then you're in the right place! I recently went through a super helpful tutorial on the Agent Development Kit (ADK), and it's all about constructing a multi-agent system, which is essentially a team of AI "brains" that work together to solve specific problems. Pretty cool, right?
For instance, imagine a smart home assistant that doesn’t just answer your questions but coordinates your day across different areas of your home. One agent checks the weather and road conditions, another monitors your calendar and meetings, while a third orders your coffee in advance, all working together behind the scenes. This is the power of having multiple agents!
This tutorial walks you through building a Weather Bot agent team, starting simple and then progressively adding some neat advanced features, like dynamic reasoning, planning, and memory sharing between agents. I know, a weather bot might sound simple, but it's actually a perfect way to get hands-on with the core concepts of ADK. You'll learn the ins and outs of how to structure interactions between agents, manage their memory state, keep things safe, and get multiple AI models to play nicely together. Once you get a feel for this simpler example, you can start building those more complex agents as well.
What is ADK?
For those wondering, ADK is a Python framework from Google designed to make your life easier when you're developing LLM-powered applications. It gives you the building blocks to create agents that can do more than just chat; they can reason, plan, use tools (basically Python functions), have dynamic conversations, and even collaborate as a team.
This tutorial really gets into the nitty-gritty, covering:
- Tool Definition & Usage: You'll be crafting Python functions that act as "tools" for your agents, giving them specific powers like fetching data. And, you'll learn how to tell your agents how to use these tools effectively.
- Multi-LLM Flexibility: Want to use Gemini for one task and GPT-4o for another? No problem! ADK's integration with LiteLLM lets you configure agents to use various leading LLMs.
- Agent Delegation & Collaboration: This is where the "team" part comes in. You'll design specialized sub-agents and set up automatic routing so user requests go to the best agent for the job.
- Session State for Memory: Learn how to use Session State and ToolContext so your agents can remember what was said earlier in the conversation, which makes interactions feel much more natural.
- Safety Guardrails with Callbacks: This is super important. You'll implement before_model_callback and before_tool_callback to check, change, or even block requests or tool usage based on rules you set. Think of it as a safety net for your AI.
By the end of it, you'll have a fully functional multi-agent Weather Bot system that not only tells you the weather but also handles chit-chat, remembers the last city you asked about, and includes guardrails to make sure only appropriate questions reach your agent.
First Steps
Now, before we do anything, we need to install ADK and LiteLLM. These packages contain the classes and functions we need to create our agents.
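In a Colab or Jupyter cell that's a single line (the leading ! hands the command to the shell; -q just quiets the output):

```python
# Install the Agent Development Kit and LiteLLM into the notebook environment.
!pip install google-adk litellm -q
```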
Then you'll import the necessary libraries:
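Here's a sketch of the imports the rest of this walkthrough leans on; the module paths follow the ADK documentation at the time of writing and may shift in newer releases:

```python
import logging
import warnings

# Core ADK building blocks (module paths per the ADK docs at the time of writing).
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService

# Payload types (Content, Part) used when sending messages to the runner.
from google.genai import types

# Keep the notebook output tidy.
warnings.filterwarnings("ignore")
logging.basicConfig(level=logging.ERROR)
```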
And of course, you'll need to configure your API keys for the LLMs you plan to use. The tutorial shows you how to initialize these variables. First, to get a key, go to the website linked in the comments, hit "Create API key", give it a name, and it'll generate a key for you to copy and paste into the code. Remember that these are your personal keys. Don't share them!
(Code snippet adapted from the tutorial for illustrative purposes)
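A minimal sketch of that setup, assuming the standard environment variable names that the Gemini client and LiteLLM look for (only set the keys for providers you actually plan to use):

```python
import os

# Paste your own keys here; never commit them to a repository.
os.environ["GOOGLE_API_KEY"] = "YOUR_GOOGLE_API_KEY"        # Gemini models
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"        # OpenAI models via LiteLLM
os.environ["ANTHROPIC_API_KEY"] = "YOUR_ANTHROPIC_API_KEY"  # Anthropic models via LiteLLM

# Use the Google AI Studio key directly rather than Vertex AI credentials.
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = "False"
```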
Defining Your First Agent Tool
When building an AI agent using the ADK (Agent Development Kit), one of the foundational concepts is the idea of a "tool." A tool is simply a Python function that performs a specific task, like fetching the weather for a city. What makes a tool special in the context of AI agents is not just the function itself, but its docstring. This docstring, written in triple quotes ("""..."""), acts as a natural language explanation for the LLM. It tells the LLM what the tool does, what kind of inputs it expects, and what the output will look like. This enables the model to decide when to use the tool and how to call it correctly. Think of the docstring as an instruction manual written in a way the AI can read and understand.
In the example provided, the get_weather function is defined as a tool that accepts a city name as a string and returns a dictionary with weather information. Internally, the function does a bit of basic cleanup (lowercases and removes spaces from the city name), then looks up a mock database of weather reports for a few predefined cities (New York, London, and Tokyo). If the city is found, it returns a success response with a report; otherwise, it returns an error message. The key takeaway here is that this is a self-contained, testable function that also includes a clear docstring, which makes it both executable by Python and discoverable/usable by the AI agent. This combination of code plus natural language explanation is what transforms an ordinary function into an agent-aware "tool."
Here's a snippet of the mock get_weather tool from the tutorial:
(Code snippet adapted from the tutorial for illustrative purposes)
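A minimal sketch reconstructed from that description (the report strings themselves are illustrative):

```python
def get_weather(city: str) -> dict:
    """Retrieves the current weather report for a specified city.

    Args:
        city (str): The name of the city (e.g., "New York", "London", "Tokyo").

    Returns:
        dict: 'status' ("success" or "error") plus either a 'report' string
              or an 'error_message' explaining what went wrong.
    """
    # Basic cleanup so "New York", "new york", and "NewYork" all match.
    city_normalized = city.lower().replace(" ", "")

    # Mock weather database with a few predefined cities.
    mock_weather_db = {
        "newyork": {"status": "success",
                    "report": "The weather in New York is sunny with a temperature of 25°C."},
        "london": {"status": "success",
                   "report": "It's cloudy in London with a temperature of 15°C."},
        "tokyo": {"status": "success",
                  "report": "Tokyo is experiencing light rain and a temperature of 18°C."},
    }

    if city_normalized in mock_weather_db:
        return mock_weather_db[city_normalized]
    return {"status": "error",
            "error_message": f"Sorry, I don't have weather information for '{city}'."}
```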
Clear docstrings? Check! This tells the agent what its responsibilities are.
Building the Agent Itself
Once you have a function, you define the Agent. In ADK, an Agent is what orchestrates the interaction between the user, the LLM, and the available tools. You configure it with several key parameters, shown in the sketch after this list:
- name: A unique identifier (e.g., "weather_agent_v1").
- model: Specifies which LLM to use (e.g., MODEL_GEMINI_1_5_FLASH or a LiteLLM object for other providers).
- description: A concise summary of the agent's overall purpose. This becomes super important later when other agents need to decide if they should delegate tasks to this one.
- instruction: This is where you give detailed guidance to the LLM on how to behave, its persona, its goals, and, critically, how and when to use its assigned tools.
- tools: A list containing the actual Python tool functions the agent is allowed to use (e.g., [get_weather]).
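Putting those parameters together, a minimal sketch of the first agent might look like this (the model constant and instruction wording are illustrative, not the tutorial's exact text):

```python
from google.adk.agents import Agent

# Assumed model identifier behind the tutorial's constant.
MODEL_GEMINI_1_5_FLASH = "gemini-1.5-flash"

weather_agent = Agent(
    name="weather_agent_v1",
    model=MODEL_GEMINI_1_5_FLASH,
    description="Provides weather information for specific cities.",
    instruction=(
        "You are a helpful weather assistant. When the user asks for the weather "
        "in a specific city, use the 'get_weather' tool. If the tool returns an "
        "error, inform the user politely; otherwise present the report clearly."
    ),
    tools=[get_weather],  # the Python function defined earlier
)
```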
Runner and Session Service
To actually make your agent do things and remember conversations, you need two more key components, wired together in the sketch after this list:
- SessionService: This is responsible for managing conversation history and any state for different users and sessions. The tutorial starts with InMemorySessionService, which is great for testing as it stores everything in memory. It keeps track of all the messages exchanged.
- Runner: This is the engine that orchestrates the whole interaction flow. It takes user input, routes it to the right agent, manages calls to the LLM and tools based on the agent's logic, handles session updates via the SessionService, and yields events that tell you what's happening every step of the way.
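Here's how the two pieces can be wired up; the app, user, and session identifiers are arbitrary strings you choose, and in recent ADK releases create_session is a coroutine (drop the await on older versions):

```python
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService

APP_NAME = "weather_tutorial_app"
USER_ID = "user_1"
SESSION_ID = "session_001"

# Stores conversation history and state in memory -- perfect for experimenting.
session_service = InMemorySessionService()

# Create the session this conversation will live in (await works at the top
# level of a Colab/Jupyter cell).
session = await session_service.create_session(
    app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID
)

# The Runner drives the interaction loop for our root agent.
runner = Runner(
    agent=weather_agent,
    app_name=APP_NAME,
    session_service=session_service,
)
```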
Interacting with Your Agent (Async Style!)
Because interactions with LLMs and external tools often involve time-consuming, I/O-bound operations, like waiting on an API response or processing a model's output, ADK is designed to run asynchronously. This means instead of blocking the system while waiting for a response, it can continue handling other tasks or interactions. To make working with this async architecture easier, we’ll use a helper function called call_agent_async. This function wraps the logic of sending a user query to the agent and streaming the response back. Under the hood, it uses runner.run_async() to initiate the interaction and listens for incoming events using Python's async for syntax.
This asynchronous approach is especially powerful for agents that may take multiple steps to respond, like deciding on a tool, executing it, interpreting the result, and generating a reply. By streaming events instead of waiting for a full computation to finish, it enables a more responsive user experience and allows the agent to run more efficiently.
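Here's roughly what that helper can look like; the event handling is a simplified sketch rather than the tutorial's exact code:

```python
from google.genai import types


async def call_agent_async(query: str, runner, user_id: str, session_id: str):
    """Sends a query to the agent and prints the final response."""
    print(f">>> User: {query}")

    # Wrap the plain-text query in the Content/Part structure the runner expects.
    content = types.Content(role="user", parts=[types.Part(text=query)])

    final_response_text = "Agent did not produce a final response."

    # run_async yields events as the agent reasons, calls tools, and responds.
    async for event in runner.run_async(
        user_id=user_id, session_id=session_id, new_message=content
    ):
        if event.is_final_response():
            if event.content and event.content.parts:
                final_response_text = event.content.parts[0].text
            break

    print(f"<<< Agent: {final_response_text}")


# Example usage (inside an async context, e.g. a notebook cell):
# await call_agent_async("What is the weather like in London?", runner, USER_ID, SESSION_ID)
```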
The Power of Multi-Model and LiteLLM
One of the really neat aspects of ADK is its flexibility with different LLMs. Through its integration with the LiteLLM library, you can easily switch the "brain" of your agent. LiteLLM acts as a consistent interface to over 100 different LLMs! These include Meta’s Llama, NVIDIA’s NIM, OpenAI’s GPT models, Perplexity AI, Google’s Gemini, Anthropic’s Claude, and many more, each with its own advantages. For example, you might start building your weather agent using GPT-3.5 Turbo from OpenAI for fast and cost-effective responses, or Anthropic’s Claude 3 Opus for longer context windows and different reasoning patterns. So, if you have API keys for OpenAI or Anthropic, you can try running your weather agent with GPT models or Claude models, respectively, just by changing how you specify the model in your Agent definition:
(Conceptual snippet based on tutorial's multi-model section)
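Conceptually, it's a one-line change per agent (the LiteLLM model identifier strings below are assumptions; check LiteLLM's provider docs for the exact ones your account supports):

```python
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

# Same tool, same instruction -- only the "brain" changes.
weather_agent_gpt = Agent(
    name="weather_agent_gpt",
    model=LiteLlm(model="openai/gpt-4o"),  # OpenAI model routed through LiteLLM
    description="Provides weather information for specific cities (GPT-backed).",
    instruction="You are a helpful weather assistant. Use the 'get_weather' tool for city weather.",
    tools=[get_weather],
)

weather_agent_claude = Agent(
    name="weather_agent_claude",
    model=LiteLlm(model="anthropic/claude-3-sonnet-20240229"),  # Anthropic via LiteLLM
    description="Provides weather information for specific cities (Claude-backed).",
    instruction="You are a helpful weather assistant. Use the 'get_weather' tool for city weather.",
    tools=[get_weather],
)
```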
This allows you to experiment and choose models based on performance, cost, or specific capabilities, all while keeping your core agent logic and tools consistent.
Building Your Team of AI Agents to Delegate Your Tasks to
The next step of this tutorial is building an Agent Team. Instead of cramming all functionality into one agent (which can get messy fast), you create multiple, specialized agents. For the Weather Bot, this means creating a greeting_agent and a farewell_agent in addition to the main weather_agent.
First, you define simple tools for these new specialists:
(Code snippet adapted from the tutorial for illustrative purposes)
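Something along these lines works: two tiny functions whose docstrings tell the LLM exactly when to reach for them.

```python
def say_hello(name: str = "there") -> str:
    """Provides a simple greeting, optionally addressing the user by name.

    Args:
        name (str): The name of the person to greet. Defaults to "there".

    Returns:
        str: A friendly greeting message.
    """
    return f"Hello, {name}!"


def say_goodbye() -> str:
    """Provides a simple farewell message to conclude the conversation.

    Returns:
        str: A friendly goodbye message.
    """
    return "Goodbye! Have a great day."
```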
Then, you create these new agents, each with a very focused instruction and, crucially, a clear description. This description is what the "root" agent (your main weather agent, now acting as an orchestrator) uses to decide when to delegate a task.
Your main weather_agent_v2 (or whatever you call the upgraded version) is then updated. The key change is adding a sub_agents parameter, passing in a list of these specialist agents. Its instruction prompt is also updated to tell it about its team and when to delegate. ADK's "auto flow" then handles the magic of routing: if the root agent's LLM decides a query is better handled by a sub-agent (based on that sub-agent's description), it automatically transfers control.
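A condensed sketch of the team, with illustrative instructions and all three agents running on Gemini for simplicity:

```python
from google.adk.agents import Agent

greeting_agent = Agent(
    name="greeting_agent",
    model=MODEL_GEMINI_1_5_FLASH,  # constant defined earlier
    description="Handles simple greetings and hellos using the 'say_hello' tool.",
    instruction="You are the greeting agent. Your ONLY task is to greet the user with the 'say_hello' tool.",
    tools=[say_hello],
)

farewell_agent = Agent(
    name="farewell_agent",
    model=MODEL_GEMINI_1_5_FLASH,
    description="Handles simple farewells and goodbyes using the 'say_goodbye' tool.",
    instruction="You are the farewell agent. Your ONLY task is to say goodbye with the 'say_goodbye' tool.",
    tools=[say_goodbye],
)

root_agent = Agent(
    name="weather_agent_v2",
    model=MODEL_GEMINI_1_5_FLASH,
    description="Main coordinator: answers weather questions and delegates greetings and farewells.",
    instruction=(
        "You are the main weather agent. Use the 'get_weather' tool for weather requests. "
        "Delegate greetings to 'greeting_agent' and farewells to 'farewell_agent'. "
        "For anything else, respond appropriately or say you cannot help."
    ),
    tools=[get_weather],
    sub_agents=[greeting_agent, farewell_agent],  # enables ADK's automatic delegation
)
```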
Giving Your Agents a Memory: Session State
So far, so good. But what about remembering things? Each interaction has been fresh. That's where Session State comes in. It's essentially a Python dictionary (session.state) tied to a specific user session that persists across multiple conversational turns. Agents and Tools can read from and write to this state.
The primary way tools interact with state is through a ToolContext object. If a tool declares tool_context: ToolContext as its last argument, ADK automatically provides it, giving the tool direct access to tool_context.state.
The tutorial demonstrates this by creating a get_weather_stateful tool that reads a user_preference_temperature_unit from the state (initialized to "Celsius" for the demo) and formats its output accordingly.
(Code snippet adapted from the tutorial for illustrative purposes)
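A sketch of the stateful tool, with an illustrative Fahrenheit conversion and a hypothetical last_city_checked key to show how tools can also write back into state:

```python
from google.adk.tools.tool_context import ToolContext


def get_weather_stateful(city: str, tool_context: ToolContext) -> dict:
    """Retrieves weather for a city, formatted per the unit preference in session state."""
    # Read the preference written into session state (defaults to Celsius for the demo).
    preferred_unit = tool_context.state.get("user_preference_temperature_unit", "Celsius")

    city_normalized = city.lower().replace(" ", "")
    mock_temps_c = {"newyork": 25, "london": 15, "tokyo": 18}      # stored in Celsius
    conditions = {"newyork": "sunny", "london": "cloudy", "tokyo": "light rain"}

    if city_normalized not in mock_temps_c:
        return {"status": "error",
                "error_message": f"Sorry, I don't have weather information for '{city}'."}

    temp_c = mock_temps_c[city_normalized]
    if preferred_unit == "Fahrenheit":
        temp_value, unit = temp_c * 9 / 5 + 32, "°F"
    else:
        temp_value, unit = temp_c, "°C"

    # Tools can also write to state; this key name is illustrative.
    tool_context.state["last_city_checked"] = city

    return {"status": "success",
            "report": f"The weather in {city.title()} is {conditions[city_normalized]} "
                      f"with a temperature of {temp_value:.0f}{unit}."}
```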
Agents can also automatically save their final textual response for a turn into the session state if configured with an output_key="your_key".
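For example (the key name last_weather_report is illustrative):

```python
root_agent_stateful = Agent(
    name="weather_agent_v4_stateful",
    model=MODEL_GEMINI_1_5_FLASH,  # constant defined earlier
    description="Weather agent that remembers its last reply in session state.",
    instruction="Answer weather questions using the 'get_weather_stateful' tool.",
    tools=[get_weather_stateful],
    output_key="last_weather_report",  # final response text is saved to session.state under this key
)
```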
Safety Callbacks
With all of that in place, we also want to make sure the user asks suitable questions so the agent can provide accurate and appropriate responses. You can implement this using Callbacks: functions that run at specific points in an agent's execution and can inspect, modify, or block what happens next.
before_model_callback: The Input Sentry
This callback executes just before an agent sends its compiled request (history, instructions, latest user message) to the LLM. It can inspect the request, modify it if necessary, or even block it entirely. Think input validation, filtering out PII, or preventing harmful/off-topic requests from ever reaching the model. Just as many LLM providers restrict controversial or even dangerous questions, you can put locks on questions you deem unsuitable.
The callback function you define accepts callback_context: CallbackContext (giving access to agent info, session state, etc.) and llm_request: LlmRequest (the payload intended for the LLM). If your callback returns None, ADK proceeds to call the LLM. But if it returns an LlmResponse object, ADK sends that response back immediately, skipping the LLM call for that turn – a perfect guardrail!
The tutorial implements a block_keyword_guardrail that checks the user's input for the word "BLOCK" (case-insensitive) and blocks the request if found:
(Code snippet adapted for defining and initializing a blocked keyword guardrail)
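A sketch of that guardrail; the way the last user message is pulled out of llm_request is a simplification of the tutorial's version:

```python
from typing import Optional

from google.adk.agents.callback_context import CallbackContext
from google.adk.models.llm_request import LlmRequest
from google.adk.models.llm_response import LlmResponse
from google.genai import types


def block_keyword_guardrail(
    callback_context: CallbackContext, llm_request: LlmRequest
) -> Optional[LlmResponse]:
    """Blocks the LLM call if the latest user message contains the keyword 'BLOCK'."""
    # Find the most recent user message in the compiled request.
    last_user_message = ""
    if llm_request.contents:
        for content in reversed(llm_request.contents):
            if content.role == "user" and content.parts:
                last_user_message = content.parts[0].text or ""
                break

    if "BLOCK" in last_user_message.upper():
        # Returning an LlmResponse skips the LLM call entirely for this turn.
        return LlmResponse(
            content=types.Content(
                role="model",
                parts=[types.Part(
                    text="I cannot process this request because it contains a blocked keyword."
                )],
            )
        )

    # Returning None lets the request proceed to the LLM as usual.
    return None
```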
Your root agent is then updated to use this callback by setting its before_model_callback parameter.
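For example:

```python
root_agent_guarded = Agent(
    name="weather_agent_v5_guardrail",
    model=MODEL_GEMINI_1_5_FLASH,  # constant defined earlier
    description="Weather agent protected by an input keyword guardrail.",
    instruction="Answer weather questions using the 'get_weather_stateful' tool.",
    tools=[get_weather_stateful],
    before_model_callback=block_keyword_guardrail,  # runs before every LLM call
)
```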
The Tool Usage Inspector (before_tool_callback)
But what if you want more control over what happens after the LLM decides to use a tool, but before that tool actually runs? For instance, let’s say one of your users asks your weather agent to report the weather for a fictional city. Instead of wasting resources answering the question, the agent can skip the tool call and respond with a predetermined message saying that this city doesn’t exist in its database.
That’s exactly where the before_tool_callback comes in: it acts like a checkpoint or gatekeeper between the model’s decision and the actual execution of the tool. before_tool_callback is a Python function that runs right before a tool is executed, after the LLM has selected the tool and generated its input arguments. Its core purpose is to give you one last chance to intercept, validate, or even replace the tool call before anything is actually run. This is especially useful when you want to protect against bad inputs, enforce custom business logic, or adjust how tools behave based on the session state or specific inputs.
How it Works
Your callback function should accept three parameters:
- tool: The tool object that is about to be called (you can inspect properties like tool.name).
- args: A dictionary of the arguments the LLM generated for the tool (e.g., city, date, time).
- tool_context: An object that provides useful contextual data such as session state, agent information, or previously stored variables.
Once inside the function, you can either modify the arguments or block the tool call entirely.
The tutorial implements a block_paris_tool_guardrail that specifically checks if the get_weather_stateful tool is being called with the city "Paris". If so, it blocks the tool and returns a custom error dictionary that mimics the tool's own error response format:
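A sketch of that callback; the wording of the error message is illustrative, but its shape mirrors the tool's own error dictionary:

```python
from typing import Any, Dict, Optional

from google.adk.tools.base_tool import BaseTool
from google.adk.tools.tool_context import ToolContext


def block_paris_tool_guardrail(
    tool: BaseTool, args: Dict[str, Any], tool_context: ToolContext
) -> Optional[Dict]:
    """Blocks the weather tool from running when the requested city is Paris."""
    if tool.name == "get_weather_stateful" and args.get("city", "").strip().lower() == "paris":
        # Returning a dict replaces the tool call; the agent treats it as the tool's output.
        return {
            "status": "error",
            "error_message": "Policy restriction: weather checks for Paris are currently disabled.",
        }

    # Returning None lets the tool run with its original arguments.
    return None
```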
The root agent is then updated again, this time to include both before_model_callback and before_tool_callback. When you test this, a request for weather in "New York" will pass both callbacks and execute. But a request for "Paris" will pass the before_model_callback (because "BLOCK" isn't in the input), the LLM will decide to call get_weather_stateful(city='Paris'), but then the before_tool_callback will intercept it, block the actual tool call, and return the custom error message, which the agent then relays to the user.
Summary: Your Journey to Building an Intelligent Agent Team
Phew! That was a lot, but hopefully, you see how incredibly comprehensive and practical this ADK tutorial is. You've effectively journeyed from building a single, basic weather agent to constructing a sophisticated, multi-agent team that has memory and can filter out inappropriate questions.
The Agent Development Kit truly provides a robust and flexible foundation for building the next generation of LLM-powered applications. By mastering the concepts covered in this deep-dive tutorial, you'll be well-equipped to design and build increasingly complex and intelligent agentic systems.
So, if you're ready to move beyond simple LLM interactions and build a real AI team, this ADK tutorial is an amazing place to start. We would love to hear about what you’ll build with this. Also let us know what other tools you would like to see us demo!
-Nathan Thomas
Google. “Build Your First Intelligent Agent Team: A Progressive Weather Bot with ADK.” Google Colab, https://colab.research.google.com/github/google/adk-docs/blob/main/examples/python/notebooks/adk_tutorial.ipynb. Accessed 13 May 2025.