Recently there has been huge interest in building multiple software entities that can communicate with each other. This interaction pattern is needed, for example, when AI agents operate in a multi-agent system and share a common environment. Communication between multiple software components can also be crucial for IoT devices.
To solve this problem we can create an adapter agent that acts as a hub, communicating with the other auxiliary agents.
In Agentverse it is possible to create such an adapter agent. The main idea is that when we send a message from DeltaV, the adapter forwards it to all the agents registered with it and waits for their responses. Once it has received a response from every agent, it processes them and sends the final result back to DeltaV.
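For reference, the message models exchanged in this setup could be defined roughly as follows. This is a minimal sketch: the `prompt` and `response` fields match the handlers shown later in this post, while the `UAgentResponse` and `UAgentResponseType` imports are assumed to come from the `ai_engine` package used for DeltaV integrations.

```python
from uagents import Model

# Assumed import: UAgentResponse / UAgentResponseType are provided by the
# DeltaV (AI Engine) integration package
from ai_engine import UAgentResponse, UAgentResponseType


class AIRequest(Model):
    """Prompt forwarded from DeltaV (via the Adapter) to the AI Agents."""
    prompt: str


class AIResponse(Model):
    """Response sent back from an AI Agent to the Adapter."""
    response: str
```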
We can register the other agents - in this use case let's call them AI Agents - with the adapter by storing their addresses in agent storage:
```python
ctx.storage.set("ai_agent_addresses", {YOUR_AI_AGENT_ADDRESS(ES) AS A LIST OF STRINGS})
```
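One natural place to do this is a startup handler on the adapter, so the addresses are already in storage before any message arrives. The sketch below assumes this approach; the agent name, seed, and the example addresses are placeholders:

```python
from uagents import Agent, Context

# Hypothetical adapter agent; name and seed are placeholders
adapter = Agent(name="ai-adapter", seed="adapter secret seed phrase")


@adapter.on_event("startup")
async def register_ai_agents(ctx: Context):
    # Store the addresses of the AI Agents the adapter should forward prompts to
    ctx.storage.set(
        "ai_agent_addresses",
        [
            "agent1q...first-ai-agent-address",
            "agent1q...second-ai-agent-address",
        ],
    )
```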
Agent storage is the agent's global key-value store, accessible from any message handler we define on the agent. Here you can find more information about how to use your AI Agents and their Adapter Agent from DeltaV, but if you are interested in a bit more explanation, some more technical details follow below.
In DeltaV you can access this adapter via the chat interface by specifying an objective that matches the description of your adapter service. After selecting the Adapter on the screen and specifying a prompt to pass to your AI Agents, the Adapter is executed: a message is sent to its message handler, which creates a session between DeltaV and the Adapter agent. This is how we can define the message handler for this interaction in the Adapter agent:
```python
@adapter_protocol.on_message(model=AIRequest, replies=UAgentResponse)
async def send_prompt_to_ai_agents(ctx: Context, sender: str, msg: AIRequest):
```
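Based on the steps described in this post, the body of this handler could look roughly like the sketch below: it remembers the DeltaV sender address so the aggregated answer can be sent back later, and forwards the prompt to every registered AI Agent. It assumes the `AIRequest` model and the `adapter_protocol` shown earlier; the published integration may organise this differently.

```python
@adapter_protocol.on_message(model=AIRequest, replies=UAgentResponse)
async def send_prompt_to_ai_agents(ctx: Context, sender: str, msg: AIRequest):
    # Remember where to send the final, aggregated answer (the DeltaV side)
    ctx.storage.set("deltav-sender-address", sender)

    # Forward the prompt to every AI Agent registered with this adapter
    for address in ctx.storage.get("ai_agent_addresses"):
        await ctx.send(address, AIRequest(prompt=msg.prompt))
```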
This handler then forwards your prompt to the message handlers of your AI Agents. Inside each AI Agent's handler, a response is generated based on the prompt and your specific AI model implementation, and is sent back to the Adapter agent:
```python
@agent.on_message(model=AIRequest, replies=AIResponse)
async def respond_to_adapter(ctx: Context, sender: str, msg: AIRequest):
    ctx.logger.info(f"Message from AI Adapter: {msg.prompt}")
    # TODO: implement AI Agent code here
    # TODO: send back the actual AI response to the Adapter agent
    await ctx.send(sender, AIResponse(response="AI response 1"))
```
The Adapter then processes the AI Agent responses one by one in a separate message handler (not the one used to accept the initial message from DeltaV):
```python
@adapter_protocol.on_message(model=AIResponse)
async def process_response_from_ai_agent(ctx: Context, sender: str, msg: AIResponse):
```
Each AI Agent response is stored in the Adapter agent's storage. Responses are stored separately for each DeltaV session, so we can tell whether all of them have arrived - that is, whether the number of responses for a session equals the number of AI Agents:
```python
if len(ai_agents_responses_session) == len(ctx.storage.get("ai_agent_addresses")):
```
When the above condition is met, we can send all the responses back to DeltaV at once:
```python
await ctx.send(
    ctx.storage.get("deltav-sender-address"),
    UAgentResponse(
        message=f"AI Agents' responses: {final_ai_responses}",
        type=UAgentResponseType.FINAL,
    ),
)
```
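Putting these fragments together, the response handler could look roughly like the following sketch. It assumes the per-session responses are kept in storage under a key derived from `ctx.session`, and the variable names (`ai_agents_responses_session`, `final_ai_responses`) simply follow the fragments above; the bookkeeping in the published integration may differ.

```python
@adapter_protocol.on_message(model=AIResponse)
async def process_response_from_ai_agent(ctx: Context, sender: str, msg: AIResponse):
    ctx.logger.info(f"Response from AI Agent {sender}: {msg.response}")

    # Collect the responses that belong to the current DeltaV session
    session_key = f"ai-agent-responses-{ctx.session}"
    ai_agents_responses_session = ctx.storage.get(session_key) or []
    ai_agents_responses_session.append(msg.response)
    ctx.storage.set(session_key, ai_agents_responses_session)

    # Only reply to DeltaV once every registered AI Agent has answered
    if len(ai_agents_responses_session) == len(ctx.storage.get("ai_agent_addresses")):
        final_ai_responses = ", ".join(ai_agents_responses_session)
        await ctx.send(
            ctx.storage.get("deltav-sender-address"),
            UAgentResponse(
                message=f"AI Agents' responses: {final_ai_responses}",
                type=UAgentResponseType.FINAL,
            ),
        )
```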
The above example shows that it is possible to build complex multi-agent systems with Fetch.ai's agent framework, using an adapter architecture to communicate with AI models and/or IoT devices.
The specific implementation this blog post is based on, with verbose explanatory comments, is available here: https://github.com/fetchai/uAgents/tree/main/integrations/multi-agent-adapter