Building Multi-Agent AI Workflows with AgentOS: Research, Writing, and Podcast Production

Published on: October 4, 2025

What Are Multi-Agent AI Systems?

A multi-agent AI system is one where specialized agents collaborate to solve complex problems. Instead of building one large AI that handles everything, you create teams of focused agents that excel at specific tasks—research, analysis, content creation, or coordination—and that work together through structured workflows. For instance, a researcher agent, a writer agent, and an editor agent can collaborate to produce an article.

Many frameworks have emerged in recent years: Microsoft’s AutoGen enables conversational multi-agent patterns, CrewAI focuses on role-based agent collaboration, and AgentOS provides comprehensive orchestration with built-in memory and knowledge management. Each platform approaches the challenge differently—AutoGen emphasizes agent conversations, CrewAI structures agents around job roles, and AgentOS integrates teams, workflows, and persistent intelligence into a unified system.

This architectural approach mirrors how human organizations work: specialists collaborate within their expertise areas, share context and knowledge, and coordinate through established processes to achieve outcomes no individual could accomplish alone.

Building Your Own Multi-Agent System with Agno & AgentOS

In this explainer, let’s explore how to build a multi-agent system that researches AI news, writes daily reports, and produces podcasts. We will use the Agno framework for this. The entire workflow runs autonomously, with agents that specialize in different tasks, remember their experiences, and build knowledge over time.

Context and Evolution

I’ve built several versions of AI news aggregation systems—from simple Python scripts to n8n workflows. This latest version using AgentOS introduces something different: multiple specialized agents working as coordinated teams with persistent memory and accumulated knowledge.

How Multi-Agent Systems Work

You create specialized agents that work together instead of one AI doing everything. Each agent has specific tools and instructions. The system coordinates their work through teams and workflows. Agents remember what they learn and build knowledge over time.
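Before introducing the framework’s abstractions, the core idea can be shown without any library at all. In this sketch, plain functions stand in for agents and a coordinator chains their outputs; all names here are illustrative, not part of Agno or any real API.

```python
# Framework-free sketch: two specialized "agents" (plain functions) plus a
# coordinator that passes one agent's output to the next.

def research_agent(topic: str) -> str:
    # In a real system this would call search tools and an LLM.
    return f"Notes on {topic}: finding A, finding B"

def writer_agent(notes: str) -> str:
    # In a real system this would prompt an LLM with the notes.
    return "Report\n======\n" + notes

def coordinator(topic: str) -> str:
    # The coordinator sequences the specialists.
    notes = research_agent(topic)
    return writer_agent(notes)

print(coordinator("AI news"))
```

Real agents replace these functions with LLM calls and tools, but the division of labor is the same.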

The System Architecture

The system uses six agents organized into teams:

Research Team:

  • HN Reddit Researcher: Gets AI discussions from HackerNews and Reddit r/artificial
  • ArXiv Researcher: Finds AI research papers from ArXiv

Content Team:

  • Report Writer: Combines research into daily reports
  • Podcast Script Writer: Turns reports into podcast scripts
  • Podcast Producer: Creates audio from scripts using ElevenLabs TTS

Analysis:

  • Trend Analyst: Finds patterns in accumulated knowledge

AgentOS Core Concepts

Agents

An agent in Agno is an autonomous AI program that uses a large language model to determine its actions. An agent has a model, instructions, tools, and can use reasoning, knowledge, storage, and memory.

from agno.agent import Agent
from agno.models.openrouter import OpenRouter
from agno.tools.file import FileTools
from agno.tools.hackernews import HackerNewsTools
from agno.tools.reddit import RedditTools

hn_researcher = Agent(
    name="HN Reddit Researcher",
    model=OpenRouter(id="x-ai/grok-4-fast:free"),
    tools=[HackerNewsTools(), RedditTools(), FileTools(base_dir=research_dir)],
    instructions="Get AI news from HackerNews and Reddit r/artificial..."
)

Each agent gets tools for its specific job. The HN Reddit Researcher has social platform tools. The ArXiv Researcher has academic search tools.

Teams

A team in Agno is a collection of agents that work together to accomplish tasks. The team has a leader that delegates tasks to team members based on their roles.

from agno.team import Team

daily_team = Team(
    name="Daily AI Report Team",
    members=[hn_researcher, arxiv_researcher, writer],
    enable_user_memories=True,
    add_memories_to_context=True
)

Team members share memory and context. The team leader decides which agent handles each task.
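The delegation idea can be sketched in plain Python. Here a toy leader routes a task to the member whose role keyword appears in the task description; the names and routing rule are illustrative only, not how Agno’s team leader (an LLM) actually decides.

```python
# Toy sketch of leader-based delegation: the leader inspects the task and
# hands it to the member whose role matches.

members = {
    "research": lambda task: f"[researcher] gathered sources for: {task}",
    "write":    lambda task: f"[writer] drafted report for: {task}",
}

def team_leader(task: str) -> str:
    # Route by the first role keyword found in the task description.
    for role, member in members.items():
        if role in task.lower():
            return member(task)
    return "[leader] no suitable member found"

print(team_leader("Research today's AI news"))
```

In Agno the leader is itself model-driven, so routing decisions come from the LLM rather than keyword matching.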

Workflows

A workflow in Agno runs agents in sequence. Each agent uses the output from the previous agent.

from agno.workflow import Workflow

daily_workflow = Workflow(
    steps=[hn_researcher, arxiv_researcher, writer, podcast_script_writer, podcast_producer]
)

Each step builds on previous outputs. This creates a pipeline from research to final podcast.
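The pipeline semantics reduce to a simple fold over the step list. This framework-free sketch shows the contract: each step receives the previous step’s output, and the final result is the last step’s return value.

```python
# Sketch of sequential workflow execution: every step consumes the prior
# step's output. The lambdas stand in for real agents.

def run_workflow(steps, initial_input: str) -> str:
    output = initial_input
    for step in steps:
        output = step(output)  # each step builds on the previous result
    return output

steps = [
    lambda x: x + " -> researched",
    lambda x: x + " -> written",
    lambda x: x + " -> produced",
]
print(run_workflow(steps, "topic"))
```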

Memory

Memory in Agno lets agents store and recall information from previous interactions. This helps agents learn user preferences and improve over time.

# Passed when constructing an Agent or Team:
db=shared_db,
enable_user_memories=True,
session_id=f"session_{TODAY}"

Memory persists across executions. Agents can learn and improve from past experiences.
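Under the hood this kind of persistence amounts to writing facts to a database and reading them back in later sessions. The sketch below shows the mechanism with the standard library’s sqlite3; the table and column names are illustrative, since Agno manages its own schema.

```python
# Minimal persistent-memory sketch using sqlite3. Facts written in one
# "run" are still readable in the next, because they live in a file.
import os
import sqlite3
import tempfile

def remember(db_path: str, session: str, fact: str) -> None:
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS memories (session TEXT, fact TEXT)")
    con.execute("INSERT INTO memories VALUES (?, ?)", (session, fact))
    con.commit()
    con.close()

def recall(db_path: str, session: str) -> list[str]:
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS memories (session TEXT, fact TEXT)")
    rows = con.execute(
        "SELECT fact FROM memories WHERE session = ?", (session,)
    ).fetchall()
    con.close()
    return [r[0] for r in rows]

db = os.path.join(tempfile.mkdtemp(), "memories.db")
remember(db, "s1", "user prefers short reports")
print(recall(db, "s1"))
```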

Knowledge Base

A knowledge base in Agno stores domain-specific information that agents can search at runtime. This helps agents make better decisions and provide accurate responses.

knowledge = Knowledge(
    vector_db=LanceDb(
        embedder=SentenceTransformerEmbedder(dimensions=384)
    )
)

Content gets embedded and stored for future reference. Agents can search this knowledge to find patterns and trends.
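The retrieval mechanism can be illustrated without the real stack: a toy bag-of-words vector and cosine similarity stand in here for SentenceTransformer embeddings and LanceDB. The documents and function names are made up for the example.

```python
# Toy embedding search: embed documents, embed the query, return the most
# similar document by cosine similarity.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Bag-of-words stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "new transformer paper improves reasoning",
    "reddit thread on open source agents",
]
index = [(doc, embed(doc)) for doc in docs]

def search(query: str) -> str:
    qv = embed(query)
    return max(index, key=lambda pair: cosine(qv, pair[1]))[0]

print(search("open source agent discussion"))
```

Real embeddings capture semantic similarity ("agent" matches "agents"), which is exactly what the bag-of-words toy cannot do.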

Building the System

Step 1: Set Up the Environment

Create the project structure and install dependencies:

mkdir ai-news-system
cd ai-news-system
# openai backs the OpenRouter model; praw backs RedditTools
pip install agno openai praw arxiv elevenlabs

Set up your environment variables:

# .env
OPENROUTER_API_KEY=your_key_here
REDDIT_CLIENT_ID=your_reddit_id
REDDIT_CLIENT_SECRET=your_reddit_secret
ELEVENLABS_API_KEY=your_elevenlabs_key

Step 2: Create Agents

Each agent needs specific tools and clear instructions:

# Research agents with specific tools
hn_researcher = Agent(
    name="HN Reddit Researcher",
    model=OpenRouter(id="x-ai/grok-4-fast:free"),
    tools=[HackerNewsTools(), RedditTools(), FileTools(base_dir=research_dir)],
    instructions=f"""Get AI news from HackerNews and Reddit r/artificial for {TODAY}.
    
    Steps:
    1. Use get_top_hackernews_stories tool to get AI/ML stories
    2. Use get_top_posts tool to get posts from Reddit subreddit r/artificial
    3. Save results to "hn_reddit_{TODAY}.md"
    4. Add file to knowledge base"""
)

arxiv_researcher = Agent(
    name="ArXiv Researcher", 
    model=OpenRouter(id="x-ai/grok-4-fast:free"),
    tools=[ArxivTools(), FileTools(base_dir=research_dir)],
    instructions=f"""Search ArXiv for AI research papers from last 2 days.
    
    Steps:
    1. Search categories cs.AI, cs.CL, cs.LG
    2. Save results to "arxiv_{TODAY}.md"
    3. Add file to knowledge base"""
)

Step 3: Create Content Agents

The Report Writer combines research into reports:

writer = Agent(
    name="Report Writer",
    model=OpenRouter(id="x-ai/grok-4-fast:free"),
    tools=[FileTools()],
    instructions=f"""Create daily AI report for {TODAY}.
    
    Steps:
    1. Read "research/hn_reddit_{TODAY}.md"
    2. Read "research/arxiv_{TODAY}.md" 
    3. Create summary report combining both sources
    4. Save as "reports/daily_report_{TODAY}.md"
    5. Add to knowledge base"""
)

Step 4: Add Podcast Agents

The Podcast agents create audio content:

podcast_script_writer = Agent(
    name="Podcast Script Writer",
    model=OpenRouter(id="x-ai/grok-4-fast:free"),
    tools=[FileTools()],
    instructions=f"""Create podcast script from daily report.
    
    Steps:
    1. Read "reports/daily_report_{TODAY}.md"
    2. Transform into conversational script
    3. Save as "podcasts/{TODAY}/script.md"
    4. Use plain text format for TTS"""
)

podcast_producer = Agent(
    name="Podcast Producer",
    model=OpenRouter(id="x-ai/grok-4-fast:free"),
    tools=[FileTools(), ElevenLabsTools(voice_id="21m00Tcm4TlvDq8ikWAM")],
    instructions=f"""Generate audio from script.
    
    Steps:
    1. Read "podcasts/{TODAY}/script.md"
    2. Use generate_audio tool to create speech
    3. Audio saves automatically to episode directory"""
)

Step 5: Configure Memory and Knowledge

Set up persistent storage:

# Shared database for agent memories
shared_db = SqliteDb(
    db_file="ai_system.db",
    memory_table="agent_memories"
)

# Knowledge base with vector embeddings
knowledge = Knowledge(
    name="AI News Knowledge Base",
    contents_db=SqliteDb(db_file="ai_knowledge.db"),
    vector_db=LanceDb(
        uri="ai_vectors",
        embedder=SentenceTransformerEmbedder(dimensions=384)
    )
)

Step 6: Create Teams and Workflows

Put agents into teams and workflows:

# Sequential workflow execution
daily_workflow = Workflow(
    name="Daily AI Report Workflow",
    steps=[
        hn_researcher,         # Research social platforms
        arxiv_researcher,      # Research academic papers  
        writer,               # Synthesize into report
        podcast_script_writer, # Create audio script
        podcast_producer      # Generate final audio
    ]
)

# AgentOS application
agent_os = AgentOS(
    agents=[hn_researcher, arxiv_researcher, writer, podcast_script_writer, podcast_producer],
    workflows=[daily_workflow]
)

The Workflow in Action

When you run the Daily AI Report Workflow:

  1. Research: HN Reddit Researcher gets social discussions while ArXiv Researcher finds academic papers
  2. Writing: Report Writer combines both sources into a daily report
  3. Script: Podcast Script Writer turns the report into a conversational script
  4. Audio: Podcast Producer creates audio using ElevenLabs TTS
  5. Storage: All content gets stored in the knowledge base

Each agent builds on the previous agent’s work. Shared memory helps agents coordinate and learn over time.

Deployment Instructions

Local Development

  1. Clone and setup:
git clone https://github.com/surendranb/agnos-agent-tutorial.git
cd agnos-agent-tutorial
pip install -r requirements.txt
  2. Configure environment:
cp .env.sample .env
# Add your API keys to .env
  3. Run the system:
python app.py

The dashboard opens at http://localhost:7777

Production Deployment

For production, deploy on a cloud VM:

# On your server
git clone https://github.com/surendranb/agnos-agent-tutorial.git
cd agnos-agent-tutorial
pip install -r requirements.txt

# Set up environment variables
export OPENROUTER_API_KEY=your_key
export ELEVENLABS_API_KEY=your_key

# Run with process manager
nohup python app.py &

Using the System

  1. Trigger Workflow: Navigate to “Workflows” → “Daily AI Report Workflow” → Execute
  2. Monitor Progress: Watch real-time execution in the dashboard
  3. Check Outputs:
    • Research files: research/
    • Daily reports: reports/
    • Podcasts: podcasts/YYYY-MM-DD/
  4. Trend Analysis: Chat with “Trend Analyst” agent for long-term insights

Sample Output

Here’s what the system produces: a daily AI podcast generated entirely by the multi-agent workflow:

Sample AI Daily podcast episode generated by the multi-agent system

The podcast covers AI news from HackerNews and Reddit discussions, plus research papers from ArXiv. All content is combined into a 3-4 minute audio briefing.

Key Concepts

Agent Specialization: Each agent has specific tools and does one job well.

Memory: Agents remember experiences and improve over time through SQLite storage.

Knowledge: Content gets embedded using SentenceTransformer and stored in LanceDB for future use.

Team Coordination: Agents work together through shared context and handoffs.

Workflow Management: AgentOS manages multi-step processes with error handling.
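One way a workflow engine can wrap each agent step is a retry loop around failures. The sketch below is purely illustrative of that pattern, not Agno’s actual error-handling implementation.

```python
# Sketch of a step runner with retry-based error handling: run the step,
# retry on exception, and surface a clear error when retries are exhausted.

def run_step_with_retry(step, payload, retries: int = 2):
    last_error = None
    for _attempt in range(retries + 1):
        try:
            return step(payload)
        except Exception as exc:  # a real engine would log and classify errors
            last_error = exc
    raise RuntimeError(f"step failed after {retries + 1} attempts") from last_error

# A step that fails once, then succeeds on the retry.
calls = {"n": 0}
def flaky(x):
    calls["n"] += 1
    if calls["n"] < 2:
        raise ValueError("transient failure")
    return x + " -> done"

print(run_step_with_retry(flaky, "task"))
```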

The system shows how multi-agent AI can handle complex workflows while building knowledge over time. Each component—agents, teams, workflows, memory, and knowledge—works together to create capabilities beyond single agents.