I recently built a multi-agent AI system that takes a product idea and generates a full market research report using 5 specialized AI agents, all running on Groq's free tier. No credit card. No paid API.
But the journey wasn't smooth. I hit 7 different errors before it worked. This article covers both: what I built and how I fixed everything that broke.
## What I Built
Market Research Crew is a multi-agent AI pipeline where 5 autonomous agents collaborate sequentially to research any product idea:
- Market Research Specialist: industry size, trends, opportunities
- Competitive Intelligence Analyst: competitors, pricing, market share
- Customer Insights Researcher: personas, pain points, needs
- Product Strategy Advisor: positioning, feature roadmap
- Business Analyst: synthesizes everything into recommendations
The user types a product idea into a Streamlit UI, hits run, and:
- The UI updates in real time showing each agent's status
- The IDE terminal streams live output: agent thoughts, tool calls, completions
- 5 markdown reports are generated and displayed in tabs
GitHub: https://github.com/syed-kaif07/market-research-crew
## Tech Stack
| Tech | Role |
|---|---|
| CrewAI | Multi-agent orchestration framework |
| Groq | Free LLM API (insanely fast) |
| LLaMA 3.3 70B | The language model powering all agents |
| Streamlit | Web UI |
| Python 3.13 | Language |
| uv | Fast package manager |
## Architecture
The most interesting part of this project is how the UI and terminal work simultaneously.
```text
User types idea in Streamlit
        |
        v
streamlit_app.py
 └── subprocess.Popen(stdout=None)   <- key: inherits parent terminal
        |
        v
main.py
 ├── Prints colored agent banners to terminal
 ├── Hooks _TaskTracker into CrewAI's task_callback
 └── Calls crew.kickoff()
        |
        v
CrewAI runs 5 agents sequentially
 └── Each agent writes output to output/*.md
        |
        v
streamlit_app.py polls output/*.md every 4 seconds
 └── Updates agent cards: QUEUED -> RUNNING -> DONE
```
The magic is one line in `streamlit_app.py`:

```python
proc = subprocess.Popen(
    [python_exe, main_script, "--product-idea", product_idea],
    stdout=None,  # inherits the parent terminal, so output streams live
    stderr=None,  # errors are visible too
)
```
By setting `stdout=None`, the child process inherits the parent's terminal handles. Everything CrewAI prints (agent thoughts, tool calls, our colored banners) streams live to the IDE terminal while Streamlit polls for completed output files.
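The polling side can be sketched in a few lines. This is an illustrative version of the idea, not the project's actual code: the function name `poll_agent_status` and the "one report file per finished agent" assumption are mine.

```python
import glob
import os

EXPECTED_REPORTS = 5  # one markdown report per agent

def poll_agent_status(output_dir: str = "output") -> list[str]:
    """Map completed report files in output_dir to per-agent status labels."""
    done = len(glob.glob(os.path.join(output_dir, "*.md")))
    statuses = []
    for i in range(EXPECTED_REPORTS):
        if i < done:
            statuses.append("DONE")
        elif i == done:
            statuses.append("RUNNING")
        else:
            statuses.append("QUEUED")
    return statuses
```

Streamlit would call something like this on a timer and redraw the agent cards from the returned labels.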
## How CrewAI Works
CrewAI uses decorators to define agents and tasks cleanly:
```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

# llm is the Groq-backed model shared by all agents (configured elsewhere)

@CrewBase
class MarketResearchCrew():
    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def market_research_specialist(self) -> Agent:
        return Agent(
            config=self.agents_config["market_research_specialist"],
            llm=llm,
        )

    @task
    def competitive_intelligence_task(self) -> Task:
        return Task(
            config=self.tasks_config["competitive_intelligence_task"],
            context=[self.market_research_task()],  # gets Agent 1's output
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,  # one by one
            verbose=True,                # prints agent thoughts
            max_rpm=3,                   # respects free-tier rate limits
        )
```
Agent configs live in YAML files, a clean separation of concerns:
```yaml
# agents.yaml
market_research_specialist:
  role: Market Research Specialist
  goal: Analyze market size, trends, and opportunities for {product_idea}
  backstory: You are an expert market researcher with 10 years of experience...
```
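The task definitions follow the same pattern. The snippet below is an illustrative sketch of what `tasks.yaml` might look like; the exact descriptions and the `output_file` path are my assumptions, though the field names follow CrewAI's task schema.

```yaml
# tasks.yaml (illustrative sketch)
market_research_task:
  description: >
    Research the market for {product_idea}: size, growth trends,
    and key opportunities.
  expected_output: >
    A markdown report covering market size, trends, and opportunities.
  agent: market_research_specialist
  output_file: output/01_market_research.md
```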
## Live Terminal Logging
To show live agent progress in the terminal, I hooked into CrewAI's `task_callback`:

```python
class _TaskTracker:
    def __init__(self):
        self.done = 0
        self.agent_start = time.monotonic()

    def on_task_complete(self, task_output):
        # Called automatically by CrewAI after every task
        self.done += 1
        elapsed = time.monotonic() - self.agent_start
        _agent_done(self.done, elapsed)  # prints "AGENT X COMPLETE"
        if self.done < len(AGENTS):
            _agent_start(self.done + 1)  # prints "AGENT X+1 STARTING"
        self.agent_start = time.monotonic()  # reset the timer for the next agent

# Attach to the crew before kickoff
crew_obj.task_callback = tracker.on_task_complete
```
The result in the terminal looks like this:

```text
=================================================================
  MARKET RESEARCH CREW - AGENT PIPELINE
  Powered by CrewAI x Groq x LLaMA 3.3 70B
=================================================================
  Research Topic: future of Gen AI in health sector

  AGENT PIPELINE QUEUE:
  1. Market Research Specialist            [ QUEUED ]
  2. Competitive Intelligence Analyst      [ QUEUED ]
  ...
-----------------------------------------------------------------
>> AGENT 1/5 STARTING   Market Research Specialist
   Time: 14:21:27
-----------------------------------------------------------------
✓ AGENT 1/5 COMPLETE    Market Research Specialist
   Time taken: 43.2s
   Progress: [█░░░░] 1/5
```
## The Errors (The Real Story)
Here's where it gets interesting. Nothing worked on the first try.
Error 1: `AttributeError: st.session_state has no attribute "start_time"`
Why it happened: Streamlit reruns the entire script on every user interaction. If a session_state key isn't initialized upfront, accessing it on a fresh rerun throws AttributeError.
Fix: Initialize ALL session state keys at the top of the script before any logic runs:
```python
_DEFAULTS = {
    "running": False,
    "completed": False,
    "product_idea": "",
    "start_time": None,  # this key was missing
    "process": None,
}
for _key, _val in _DEFAULTS.items():
    if _key not in st.session_state:
        st.session_state[_key] = _val
```
Rule: Always initialize session state with a defaults dict. Never access a key before setting it.
Error 2: `ImportError: Fallback to LiteLLM is not available`
Why it happened: `pyproject.toml` had `crewai[tools]==1.9.3` hardcoded. That older version of crewai couldn't find LiteLLM properly.
Fix: Update `pyproject.toml`:

```toml
# Before
"crewai[tools]==1.9.3"
# After
"crewai[tools,litellm]>=1.10.1b1"
```

Then sync:

```shell
uv sync --upgrade --prerelease=allow
```
Error 3: `openai` version conflict

Why it happened: litellm needed `openai>=2.8.0` but crewai 1.9.3 needed `openai==1.83.0`. They couldn't coexist.

Fix: Upgrading crewai to v1.10+ resolved this: the newer version aligned with litellm's openai requirement.
Error 4: `uv sync` keeps rolling back versions

Why it happened: uv respects `pyproject.toml` strictly. Even after manually installing newer packages, `uv sync` would revert everything back to the pinned versions.

Fix: The `pyproject.toml` pin is the source of truth. Fix it there first, then sync.
Error 5: TOML parse error

```text
missing comma between array elements, expected `,`
```
Why it happened: I added `"crewai[tools,litellm]>=1.10.1b1"` to the dependencies array without a comma separating it from the previous entry.

Fix: TOML requires a comma between every pair of array elements; a trailing comma after the last element is allowed but optional.
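A corrected dependencies array might look like this. The `streamlit` entry and its version pin are illustrative; only the crewai line is taken from the project.

```toml
# pyproject.toml (sketch)
dependencies = [
    "streamlit>=1.35",                 # illustrative sibling entry
    "crewai[tools,litellm]>=1.10.1b1", # comma after the previous element fixed the parse error
]
```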
Error 6: `No solution found: crewai[tools]>=1.10.1` unsatisfiable
Why it happened: crewai 1.10.1 stable didn't exist yet, only 1.10.1b1 (a beta).

Fix: Use `>=1.10.1b1` and add `--prerelease=allow` to uv commands.
Error 7: git push rejected

```text
error: src refspec main does not match any
```

Why it happened: I was running git commands from inside `src/market_research_crew/` instead of the project root.

Fix:

```shell
cd ../..   # go back to the project root first
git push origin main
```
## Key Lessons
1. `pyproject.toml` is the source of truth. When using uv, always fix version pins in `pyproject.toml` first. Manual pip installs get overridden on the next `uv sync`.
2. Streamlit session state must be initialized upfront. Use a `_DEFAULTS` dict pattern. Any key you access must be initialized before Streamlit reruns.
3. `stdout=None` enables live terminal streaming. When spawning subprocesses, `stdout=None` inherits the parent terminal, which is much better for debugging than redirecting to a file.
4. CrewAI's `task_callback` is powerful. Hooking into `task_callback` lets you track exactly when each agent completes without modifying CrewAI internals.
5. Dependency conflicts need a root-cause fix. Installing packages manually is a band-aid. The real fix is always the version constraint in your project config file.
## Try It Yourself
```shell
git clone https://github.com/syed-kaif07/market-research-crew.git
cd market-research-crew
pip install uv
uv sync --prerelease=allow

# Add your free Groq API key to .env
# GROQ_API_KEY=your_key_here
# MODEL=groq/llama-3.3-70b-versatile

python -m streamlit run src/market_research_crew/streamlit_app.py
```
Get a free Groq API key at console.groq.com; no credit card needed.
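For reference, the app only needs the two variables from the `.env` example above. Here is a hedged sketch of how the settings might be resolved at startup (the function name `groq_settings` and the default model fallback are my assumptions; the project may load `.env` via python-dotenv or similar):

```python
import os

def groq_settings() -> dict:
    """Read Groq API settings from environment variables (as set via .env)."""
    api_key = os.environ.get("GROQ_API_KEY")
    if not api_key:
        raise RuntimeError(
            "GROQ_API_KEY is not set; grab a free key at console.groq.com"
        )
    return {
        "api_key": api_key,
        # Fall back to the model named in the article if MODEL is unset
        "model": os.environ.get("MODEL", "groq/llama-3.3-70b-versatile"),
    }
```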
## What's Next
- Add PDF export for the full report
- Add web search tools so agents pull live data
- Let users choose between different LLMs from the UI
- Build a second crew for content writing or SEO
If you found this useful or have questions, drop a comment below. And if you're working on something similar with CrewAI, I'd love to see it!
Built by Syed Kaifuddin