A practical guide to building LinkedIn automation using CrewAI agents and compliant APIs - no scraping, no bans, just clean architecture
Built a LinkedIn job search agent using CrewAI + ConnectSafely.ai that finds jobs, identifies hiring managers, and sends personalized connection requests.
Stack:
- 🤖 CrewAI (agent framework)
- 🔗 ConnectSafely.ai (LinkedIn API)
- 🎨 Streamlit (chat UI)
- 🐍 Python 3.12
Key approach: One smart agent with multiple tools. No multi-agent complexity. No scraping.
The Problem: Job Hunting is a Manual Nightmare
You know the drill:
- Search for jobs on LinkedIn
- Open company page
- Try to find hiring manager
- Check if you're already connected
- Craft personalized message
- Send connection request
- Repeat 50 times
Time per job: 5-10 minutes
Total for 50 jobs: 4-8 hours of soul-crushing clicking
There had to be a better way.
Why CrewAI?
CrewAI is designed for multi-agent systems, but here's the twist: I'm using it with a single agent.
Why?
Multi-agent systems are great for:
- Complex role-based workflows
- Parallel task execution
- Simulating team dynamics
Single agents are better for:
- Sequential workflows
- Centralized reasoning
- Easier debugging
- Predictable execution
Job search is inherently sequential: find job → identify company → find manager → check status → send request.
One smart agent > five agents arguing.
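The sequential chain above is simple enough to sketch as a plain function pipeline. This is illustrative only — every name below is a hypothetical stub standing in for what is really an LLM tool call — but it shows why a single agent stepping through the steps in order is all the orchestration you need:

```python
# Hypothetical stubs standing in for the real tool calls
def find_job(keyword, location):
    return {"title": keyword, "company": "Acme", "location": location}

def identify_company(job):
    return job["company"]

def find_hiring_manager(company):
    return {"name": "Jane Doe", "company": company, "connected": False}

def is_already_connected(manager):
    return manager["connected"]

def send_request(manager):
    return f"sent request to {manager['name']}"

def run_outreach_pipeline(keyword: str, location: str) -> str:
    job = find_job(keyword, location)        # step 1: find job
    company = identify_company(job)          # step 2: identify company
    manager = find_hiring_manager(company)   # step 3: find manager
    if is_already_connected(manager):        # step 4: check status
        return "skipped: already connected"
    return send_request(manager)             # step 5: send request
```

No step can start before the previous one finishes, so there's nothing to parallelize and nothing for a second agent to do.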
Architecture: Keep It Simple
Here's the entire system:
┌─────────────┐
│ Streamlit │ ← Chat interface
└──────┬──────┘
│
┌──────▼────────┐
│ CrewAI Agent │ ← Single reasoning agent
└──────┬────────┘
│
┌──────▼────────────────────────┐
│ Tools (LinkedIn + Jobs) │
│ via ConnectSafely.ai API │
└───────────────────────────────┘
Key decisions:
1. Single Agent
Simpler to debug. Maintains context. No coordination overhead.
2. Tool-First Design
Each tool does one thing. Agent orchestrates.
3. API-Driven
No scraping. No browser automation. Just clean API calls.
4. Chat Interface
Natural language commands. Incremental execution.
Setting Up the CrewAI Agent
Here's the complete agent configuration:
# agents/agents.py
from crewai import Agent
from tools import (
    search_jobs_tool,
    get_company_details_tool,
    search_hiring_managers_tool,
    fetch_profile_details_tool,
    check_connection_status_tool,
    send_connection_request_tool,
)

def create_job_outreach_agent():
    """
    Single agent that handles the entire job search workflow
    """
    return Agent(
        role="LinkedIn Job Search Assistant",
        goal=(
            "Help users find relevant jobs, identify hiring managers, "
            "and facilitate meaningful LinkedIn connections"
        ),
        backstory=(
            "You are an expert at job search strategies and LinkedIn networking. "
            "You understand how to identify the right opportunities, find decision-makers, "
            "and craft personalized outreach that gets responses."
        ),
        verbose=True,
        allow_delegation=False,  # Single agent, no delegation
        tools=[
            search_jobs_tool,
            get_company_details_tool,
            search_hiring_managers_tool,
            fetch_profile_details_tool,
            check_connection_status_tool,
            send_connection_request_tool,
        ],
        llm="gemini/gemini-2.0-flash-exp",  # Using Google Gemini
    )
Key parameters:
allow_delegation=False
We don't want this agent delegating to other agents. Keep it simple.
verbose=True
See what the agent is thinking. Critical for debugging.
tools
Explicitly defined. Agent can only use what we give it.
Building the Tools: One Job Each
Each tool wraps a ConnectSafely.ai API endpoint.
Tool #1: Search Jobs
# tools/search_jobs_tool.py
from crewai.tools import tool
import requests
import os

@tool("Search Jobs")
def search_jobs_tool(keyword: str, location: str, limit: int = 25):
    """
    Search for jobs on LinkedIn by keyword and location.

    Args:
        keyword: Job title or keywords (e.g., 'Product Manager', 'Software Engineer')
        location: Geographic location (e.g., 'Bangalore', 'Remote')
        limit: Maximum number of results (default: 25)

    Returns:
        List of job postings with company info
    """
    url = f"{os.getenv('CONNECTSAFELY_API_URL')}/jobs/search"
    response = requests.post(
        url,
        headers={
            "Authorization": f"Bearer {os.getenv('CONNECTSAFELY_API_KEY')}",
            "Content-Type": "application/json",
        },
        json={
            "keyword": keyword,
            "location": location,
            "limit": limit,
        },
        timeout=30,
    )
    if not response.ok:
        return {"error": f"Job search failed: {response.status_code} {response.reason}"}
    data = response.json()
    return {
        "total": data.get("total", 0),
        "jobs": [
            {
                "id": job["id"],
                "title": job["title"],
                "company": job["company"],
                "company_id": job["companyId"],
                "location": job["location"],
                "posted_date": job["postedDate"],
                "url": job["jobUrl"],
            }
            for job in data.get("results", [])
        ],
    }
Design decisions:
Explicit docstring:
CrewAI uses docstrings to help the agent understand what tools do.
Structured output:
Return clean, typed data the agent can reason about.
Error handling:
Don't crash - return error objects the agent can handle.
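One way to keep that error-handling pattern consistent across all six tools is a small request wrapper that converts any failure — network error or non-2xx status — into an error dict instead of raising. This is a sketch, not part of the repo; `safe_post` is a hypothetical helper name:

```python
import requests

def safe_post(url: str, **kwargs) -> dict:
    """POST and always return a dict: parsed JSON on success, or an
    error object the agent can reason about. Never raises."""
    try:
        response = requests.post(url, timeout=30, **kwargs)
    except requests.RequestException as exc:
        # Covers timeouts, DNS failures, malformed URLs, etc.
        return {"error": f"Request failed: {exc}"}
    if not response.ok:
        return {"error": f"HTTP {response.status_code}: {response.reason}"}
    return response.json()
```

With this in place, each tool body shrinks to building the payload and reshaping the response.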
Tool #2: Find Hiring Managers
# tools/search_hiring_managers_tool.py
from crewai.tools import tool
import requests
import os

@tool("Search Hiring Managers")
def search_hiring_managers_tool(company_id: str, job_title: str = None):
    """
    Find hiring managers and recruiters at a specific company.

    Args:
        company_id: LinkedIn company ID
        job_title: Optional filter (e.g., 'Recruiter', 'Engineering Manager')

    Returns:
        List of people who might be hiring managers
    """
    url = f"{os.getenv('CONNECTSAFELY_API_URL')}/companies/{company_id}/people"
    params = {}
    if job_title:
        params["jobTitle"] = job_title
    response = requests.get(
        url,
        headers={
            "Authorization": f"Bearer {os.getenv('CONNECTSAFELY_API_KEY')}",
        },
        params=params,
        timeout=30,
    )
    if not response.ok:
        return {"error": f"Failed to find hiring managers: {response.status_code} {response.reason}"}
    data = response.json()
    return {
        "company_id": company_id,
        "company_name": data.get("companyName"),
        "people": [
            {
                "id": person["id"],
                "name": person["name"],
                "headline": person["headline"],
                "profile_url": person["publicProfileUrl"],
            }
            for person in data.get("results", [])
        ],
    }
Tool #3: Check Connection Status (THE CRITICAL ONE)
This is the most important tool. Never skip this.
# tools/check_connection_status_tool.py
from crewai.tools import tool
import requests
import os

@tool("Check Connection Status")
def check_connection_status_tool(profile_id: str):
    """
    Check if you're already connected to someone on LinkedIn.
    ALWAYS use this before sending connection requests to avoid spam.

    Args:
        profile_id: LinkedIn profile ID or public identifier

    Returns:
        Connection status information
    """
    url = f"{os.getenv('CONNECTSAFELY_API_URL')}/connections/status/{profile_id}"
    response = requests.get(
        url,
        headers={
            "Authorization": f"Bearer {os.getenv('CONNECTSAFELY_API_KEY')}",
        },
        timeout=30,
    )
    if not response.ok:
        return {"error": f"Status check failed: {response.status_code} {response.reason}"}
    data = response.json()
    return {
        "profile_id": profile_id,
        "is_connected": data.get("connected", False),
        "connection_degree": data.get("degree"),  # 1st, 2nd, 3rd
        "pending_request": data.get("pending", False),
        "can_connect": data.get("canConnect", False),
    }
Why this matters:
- ✅ Prevents spam
- ✅ Avoids LinkedIn penalties
- ✅ Respects people's time
- ✅ Saves API credits
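The agent's instructions enforce this ordering, but you can also hard-code the guard so a request is only ever sent when the status check allows it. A minimal sketch of that decision logic — the status dict shape matches what the tool above returns, and `should_send_request` is my name for it:

```python
def should_send_request(status: dict) -> tuple[bool, str]:
    """Decide whether a connection request may be sent, given the
    dict returned by the check-connection-status tool."""
    if "error" in status:
        return False, "status check failed; do not send blindly"
    if status.get("is_connected"):
        return False, "already connected"
    if status.get("pending_request"):
        return False, "request already pending"
    if not status.get("can_connect"):
        return False, "LinkedIn does not allow a request here"
    return True, "ok to send"
```

Belt and suspenders: even if the LLM forgets its instructions, the send tool can call this first and refuse.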
Tool #4: Send Connection Request
# tools/send_connection_request_tool.py
from crewai.tools import tool
import requests
import os

@tool("Send Connection Request")
def send_connection_request_tool(profile_id: str, message: str = None):
    """
    Send a personalized LinkedIn connection request.

    Args:
        profile_id: LinkedIn profile ID
        message: Optional personalized note (max 300 characters)

    Returns:
        Success status and request details
    """
    # Validate message length
    if message and len(message) > 300:
        return {
            "error": f"Message too long: {len(message)} characters (max 300)"
        }
    url = f"{os.getenv('CONNECTSAFELY_API_URL')}/connections/send"
    response = requests.post(
        url,
        headers={
            "Authorization": f"Bearer {os.getenv('CONNECTSAFELY_API_KEY')}",
            "Content-Type": "application/json",
        },
        json={
            "profileId": profile_id,
            "message": message,
        },
        timeout=30,
    )
    if not response.ok:
        return {"error": f"Connection request failed: {response.status_code} {response.reason}"}
    data = response.json()
    return {
        "success": True,
        "profile_id": profile_id,
        "request_id": data.get("invitationId"),
        "sent_at": data.get("timestamp"),
    }
Wiring It Together: The Crew
# crew.py
from crewai import Crew, Task, Process
from agents.agents import create_job_outreach_agent

def create_job_search_crew():
    """
    Create a CrewAI crew with a single agent
    """
    agent = create_job_outreach_agent()
    return Crew(
        agents=[agent],
        tasks=[],  # Tasks are created dynamically via chat
        process=Process.sequential,
        verbose=True,
    )

# Initialize crew
crew = create_job_search_crew()

def execute_command(user_input: str):
    """
    Execute a user command via the agent
    """
    # Create a dynamic task from user input
    task = Task(
        description=user_input,
        agent=crew.agents[0],
        expected_output="Completion status and results",
    )
    # Crew.kickoff() runs the crew's own task list, so attach the task first
    crew.tasks = [task]
    return crew.kickoff()
Key points:
Dynamic tasks:
Instead of pre-defined workflows, we create tasks on-the-fly from user commands.
Sequential process:
Tasks execute one at a time, maintaining context.
Single agent:
Even though CrewAI supports multi-agent, we keep it simple.
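Because each command becomes a freshly created task, conversation context has to travel with it somehow. One simple approach — an assumption on my part, not something the repo is confirmed to do — is to prepend recent chat history to the task description before kickoff:

```python
def build_task_description(user_input: str, history: list, max_turns: int = 6) -> str:
    """Prepend recent chat turns (dicts with 'role' and 'content' keys)
    to a new command, so a freshly created task still sees the
    conversation context. Illustrative sketch."""
    recent = history[-max_turns:]
    context = "\n".join(f"{m['role']}: {m['content']}" for m in recent)
    if not context:
        return user_input
    return f"Conversation so far:\n{context}\n\nNew request: {user_input}"
```

Without something like this, "Show me the top 3 companies" arrives at the agent with no memory of which search it refers to.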
The Streamlit Chat Interface
# App.py
import streamlit as st
from dotenv import load_dotenv
from crew import execute_command

load_dotenv()  # Load API keys from .env

st.title("🤖 LinkedIn Job Search Assistant")
st.caption("Powered by CrewAI + ConnectSafely.ai")

# Initialize chat history
if "messages" not in st.session_state:
    st.session_state.messages = []

# Display chat history
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Chat input
if prompt := st.chat_input("What would you like to do?"):
    # Add user message
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # Get agent response
    with st.chat_message("assistant"):
        with st.spinner("Thinking..."):
            response = execute_command(prompt)
        st.markdown(response)

    # Add assistant message
    st.session_state.messages.append({
        "role": "assistant",
        "content": response,
    })
Why Streamlit?
- Quick to build
- Chat-style interface
- State management built-in
- Good for prototyping
A Real Workflow in Action
Here's what happens when you use this system:
You: Find product manager jobs in Bangalore
Agent: 🔍 Searching for jobs...
Found 35 Product Manager positions in Bangalore
You: Show me the top 3 companies
Agent: Here are the top 3:
1. Flipkart (Senior PM - Payments)
2. Razorpay (Product Manager - Growth)
3. PhonePe (Lead Product Manager)
You: Find hiring managers at Razorpay
Agent: 🔎 Searching Razorpay employees...
Found 12 people in product/recruiting roles:
- Sarah Kumar (Head of Product)
- Amit Patel (Senior Recruiter)
- Priya Sharma (Product Manager)
You: Check if I'm connected to Sarah Kumar
Agent: ✓ Checking connection status...
Status: Not connected (2nd degree)
Can send request: Yes
Mutual connections: 3
You: Send her a connection request about the PM role
Agent: ✅ Crafting personalized message...
✅ Sent connection request to Sarah Kumar
Message: "Hi Sarah, I noticed Razorpay is hiring for a
Product Manager - Growth role. With my background in fintech
and growth product management, I'd love to connect and learn
more about the opportunity."
Total time: ~2 minutes
Manual equivalent: 30-45 minutes
Why ConnectSafely.ai Makes This Possible
Let's be honest: scraping LinkedIn is a terrible idea.
I know this because I tried it first. I built a beautiful Selenium-based browser-automation script that worked for four days before:
- LinkedIn changed their DOM structure
- Everything broke
- My account got flagged
- I wasted a week
The scraping approach:
# ❌ Don't do this
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://linkedin.com/jobs")

# Wait for selectors that change weekly
jobs = driver.find_elements(By.CLASS_NAME, "job-card-xyz")

# Pray nothing breaks
for job in jobs:
    job.click()
The ConnectSafely approach:
# ✅ Do this instead
response = requests.post(
    f"{API_URL}/jobs/search",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"keyword": "Product Manager", "location": "Bangalore"},
)
jobs = response.json()
Benefits:
✅ Stable API endpoints
✅ No DOM parsing
✅ Structured responses
✅ Rate limiting built-in
✅ Compliance guaranteed
✅ No account bans
One API call vs 50 lines of fragile automation.
Environment Setup
# .env
GOOGLE_API_KEY=your_google_api_key
CONNECTSAFELY_API_KEY=your_connectsafely_api_key
CONNECTSAFELY_API_URL=https://api.connectsafely.ai/v1
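A fail-fast check at startup saves you from debugging mysterious 401s mid-conversation. A sketch — the variable names match the `.env` above, but `missing_env`/`require_env` are hypothetical helper names, not part of the repo:

```python
import os

REQUIRED_VARS = ["GOOGLE_API_KEY", "CONNECTSAFELY_API_KEY", "CONNECTSAFELY_API_URL"]

def missing_env(env: dict) -> list:
    """Return the names of required variables that are absent or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

def require_env() -> None:
    """Raise early with a clear message instead of failing mid-request."""
    missing = missing_env(dict(os.environ))
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```

Call `require_env()` right after `load_dotenv()` and the app refuses to start until the keys are in place.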
Dependencies (pyproject.toml):
[project]
name = "linkedin-job-agent"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
"crewai>=0.86.0",
"streamlit>=1.40.2",
"requests>=2.32.3",
"python-dotenv>=1.0.1",
]
Install with uv:
# Install uv (fast Python package manager)
pip install uv
# Create virtual environment and install dependencies
uv sync
# Run the app
uv run streamlit run App.py
Try It Yourself
Ready to build your own?
# Clone the repo
git clone https://github.com/ConnectSafelyAI/agentic-framework-examples
cd job-seekers-reach-out-to-hiring-managers/agentic/crewai
# Install dependencies
uv sync
# Set up environment
cp .env.example .env
# Add your API keys
# Run
uv run streamlit run App.py
Get API Access:
- Sign up at connectsafely.ai
- Get your API key from dashboard
- Read the docs
Resources & Support
📂 Code: GitHub - CrewAI Examples
💬 Support:
- Email: support@connectsafely.ai
- Docs: connectsafely.ai/docs
🌐 Connect:
LinkedIn • YouTube • Instagram • Facebook • X
Building something cool with CrewAI? Drop a comment—I'd love to see what you're working on!
Questions about the architecture? Hit me up. Always happy to nerd out about agent design patterns. 🤓
Want to see this built with other frameworks? Check out the repo—we've got Mastra and LangGraph versions too!