Skills are reusable abilities an AI agent uses to get things done, such as writing, reasoning, or coding tasks. The concept was introduced by Anthropic and is now widely used in modern coding agents (Claude Code, OpenCode, AWS Kiro, Cursor, Cline, RooCode, Antigravity, etc.). In this post, I want to show how to implement basic skills using LangChain and AWS Bedrock Nova.
What are Agent Skills?
- Agent skills define what an AI can do: thinking through problems, making plans, using tools, remembering things, and communicating clearly.
- They matter because they let the AI handle more than simple questions, such as tasks with multiple steps.
- These skills let the AI use real data and remember past context, so answers are more useful and relevant.
- They make the AI faster and more independent, so you don’t have to guide every step.
Skills vs Tools vs Rules vs MCP Tools: What are the differences?
- Skills: What the agent can do internally, often packaged as reusable logic. They can include code scripts + Markdown (MD) instructions/docs, and can be added or updated dynamically.
- Tools: Callable functions the agent uses for specific actions. Usually code-based, triggered when needed (e.g., run code, fetch data).
- MCP tools: Tools connected to external systems via the Model Context Protocol (MCP). Also code-backed, but they live outside the agent. Examples: APIs, databases, web search, company services.
- Rules: What the agent must follow every time. Typically static instructions (often MD/text) that don’t change per run.
Sample Use Case: Imagine a Dev Assistant agent
- Skill = a packaged "write blog post" capability (Python script + MD template).
- Tool = a local function to execute code snippets.
- MCP tool = GitHub API to create a repo or PR.
- Rule = Always respond in Markdown and never expose secrets.
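To make the distinction concrete, here is a minimal, framework-free Python sketch of how the four concepts differ in shape. All names and structures here are illustrative, not from LangChain or any real framework:

```python
def run_snippet(code: str) -> str:
    """Tool: a single callable the agent can invoke for one action."""
    return f"executed: {code!r}"

# Rule: a static instruction applied on every run.
RULE = "Always respond in Markdown and never expose secrets."

# Skill: a packaged capability — instructions (prompt.md) plus
# bundled tools (tools.py) that can be loaded dynamically.
write_blog_skill = {
    "prompt": "You are a technical blogger. Draft posts from an outline...",
    "tools": [run_snippet],
}

# An MCP tool would look like run_snippet from the agent's side, but its
# implementation lives in an external server (e.g. the GitHub API)
# reached over the Model Context Protocol.
print(write_blog_skill["tools"][0]("print('hi')"))
```

The rest of this post implements the "skill" shape for real: a directory per skill holding a prompt.md and a tools.py.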
Whether you're exploring agent design or building your own system, this will give you a clear, practical starting point 😉
Table of Contents
- Dependencies & Configuration
- Skills (MD Files, Scripts)
- Load & List Skills
- Ask, System Prompt, Agent
- Call Agent with Different Prompts
- All Code & Demo
- Conclusion
- References
Dependencies & Configuration
- Please install dependencies:
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
# deactivate
requirements.txt:
langchain>=1.0.0
langchain-aws>=1.2.0
langgraph>=1.0.0
python-dotenv>=1.0.0
boto3>=1.34.0
langfuse>=4.0.0
Enable AWS Bedrock model access in your region (e.g. eu-central-1, us-east-1): AWS Bedrock > Bedrock Configuration > Model Access > AWS Nova-Pro or Claude Sonnet.
In this code, we'll use AWS Nova-Pro, because AWS serves it in multiple regions. After enabling model access, grant your IAM identity permission to access AWS Bedrock services: AmazonBedrockFullAccess.
There are 2 options to reach an AWS Bedrock model using your AWS account:
- AWS Config: Run aws configure to create the config and credentials files.
- Variables in a .env file: Add a .env file:
AWS_ACCESS_KEY_ID=PASTE_YOUR_ACCESS_KEY_ID_HERE
AWS_SECRET_ACCESS_KEY=PASTE_YOUR_SECRET_ACCESS_KEY_HERE
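If you use the .env option, python-dotenv's load_dotenv() reads those lines into environment variables, where boto3 and langchain-aws pick them up automatically. A simplified sketch of what it does under the hood (the real library also handles quoting, export prefixes, and variable interpolation):

```python
import os

def load_env_file(path: str = ".env") -> dict[str, str]:
    """Read KEY=VALUE pairs from a .env file into os.environ.

    Simplified stand-in for python-dotenv's load_dotenv(), for
    illustration only.
    """
    loaded: dict[str, str] = {}
    if not os.path.exists(path):
        return loaded
    with open(path, encoding="utf-8") as fh:
        for raw in fh:
            line = raw.strip()
            # skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
    os.environ.update(loaded)
    return loaded
```

In the actual post code it is enough to call `from dotenv import load_dotenv; load_dotenv()` once at startup; ChatBedrockConverse then finds AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY in the environment.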
Skills (MD Files, Scripts)
- Create a "skills" directory and put each skill's MD files and scripts in it.
Code Review Skill
We define the prompt.md and tools.py for this skill:
skills
- code_review
  - prompt.md
  - tools.py
prompt.md for the code review skill:
// code_review/prompt.md
You are a principal software engineer conducting a thorough code review.
RULES:
Prioritise correctness, then security, then performance, then style
Always check for edge cases (empty input, None, overflow, off-by-one)
Flag security issues: injection, insecure deserialization, hardcoded secrets
Suggest specific refactors, not just "this is bad"
Praise good patterns — not just criticism
OUTPUT FORMAT: Summary (what the code does, overall quality)
🔴 Critical Issues (bugs, security holes — fix before merge)
🟡 Improvements (performance, readability, testability)
🟢 Good Practices (what's done well)
Suggested Refactor (rewrite a key section if needed)
LANGUAGES SUPPORTED:
Python, JavaScript/TypeScript, SQL, Bash, Go, Java
State the language in your review header.
tools.py for the code review skill:
# code_review/tools.py: registered automatically when load_skill("code_review") is called.
import ast
import re
import textwrap
from langchain_core.tools import tool
@tool
def detect_secrets(code: str) -> str:
"""Scan source code for hardcoded credentials, API keys, and connection strings."""
patterns = [
("Hardcoded password", r'(?i)password\s*=\s*["\'][^"\']{3,}["\']'),
("AWS Access Key", r'AKIA[0-9A-Z]{16}'),
("Generic API key", r'(?i)api[_-]?key\s*=\s*["\'][^"\']{8,}["\']'),
("Connection string w/creds",r'(?i)(?:postgres|mysql|mongodb)://[^:]+:[^@]+@'),
("Private key block", r'-----BEGIN (?:RSA )?PRIVATE KEY-----'),
]
findings = []
for lineno, line in enumerate(code.splitlines(), 1):
for label, pat in patterns:
if re.search(pat, line):
findings.append(f" 🔴 Line {lineno} — {label}: {re.sub(pat, '[REDACTED]', line).strip()}")
return "Secret Scan:\n" + "\n".join(findings) if findings else "✅ No hardcoded secrets detected."
@tool
def analyze_python_ast(code: str) -> str:
"""Static analysis via Python AST: bare excepts, eval/exec, mutable defaults, long functions."""
code = textwrap.dedent(code)
try:
tree = ast.parse(code)
except SyntaxError as exc:
return f"❌ Syntax error: {exc}"
issues = []
for node in ast.walk(tree):
if isinstance(node, ast.ExceptHandler) and node.type is None:
issues.append(f" 🟡 Line {node.lineno}: Bare `except:` — use `except Exception:`.")
if isinstance(node, ast.Call):
name = getattr(node.func, "id", getattr(node.func, "attr", ""))
if name in ("eval", "exec"):
issues.append(f" 🔴 Line {node.lineno}: `{name}()` — arbitrary code execution risk.")
if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
for d in node.args.defaults:
if isinstance(d, (ast.List, ast.Dict, ast.Set)):
issues.append(f" 🟡 Line {node.lineno}: `{node.name}` has mutable default argument.")
length = node.end_lineno - node.lineno + 1
if length > 50:
issues.append(f" 🟡 Line {node.lineno}: `{node.name}` is {length} lines — consider splitting.")
if isinstance(node, ast.Global):
issues.append(f" 🟡 Line {node.lineno}: `global` statement — prefer explicit state passing.")
return "AST Analysis:\n" + "\n".join(issues) if issues else "✅ No structural issues found."
@tool
def check_sql_injection(code: str) -> str:
"""Detect SQL injection via unsafe string interpolation in execute() calls."""
patterns = [
("f-string in execute()", r'\.execute\s*\(\s*f["\']'),
("% formatting in execute()",r'\.execute\s*\(\s*["\'][^"\']*%[^"\']*["\'\s]*%'),
(".format() in execute()", r'\.execute\s*\(\s*["\'][^"\']*\{.*?\}.*?\.format'),
("String concat in execute()",r'\.execute\s*\(\s*["\'][^"\']*["\'\s]*\+'),
]
findings = []
for lineno, line in enumerate(code.splitlines(), 1):
for label, pat in patterns:
if re.search(pat, line):
findings.append(f" 🔴 Line {lineno} — {label}\n Fix: use parameterised queries → execute(query, (value,))")
return "SQL Injection Scan:\n" + "\n".join(findings) if findings else "✅ No SQL injection patterns detected."
@tool
def measure_complexity(code: str) -> str:
"""Estimate cyclomatic complexity per function (1–5 ✅, 6–10 🟡, 11–20 🟠, >20 🔴)."""
code = textwrap.dedent(code)
try:
tree = ast.parse(code)
except SyntaxError as exc:
return f"❌ Syntax error: {exc}"
_BRANCH = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.With, ast.Assert, ast.BoolOp)
results = []
for node in ast.walk(tree):
if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
score = 1 + sum(1 for n in ast.walk(node) if isinstance(n, _BRANCH))
icon = "🔴" if score > 20 else ("🟠" if score > 10 else ("🟡" if score > 5 else "✅"))
results.append(f" {icon} {node.name} (line {node.lineno}): {score}")
return "Complexity Report:\n" + "\n".join(sorted(results)) if results else "No functions found."
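As a quick standalone sanity check (outside the agent), the hardcoded-password pattern from detect_secrets can be exercised directly with plain re:

```python
import re

# Same password pattern as in code_review/tools.py.
PASSWORD_PAT = r'(?i)password\s*=\s*["\'][^"\']{3,}["\']'

leaky = 'db_password = "hunter22"'
safe = 'password = os.environ["DB_PASSWORD"]'

print(bool(re.search(PASSWORD_PAT, leaky)))  # True — flagged
print(bool(re.search(PASSWORD_PAT, safe)))   # False — env lookup passes
```

Inside the agent, the LLM calls these tools itself; from Python you can also call a LangChain tool directly, e.g. detect_secrets.invoke({"code": leaky}).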
Legal Document Review Skill
We define the prompt.md and tools.py for this skill:
skills
- review_legal_doc
  - prompt.md
  - tools.py
prompt.md for the legal document review skill:
// review_legal_doc/prompt.md
You are a senior legal document reviewer specialising in commercial contracts.
RULES:
Flag ALL liability clauses, indemnities, and limitation of liability caps
Highlight non-standard or one-sided terms
Identify missing standard protections (e.g. IP ownership, termination rights)
Note jurisdiction and governing law issues
Never give definitive legal advice — always recommend counsel review
OUTPUT FORMAT:
Document Summary (type, parties, purpose)
🔴 High Risk Clauses (must review with lawyer)
🟡 Medium Risk Clauses (worth negotiating)
🟢 Standard / Acceptable Clauses
Missing Clauses (protections not present)
Overall Risk Rating: LOW / MEDIUM / HIGH
FOCUS AREAS:
Intellectual property assignment and licensing
Payment terms and late payment penalties
Auto-renewal and cancellation notice periods
Data privacy and confidentiality obligations
Force majeure and dispute resolution
tools.py for the legal document review skill:
# review_legal_doc/tools.py: registered automatically when load_skill("review_legal_doc") is called.
import re
from langchain_core.tools import tool
_CLAUSES = {
"Liability Cap": [r"(?i)total\s+liability\s+shall\s+not\s+exceed", r"(?i)aggregate\s+liability.{0,40}limited\s+to"],
"Consequential Damages Waiver": [r"(?i)not\s+be\s+liable\s+for\s+any\s+indirect", r"(?i)indirect.{0,20}incidental.{0,20}consequential"],
"Auto-Renewal": [r"(?i)auto.?renew", r"(?i)renews?\s+annually"],
"Cancellation Notice": [r"(?i)\d+\s+days?\s+(?:written\s+)?notice"],
"IP Assignment": [r"(?i)intellectual\s+property.{0,40}assign", r"(?i)work\s+made\s+for\s+hire"],
"Confidentiality": [r"(?i)confidential\s+information", r"(?i)non.disclosure"],
"Indemnification": [r"(?i)indemnif(?:y|ication)", r"(?i)hold\s+harmless"],
"Governing Law": [r"(?i)governed\s+by\s+the\s+laws?\s+of"],
"Force Majeure": [r"(?i)force\s+majeure", r"(?i)acts?\s+of\s+God"],
"Termination for Cause": [r"(?i)terminat.{0,40}material\s+breach"],
}
_HIGH_RISK = {"Liability Cap", "Consequential Damages Waiver", "Auto-Renewal", "IP Assignment", "Indemnification"}
_RISK_WEIGHTS = {"Consequential Damages Waiver": 25, "Liability Cap": 20, "IP Assignment": 20,
"Indemnification": 15, "Auto-Renewal": 15, "Cancellation Notice": 10,
"Force Majeure": -5, "Confidentiality": -5, "Termination for Cause": -10}
def _found_clauses(text: str) -> set[str]:
return {ct for ct, patterns in _CLAUSES.items() if any(re.search(p, text) for p in patterns)}
@tool
def extract_legal_clauses(text: str) -> str:
"""Scan contract text and return all detected clause types with context snippets."""
results = []
for clause_type, patterns in _CLAUSES.items():
for pattern in patterns:
m = re.search(pattern, text)
if m:
start, end = max(0, m.start() - 30), min(len(text), m.end() + 90)
snippet = text[start:end].replace("\n", " ").strip()
icon = "🔴" if clause_type in _HIGH_RISK else "🟡"
results.append(f"{icon} {clause_type}: …{snippet}…")
break
return "\n".join(results) if results else "No recognised clause types detected."
@tool
def score_legal_risk(text: str) -> str:
"""Return a heuristic risk score (0–100) with a LOW / MEDIUM / HIGH rating."""
found = _found_clauses(text)
score = max(0, min(100, sum(w for c, w in _RISK_WEIGHTS.items() if c in found)))
rating = "🔴 HIGH" if score >= 60 else ("🟡 MEDIUM" if score >= 30 else "🟢 LOW")
breakdown = "\n".join(
f" {'🔴' if w>0 else '🟢'} {c}: {'+' if w>0 else ''}{w} pts"
for c, w in _RISK_WEIGHTS.items() if c in found
)
return f"Risk Score: {score}/100 → {rating}\n\nBreakdown:\n{breakdown}"
@tool
def extract_dates_and_deadlines(text: str) -> str:
"""Pull all date references and notice periods from contract text."""
patterns = [r"\b\d+[\s-]?days?\b", r"\b\d+[\s-]?months?\b", r"\b\d+[\s-]?years?\b",
r"\bann(?:ual(?:ly)?|um)\b", r"\b\d{4}-\d{2}-\d{2}\b"]
seen, findings = set(), []
for pat in patterns:
for m in re.finditer(pat, text, re.IGNORECASE):
key = m.group(0).lower()
if key not in seen:
seen.add(key)
start, end = max(0, m.start()-40), min(len(text), m.end()+60)
findings.append(f' • "{m.group(0)}" — …{text[start:end].strip()}…')
return "Dates & Deadlines:\n" + "\n".join(findings) if findings else "No date references found."
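The clause patterns can likewise be checked in isolation. Here is a small sketch running two of the _CLAUSES regexes against a sample clause (the sample text is made up for illustration):

```python
import re

clause = ("The Vendor's total liability shall not exceed the fees paid "
          "in the last 30 days. This Agreement auto-renews annually.")

# Two of the _CLAUSES patterns from review_legal_doc/tools.py.
patterns = {
    "Liability Cap": r"(?i)total\s+liability\s+shall\s+not\s+exceed",
    "Auto-Renewal": r"(?i)auto.?renew",
}

found = [name for name, pat in patterns.items() if re.search(pat, clause)]
print(found)  # both clause types should be detected
```

Because both detected clauses carry positive _RISK_WEIGHTS, score_legal_risk would rate a clause like this toward the MEDIUM/HIGH end.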
SQL Write Skill
We define the prompt.md and tools.py for this skill:
skills
- write_sql
  - prompt.md
  - tools.py
prompt.md for the SQL writing skill:
// write_sql/prompt.md
You are an expert SQL engineer.
RULES:
Always use CTEs (WITH clauses) for complex queries
Add comments explaining non-obvious logic
Prefer window functions over subqueries for performance
Always include an ORDER BY for deterministic results
Flag any potential N+1 or missing index issues
Default dialect: PostgreSQL (state if switching)
OUTPUT FORMAT:
Brief explanation of the approach
The SQL query (in a sql block)
Performance notes (if relevant)
Alternative approaches (if simpler option exists)
EXAMPLE SCHEMA AWARENESS:
Always ask for schema if not provided
Infer column names from context when possible
Warn about NULLs and data type mismatches
tools.py for the SQL writing skill:
# write_sql/tools.py: registered automatically when load_skill("write_sql") is called.
import re
from langchain_core.tools import tool
@tool
def validate_sql_syntax(sql: str, dialect: str = "postgres") -> str:
"""Parse a SQL query and report syntax errors without executing it."""
try:
import sqlglot
sqlglot.parse(sql, dialect=dialect, error_level=sqlglot.ErrorLevel.RAISE)
return "✅ Valid SQL — no syntax errors detected."
except ImportError:
return "⚠️ sqlglot not installed. Run pip install sqlglot."
except Exception as exc:
return f"❌ Syntax error: {exc}"
@tool
def format_sql(sql: str, dialect: str = "postgres") -> str:
"""Pretty-print a SQL query using canonical formatting."""
try:
import sqlglot
return sqlglot.transpile(sql, read=dialect, write=dialect, pretty=True)[0]
except ImportError:
return "⚠️ sqlglot not installed. Run pip install sqlglot."
except Exception as exc:
return f"❌ Could not format SQL: {exc}"
_RISKS = [
(r"\bSELECT\s+\*\b", "🟡", "SELECT * — enumerate columns explicitly."),
(r"\bIN\s*\(\s*SELECT\b", "🟡", "IN (SELECT …) — prefer EXISTS or a JOIN."),
(r"(?i)\bDELETE\s+FROM\b(?!.*\bWHERE\b)","🔴", "DELETE without WHERE — deletes ALL rows!"),
(r"(?i)\bUPDATE\b(?!.*\bWHERE\b)", "🔴", "UPDATE without WHERE — updates ALL rows!"),
(r"(?i)\bDROP\s+(TABLE|DATABASE)\b", "🔴", "DROP statement — destructive DDL."),
(r"(?i)\bNOT\s+IN\s*\(\s*SELECT\b", "🟡", "NOT IN (subquery) is NULL-unsafe — use NOT EXISTS."),
(r"(?i)ORDER\s+BY\s+RAND\(\)", "🟡", "ORDER BY RAND() is O(n log n) — slow on large tables."),
]
@tool
def detect_sql_risks(sql: str) -> str:
"""Scan a SQL query for common anti-patterns and pitfalls."""
findings = [
f"{lvl}: {msg}"
for pattern, lvl, msg in _RISKS
if re.search(pattern, sql, re.IGNORECASE)
]
return "\n".join(findings) if findings else "✅ No obvious risks detected."
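The _RISKS patterns are plain regexes, so they can be sanity-checked without the agent. For example, the DELETE-without-WHERE rule:

```python
import re

# The DELETE-without-WHERE pattern from write_sql/tools.py.
pat = r"(?i)\bDELETE\s+FROM\b(?!.*\bWHERE\b)"

risky = "DELETE FROM users"
safe = "DELETE FROM users WHERE id = %s"

print(bool(re.search(pat, risky)))  # True — would delete every row
print(bool(re.search(pat, safe)))   # False — WHERE clause present
```

Note that this is a line-level heuristic: a multi-line DELETE whose WHERE clause sits on a later line could still be flagged, which is an acceptable false positive for a review assistant.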
Load & List Skills
Next, we implement list_skills and load_skill so the agent can discover and load the proper skill based on its system prompt:
import importlib.util
import inspect
import sys
from pathlib import Path
SKILLS_DIR = Path(__file__).parent / "skills"
def _import_tools(skill_dir: Path) -> list[BaseTool]:
"""Import tools.py from a skill directory and return all @tool objects."""
py_file = skill_dir / "tools.py"
if not py_file.exists():
return []
module_id = f"skills.{skill_dir.name}"
if module_id not in sys.modules:
spec = importlib.util.spec_from_file_location(module_id, py_file)
mod = importlib.util.module_from_spec(spec)
sys.modules[module_id] = mod
spec.loader.exec_module(mod)
return [obj for _, obj in inspect.getmembers(sys.modules[module_id]) if isinstance(obj, BaseTool)]
@tool
def list_skills() -> str:
"""List every available skill."""
lines = []
for d in sorted(SKILLS_DIR.iterdir()):
if d.is_dir():
tag = "🔧 prompt + tools" if (d / "tools.py").exists() else " prompt only"
lines.append(f" • {d.name} [{tag}]")
return "\n".join(lines) or "No skills found."
def _make_load_skill(session_tools: dict[str, BaseTool]):
"""Return a load_skill tool that registers into the given session dict."""
@tool
def load_skill(skill_name: str) -> str:
"""Load a specialist skill by its directory name."""
skill_dir = SKILLS_DIR / skill_name
if not skill_dir.is_dir():
available = [d.name for d in SKILLS_DIR.iterdir() if d.is_dir()]
return f"Skill '{skill_name}' not found. Valid names: {', '.join(available)}"
prompt = (skill_dir / "prompt.md").read_text(encoding="utf-8")
new_tools = _import_tools(skill_dir)
session_tools.update({t.name: t for t in new_tools})
tool_note = (
"\n\n🔧 Tools registered (call directly, NOT via load_skill):\n"
+ "\n".join(f" - {t.name}" for t in new_tools)
if new_tools else ""
)
return prompt + tool_note
return load_skill
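Before wiring this into the agent, the discovery logic can be checked on its own against a throwaway directory tree. This is a minimal sketch of what list_skills does, without the @tool wrapper:

```python
import tempfile
from pathlib import Path

# Build a throwaway skills tree mirroring the layout the loader expects.
root = Path(tempfile.mkdtemp()) / "skills"
(root / "code_review").mkdir(parents=True)
(root / "code_review" / "prompt.md").write_text("...", encoding="utf-8")
(root / "code_review" / "tools.py").write_text("# tools", encoding="utf-8")
(root / "write_sql").mkdir()
(root / "write_sql" / "prompt.md").write_text("...", encoding="utf-8")

# Same discovery logic as list_skills(): each subdirectory is a skill,
# tagged by whether it also ships a tools.py.
lines = []
for d in sorted(root.iterdir()):
    if d.is_dir():
        tag = "prompt + tools" if (d / "tools.py").exists() else "prompt only"
        lines.append(f"{d.name} [{tag}]")
print("\n".join(lines))
```

This separation is what makes skills dynamic: dropping a new directory with a prompt.md (and optionally a tools.py) into skills/ makes it available to the agent without any code changes.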
Ask, System Prompt, Agent
Agent implementation: the model, tools, and a system prompt that tells the LLM which skills it can load:
from langchain_aws import ChatBedrockConverse
from langchain_core.messages import SystemMessage
from langchain_core.tools import BaseTool, tool
from langchain.agents import create_agent
LLM = ChatBedrockConverse(model="us.amazon.nova-pro-v1:0", temperature=0.2)
SYSTEM_PROMPT = SystemMessage(content=(
"You are a versatile expert assistant with specialist skills.\n\n"
"SKILL ROUTING — call load_skill() with exactly one of these names:\n"
" • 'write_sql' → user wants to WRITE or GENERATE a SQL query\n"
" • 'review_legal_doc' → user wants to REVIEW a CONTRACT, CLAUSE, or LEGAL text\n"
" • 'code_review' → user wants to REVIEW SOURCE CODE (any language)\n"
" • call list_skills() → if unsure\n\n"
"IMPORTANT: tool names like 'detect_sql_risks', 'score_legal_risk' etc. are NOT skill names.\n"
"After load_skill() returns, use any registered tools to enrich your answer."
))
def ask(query: str) -> None:
print(f"\n{'═'*60}\nUSER: {query}\n{'─'*60}")
# fresh tool scope per call
session_tools: dict[str, BaseTool] = {}
load_skill = _make_load_skill(session_tools)
def build_agent():
return create_agent(
model=LLM,
tools=[list_skills, load_skill, *session_tools.values()],
system_prompt=SYSTEM_PROMPT,
)
result = build_agent().invoke({"messages": [{"role": "user", "content": query}]})
for msg in result["messages"]:
for tc in getattr(msg, "tool_calls", []):
args_str = ", ".join(f"{k}={repr(v)}" for k, v in tc["args"].items())
print(f" 🔧 {tc['name']}({args_str})")
if msg.type == "ai" and msg.content and not getattr(msg, "tool_calls", []):
print(f"\n{msg.content}")
print()
Call Agent with Different Prompts
We can now send 3 different prompts; the agent lets the LLM decide which skill to select and load:
if __name__ == "__main__":
# SQL skill
ask(
"Write a SQL query to find the top 5 customers by total revenue "
"in the last 90 days, with order count and average order value. "
"Tables: orders(id, customer_id, total, created_at), customers(id, name, email)."
)
# Legal skill
ask(
"Review this clause: 'The Vendor shall not be liable for any indirect, "
"incidental, or consequential damages. Total liability shall not exceed "
"fees paid in the last 30 days. Agreement auto-renews annually unless "
"cancelled with 90 days written notice.'"
)
# Code review skill
ask(
"Review this Python function:\n\n"
"def get_user(user_id):\n"
" conn = psycopg2.connect('postgresql://admin:password123@db:5432/prod')\n"
" cur = conn.cursor()\n"
" cur.execute(f\"SELECT * FROM users WHERE id = {user_id}\")\n"
" return cur.fetchone()"
)
All Code & Demo
GitHub Link: Project on GitHub
Run agent.py:
python agent.py
Demo
Conclusion
In this post, we covered:
- the differences between skills, tools, MCP tools, and rules,
- how to load and list skills.
If you found the tutorial interesting, I’d love to hear your thoughts in the blog post comments. Feel free to share your reactions or leave a comment. I truly value your input and engagement 😉
For other posts 👉 https://dev.to/omerberatsezer 🧐
References
- https://docs.langchain.com/oss/python/langchain/overview
- https://langfuse.com/
- https://aws.amazon.com/bedrock
- https://github.com/omerbsezer/Fast-LLM-Agent-MCP/
Your comments 🤔
- Which tools are you using to develop AI agents (e.g. AWS Strands, LangChain, etc.)? Please mention your experience and interests in the comments!
- What do you think about skills?
