The landscape of AI-assisted development has fundamentally shifted from passive autocomplete to active, agentic workflows. Tools like Claude Code (a powerful CLI agent) and Gemini Code Assist (a deeply integrated IDE agent) can read your file system, execute terminal commands and navigate your project context.
However, their true power is unlocked when you teach these agents your proprietary workflows. By utilizing Agent Skills (markdown-based instructions) and MCP Servers (active programmatic tools), you can transform these assistants into customized junior developers.
The Architectural Hierarchy of Agent Skills
Before writing code, we must understand how modern AI agents discover what they are allowed to do. The AI industry is converging on an open standard for defining skills via directory structures. When you launch Gemini Code Assist or Claude Code, the engine scans specific file paths to load capabilities into its context window.
There are three distinct tiers of skill discovery:
Workspace Skills (Project-Specific)
Locations: `.gemini/skills/`, `.claude/skills/` or the unified `.agents/skills/` alias within your project’s root directory.
These are isolated to the current repository. Use this tier for local database migration commands, project-specific deployment scripts or PR review guidelines unique to the team.
Always commit this folder to version control so your entire team shares the same AI workflows.
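A typical workspace might end up looking like this (the skill names are just examples):

```
my-project/
├── .agents/
│   └── skills/
│       ├── scaffold-component/
│       │   └── SKILL.md
│       └── migrate-db/
│           └── SKILL.md
├── src/
└── package.json
```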
User Skills (Global/Personal)
Locations: `~/.gemini/skills/`, `~/.claude/skills/` or `~/.agents/skills/` in your system’s Home directory.
These are your global, personal developer tools available across any project you open. Use this for your preferred Git commit formatting style, personal Docker cleanup scripts or customized syntax preferences.
Keep these purely local. They represent how you like to work, keeping shared repositories free of personal configuration.
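For example, a personal commit-style skill might live at `~/.agents/skills/git-commit-style/SKILL.md` (the name and rules here are illustrative, not a prescribed format):

```markdown
---
name: git-commit-style
description: "Formats Git commit messages in my personal Conventional Commits style."
---
# Personal Commit Style
1. Use Conventional Commits prefixes (`feat:`, `fix:`, `chore:`).
2. Keep the subject line under 72 characters.
3. Explain the "why" in the body, not the "what".
```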
Extension Skills (Bundled)
When you install third-party extensions (like a Jira or AWS plugin) into your IDE or CLI, they bundle their own skills.
These are generally read-only and managed by the extension provider, granting the AI immediate access to external APIs.
> **The Power of the `.agents/skills/` Alias:** Using the `.agents/skills/` directory is highly recommended. It acts as a universal standard. If you write a skill and place it in `.agents/skills/`, both Claude Code and Gemini Code Assist can theoretically discover and utilize it, making your custom developer tools perfectly portable across different AI ecosystems.
Building Skills
Let’s create a shared team skill that teaches AI exactly how to scaffold a new React component according to your company’s strict architecture guidelines.
Create the Directory Structure
In the root of your project terminal, create the folder using the universal alias:
```bash
mkdir -p .agents/skills/scaffold-component
touch .agents/skills/scaffold-component/SKILL.md
```
Write the Scaffolding Protocol Skill Definition
Open SKILL.md. This file acts as both the trigger and the execution instructions.
```markdown
---
name: scaffold-component
description: "Generates a new React component strictly following company architecture rules (Tailwind, TypeScript, Vitest)."
---
# Component Scaffolding Protocol
You have been asked to create a new UI component. You must strictly follow these local workspace rules:
## Rules of Execution
1. **Location:** All components must be created inside `src/components/`.
2. **Language:** Use TypeScript (`.tsx`).
3. **Styling:** You MUST use Tailwind CSS. Do not generate `.css` or `.scss` files.
4. **Testing:** For every component, you must generate an adjacent test file named `[ComponentName].test.tsx` using Vitest syntax.
5. **Exporting:** Always use named exports. Never use default exports.
## Execution Steps
1. Ask the user for the name of the component if they haven't provided it.
2. Generate the `.tsx` file.
3. Generate the `.test.tsx` file.
4. Run standard formatting on the newly created files.
```
Because this skill lives in the Workspace tier, when any developer on your team opens the AI assistant chat and asks, “Can you scaffold a user profile card?”, the AI reads this skill, adopts the rules and mimics your senior developer’s coding standards.
Write a Manually Invoked Skill: Docker Cleanup
Next, let’s build a personal User-tier skill that demonstrates two more features: dynamic context injection and manual invocation. Create `~/.agents/skills/clean-docker/SKILL.md`:
```markdown
---
name: clean-docker
description: Analyzes currently running Docker containers and safely stops/removes them.
disable-model-invocation: true
---
# Docker Cleanup Assistant
You are a DevOps assistant. The user wants to clean up their local Docker environment.
## Current Environment Context
Here is the raw output of the user's current Docker processes:
!`docker ps -a`
## Instructions
1. Analyze the output provided above.
2. Identify any containers that have a status of "Exited".
3. Identify any containers that look like temporary test databases or dangling instances.
4. Present a formatted list to the user summarizing what is currently running and what is stopped.
5. Ask the user for confirmation on which containers they would like to remove.
6. Once confirmed, use your bash execution tools to run `docker rm [CONTAINER_ID]`.
```
By setting `disable-model-invocation: true`, we ensure Claude/Gemini doesn’t randomly delete your containers. You must trigger this skill manually by typing `/clean-docker` in the Claude Code CLI. Claude reads the file, runs `docker ps -a` in the background, injects the output into the prompt and then logically guides you through the cleanup process.
Architectural Best Practices for AI Skills
As you build out your `.agents/skills/` directories, keep these principles in mind to keep your setup healthy and safe:
Principle of Least Privilege
When giving an agent shell access or database access via MCP, restrict the API tokens and database users you provide to the script. If the agent hallucinates a DROP TABLE command, your database user should lack the permissions to execute it.
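For instance, if you expose a database through an MCP server, hand it a dedicated read-only connection rather than admin credentials. A sketch of what that looks like in your agent settings (the server name, package and connection string below are illustrative):

```json
{
  "mcpServers": {
    "postgres-readonly": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://reporting_readonly@localhost:5432/app_db"
      ]
    }
  }
}
```

Even if the agent hallucinates a destructive query, `reporting_readonly` simply lacks the grants to execute it.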
Semantic Prompt Weighting
When writing the `description` in your YAML frontmatter, remember that the description effectively is your compiled code: the LLM relies on semantic matching to map a user’s intent to your tool description.
- Poor description: “Manages versions”
- Excellent description: “Increments the semantic version in `package.json`. Use this strictly when preparing a release or hotfix.”
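In frontmatter, the stronger description would look like this (the `bump-version` skill name is hypothetical):

```markdown
---
name: bump-version
description: "Increments the semantic version in package.json. Use this strictly when preparing a release or hotfix."
---
```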
Context Window Pruning
If your MCP servers return massive logs (e.g., fetching 10,000 lines of AWS CloudWatch logs), the AI will suffer from “Lost in the Middle” syndrome, forgetting its original instructions. Always write your tools to filter, summarize or paginate data before returning it to the SKILL.md context.
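One lightweight pattern is to cap output in the skill’s shell command itself. Below is a sketch: a hypothetical `prune_output` filter that keeps the first 50 lines and reports how many were dropped:

```shell
# Hypothetical helper: cap tool output so the model never receives a
# 10,000-line dump. Keeps the first 50 lines, then reports the truncation.
prune_output() {
  awk 'NR <= 50 { print } END { if (NR > 50) print "[" NR - 50 " lines truncated]" }'
}

# Example: pipe a long log through the filter before it reaches the prompt.
seq 1 60 | prune_output   # prints lines 1-50, then "[10 lines truncated]"
```

A skill’s dynamic command injection can pipe through a filter like this so only the pruned output lands in the context window.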
Integrating Third-Party MCP Servers and Cross-Skill Workflows
You don’t have to write every MCP server from scratch, however: the open-source community is rapidly building pre-made MCP servers that you can plug directly into your AI environment.
We are going to install the mattleads/telegramBotMcp server to give our agent the ability to send messages and then we will update our existing skills to trigger a Telegram notification the moment they finish their work.
Installing the Telegram Bot MCP Server
While the exact installation command depends on how the repository is structured (Node.js vs. Python), the configuration principle in your agent settings remains the same. You need to provide the execution command and inject your secret Telegram Bot Token as an environment variable.
Get the Source Code
Clone the repository to your machine and install its dependencies (assuming a standard Node.js/TypeScript MCP setup):
```bash
cd ~/path/to/your/tools
git clone https://github.com/mattleads/telegramBotMcp.git
cd telegramBotMcp
npm install
npm run build
```
Configure the Agent Settings
Now, we must tell Claude Code or Gemini CLI where this server is and give it your API credentials.
For Gemini Code Assist / Gemini CLI: Edit `~/.gemini/settings.json`
For Claude Desktop: Edit `claude_desktop_config.json` (on macOS it lives under `~/Library/Application Support/Claude/`); with the Claude Code CLI, you can also register servers via `claude mcp add`
Add the Telegram server configuration alongside your existing tools. Notice how we pass the secure tokens via the env object:
```json
{
  "mcpServers": {
    "telegram-notifier": {
      "command": "node",
      "args": [
        "/path/to/tools/telegramBotMcp/build/index.js"
      ],
      "env": {
        "TELEGRAM_BOT_TOKEN": "123456789:ABCDEF_Your_Bot_Token_Here"
      }
    }
  }
}
```
Once you restart your IDE or CLI, the agent will launch this MCP server and expose its tools (e.g. `send_telegram_message`) to the context window.
Cross-Skill Communication: The “Notification” Workflow
Now for the magic. How do we make one skill trigger another?
In the world of LLM agents, skills do not call other skills via hardcoded function pointers. Instead, the LLM orchestrates them. You create cross-skill communication by writing explicit instructions in your SKILL.md file, telling the agent to invoke the Telegram tool as its final execution step.
Let’s upgrade the Workspace Skill we built earlier (scaffold-component) to include a Telegram notification workflow.
Update the SKILL.md
Open your .agents/skills/scaffold-component/SKILL.md file and add a new “Completion Protocol” section.
```markdown
---
name: scaffold-component
description: Generates a new React component and notifies the team via Telegram when complete.
---
# Component Scaffolding Protocol
You are an expert frontend developer. Follow these instructions perfectly.
## Execution Steps
1. Ask the user for the name of the component if not provided.
2. Generate the `.tsx` file in `src/components/` using Tailwind CSS and named exports.
3. Generate the `.test.tsx` file using Vitest.
4. Run standard formatting on the files.
## Completion Protocol (Cross-Tool Notification)
Once all files are created and formatted, you must alert the team that the component is ready for integration.
1. Use the `send_telegram_message` tool provided by your MCP environment.
2. Format the message as follows:
   `🚀 Component Scaffolding Complete!`
   `Name: [Component Name]`
   `Files created: [List of files]`
   `Status: Ready for review.`
3. Send the message. Do not ask for the user's permission to send the notification; send it automatically as the final step of this workflow.
```
How the Execution Loop Works
When you type /scaffold-component UserProfile in your terminal or IDE, here is the exact chronological flow the agent executes:
- Read Rules: The agent reads the .agents/skills/scaffold-component/SKILL.md file.
- File System Tools: It uses its built-in Write_File tools to create UserProfile.tsx and UserProfile.test.tsx.
- Observation: It observes that the files were created successfully.
- Tool Transition: It reads the “Completion Protocol”. It searches its context window for a tool matching the description of sending Telegram messages.
- MCP Execution: It finds the send_telegram_message tool (injected via your settings.json) and outputs a JSON payload with the formatted text.
- Delivery: The telegramBotMcp local node script runs, hits the Telegram API and your phone buzzes with the update!
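Under the hood, step five is a standard MCP `tools/call` request. Conceptually, the payload the agent emits looks something like this (the exact argument names depend on the tool schema the server advertises):

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "send_telegram_message",
    "arguments": {
      "text": "🚀 Component Scaffolding Complete!\nName: UserProfile\nFiles created: UserProfile.tsx, UserProfile.test.tsx\nStatus: Ready for review."
    }
  }
}
```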
By combining Markdown rules (SKILL.md) with dynamic MCP servers (telegramBotMcp), you transform your agent from a simple code generator into a fully automated project manager.
Conclusion
We have covered an incredible amount of ground in this guide. By moving beyond standard chat interfaces, you have learned how to fundamentally rewire your development environment. You are no longer just writing code; you are engineering your own AI-powered junior developer.
Let’s recap the core architectural pillars you can now use to build your ultimate workflow:
- The Open Standard (.agents/skills/): By utilizing Workspace and User skill tiers, you create highly organized, portable Markdown workflows that work seamlessly across both Claude Code and Gemini Code Assist.
- Model Context Protocol (MCP): You have unlocked the ability to bridge the gap between deterministic local scripts and probabilistic AI, allowing your agents to query databases, read APIs and perform complex logic safely.
- Agentic Orchestration: By chaining tools together — such as generating a React component and autonomously firing off a Telegram notification — you have built a true, multi-step agentic loop.
The beauty of modern AI coding assistants is their extensibility. Your IDE and your terminal are now blank canvases. Whether you want to build a skill that automatically reviews your pull requests against company security standards, a tool that manages your local Docker containers or a workflow that updates Jira tickets when you commit code, you now have the exact architectural blueprint to build it.
I hope this definitive guide empowers you to start writing your own SKILL.md files and using MCP servers today!
Source Code: You can find the full implementation and follow the Telegram Bot MCP Server’s progress on GitHub: https://github.com/mattleads/telegramBotMcp
Let’s Connect!
If you found this helpful or have questions about the implementation, I’d love to hear from you. Let’s stay in touch and keep the conversation going across these platforms:
- LinkedIn: https://www.linkedin.com/in/matthew-mochalkin/
- X (Twitter): https://x.com/MattLeads
- Telegram: https://t.me/MattLeads
- GitHub: https://github.com/mattleads