The Problem Every Developer Knows
You build an app with OpenAI. It works great. Then:
- OpenAI raises prices 3×
- Anthropic releases a better model
- Your client wants to use their own provider
- You need a local model for GDPR compliance
What happens? You rewrite everything. Different SDK, different request format, different response parsing, different error handling. Weeks of work for what should be a simple switch.
The Fix: One Import Line
```python
# Today: OpenAI
from pulse_openai import OpenAIAdapter as AI
adapter = AI(api_key="sk-...")

# Tomorrow: Anthropic Claude
from pulse_anthropic import AnthropicAdapter as AI
adapter = AI(api_key="sk-ant-...")

# Everything below stays EXACTLY the same
from pulse_protocol import PulseMessage

msg = PulseMessage(
    action="ACT.ANALYZE.SENTIMENT",
    parameters={"text": "I love this product!"}
)
response = adapter.send(msg)
print(response.content["parameters"]["result"])
```
That's it. One line changes. The rest of your code doesn't know or care which AI is behind the curtain.
How It Works
PULSE adapters follow a simple pattern:
Your Code → PULSE Message → Adapter → Provider API → Adapter → PULSE Response
You write your logic in PULSE semantic actions. The adapter handles all the provider-specific translation:
| PULSE Action | OpenAI | Anthropic |
|---|---|---|
| ACT.QUERY.DATA | chat.completions → gpt-4o-mini | messages → claude-haiku |
| ACT.CREATE.TEXT | chat.completions → gpt-4o | messages → claude-sonnet |
| ACT.ANALYZE.SENTIMENT | + sentiment system prompt | + sentiment system prompt |
| ACT.TRANSFORM.TRANSLATE | + translation prompt | + translation prompt |
Same action, same parameters, same response format. Different AI underneath.
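To make the translation step concrete, here is a minimal, self-contained sketch of the adapter pattern. The class name, the `ACTION_MAP` table, and the stubbed provider reply are illustrative assumptions, not the real pulse-openai internals: a PULSE message goes in, gets shaped into a provider-style request, and the provider's reply is shaped back into a PULSE response.

```python
class SketchOpenAIAdapter:
    """Illustrative adapter: PULSE message -> provider request -> PULSE response."""

    # Hypothetical mapping from PULSE actions to (model, system prompt).
    ACTION_MAP = {
        "ACT.QUERY.DATA": ("gpt-4o-mini", None),
        "ACT.ANALYZE.SENTIMENT": ("gpt-4o-mini",
                                  "Classify the sentiment of the user text."),
    }

    def to_native(self, msg):
        """Translate a PULSE message into a chat.completions-style request dict."""
        model, system = self.ACTION_MAP[msg["action"]]
        messages = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": msg["parameters"]["text"]})
        return {"model": model, "messages": messages}

    def call_api(self, request):
        """Stubbed provider call; a real adapter would hit the OpenAI API here."""
        return {"choices": [{"message": {"content": "positive"}}]}

    def from_native(self, reply):
        """Translate the provider reply back into a PULSE response shape."""
        return {"parameters": {"result": reply["choices"][0]["message"]["content"]}}

    def send(self, msg):
        return self.from_native(self.call_api(self.to_native(msg)))


adapter = SketchOpenAIAdapter()
msg = {"action": "ACT.ANALYZE.SENTIMENT",
       "parameters": {"text": "I love this product!"}}
print(adapter.send(msg)["parameters"]["result"])  # stubbed reply: "positive"
```

Your code only ever touches the PULSE shapes on either end; everything between `to_native` and `from_native` is the adapter's problem.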
What's Available Right Now
Both packages are live on PyPI:
pip install pulse-openai # OpenAI adapter
pip install pulse-anthropic # Anthropic Claude adapter
Both support 6 actions out of the box:
1. ACT.QUERY.DATA — Ask questions, get answers
2. ACT.CREATE.TEXT — Generate text from instructions
3. ACT.ANALYZE.SENTIMENT — Analyze text sentiment
4. ACT.ANALYZE.PATTERN — Find patterns in data
5. ACT.TRANSFORM.TRANSLATE — Translate between languages
6. ACT.TRANSFORM.SUMMARIZE — Summarize long text
Real Code, Real Tests
This isn't a concept. Both adapters have full test suites:
- pulse-openai: 34 tests, all passing
- pulse-anthropic: 31 tests, all passing
- Provider switching test: Proves both adapters support identical actions
```python
def test_same_interface():
    """Both adapters have the same supported actions."""
    openai = OpenAIAdapter(api_key="sk-test")
    anthropic = AnthropicAdapter(api_key="sk-ant-test")
    assert set(openai.supported_actions) == set(anthropic.supported_actions)
```
This test is in the actual codebase. Not documentation. Not a promise. Running code.
Why This Matters for Your Business
Cost optimization: Route cheap queries to Haiku, complex ones to GPT-4o. Switch when prices change.
Risk mitigation: If one provider has an outage, switch to another. No code rewrite. No emergency deployment.
Compliance: Need to keep data in Europe? Swap to a local model adapter. Same code.
Future-proofing: The next great model could come from Google, Meta, Mistral, or a startup that doesn't exist yet. With PULSE adapters, you'll be ready.
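Because both adapters accept the same message, routing between them is just a function that picks an adapter. Here is a hedged sketch: the stub adapters and the length-based "complexity" heuristic are assumptions for illustration, not part of PULSE itself.

```python
class StubAdapter:
    """Stand-in for a real PULSE adapter; records which model 'handled' the call."""
    def __init__(self, name):
        self.name = name

    def send(self, msg):
        return {"handled_by": self.name}


def route(msg, cheap, strong, threshold=500):
    """Send short prompts to the cheap model, long ones to the strong one."""
    text = msg["parameters"].get("text", "")
    return (cheap if len(text) < threshold else strong).send(msg)


haiku = StubAdapter("claude-haiku")
gpt4o = StubAdapter("gpt-4o")

short_msg = {"action": "ACT.QUERY.DATA", "parameters": {"text": "2+2?"}}
long_msg = {"action": "ACT.QUERY.DATA", "parameters": {"text": "x" * 600}}

print(route(short_msg, haiku, gpt4o)["handled_by"])  # claude-haiku
print(route(long_msg, haiku, gpt4o)["handled_by"])   # gpt-4o
```

Swap the heuristic for a cost table or a health check and the same three lines also cover the outage-failover case.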
What's Next
We're building more adapters:
- pulse-local — Run local models (Ollama, llama.cpp)
- pulse-binance — Bridge to exchange APIs
- pulse-langchain — LangChain integration
Want to build an adapter for your favorite service? The base class is ready:
pip install pulse-protocol
Subclass PulseAdapter, implement three methods (to_native, call_api, from_native), and your service speaks PULSE.
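A sketch of what such a subclass could look like. Since pulse-protocol isn't installed here, a minimal stand-in base class is defined inline, and the uppercase "echo service" is purely hypothetical; the shape of the three methods is the point.

```python
class PulseAdapter:
    """Stand-in for the pulse-protocol base class (assumption, for illustration)."""
    def send(self, msg):
        return self.from_native(self.call_api(self.to_native(msg)))


class EchoAdapter(PulseAdapter):
    """Toy adapter for a hypothetical echo service."""
    supported_actions = ["ACT.QUERY.DATA"]

    def to_native(self, msg):
        # PULSE message -> the service's own request shape.
        return {"prompt": msg["parameters"]["text"]}

    def call_api(self, request):
        # Pretend API call; a real adapter would make a network request.
        return {"output": request["prompt"].upper()}

    def from_native(self, reply):
        # Service reply -> PULSE response shape.
        return {"parameters": {"result": reply["output"]}}


result = EchoAdapter().send(
    {"action": "ACT.QUERY.DATA", "parameters": {"text": "hello"}}
)
print(result["parameters"]["result"])  # HELLO
```

Implement those three methods against a real API and any caller written for the OpenAI or Anthropic adapters can use your service unchanged.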
GitHub: github.com/pulseprotocolorg-cyber
PULSE Protocol is open source (Apache 2.0). Free forever. Built for the AI ecosystem, not against it.
Try it:
pip install pulse-protocol