The Anthropic Python SDK makes it simple to integrate Claude into your applications. In this guide you'll go from zero to a working chatbot in under 10 minutes — covering installation, your first API call, streaming, multi-turn conversations, and error handling.
## Prerequisites
- Python 3.8+
- An Anthropic API key (console.anthropic.com)
- Basic Python knowledge
## Step 1: Install the SDK

```shell
pip install anthropic
```

That's the only dependency you need. The SDK includes everything: the client, streaming support, and type hints.
## Step 2: Set Your API Key

Store your key as an environment variable — never hardcode it in your source files:

```shell
export ANTHROPIC_API_KEY="sk-ant-..."
```

Or create a `.env` file in your project root:

```shell
ANTHROPIC_API_KEY=sk-ant-...
```

To load it automatically, install python-dotenv and call `load_dotenv()` at the top of your script:

```shell
pip install python-dotenv
```
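Under the hood, `load_dotenv()` just reads `KEY=VALUE` lines into `os.environ`. As a rough illustration only (the real library also handles quoting, multiline values, and interpolation), here is a simplified stdlib-only sketch of that behavior:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal stand-in for python-dotenv's load_dotenv(): read KEY=VALUE
    lines into os.environ without overriding variables already set."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and anything that isn't KEY=VALUE
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

In practice, prefer the real library: `from dotenv import load_dotenv` followed by `load_dotenv()` before you construct the client.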
## Step 3: Your First API Call

Create a file `main.py` and add:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain what an API is in 2 sentences."}
    ],
)

print(message.content[0].text)
```

Run it:

```shell
python main.py
```

You'll get a clean, concise response from Claude; `message.content[0].text` holds the text output.
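Note that `message.content` is a list of content blocks, and `content[0].text` reads the text of the first one. If you want to be robust to responses with more than one block, you can join every text block instead; sketched here with a hypothetical stand-in class rather than the SDK's own block type:

```python
from dataclasses import dataclass

@dataclass
class TextBlock:
    """Stand-in for the SDK's text content block, just to show its shape."""
    type: str
    text: str

def response_text(content: list) -> str:
    """Concatenate the text of every text block in a response's content list."""
    return "".join(block.text for block in content if block.type == "text")
```

With a real response you would call `response_text(message.content)`.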
## Step 4: Add a System Prompt

A system prompt sets the context and personality for Claude — it's the first thing Claude reads before any user message:

```python
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system="You are a senior Python developer. Answer concisely with code examples.",
    messages=[
        {"role": "user", "content": "How do I read a JSON file in Python?"}
    ],
)

print(message.content[0].text)
```
## Step 5: Streaming Responses

For a better user experience — especially with long outputs — use streaming so text appears word by word:

```python
import anthropic

client = anthropic.Anthropic()

with client.messages.stream(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a Python function to parse CSV files"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

print()  # newline at the end
```
## Step 6: Multi-Turn Conversations

Build a simple chatbot by keeping track of the message history:

```python
import anthropic

client = anthropic.Anthropic()
conversation_history = []

def chat(user_message: str) -> str:
    conversation_history.append({
        "role": "user",
        "content": user_message,
    })

    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        system="You are a helpful AI assistant.",
        messages=conversation_history,
    )

    assistant_message = response.content[0].text
    conversation_history.append({
        "role": "assistant",
        "content": assistant_message,
    })
    return assistant_message

# Example conversation
print(chat("What is Python?"))
print(chat("What are its main use cases?"))
print(chat("Which one is best for AI development?"))
```

Each call passes the full history, so Claude remembers what was said earlier in the conversation.
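Because the full history is resent on every call, long chats grow in cost and will eventually approach the model's context limit. One common mitigation — sketched here under the simplifying assumption that you drop the oldest turns rather than counting tokens or summarizing — is to cap the history before each request:

```python
MAX_MESSAGES = 20  # hypothetical cap: the 10 most recent user/assistant pairs

def trim_history(history: list) -> list:
    """Keep only the most recent messages, always starting on a 'user' turn."""
    trimmed = history[-MAX_MESSAGES:]
    # The Messages API expects the conversation to start with a user message,
    # so drop a leading assistant message if the cut landed on one.
    while trimmed and trimmed[0]["role"] == "assistant":
        trimmed = trimmed[1:]
    return trimmed
```

You would then pass `messages=trim_history(conversation_history)` instead of the full list.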
## Step 7: Error Handling

Always wrap API calls in try/except for production code:

```python
import anthropic

client = anthropic.Anthropic()

try:
    message = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(message.content[0].text)
except anthropic.APIConnectionError as e:
    print(f"Connection error: {e}")
except anthropic.RateLimitError as e:
    print(f"Rate limit hit — slow down: {e}")
except anthropic.APIStatusError as e:
    print(f"API error {e.status_code}: {e.message}")
```
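A rate-limit error usually means you should back off and retry rather than give up. One possible pattern is exponential backoff with jitter; this is a generic sketch, not part of the SDK, and in real code you would pass `anthropic.RateLimitError` as `retry_on`:

```python
import random
import time

def with_retries(call, retry_on=Exception, max_attempts=5, base_delay=1.0):
    """Invoke call() and retry on `retry_on`, sleeping between attempts with
    exponential backoff plus jitter; re-raise after the final attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retry_on:
            if attempt == max_attempts - 1:
                raise
            # 1s, 2s, 4s, ... capped at 30s, scaled by random jitter in [0.5, 1.0)
            delay = min(30.0, base_delay * 2 ** attempt) * (0.5 + random.random() / 2)
            time.sleep(delay)
```

Usage might look like `with_retries(lambda: client.messages.create(...), retry_on=anthropic.RateLimitError)`.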
## Available Models

Choose the right Claude model for your use case:

- `claude-opus-4-7` — most capable, best for complex reasoning and analysis
- `claude-sonnet-4-6` — best balance of speed and intelligence (recommended for most apps)
- `claude-haiku-4-5-20251001` — fastest and most affordable, great for simple tasks
## Key Parameters

The most important parameters in `messages.create()`:

```python
message = client.messages.create(
    model="claude-sonnet-4-6",  # which Claude model to use
    max_tokens=1024,            # maximum tokens in the response
    temperature=0.7,            # 0 = deterministic, 1 = creative
    system="...",               # system prompt (optional)
    messages=[...],             # conversation history
)
```

💡 Tip: Use `temperature=0` for code generation and factual tasks. Use higher values (0.7–1.0) for creative writing.
## Complete Example: Simple CLI Chatbot

```python
import anthropic

def main():
    client = anthropic.Anthropic()
    history = []
    print("Claude Chatbot — type 'quit' to exit\n")

    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in ("quit", "exit"):
            break
        if not user_input:
            continue

        history.append({"role": "user", "content": user_input})

        with client.messages.stream(
            model="claude-sonnet-4-6",
            max_tokens=2048,
            system="You are a helpful assistant.",
            messages=history,
        ) as stream:
            print("Claude: ", end="", flush=True)
            response_text = ""
            for text in stream.text_stream:
                print(text, end="", flush=True)
                response_text += text
            print()

        history.append({"role": "assistant", "content": response_text})

if __name__ == "__main__":
    main()
```
## What's Next?

Now that you have the basics working, here's what to explore next:

- **Tool use** — let Claude call functions and APIs in your code
- **Vision** — send images to Claude for analysis
- **Prompt caching** — reduce costs on repeated context (up to 90% savings)
- **Batch API** — process thousands of requests asynchronously at a 50% discount
💡 Resources: Anthropic Docs | Python SDK on GitHub
Originally published at kalyna.pro