DEV Community

diwushennian4955

Complete AI API Tutorial 2026: Build Your First AI App in 10 Minutes (Python + JS)

This tutorial takes you from zero to a working AI-powered app in 10 minutes. We'll use NexaAPI, an OpenAI-compatible gateway that gives you access to GPT-4.1, Claude Sonnet 4.6, Gemini 3.1 Pro, and more at roughly one-fifth of the official price.

📧 Get your API key: frequency404@villaastro.com

🌐 Platform: https://ai.lmzh.top

Why NexaAPI?

| Model | Official Price | NexaAPI Price |
| --- | --- | --- |
| Claude Sonnet 4.6 | $3.00/M input | ~$0.60/M input |
| GPT-4.1 | $2.00/M input | ~$0.40/M input |
| Gemini 3.1 Pro | $2.00/M input | ~$0.40/M input |

Same models. Same quality. 80% cheaper. One line of code change.

Part 1: Python Tutorial

Install

pip install openai

Basic Chat Completion

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_NEXAAPI_KEY",
    base_url="https://ai.lmzh.top/v1"  # ← only change from OpenAI
)

response = client.chat.completions.create(
    model="gpt-4.1",  # or "claude-sonnet-4-6", "gemini-3.1-pro"
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

print(response.choices[0].message.content)
# Output: "The capital of France is Paris."

Streaming Responses

stream = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Write a haiku about Python"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)

Multi-Turn Chatbot

conversation = [
    {"role": "system", "content": "You are a Python tutor. Be concise."}
]

while True:
    user_input = input("You: ")
    if user_input.lower() == "quit":
        break

    conversation.append({"role": "user", "content": user_input})

    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=conversation
    )

    assistant_msg = response.choices[0].message.content
    conversation.append({"role": "assistant", "content": assistant_msg})
    print(f"AI: {assistant_msg}\n")
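One caveat with the loop above: `conversation` grows with every turn, so a long session will eventually exceed the model's context window (and drive up per-call cost). A minimal trimming helper, sketched below with my own function name and cutoff, keeps the system prompt plus only the most recent messages:

```python
def trim_history(conversation, max_messages=10):
    """Keep system messages plus the last `max_messages` non-system turns."""
    system = [m for m in conversation if m["role"] == "system"]
    rest = [m for m in conversation if m["role"] != "system"]
    return system + rest[-max_messages:]
```

In the chatbot loop, pass `messages=trim_history(conversation)` to `create()` instead of the full list; the full history still lives in `conversation` if you want to log it.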

Image Generation

response = client.images.generate(
    model="dall-e-3",
    prompt="A futuristic city skyline at sunset, digital art",
    size="1024x1024",
    n=1
)

print(f"Image URL: {response.data[0].url}")
# Cost: ~$0.003/image via NexaAPI

Async for Production (Batch Processing)

import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    api_key="YOUR_NEXAAPI_KEY",
    base_url="https://ai.lmzh.top/v1"
)

async def process_batch(prompts):
    tasks = [
        client.chat.completions.create(
            model="gemini-2.5-flash",  # Cheapest for bulk
            messages=[{"role": "user", "content": p}]
        )
        for p in prompts
    ]
    responses = await asyncio.gather(*tasks)
    return [r.choices[0].message.content for r in responses]

# Process 10 prompts concurrently
prompts = [f"Summarize topic {i}" for i in range(10)]
results = asyncio.run(process_batch(prompts))
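Note that `asyncio.gather` above fires all ten requests at once; at larger batch sizes that's an easy way to hit rate limits. A small semaphore wrapper caps how many requests are in flight. The helper below is my own sketch, shown with a stand-in coroutine rather than a real API call so it runs anywhere:

```python
import asyncio

async def bounded_gather(factories, limit=3):
    """Run coroutine factories with at most `limit` in flight at once."""
    sem = asyncio.Semaphore(limit)

    async def run(factory):
        async with sem:
            return await factory()

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(run(f) for f in factories))

# Stand-in for client.chat.completions.create(...)
async def fake_call(i):
    await asyncio.sleep(0)
    return f"result {i}"

results = asyncio.run(bounded_gather([lambda i=i: fake_call(i) for i in range(5)]))
```

To use it with the batch example, replace `fake_call` with a lambda wrapping the real `client.chat.completions.create(...)` call for each prompt.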

Part 2: JavaScript Tutorial

Install

npm install openai

Basic Chat

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_NEXAAPI_KEY',
  baseURL: 'https://ai.lmzh.top/v1'
});

const response = await client.chat.completions.create({
  model: 'gpt-4.1',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ]
});

console.log(response.choices[0].message.content);

Streaming

const stream = await client.chat.completions.create({
  model: 'claude-sonnet-4-6',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true
});

for await (const chunk of stream) {
  const text = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(text);
}

Express.js API Server

import express from 'express';
import OpenAI from 'openai';

const app = express();
app.use(express.json());

const client = new OpenAI({
  apiKey: process.env.NEXAAPI_KEY,
  baseURL: 'https://ai.lmzh.top/v1'
});

app.post('/api/chat', async (req, res) => {
  const { message, model = 'gpt-4.1' } = req.body;

  const response = await client.chat.completions.create({
    model,
    messages: [{ role: 'user', content: message }]
  });

  res.json({ response: response.choices[0].message.content });
});

app.listen(3000);

Part 3: Choose the Right Model

| Task | Model | Cost (NexaAPI) |
| --- | --- | --- |
| Customer support | Claude Haiku 4.5 | ~$40 / 10M calls |
| Code generation | Claude Sonnet 4.6 | ~$180 / 5M calls |
| Content writing | GPT-4.1 | ~$100 / 5M calls |
| Bulk processing | Gemini 2.5 Flash | ~$8 / 50M calls |
| Image generation | FLUX via NexaAPI | ~$0.003/image |
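To sanity-check spend before committing to a model, you can fold the per-million input prices from the pricing table earlier in this post into a tiny estimator. The function and price dictionary below are my own illustration; output-token prices aren't listed in this post, so only the input side is computed:

```python
# Approximate NexaAPI input prices in USD per million tokens,
# taken from the pricing table earlier in this post.
INPUT_PRICE_PER_M = {
    "gpt-4.1": 0.40,
    "claude-sonnet-4-6": 0.60,
    "gemini-3.1-pro": 0.40,
}

def estimate_input_cost(model, input_tokens):
    """Approximate input-side cost in USD for a single call."""
    return INPUT_PRICE_PER_M[model] * input_tokens / 1_000_000

# e.g. a 2,000-token prompt on gpt-4.1 costs about $0.0008
print(f"${estimate_input_cost('gpt-4.1', 2_000):.4f}")
```

For real usage numbers, read `response.usage.prompt_tokens` off each API response rather than guessing token counts.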

Part 4: Error Handling

from openai import OpenAI, RateLimitError
import time

client = OpenAI(
    api_key="YOUR_NEXAAPI_KEY",
    base_url="https://ai.lmzh.top/v1"
)

def robust_chat(message, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4.1",
                messages=[{"role": "user", "content": message}],
                timeout=30.0
            )
            return response.choices[0].message.content

        except RateLimitError:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # Exponential backoff
            else:
                raise
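One refinement worth knowing: the fixed `2 ** attempt` backoff above can make many clients retry in lockstep after a shared rate-limit event. The standard fix is "full jitter", drawing each delay uniformly from the backoff window. A sketch (the helper name and defaults are mine):

```python
import random

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Full-jitter backoff: uniform delay in [0, min(cap, base * 2**attempt)] seconds."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

In `robust_chat`, replace `time.sleep(2 ** attempt)` with `time.sleep(backoff_delay(attempt))`.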

Quick Start Checklist

- [ ] Email frequency404@villaastro.com for your API key
- [ ] `pip install openai` or `npm install openai`
- [ ] Set `base_url="https://ai.lmzh.top/v1"`
- [ ] Test with a "Hello, world" prompt
- [ ] Choose the right model for your use case

FAQ

Q: Do I need to change my existing OpenAI code?

Only base_url and api_key. Everything else stays the same.

Q: Does it work with LangChain/LlamaIndex?

Yes. Any OpenAI-compatible framework works with NexaAPI.

Q: Is there a free trial?

Email frequency404@villaastro.com to discuss trial credits.


📧 Get API Access: frequency404@villaastro.com

🌐 Platform: https://ai.lmzh.top

💡 1/5 of official price | Pay as you go | No subscription

Also on: RapidAPI | PyPI | npm
