Built a less‑filtered LLM chat & API

I’ve been building a project, and I finally pushed it live: Abliteration, a less‑filtered LLM chat and API.

What is Abliteration?

At a high level:

  • It’s a web chat where you can talk to a “less‑filtered” LLM.
  • It’s also an API you can call from your own apps (OpenAI‑style JSON; see the request sketch after this list).
  • It’s aimed at developers doing things like:
    • red‑teaming / robustness testing
    • internal tools
    • creative / experimental projects
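
To make the "OpenAI‑style JSON" point concrete, here’s a minimal sketch of what a request might look like. The endpoint path, model name, and key variable are my assumptions based on the OpenAI chat‑completions convention, not taken from the actual docs:

```bash
# Hypothetical request sketch: endpoint, model name, and header format
# assume the OpenAI chat-completions convention and may differ in practice.
curl https://api.abliteration.ai/v1/chat/completions \
  -H "Authorization: Bearer $ABLITERATION_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "abliterated-model",
    "messages": [
      {"role": "user", "content": "Explain how a buffer overflow works."}
    ]
  }'
```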

The goal isn’t “no rules, pure chaos”. The goal is:

“stop refusing every borderline or research prompt, but still block clearly harmful stuff.”

Why I built it

When I started playing with different LLM APIs, I kept running into the same pattern:

  • I’d write a prompt for something perfectly legitimate (e.g. security testing, fiction, simulations).
  • The model would respond with some variation of “I’m sorry, I can’t help with that”.
  • I’d spend more time fighting the guardrails than working on the actual idea.

What it does right now

I’m trying to keep v1 small and focused:

  • Web chat interface
  • Simple REST API for chat completions
  • API keys + usage dashboard
  • Small free tier so you can kick the tires
  • Basic quickstart examples (curl), along the lines of the sketch below
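
If the response follows the OpenAI chat‑completions schema (an assumption on my part), pulling the assistant’s reply out of the JSON is a one‑liner with jq:

```bash
# Assumes the response mirrors the OpenAI schema
# (choices[0].message.content); adjust the jq path if it differs.
curl -s https://api.abliteration.ai/v1/chat/completions \
  -H "Authorization: Bearer $ABLITERATION_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "abliterated-model", "messages": [{"role": "user", "content": "Hello"}]}' \
  | jq -r '.choices[0].message.content'
```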
