Connecting LINE Bot to Home Lab Without cloudflared — WebSocket Relay Architecture
When running a LINE Bot in a local environment, the first wall you hit is receiving webhooks from the outside world. LINE sends HTTPS POST requests from their servers. Your local machine has no global IP.
The usual answers are ngrok or Cloudflare Tunnel (cloudflared). But both have friction. ngrok changes URLs on every restart with the free tier. cloudflared needs a persistent daemon and non-trivial setup.
At TechsFree, we took a different approach: a WebSocket relay server in the middle.
Architecture
LINE → HTTPS POST → Relay Server (linebot.techsfree.com)
↓ WebSocket
Local Node (line_bridge.py)
↓ HTTP POST
OpenClaw chatCompletions API (localhost:18789)
↓
LINE Reply API → User
The relay server sits on the cloud side, always accepting HTTPS. The local line_bridge.py keeps a persistent outbound WebSocket connection to it. Since the connection is outbound (inside → outside), it passes through firewalls without any port forwarding. When a LINE webhook arrives, the relay forwards it over the WebSocket.
Why WebSocket?
HTTP polling adds latency. Server-Sent Events are unidirectional. WebSocket gives you bidirectional, low-latency, persistent connection — ideal for a relay pattern.
Implementation Highlights
1. Auto-Reconnect with Exponential Backoff
Networks always fail. Home routers reboot. Connections drop on sleep/wake. The bridge must recover on its own.
reconnect_delay = RECONNECT_DELAY # 5 seconds
MAX_RECONNECT_DELAY = 60
while True:
try:
async with websockets.connect(relay_url, ping_interval=20, ping_timeout=10) as ws:
reconnect_delay = RECONNECT_DELAY # reset on success
async for raw_message in ws:
...
except ConnectionClosed as e:
logger.warning(f"Connection closed: {e}")
except Exception as e:
logger.error(f"Connection error: {e}")
await asyncio.sleep(reconnect_delay)
reconnect_delay = min(reconnect_delay * 2, MAX_RECONNECT_DELAY)
The key detail: reset the delay on success. Only grow the backoff on errors. Cap at 60 seconds — no infinite waiting.
2. LINE Signature Verification
Every LINE webhook carries an x-line-signature header. Skip HMAC-SHA256 verification and you'll process forged requests.
def verify_signature(body: str, signature: str, channel_secret: str) -> bool:
hash_val = hmac.new(
channel_secret.encode("utf-8"),
body.encode("utf-8"),
hashlib.sha256
).digest()
expected = base64.b64encode(hash_val).decode("utf-8")
return hmac.compare_digest(signature, expected)
Using hmac.compare_digest instead of == prevents timing attacks — string comparison time can leak whether characters match.
3. Config Reload on Every Webhook
The bridge reloads its config file on every incoming webhook:
elif msg_type == "webhook":
current_config = load_config() # reload every time
asyncio.create_task(handle_webhook(msg, current_config))
This means LINE tokens and channel_secret can be swapped out without restarting the daemon. In our miniPC distribution scenario, we ship a unit to a customer, they fill in their credentials, and the bridge picks them up immediately — no restart needed.
4. OpenClaw chatCompletions API Integration
When a LINE message arrives, the bridge calls the local OpenClaw API:
async def call_openclaw(message: str, openclaw_url: str, gateway_token: str = "") -> str:
payload = {
"model": "gpt-4",
"messages": [{"role": "user", "content": message}]
}
headers = {"Content-Type": "application/json"}
if gateway_token:
headers["Authorization"] = f"Bearer {gateway_token}"
async with httpx.AsyncClient(timeout=60) as client:
resp = await client.post(openclaw_url, json=payload, headers=headers)
resp.raise_for_status()
return resp.json()["choices"][0]["message"]["content"]
It uses the OpenAI-compatible chatCompletions format, so you can point it at Ollama or LM Studio just by changing the URL.
Designed for Distribution
line_bridge.py is built to ship inside the miniPCs TechsFree sells. Each unit gets a unique relay_token; the relay server uses it to route webhooks to the right local bridge.
{
"relay_token": "gmk-xxxxxxxx",
"channel_secret": "...",
"channel_access_token": "...",
"relay_url": "wss://linebot.techsfree.com/gmk/{token}/ws",
"openclaw_url": "http://localhost:18789/v1/chat/completions"
}
One JSON file per unit. A GUI fills in the fields — that's the entire setup for end users.
Comparison
| ngrok | cloudflared | WebSocket Relay | |
|---|---|---|---|
| Fixed URL | Not on free tier | Yes | Yes (relay server) |
| Daemon setup | Required | Required |
line_bridge.py only |
| Offline behavior | URL invalidated | Tunnel drops | Auto-reconnects |
| Custom routing | No | No | Full control |
| Multi-unit management | Hard | Hard | Token-based |
cloudflared is powerful, but "dependency on an external service" was a concern. Running your own relay means authentication, logging, and routing are all under your control. For scenarios like managing many local environments — home labs or distributed miniPC products — the self-hosted relay approach is a clean fit.
Top comments (0)