Yuto Takashi

Building Auth Validation: 5 Lessons on Making Error Messages Actually Helpful

Why you should care

You're building a tool that connects to multiple external services. GitHub, some AI agents, maybe a few APIs. Everything works fine... until it doesn't.

A user runs a job. It fails. They check the logs. Scroll through output. 30 minutes later: "Oh, the auth token expired."

Sound familiar?

I built an auth validation feature for DevLoop Runner (a tool that automates PRs using AI agents). Here's what I learned about making validation actually useful.

Lesson 1: Hide what users don't need to see

First implementation? JWT tokens, email addresses, organization names - all printed in plain text in the validation output.

Bad idea.

```typescript
// If the value is inline JSON (a credential blob) rather than a file path, mask it.
const isJsonString = authJsonPath.startsWith('{') || authJsonPath.startsWith('[');
const displayValue = isJsonString
  ? '[JSON content - masked for security]'
  : authJsonPath;
```

Users need to know: "Is it configured?" and "Does it work?"

They don't need to see the actual credentials. Mask them.
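
If a value should stay identifiable without being exposed, partial masking works too. Here's a minimal sketch; `maskSecret` is a hypothetical helper, not something from DevLoop Runner:

```typescript
// Hypothetical helper: reveal just enough of a secret to identify it.
function maskSecret(value: string, visible = 4): string {
  // Too short to safely reveal any part of it.
  if (value.length <= visible * 2) return '*'.repeat(value.length);
  return `${value.slice(0, visible)}${'*'.repeat(8)}${value.slice(-visible)}`;
}

// maskSecret('ghp_abcdefghijklmnop') -> 'ghp_********mnop'
```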

Lesson 2: Validate what actually gets used

The validation command was checking an environment variable. The actual execution command was using a file.

Validation passes. Execution fails. Users confused.

Fixed it by checking the same file path the execution command uses:

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

const homeDir = process.env.HOME || os.homedir();
const authFilePath = path.join(homeDir, '.codex', 'auth.json');

if (fs.existsSync(authFilePath)) {
  // Validate the file, not the env var
}
```

Validate the environment as it actually exists, not as you think it should.
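
One way to enforce this is to define the path once and have both the validator and the executor import it. A sketch, assuming a shared module (the names here are illustrative, not DevLoop Runner's actual API):

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Single source of truth: validation and execution both import this constant.
export const CODEX_AUTH_PATH = path.join(os.homedir(), '.codex', 'auth.json');

export function validateCodexAuth(): { ok: boolean; message: string } {
  if (!fs.existsSync(CODEX_AUTH_PATH)) {
    return { ok: false, message: `Missing auth file: ${CODEX_AUTH_PATH}` };
  }
  try {
    // Read the exact file the executor will read, and make sure it parses.
    JSON.parse(fs.readFileSync(CODEX_AUTH_PATH, 'utf-8'));
    return { ok: true, message: 'Auth file exists and is valid JSON' };
  } catch {
    return { ok: false, message: `Auth file is not valid JSON: ${CODEX_AUTH_PATH}` };
  }
}
```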

Lesson 3: "Failed" doesn't always mean failed

OpenAI and Anthropic API keys are optional in my tool. They're for a follow-up feature. Main functionality works fine without them.

But the first version showed `status: failed` when they weren't configured.

That's misleading. Changed it to `status: skipped`:

```typescript
if (isBlank(apiKey)) {
  checks.push({
    name: 'OPENAI_API_KEY',
    status: 'skipped',
    message: 'Not configured (optional for follow-up issue generation)',
  });
  return { status: 'passed', checks };
}
```

Missing required config = failed.
Missing optional config = skipped.

The distinction matters.
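
Encoding the distinction in the type system helps keep it from regressing. A sketch of what the check result shape might look like, inferred from the snippets above (the exact types in DevLoop Runner may differ):

```typescript
type CheckStatus = 'passed' | 'failed' | 'skipped';

interface CheckResult {
  name: string;
  status: CheckStatus;
  message: string;
}

// Only 'failed' blocks the run; 'skipped' is purely informational.
function overallStatus(checks: CheckResult[]): 'passed' | 'failed' {
  return checks.some((c) => c.status === 'failed') ? 'failed' : 'passed';
}
```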

Lesson 4: Don't ask for more than you need

GitHub token validation was checking for three scopes: `repo`, `workflow`, `read:org`.

I checked the codebase. We never use `read:org`, and `workflow` only matters when the agent edits GitHub Actions workflow files.

So I reduced the required list to just `repo`:

```typescript
const REQUIRED_GITHUB_SCOPES = ['repo'];
```

When validation is too strict, users get error messages for things that would actually work fine. That's frustrating.
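
For classic GitHub tokens, the granted scopes come back in the `x-oauth-scopes` response header on any authenticated API call, so the check itself can stay small. A sketch (assumed implementation; note that fine-grained tokens don't report scopes this way):

```typescript
const REQUIRED_GITHUB_SCOPES = ['repo'];

// Returns the missing scopes; an empty array means the token is sufficient.
async function findMissingScopes(token: string): Promise<string[]> {
  const res = await fetch('https://api.github.com/user', {
    headers: { Authorization: `Bearer ${token}` },
  });
  // Classic tokens list their granted scopes in this header.
  const granted = (res.headers.get('x-oauth-scopes') ?? '')
    .split(',')
    .map((s) => s.trim())
    .filter(Boolean);
  return REQUIRED_GITHUB_SCOPES.filter((s) => !granted.includes(s));
}
```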

Lesson 5: Make your validation tool debuggable

The connectivity check was timing out in Jenkins. But no logs. Couldn't figure out why.

Added verbose logging and increased the timeout:

```typescript
await codexClient.executeTask({
  prompt: 'ping',
  maxTurns: 1,
  verbose: true, // Now we can see what's happening
});

// Inside the Promise wrapper around the task:
// increased the timeout from 10s to 30s for Docker environments.
setTimeout(() => reject(new Error('Timeout after 30 seconds')), 30000);
```

If your validation tool is hard to debug, you've just created another problem.
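
A reusable way to get both properties is to race the check against an explicit timeout, so a hang turns into an error with a message. A minimal sketch; `withTimeout` is a hypothetical helper:

```typescript
// Hypothetical helper: turn a silent hang into a loud, named failure.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`Timeout after ${ms / 1000} seconds`)), ms),
    ),
  ]);
}

// Usage with the connectivity check from above:
// await withTimeout(
//   codexClient.executeTask({ prompt: 'ping', maxTurns: 1, verbose: true }),
//   30_000,
// );
```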

The pattern

Building validation isn't just about technical correctness. It's about the message you're sending to users:

  • Mask what they don't need to see
  • Check what actually gets executed
  • Distinguish between failed and skipped
  • Don't over-require permissions
  • Make failures debuggable

Validation results are communication. They should help users understand what to do next, not confuse them with technically correct but practically misleading messages.


More thoughts on building tools and making decisions at https://tielec.blog/
