Manoj Mishra
The Human Side – Workflow & Culture Mistakes

Introduction

AI tools don’t replace engineering judgment—they amplify it. But when misused, they can erode learning, hinder collaboration, and introduce subtle workflow issues. This final post covers five human‑centric mistakes and how to keep your team’s culture healthy while leveraging AI.


Mistake 1: Over‑Trusting AI Without Understanding Code

Description: Developers accept AI-generated code without understanding how it works, creating maintenance debt.

Realistic Scenario: A senior developer leaves the team; the remaining developers can't maintain the complex AI-generated code they never wrote themselves.

Wrong Prompt:

```
Implement complex event sourcing system
```

Developer copies code without understanding.

⚠️ Why it is wrong: The team becomes reliant on AI for maintenance and can't debug or extend the system.

Better Prompt:

```
Help me learn how to implement event sourcing by:

1. Explaining core concepts first
2. Generating a simple example with comments explaining each part
3. Walking through how to test event-sourced systems
4. Providing references to learn more

I will write the actual implementation myself based on understanding, using AI to review and suggest improvements.

Current understanding: I've read about event sourcing but never implemented it. Focus on practical patterns.
```

💡 What changed: AI used as learning tool, not code generator; team retains understanding.
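To make the learning concrete, here is a minimal, hypothetical sketch of the event sourcing idea from the prompt above: state is never stored directly; every change is appended to an event log, and current state is rebuilt by replaying that log. All names (`EventStore`, `apply`, `replay`) are illustrative, not from any library.

```python
class EventStore:
    """Append-only log of (event_type, amount) tuples."""
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

def apply(balance, event):
    # Each event type is a pure state transition: no in-place mutation.
    kind, amount = event
    if kind == "deposited":
        return balance + amount
    if kind == "withdrawn":
        return balance - amount
    raise ValueError(f"unknown event type: {kind}")

def replay(store, initial=0):
    # Current state is a fold of all events over the initial state.
    balance = initial
    for event in store.events:
        balance = apply(balance, event)
    return balance

store = EventStore()
store.append(("deposited", 100))
store.append(("withdrawn", 30))
print(replay(store))  # 70
```

A developer who writes this small version by hand understands the replay mechanism well enough to debug a bigger one later.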


Mistake 2: Using AI as a Crutch for Learning

Description: Junior developers use AI to generate code instead of learning fundamentals, stunting their growth.

Realistic Scenario: A junior developer generates all code via AI and can't solve problems without AI assistance.

Wrong Prompt:

```
Write entire REST API for me
```

⚠️ Why it is wrong: Developer doesn't learn design patterns, error handling, or best practices.

Better Prompt:

```
Guide me through building a REST API step by step.
I'll write the code, and you can review and suggest improvements.

Current learning goals:
- Understand REST principles
- Learn proper error handling
- Practice writing tests

Step 1: I'll create a simple endpoint. Please review my code.
[developer's code]

Step 2: Provide feedback and the next learning topic.
```

💡 What changed: Developer actively learning, AI as mentor not replacement.
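A hypothetical "Step 1" a learner might bring for review: a single GET endpoint written as a plain WSGI callable (stdlib only), with explicit status codes and a proper 404 instead of a crash. It's small enough to reason about line by line, which is the point of the exercise.

```python
import json

def app(environ, start_response):
    """Minimal WSGI application: one health endpoint, JSON errors elsewhere."""
    path = environ.get("PATH_INFO", "/")
    method = environ.get("REQUEST_METHOD", "GET")

    if path == "/health" and method == "GET":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]

    # Anything else: a deliberate 404 with a JSON error body.
    body = json.dumps({"error": "not found"}).encode()
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [body]

# Exercising the endpoint directly, without starting a server:
captured = {}
def start_response(status, headers):
    captured["status"] = status

result = b"".join(app({"PATH_INFO": "/health", "REQUEST_METHOD": "GET"}, start_response))
print(captured["status"], result.decode())
```

Because it's a plain callable, the learner can test it by calling it directly, before ever touching a web framework.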


Mistake 3: No Pair Review of AI‑Generated Code

Description: One developer uses AI and merges code without peer review, missing subtle bugs.

Realistic Scenario: A developer uses AI to generate a SQL migration containing a subtle bug that corrupts data; no review catches it.

Wrong Prompt:

```
Write data migration script
```

Developer merges without review.

⚠️ Why it is wrong: No second set of eyes; critical bugs reach production.

Better Prompt:

```
Write a data migration script that I'll submit for code review.

Requirements for review-ready code:
- Include unit tests with edge cases
- Add a rollback script
- Document assumptions
- Add logging and metrics
- Tested in staging with production-like data

After generating, I'll open a PR with:
- Link to the AI conversation
- Explanation of the approach
- Test results
- Review request to a team member with DB expertise
```

💡 What changed: AI-generated code goes through same review process as human-written code.
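A hypothetical sketch of what "review-ready" looks like in practice: the forward migration runs in a transaction, ships with a rollback, and logs row counts so a reviewer can sanity-check the change. The schema and column names here are invented for illustration (SQLite used so the example is self-contained).

```python
import sqlite3
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("migration")

def migrate_up(conn):
    # Transaction: the ALTER and backfill succeed or fail together.
    with conn:
        conn.execute("ALTER TABLE users ADD COLUMN full_name TEXT")
        conn.execute("UPDATE users SET full_name = first || ' ' || last")
        n = conn.execute(
            "SELECT COUNT(*) FROM users WHERE full_name IS NOT NULL"
        ).fetchone()[0]
        log.info("backfilled full_name for %d rows", n)

def migrate_down(conn):
    # Rollback script shipped alongside the migration.
    # (DROP COLUMN requires SQLite 3.35+.)
    with conn:
        conn.execute("ALTER TABLE users DROP COLUMN full_name")
        log.info("rolled back full_name column")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (first TEXT, last TEXT)")
conn.execute("INSERT INTO users VALUES ('Ada', 'Lovelace')")
migrate_up(conn)
print(conn.execute("SELECT full_name FROM users").fetchone()[0])  # Ada Lovelace
```

The logged row count gives the reviewer something concrete to compare against staging before approving.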


Mistake 4: AI‑Assisted Commit Messages That Hide Intent

Description: Using AI to generate generic commit messages that hide the actual intent of a change.

Realistic Scenario: A complex refactor lands with the AI-generated commit message "Update code", making the git history useless.

Wrong Prompt:

```
Generate commit message
```

⚠️ Why it is wrong: Generic messages like "Fix bug" don't explain why change was made or what problem it solves.

Better Prompt:

```
Generate a commit message based on these changes:

Changes:
- Added retry logic for payment gateway calls
- Added exponential backoff with jitter
- Added circuit breaker after 3 failures
- Added metrics for retry attempts

The commit message should follow Conventional Commits:
type(scope): description

Body explaining:
- Why the change was needed (payment gateway timeouts in prod)
- What was changed
- Impact (improved reliability, no breaking changes)

Example output:

feat(payment): add retry and circuit breaker for gateway calls

Payment gateway timeouts increased to 15% during peak hours.
Added retry with exponential backoff (max 3 attempts) and
circuit breaker to prevent cascading failures.

Metrics: new retry_attempts_total counter for observability.
```

💡 What changed: Structured, informative commit messages with reasoning.
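For context, the change that commit message describes could be sketched roughly like this. Everything here is simulated for illustration (the flaky `gateway` function stands in for a real payment API; names are invented): retries use exponential backoff with full jitter, and a simple circuit breaker opens after 3 consecutive failures so callers fail fast.

```python
import random
import time

class CircuitOpen(Exception):
    pass

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; then callers fail fast."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn):
        if self.failures >= self.threshold:
            raise CircuitOpen("failing fast: circuit is open")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # success resets the failure count
        return result

def with_retry(breaker, fn, max_attempts=3, base=0.01):
    for attempt in range(max_attempts):
        try:
            return breaker.call(fn)
        except CircuitOpen:
            raise  # never retry an open circuit
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with full jitter: sleep in [0, base * 2^attempt).
            time.sleep(random.uniform(0, base * 2 ** attempt))

# Simulated flaky gateway: times out twice, then succeeds.
calls = {"n": 0}
def gateway():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("gateway timeout")
    return "charged"

breaker = CircuitBreaker()
print(with_retry(breaker, gateway))  # charged
```

Note the breaker is stateful across calls while the retry loop is per-call; keeping them separate is what prevents retries from hammering an already-failing dependency.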


Mistake 5: Context Switching Due to AI Hallucinations

Description: AI provides incorrect information causing developers to waste time chasing wrong solutions.

Realistic Scenario: AI hallucinates that a deprecated library has a security vulnerability, and the team spends a day upgrading unnecessarily.

Wrong Prompt:


```
Is there a security vulnerability in Apache Commons Collections?
```

⚠️ Why it is wrong: AI may claim outdated vulnerability still exists without verifying version.

Better Prompt:


```
Check if Apache Commons Collections 3.2.2 has known vulnerabilities.

Process to verify:
1. First, search official sources: CVE database, NVD, GitHub advisories
2. Cross-reference with our version (3.2.2)
3. If a vulnerability exists, provide mitigation steps
4. If a hallucination is suspected, note "verify against official sources"

I will independently verify any security claims against the CVE database before taking action.

Current understanding: I recall Commons Collections 3.2.1 had a deserialization issue (CVE-2015-6420). Need to verify 3.2.2 status.
```

💡 What changed: Explicit verification steps prevent wasted effort on AI hallucinations.
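One way to make "verify against official sources" concrete is to query a vulnerability database such as OSV (osv.dev) instead of trusting the model's memory. This sketch only builds the request payload; actually sending it requires an HTTP POST to the OSV query endpoint, and the Maven coordinates shown are my assumption for this library.

```python
import json

def osv_query(ecosystem, name, version):
    """Build an OSV /v1/query payload for a specific package version."""
    # OSV identifies Maven packages as "group:artifact".
    return json.dumps({
        "version": version,
        "package": {"ecosystem": ecosystem, "name": name},
    })

payload = osv_query("Maven", "commons-collections:commons-collections", "3.2.2")
print(payload)
```

An empty result from such a query is evidence, not proof; the point is that the check hits a maintained database rather than the model's training data.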


Summary & Best Practices

  • Understand the code you accept from AI—don’t treat it as a black box.
  • Use AI as a mentor, not a crutch—ask for explanations and write the code yourself.
  • Maintain code review discipline—AI‑generated code is not exempt.
  • Write meaningful commit messages that explain why, not just what.
  • Verify AI claims against official sources to avoid chasing hallucinations.

AI is a powerful tool, but it works best when it enhances human collaboration, learning, and quality standards—not when it replaces them.
