
Munagala Karthik

Why Your AI Prompts Are Getting Generic Answers (And How to Fix It)

Most engineers treat AI like a search engine.
Type a question; hope for a good answer; get frustrated when it's generic.
That's not how it works.
AI responds to context. The more specific you are, the better the output. Think of it like briefing a new team member who knows everything but has no idea about your environment.
Here is what changed my results completely.
Give it a role. "You are a cloud security engineer working on AWS" instantly shifts the tone and depth of the response.
Give it context. Mention the service, the environment, and the constraint. "Write an IAM policy for a Lambda function that only reads from one S3 bucket" beats "write an IAM policy" every single time.
Give it a format. Tell it exactly what you want back: a policy document, a checklist, or a Terraform block. Be specific.
Tell it what to avoid. "No broad permissions, no wildcards, follow least privilege" saves you from cleaning up messy output.
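The four elements above can be sketched as a small prompt template. The function and field names here are my own, purely illustrative; any chat-style LLM API that accepts a plain string (or a system plus user message pair) works the same way.

```python
def build_prompt(role, context, task, output_format, avoid):
    """Assemble a structured prompt from role, context, format, and guardrails.

    Illustrative sketch only; not tied to any particular SDK.
    """
    return "\n".join([
        f"You are {role}.",                        # role: sets tone and depth
        f"Context: {context}",                     # context: service, environment, constraint
        f"Task: {task}",
        f"Return the answer as: {output_format}",  # format: the shape you want back
        f"Avoid: {avoid}",                         # guardrails: what not to do
    ])

prompt = build_prompt(
    role="a cloud security engineer working on AWS",
    context="a Lambda function that only reads from one S3 bucket",
    task="write an IAM policy for the function",
    output_format="a single IAM policy document in JSON",
    avoid="broad permissions and wildcards; follow least privilege",
)
print(prompt)
```

The point of the template is not the code; it is that every prompt you send carries all four elements instead of just the task.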
I now use this for IAM policies, security checklists, Terraform reviews, incident response drafts, and boto3 scripts.
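For the Lambda-and-S3 example, a least-privilege result looks something like the policy below. The bucket name is a placeholder, and the trailing `/*` on the resource ARN is the one wildcard you do want: it scopes access to objects inside that single bucket, unlike a bare `"*"` on `Action` or `Resource`.

```python
import json

# Hypothetical bucket name; substitute your own.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],  # read objects only, nothing else
            "Resource": "arn:aws:s3:::my-data-bucket/*",
        }
    ],
}

# Quick sanity check in the spirit of "no broad permissions":
# no statement should allow every action or every resource.
for stmt in policy["Statement"]:
    assert "*" not in stmt["Action"], "action wildcard found"
    assert stmt["Resource"] != "*", "resource wildcard found"

print(json.dumps(policy, indent=2))
```

That last loop is the kind of check worth keeping around: even with a well-constrained prompt, verify the output against your own guardrails before shipping it.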
Same AI, better prompts, completely different results.
Prompt engineering is not just a skill for AI specialists. It is a skill for anyone who wants to stop wasting time.