DEV Community

Obed Mpaka

What an AWS Hallway Conversation Made Me See That Nobody Is Talking About

Four months ago, someone at AWS asked me how Bedrock stops LLMs from seeing customer data.

Casual. No meeting invite. Just one of those conversations that sticks.

I knew the answer: Bedrock doesn't store your prompts, doesn't train on them, and doesn't let model providers touch them. AWS built a serious perimeter around that. It's real infrastructure, real compliance, real commitment.

But then we kept talking. And someone mentioned agents.

Because every major player right now (AWS, Google, Microsoft, OpenAI) is saying the same thing: AI agents are the next wave. Not chatbots. Agents. Systems that don't wait for instructions. They book appointments, process claims, trigger payments, read records, and make decisions. Fully autonomous. No human in the loop.

And I started reading everything I could find on how they actually behave.

What I kept running into: AWS's own security team flagged it. When agents automate workflows and process patient data, sensitive information disclosure is one of the top risks. The agent needs data to do its job, so it gets the data. All of it. OWASP put sensitive information disclosure on its 2025 top 10 risks for LLMs specifically because of this: the model sees what it shouldn't, and the recommended fix is sanitizing the data before it ever reaches the model.

Here's what got me. Bedrock's Guardrails sit between your app and the model and can mask sensitive information, but that's reactive. It catches what slips through. It doesn't change the core architecture. The agent still gets the full object. It still reasons over real values. The SSN is in the prompt. The diagnosis code is in the prompt. The card number is in the prompt.

AWS built the perimeter. Nobody built what happens inside it.

And the thing is, the agent doesn't need to see a social security number to decide what to do with it. It needs to know that a value exists, what type it is, and what action to take. That's it. The raw value is irrelevant to the decision. It only matters at execution.

That gap between what the agent needs and what it sees is what Codeastra is about.

Tokenize before the prompt. The agent reasons on safe placeholders. Real values resolve only at execution. The model never touches them.
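To make the idea concrete, here's a minimal sketch of that pattern, not Codeastra's actual implementation. The patterns, placeholder format, and function names are all illustrative assumptions: sensitive values are swapped for typed tokens before the prompt is built, the agent's plan references only the tokens, and the raw values are substituted back at execution time.

```python
import re
import uuid

# Illustrative patterns only; a real system would cover far more PII types.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b\d{16}\b"),
}

def tokenize(text, vault):
    """Replace raw sensitive values with typed placeholders like <SSN:a1b2c3d4>.

    The raw value goes into the vault, which stays on the execution side
    and is never included in anything sent to the model."""
    for label, pattern in PATTERNS.items():
        def swap(match):
            token = f"<{label}:{uuid.uuid4().hex[:8]}>"
            vault[token] = match.group(0)
            return token
        text = pattern.sub(swap, text)
    return text

def resolve(text, vault):
    """At execution time, swap placeholders back for the real values."""
    for token, raw in vault.items():
        text = text.replace(token, raw)
    return text

vault = {}
record = "Patient 123-45-6789 owes a copay on card 4111111111111111"
prompt = tokenize(record, vault)
# The model sees only: "Patient <SSN:...> owes a copay on card <CARD:...>"
# The agent's decision ("charge <CARD:...>") resolves to the real number
# only inside the payment call, after the model is out of the loop.
```

The placeholder carries the type (`SSN`, `CARD`), which is all the agent needs to reason about what action to take; the vault lookup happens only at the boundary where the action executes.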

Bedrock already promises it won't store or log your prompts. Codeastra makes sure there's nothing sensitive in them to begin with.

https://codeastra.dev/
