<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Emily Woods</title>
    <description>The latest articles on DEV Community by Emily Woods (@emilywoodsnyc).</description>
    <link>https://dev.to/emilywoodsnyc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3912599%2Fc95a7ee1-41d2-4e1e-994d-e1aedab86d39.png</url>
      <title>DEV Community: Emily Woods</title>
      <link>https://dev.to/emilywoodsnyc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/emilywoodsnyc"/>
    <language>en</language>
    <item>
      <title>I used AI coding agents for a week at work. Here is what actually happened.</title>
      <dc:creator>Emily Woods</dc:creator>
      <pubDate>Fri, 08 May 2026 20:17:14 +0000</pubDate>
      <link>https://dev.to/emilywoodsnyc/i-used-ai-coding-agents-for-a-week-at-work-here-is-what-actually-happened-99h</link>
      <guid>https://dev.to/emilywoodsnyc/i-used-ai-coding-agents-for-a-week-at-work-here-is-what-actually-happened-99h</guid>
      <description>&lt;p&gt;I used AI coding agents as my first approach for every task during a full work week. My raw line output went up about 40 percent. The amount of code that actually shipped to production without needing significant rework stayed roughly the same. The agents were fast at the repetitive parts. They could not help me with the parts that were actually hard, which are the parts where you need to understand why a system was built a certain way by someone who is no longer on the team, or whether adding a new component is worth the operational cost your team can barely handle right now. The bottleneck was never typing.&lt;/p&gt;

&lt;h2&gt;Why I did this&lt;/h2&gt;

&lt;p&gt;Since early 2025, every engineering Slack channel and Substack newsletter I read has been consumed by arguments about AI coding agents. One side says software engineering is basically over and that agents will be writing most production code. The other side says the agents are fancy autocomplete and that the real work of engineering, making decisions when you do not have all the information, is still a human problem.&lt;/p&gt;

&lt;p&gt;I kept going back and forth depending on which demo I had most recently watched, which is a bad way to form an opinion about anything. So I stopped watching demos and decided to collect my own data. One full work week, leaning on the agents as heavily as I could for my actual job, and tracking what happened.&lt;/p&gt;

&lt;h2&gt;What I used&lt;/h2&gt;

&lt;p&gt;The codebase is a mix of Python and Go, and the current work involves maintaining a handful of microservices, dealing with message queues, Postgres, Redis, and a set of third party API integrations that each have their own authentication quirks.&lt;/p&gt;

&lt;p&gt;For the experiment I used Cursor with agent mode turned on for code generation across files, Claude Code for longer reasoning tasks where I wanted the model to look at an entire service and suggest changes, and a custom internal tool our team built that reads Jira tickets and suggests implementation plans.&lt;/p&gt;

&lt;p&gt;The rule was simple: if I would normally open a file and start writing, I would instead describe what I wanted to the agent and let it go first. Then I would review and correct rather than write from scratch.&lt;/p&gt;

&lt;h2&gt;Where the agents were actually useful&lt;/h2&gt;

&lt;p&gt;On Tuesday I needed to spin up a new service. Kafka consumer, schema validation, some business rules, write results to Postgres. I have built this kind of thing enough times that the structure is predictable. I described the requirements and Cursor generated about 80 percent of it in around 12 minutes. The consumer setup, the Pydantic models, the SQLAlchemy layer, the error handling. It was clean and mostly correct; after I adjusted the logging to match our team conventions, it was ready.&lt;/p&gt;
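&lt;p&gt;For flavor, the shape of that pipeline looks roughly like this. This is a minimal stdlib-only sketch, not the generated code: the real service used confluent-kafka, Pydantic, and SQLAlchemy, and the OrderEvent schema and orders table here are hypothetical stand-ins.&lt;/p&gt;

```python
import sqlite3
from dataclasses import dataclass

# Stand-in for a validated message schema (the real service used Pydantic).
@dataclass
class OrderEvent:
    order_id: str
    amount_cents: int

    def __post_init__(self):
        if not self.amount_cents >= 0:
            raise ValueError("amount_cents must be non-negative")

def process(messages, conn):
    """Validate each raw message and persist the good ones; skip the rest."""
    written, skipped = 0, 0
    for raw in messages:
        try:
            event = OrderEvent(**raw)
        except (TypeError, ValueError):
            skipped += 1  # in production this would go to a dead letter topic
            continue
        conn.execute(
            "INSERT INTO orders (order_id, amount_cents) VALUES (?, ?)",
            (event.order_id, event.amount_cents),
        )
        written += 1
    conn.commit()
    return written, skipped

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT, amount_cents INTEGER)")
result = process(
    [{"order_id": "a1", "amount_cents": 500},
     {"order_id": "a2", "amount_cents": -10}],  # fails validation, gets skipped
    conn,
)
```

&lt;p&gt;The agent's value was filling in exactly this kind of predictable scaffolding faster than I can type it.&lt;/p&gt;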

&lt;p&gt;That kind of work, the repetitive structural stuff that follows a pattern you already know, is where the agents are genuinely good. They are fast and they do not make the dumb typos that I make at 4 PM on a Tuesday.&lt;/p&gt;

&lt;p&gt;Test writing was also surprisingly good. I had a backlog task to add test coverage to a service that had been shipped quickly with minimal tests. I pointed Claude Code at the directory and asked for unit tests on the core modules. It produced a solid suite that covered the main paths, caught a boundary condition I had missed in one of the validation functions, and only needed three tests adjusted because it made wrong assumptions about how we mock the database layer. That would have been most of my Wednesday afternoon done manually. The agent did it in twenty minutes.&lt;/p&gt;

&lt;p&gt;Documentation was another win. I needed to onboard a new teammate onto a service I wrote six months ago. Instead of spending an hour writing up a walkthrough, I had Claude Code analyze the service and produce an explanation of the architecture and the data flow. It was about 90 percent accurate and gave the new engineer something to read before asking me questions, which meant I spent the onboarding time on their actual questions instead of on prose.&lt;/p&gt;

&lt;h2&gt;Where they fell apart&lt;/h2&gt;

&lt;p&gt;Monday morning we had an incident. A downstream service was getting malformed data from one of our endpoints. The root cause turned out to be an interaction between our serialization layer and a schema change that another team had made. Figuring that out required reading the Git history of two repos, finding a Slack thread from three weeks earlier that explained why the schema change had been made, and then reasoning about how the serialization would behave differently under a specific race condition between the old and new versions.&lt;/p&gt;

&lt;p&gt;I tried to use Claude Code for this and gave it the relevant files and the error logs. It generated several guesses that were plausible but wrong, because the actual answer depended on context that is not in the code. It was in conversations, in commit messages, in the organizational memory of why someone made a decision three weeks ago. The agent could read the code. It could not read the room.&lt;/p&gt;

&lt;p&gt;On Wednesday I had to decide whether to add a cache to a service that was slow under load. The textbook answer is yes, obviously, add a cache. The agent gave me the textbook answer. But the textbook answer did not account for the fact that we had just shrunk our on call rotation and nobody had time to babysit another cache layer. It did not know that the database team had a query optimization sprint planned for next month that might fix the latency at the source. And it definitely did not know that our last caching attempt had caused a consistency bug that took two weeks to untangle.&lt;/p&gt;

&lt;p&gt;The agent produced a technically correct recommendation that would have been the wrong decision in our specific situation. That gap between the general answer and the right answer for this team at this moment is where the human work actually lives.&lt;/p&gt;

&lt;p&gt;On Friday I was implementing billing logic. Proration across subscription tiers, timezone handling for billing cycles, grandfathering rules for legacy customers. The business rules lived across three spec documents and two Slack threads with the product manager. The agent could generate the structure, but it kept getting the edge cases wrong where the rules interacted with each other. Every fix I made introduced a new regression somewhere else. After ninety minutes of going back and forth with the agent, I closed the tool and wrote the logic myself in about the same amount of time, because at that point reviewing and correcting was costing me more than just thinking through the problem on my own.&lt;/p&gt;

&lt;h2&gt;Numbers&lt;/h2&gt;

&lt;p&gt;At the end of the week I compared my Git activity to a normal week.&lt;/p&gt;

&lt;p&gt;Lines committed were up about 40 percent. Almost all of that came from the new service and the test generation, both of which produce a lot of code quickly.&lt;/p&gt;

&lt;p&gt;Pull requests merged went from my usual four to five. The extra one was the new service.&lt;/p&gt;

&lt;p&gt;I spent roughly three hours across the week reviewing and correcting agent generated code, which ate into some of the time I saved by not writing it myself.&lt;/p&gt;

&lt;p&gt;No production incidents from code I shipped that week, but that was because I reviewed everything carefully before merging. If I had trusted the output without checking, at least two sections would have caused problems based on the errors I caught during review.&lt;/p&gt;

&lt;h2&gt;What I actually think after doing this&lt;/h2&gt;

&lt;p&gt;The agents are good at the parts of the job that were already the easiest parts. Repetitive service setup, test boilerplate, documentation that summarizes code you already wrote. They are fast at those things and they produce reasonable output.&lt;/p&gt;

&lt;p&gt;They are not good at the parts that make engineering hard. Understanding why a system was built the way it was. Knowing the team well enough to factor operational capacity into an architecture decision. Debugging across service boundaries when the root cause is in a Slack thread, not a stack trace. Writing business logic where the rules contradict each other in ways that only show up at the edges.&lt;/p&gt;

&lt;p&gt;The people who should be worried are the ones whose main contribution has been cranking out predictable code on well understood problems, because the agents can now do that faster. The people who should not be worried are the ones who spend most of their time in the messy middle, making judgment calls that depend on context the agents cannot access.&lt;/p&gt;

&lt;p&gt;I do not think the agents are replacing engineers. I think they are replacing the part of engineering that engineers already did not find very interesting. The part that is actually hard, the part that makes you stare at your screen and think for twenty minutes before typing anything, is still yours.&lt;/p&gt;

&lt;p&gt;If you are anxious about AI agents taking your job, run this experiment yourself. Use the tools hard for a week. See what they speed up and where they stall out. Form your own opinion from your own data instead of from someone else’s demo video or Twitter takes.&lt;/p&gt;

&lt;p&gt;My take after the week is pretty simple. I type less now (Whispr Flow is a blessing here, since you speak more and type less). I think the same amount.&lt;/p&gt;

&lt;p&gt;If you are in an active interview process and want to practice effectively for the technical rounds that come after the recruiter stage, &lt;a href="https://prachub.com/?utm_source=devto&amp;amp;utm_campaign=v1012" rel="noopener noreferrer"&gt;PracHub&lt;/a&gt; has company specific questions organized by role and round type, so you can prepare for the exact format each company uses.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I will never walk into a backend interview without solving these 20 questions.</title>
      <dc:creator>Emily Woods</dc:creator>
      <pubDate>Tue, 05 May 2026 23:14:59 +0000</pubDate>
      <link>https://dev.to/emilywoodsnyc/i-will-never-walk-into-a-backend-interview-without-solving-these-20-questions-4mep</link>
      <guid>https://dev.to/emilywoodsnyc/i-will-never-walk-into-a-backend-interview-without-solving-these-20-questions-4mep</guid>
      <description>&lt;p&gt;I failed more than 20 interviews in last 10 months&lt;/p&gt;

&lt;p&gt;Many times I watched my approach fall apart as soon as the interviewer asked a follow up question about what happens inside those boxes when traffic spikes.&lt;/p&gt;

&lt;p&gt;I compiled this list based on the questions that actually get asked in real rooms. I failed many interviews because I did not know the answers to these. I ask them now because they are excellent filters and I think they separate the people who only know the vocabulary from the people who know how the systems behave under pressure.&lt;/p&gt;

&lt;p&gt;You just need to understand the underlying mechanics well enough to discuss them conversationally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Databases and query performance&lt;/strong&gt;&lt;br&gt;
Databases are where most applications spend most of their time. You need to know how to get data in and out efficiently when the tables get large.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. What happens when you put an index on a random UUID column&lt;/strong&gt;&lt;br&gt;
A lot of developers use UUIDs for primary keys because it makes distributed generation easy. But standard UUIDs are random. Inserting random values into a B-tree index causes massive page fragmentation and forces the database to write to disk constantly. You should know why sequential IDs or time sorted UUIDs fix this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. How do you paginate through 50 million rows without using OFFSET?&lt;/strong&gt;&lt;br&gt;
Using OFFSET and LIMIT works for the first few pages. By page ten thousand, the database is scanning and discarding huge numbers of rows before returning your data. You need to be able to explain keyset pagination (cursor based pagination) and why it keeps query times flat regardless of the page depth.&lt;/p&gt;
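&lt;p&gt;A minimal demonstration, using SQLite as a stand-in for any database with an indexed primary key. Each page seeks past the last id seen instead of discarding OFFSET rows:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [(f"row-{i}",) for i in range(100)])

def fetch_page(conn, after_id, page_size):
    # Keyset pagination: seek directly past the last seen id via the
    # primary key index, instead of scanning and discarding OFFSET rows.
    rows = conn.execute(
        "SELECT id, payload FROM events WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, page_size),
    ).fetchall()
    last_id = rows[-1][0] if rows else None
    return rows, last_id

page1, cursor = fetch_page(conn, 0, 10)       # ids 1 through 10
page2, cursor = fetch_page(conn, cursor, 10)  # ids 11 through 20
```

&lt;p&gt;Every page costs the same: one index seek plus page_size rows. The tradeoff is that you can only step forward from a known cursor, not jump to an arbitrary page number.&lt;/p&gt;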

&lt;p&gt;&lt;strong&gt;3. When would you use a composite index instead of two separate indexes?&lt;/strong&gt;&lt;br&gt;
If you frequently query a table using two columns together, creating an index on column A and an index on column B is usually the wrong move. The database typically only uses one index per table scan. You need to understand how a composite index on (A, B) works and why the order of the columns matters.&lt;/p&gt;
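&lt;p&gt;You can watch column order matter with SQLite's query planner (the table and index names here are illustrative). A composite index on (customer_id, status) serves queries on the leading column or on both columns, but not on the trailing column alone:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, status TEXT, total INTEGER)")
conn.execute("CREATE INDEX idx_customer_status ON orders (customer_id, status)")

def plan(sql):
    # The fourth column of EXPLAIN QUERY PLAN output describes the access path.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

both = plan("SELECT * FROM orders WHERE customer_id = 7 AND status = 'paid'")
leading = plan("SELECT * FROM orders WHERE customer_id = 7")
trailing = plan("SELECT * FROM orders WHERE status = 'paid'")
# both and leading report a SEARCH USING INDEX; trailing falls back to a full SCAN.
```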

&lt;p&gt;&lt;strong&gt;4. What is the N+1 query problem and how do you fix it?&lt;/strong&gt;&lt;br&gt;
This is the most common performance bug in modern applications that use ORMs. You query a list of users, and then as you loop through the users, the ORM fires a separate query to fetch each user’s profile. You need to be able to explain how to fix it using explicit joins or eager loading.&lt;/p&gt;
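&lt;p&gt;Here is a self-contained illustration using SQLite and a statement trace to count round trips. The ORM layer is elided; this shows the raw query pattern a lazy-loading ORM emits versus the single join that fixes it:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE profiles (user_id INTEGER, bio TEXT);
    INSERT INTO users VALUES (1, 'ada'), (2, 'bob'), (3, 'cleo');
    INSERT INTO profiles VALUES (1, 'x'), (2, 'y'), (3, 'z');
""")
queries = []
conn.set_trace_callback(queries.append)  # record every SQL statement executed

# N+1: one query for the list, then one extra query per user in the loop.
users = conn.execute("SELECT id, name FROM users").fetchall()
for user_id, _ in users:
    conn.execute("SELECT bio FROM profiles WHERE user_id = ?", (user_id,)).fetchone()
n_plus_one = len(queries)  # 1 list query + 3 per-user queries

# Fix: a single join fetches everything in one round trip.
queries.clear()
joined = conn.execute(
    "SELECT u.name, p.bio FROM users u JOIN profiles p ON p.user_id = u.id"
).fetchall()
one_query = len(queries)
```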

&lt;p&gt;&lt;strong&gt;Concurrency and transactions&lt;/strong&gt;&lt;br&gt;
Distributed systems do things at the same time. Bad things happen when you do not manage that timing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. How do you prevent double booking a ticket in a distributed system&lt;/strong&gt;&lt;br&gt;
Checking if a seat is available and then booking it creates a race condition. You cannot solve this with a mutex in your application code if you are running multiple servers. You need to know how to use database level locks, like a SELECT FOR UPDATE statement, or an optimistic locking strategy with version numbers.&lt;/p&gt;
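&lt;p&gt;The optimistic variant fits in a short sketch (SQLite stands in for the real database here; SELECT FOR UPDATE itself is not available in SQLite). The booking only succeeds if the row's version is still the one the client read:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seats (id INTEGER PRIMARY KEY, booked_by TEXT, version INTEGER)")
conn.execute("INSERT INTO seats VALUES (1, NULL, 0)")

def book(conn, seat_id, user, seen_version):
    # Optimistic locking: the UPDATE only matches if nobody bumped the
    # version since we read the row. rowcount 0 means we lost the race.
    cur = conn.execute(
        "UPDATE seats SET booked_by = ?, version = version + 1 "
        "WHERE id = ? AND booked_by IS NULL AND version = ?",
        (user, seat_id, seen_version),
    )
    return cur.rowcount == 1

# Both clients read version 0, then race to book the same seat.
first = book(conn, 1, "alice", 0)
second = book(conn, 1, "bob", 0)  # version is now 1, so this update matches nothing
```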

&lt;p&gt;&lt;strong&gt;6. What is the difference between repeatable read and serializable isolation levels?&lt;/strong&gt;&lt;br&gt;
You do not need a PhD in database theory. But you do need to know that the default isolation level in Postgres allows certain types of race conditions, and you need to know when you have to crank the isolation level up to serializable to guarantee absolute correctness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. How do you implement a distributed lock without creating a single point of failure?&lt;/strong&gt;&lt;br&gt;
If multiple workers need exclusive access to a resource, they need a distributed lock. Using a single Redis instance works until that instance goes down. You should be familiar with algorithms like Redlock or systems like ZooKeeper that handle distributed consensus.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. How do you implement idempotency for a payment retry endpoint?&lt;/strong&gt;&lt;br&gt;
Mobile networks drop connections. Clients will retry requests. If a client retries a payment request because they did not receive the success response, you cannot charge their card twice. You need to explain how to use idempotency keys and where to store them to guarantee exactly once processing.&lt;/p&gt;
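&lt;p&gt;The mechanics fit in a few lines. In this sketch the key store is an in-memory dict; a real endpoint would keep it in a durable store like Postgres or Redis so retries survive a server restart:&lt;/p&gt;

```python
processed = {}  # maps idempotency key to the stored response

def charge(idempotency_key, amount_cents, ledger):
    # Replay the stored response for a key we have already handled,
    # instead of performing the side effect a second time.
    if idempotency_key in processed:
        return processed[idempotency_key]
    ledger.append(amount_cents)  # the charge: the side effect we must not repeat
    response = {"status": "charged", "amount": amount_cents}
    processed[idempotency_key] = response
    return response

ledger = []
r1 = charge("key-123", 999, ledger)
r2 = charge("key-123", 999, ledger)  # network retry: same key, no second charge
```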

&lt;p&gt;&lt;strong&gt;Caching strategy&lt;/strong&gt;&lt;br&gt;
Everyone knows caching makes things faster. Interviewers want to know if you understand how caching makes things break.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. What causes a cache stampede and how do you prevent it?&lt;/strong&gt;&lt;br&gt;
When a highly requested cache key expires, thousands of requests miss the cache simultaneously and hit the database at the exact same moment. The database falls over. You need to know how to prevent this using probabilistic early expiration or a lock to ensure only one thread regenerates the cache.&lt;/p&gt;
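&lt;p&gt;The single-regeneration lock looks like this within one process (across multiple servers you would use a distributed lock instead of threading.Lock; the re-check after acquiring the lock is the important part):&lt;/p&gt;

```python
import threading

cache = {}
rebuild_lock = threading.Lock()
db_hits = 0

def expensive_db_read():
    global db_hits
    db_hits += 1  # stands in for the query that would flatten the database
    return "profile-data"

def get(key):
    # Fast path: serve from cache without taking the lock.
    if key in cache:
        return cache[key]
    # Miss: only one thread regenerates; the rest wait, then find it cached.
    with rebuild_lock:
        if key not in cache:  # re-check: another thread may have filled it
            cache[key] = expensive_db_read()
        return cache[key]

threads = [threading.Thread(target=get, args=("user:42",)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Fifty concurrent misses, one database read.
```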

&lt;p&gt;&lt;strong&gt;10. If you cache a user profile, how do you invalidate it when they update their email?&lt;/strong&gt;&lt;br&gt;
Cache invalidation is a genuinely hard problem. You need to explain the difference between a write-through cache and a cache-aside pattern, and discuss the tradeoffs of setting a short time-to-live versus explicitly deleting the key on every update.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11. Why might putting Redis in front of your database actually slow your system down?&lt;/strong&gt;&lt;br&gt;
Adding a network hop to check a cache takes a few milliseconds. If your cache hit rate is terrible, you are paying the network penalty on every request just to find out the data is not there, and then hitting the database anyway. You should know how to monitor and calculate cache hit ratios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;12. What eviction policy makes sense for a session store versus a content feed?&lt;/strong&gt;&lt;br&gt;
Caches run out of memory. When they do, they have to delete something to make room. Least Recently Used makes sense for a content feed. It is a terrible choice for a session store where you might log active users out randomly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;APIs and network architecture&lt;/strong&gt;&lt;br&gt;
Your API is a contract. You have to know how to change it safely and protect it from abuse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;13. How do you safely change the payload of a live API without breaking existing mobile clients?&lt;/strong&gt;&lt;br&gt;
Mobile apps live on user devices for years without being updated. If you change a response format and remove a field, old apps will crash. You need to understand API versioning via headers or URLs, and how to maintain backwards compatibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;14. What is the difference between a sliding window log and a fixed window counter for rate limiting?&lt;/strong&gt;&lt;br&gt;
A fixed window counter lets clients burst double their allowed limit right at the boundary between two minutes. A sliding window smooths out the limit. You should be able to explain the mechanics and the memory tradeoffs of both approaches.&lt;/p&gt;
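&lt;p&gt;A sliding window log is small enough to sketch in full. Each client keeps a log of request timestamps; a request is allowed only when fewer than the limit fall inside the trailing window:&lt;/p&gt;

```python
from collections import deque

class SlidingWindowLimiter:
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.log = deque()  # timestamps of accepted requests, oldest first

    def allow(self, now):
        # Drop timestamps that have aged out of the trailing window.
        while self.log and now - self.log[0] >= self.window:
            self.log.popleft()
        if len(self.log) >= self.limit:
            return False  # window is full; reject without recording
        self.log.append(now)
        return True

limiter = SlidingWindowLimiter(limit=3, window_seconds=60)
results = [limiter.allow(t) for t in (0, 10, 20, 30)]  # the fourth is rejected
later = limiter.allow(61)  # the t=0 entry has aged out, so capacity frees up
```

&lt;p&gt;The memory tradeoff is visible here: a fixed window stores one counter per client, while the log stores up to limit timestamps per client.&lt;/p&gt;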

&lt;p&gt;&lt;strong&gt;15. How do you design an endpoint that needs to upload a 5GB video file?&lt;/strong&gt;&lt;br&gt;
You cannot read a 5GB file into memory on your application server. It will kill the process. You need to explain streaming uploads, multipart chunking, or using pre-signed URLs to let the client upload directly to cloud storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;16. How do you handle long running tasks in a synchronous API request?&lt;/strong&gt;&lt;br&gt;
If an endpoint triggers a PDF generation that takes forty seconds, the client connection will likely time out. You need to explain the asynchronous worker pattern. The API returns a 202 Accepted with a job ID immediately, and the client polls a separate endpoint to check the status.&lt;/p&gt;
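&lt;p&gt;Sketched with a thread standing in for the worker pool and a dict standing in for the job store (a real system would use a durable store plus a proper queue):&lt;/p&gt;

```python
import threading
import time
import uuid

jobs = {}  # job id maps to status and result; a stand-in for a durable job store

def submit(task):
    # Return immediately, 202-style, with a job id the client can poll.
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "result": None}
    def run():
        jobs[job_id]["result"] = task()  # the slow work, e.g. PDF generation
        jobs[job_id]["status"] = "done"  # set status last, so done implies result
    threading.Thread(target=run).start()
    return {"code": 202, "job_id": job_id}

def poll(job_id):
    return jobs[job_id]["status"]

resp = submit(lambda: "report.pdf")
job_id = resp["job_id"]
while poll(job_id) != "done":  # the client polls the status endpoint
    time.sleep(0.01)
result = jobs[job_id]["result"]
```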

&lt;p&gt;&lt;strong&gt;Message brokers and asynchronous processing&lt;/strong&gt;&lt;br&gt;
Real systems decouple work. You need to know what happens when those decoupled pieces fail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;17. Why would you choose RabbitMQ over Kafka, or vice versa?&lt;/strong&gt;&lt;br&gt;
They are not interchangeable. Kafka is a distributed log built for high throughput and replayability. RabbitMQ is a smart broker built for complex routing and task queues. You need to know which one fits your use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;18. What happens if your Kafka consumer reads a message but fails to commit the offset?&lt;/strong&gt;&lt;br&gt;
The consumer will read the same message again when it restarts. Your processing logic has to be idempotent, or you will process the data twice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;19. How do you handle poison messages that repeatedly crash your workers?&lt;/strong&gt;&lt;br&gt;
If a malformed message causes a null pointer exception in your worker, the worker crashes, the message goes back to the queue, and another worker picks it up and crashes. You need to explain Dead Letter Queues and retry limits.&lt;/p&gt;
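&lt;p&gt;The retry-limit plus dead letter pattern, sketched with a deque standing in for the broker queue:&lt;/p&gt;

```python
from collections import deque

MAX_RETRIES = 3
queue = deque([{"body": "ok-1"}, {"body": "boom"}, {"body": "ok-2"}])
dead_letter = []
handled = []

def handle(message):
    if message["body"] == "boom":  # simulated poison message
        raise ValueError("cannot parse message")
    handled.append(message["body"])

while queue:
    msg = queue.popleft()
    try:
        handle(msg)
    except ValueError:
        retries = msg.get("retries", 0) + 1
        if retries >= MAX_RETRIES:
            dead_letter.append(msg)  # park it for human inspection
        else:
            msg["retries"] = retries
            queue.append(msg)        # redeliver, carrying the retry count
# The poison message ends up in dead_letter; the healthy ones are processed.
```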

&lt;p&gt;&lt;strong&gt;20. How do you ensure messages are processed in the exact order they were sent?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In a distributed queue with multiple consumers, order is almost impossible to guarantee because consumer A might take longer to process message 1 than consumer B takes to process message 2. You need to explain partitioning or routing keys that ensure related messages go to the same single consumer thread.&lt;/p&gt;

&lt;p&gt;This list looks intimidating if you try to memorize it in a weekend, but it is completely manageable if you take the time to understand the problems these concepts solve.&lt;/p&gt;

&lt;p&gt;Every single one of these questions is about dealing with scale, failure, and reality. The database gets slow. The network drops. The users send bad data. Senior engineers spend their days mitigating these exact problems. For practice questions, I generally do LeetCode problems and some from &lt;a href="https://prachub.com/questions?utm_source=devto&amp;amp;utm_campaign=v1012" rel="noopener noreferrer"&gt;here&lt;/a&gt;, and I brush up on the basics with ByteByteGo and the System Design Primer.&lt;/p&gt;

&lt;p&gt;I would say try to understand the pain points. When you understand the pain points, the answers make sense.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2f2cex3fyb5yjq05v015.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2f2cex3fyb5yjq05v015.png" alt=" " width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>backendinterviews</category>
      <category>interview</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
