<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alexandru A</title>
    <description>The latest articles on DEV Community by Alexandru A (@programmer4web).</description>
    <link>https://dev.to/programmer4web</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3862880%2Febd2d0a8-5feb-4c94-a413-17e5769bd609.png</url>
      <title>DEV Community: Alexandru A</title>
      <link>https://dev.to/programmer4web</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/programmer4web"/>
    <language>en</language>
    <item>
      <title>AI Made Me a Better Reviewer (The Hard Way)</title>
      <dc:creator>Alexandru A</dc:creator>
      <pubDate>Mon, 13 Apr 2026 03:45:53 +0000</pubDate>
      <link>https://dev.to/programmer4web/ai-made-me-a-better-reviewer-the-hard-way-3dm5</link>
      <guid>https://dev.to/programmer4web/ai-made-me-a-better-reviewer-the-hard-way-3dm5</guid>
      <description>&lt;p&gt;I build a SaaS QA platform. When AI coding assistants became good enough to trust with real work, I leaned in — and my velocity went up noticeably. Less boilerplate, faster feature implementation, fewer rabbit holes on syntax I'd forgotten.&lt;/p&gt;

&lt;p&gt;But something shifted that I didn't expect. I wasn't writing less code and saving time. I was writing less code and reviewing more instead.&lt;/p&gt;

&lt;p&gt;For a while I thought that was fine. Then one evening I pushed a feature involving Google SSO and suddenly users couldn't log in. Not "the Google button doesn't work" — I mean the entire sign-in and logout flow was broken. Silent. No obvious error.&lt;/p&gt;

&lt;p&gt;I dug in. The AI had been working on routing middleware to serve pre-rendered pages for SEO. Reasonable task. What it also did, without flagging it, was intercept every GET request that looked like a browser navigation — including /auth/google. The OAuth redirect came back from Google, hit the middleware, got served the SPA shell instead, and the session was never established.&lt;/p&gt;

&lt;p&gt;The fix was one line. The damage could have been significant if it had reached more users.&lt;/p&gt;
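
&lt;p&gt;For what it's worth, here is a minimal sketch of the shape of the problem and the fix, assuming an Express-style middleware; the names and paths are illustrative, not the code from my project.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Minimal sketch: pre-render middleware that was intercepting browser-style GETs.
function prerenderMiddleware(req, res, next) {
  // The one-line fix: never intercept auth (or similar) routes.
  if (req.path.startsWith('/auth/')) return next();

  const acceptsHtml = (req.headers.accept || '').includes('text/html');
  if (req.method !== 'GET' || !acceptsHtml) return next();

  // Looks like a browser navigation: serve the pre-rendered page instead of the SPA shell.
  return servePrerenderedPage(req, res); // hypothetical helper
}
&lt;/code&gt;&lt;/pre&gt;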

&lt;p&gt;Here's what stayed with me: the feature it was building worked perfectly. The tests I ran against the new functionality passed. What broke was something adjacent — code I didn't ask it to touch, changed in a way that was internally logical but contextually wrong.&lt;/p&gt;

&lt;p&gt;That's the real lesson about working with AI on production code. It doesn't lack skill. It lacks awareness of consequences outside the task boundary. It will solve the problem you gave it and not think twice about what it quietly rearranged to get there.&lt;/p&gt;

&lt;p&gt;So now I review differently. I don't just check that the feature works. I read the diff the way I'd read a pull request from a smart junior developer who might not know what they don't know — looking for what changed that I didn't ask to change.&lt;/p&gt;

&lt;p&gt;The trade-off is real: I write far less code than before, but I spend more time in review. Whether that's a net win depends on how careful you are. If you test properly and treat every AI output as a PR that needs approval, the productivity gain holds. If you trust the result because the happy path works, you're accumulating invisible risk.&lt;/p&gt;

&lt;p&gt;I've caught CORS middleware returning 500 responses with full stack traces to bot traffic. A VAT field having its slash HTML-encoded into an entity before being stored in the database. A soft-delete flag silently stripped from a user object because the context mapper didn't know about the new field.&lt;/p&gt;

&lt;p&gt;Each one was a sensible decision in isolation. Each one was wrong in context.&lt;/p&gt;

&lt;p&gt;AI didn't make me a worse engineer. But it did make me a much more deliberate reviewer. That wasn't in the marketing material.&lt;/p&gt;

&lt;p&gt;Have you run into similar situations — where AI solved the task but broke something it wasn't supposed to touch? How do you review AI-generated code differently now?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>codereview</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How Do You Actually Integrate Jira and CI/CD Into a Real Web Application?</title>
      <dc:creator>Alexandru A</dc:creator>
      <pubDate>Sat, 11 Apr 2026 11:31:24 +0000</pubDate>
      <link>https://dev.to/programmer4web/how-do-you-actually-integrate-jira-and-cicd-into-a-real-web-application-417d</link>
      <guid>https://dev.to/programmer4web/how-do-you-actually-integrate-jira-and-cicd-into-a-real-web-application-417d</guid>
      <description>&lt;p&gt;When you first hear about integrating Jira with CI/CD, it often sounds abstract—like something happening “around” your application rather than inside it. But once you start building a real system, you quickly realize the challenge is very concrete:&lt;/p&gt;

&lt;p&gt;How do you connect your &lt;strong&gt;codebase, pipelines, and issue tracking&lt;/strong&gt; into one coherent flow?&lt;/p&gt;

&lt;p&gt;Recently, while working on a quality assurance platform, I had to implement this integration from scratch—and the biggest lesson was this: integration is not a feature, it’s an architecture decision.&lt;/p&gt;

&lt;p&gt;At the application level, everything starts with traceability. Your web app doesn’t directly “talk” to Jira in most cases, but your development workflow does. The first real bridge between your application and Jira is your version control strategy. By enforcing that every branch and commit references a Jira ticket, you create a consistent link between code and requirement. This small discipline allows Jira to automatically reflect development activity without any custom logic inside your application.&lt;/p&gt;
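
&lt;p&gt;As a rough illustration (not my exact setup), a tiny Node script run as a commit-msg hook or as an early pipeline step is enough to enforce that discipline:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// check-jira-key.js -- illustrative check, assuming PROJ-123 style issue keys.
// Usage, e.g. from a Git commit-msg hook: node check-jira-key.js .git/COMMIT_EDITMSG
const fs = require('fs');

const message = fs.readFileSync(process.argv[2], 'utf8');
const hasIssueKey = /[A-Z][A-Z0-9]+-\d+/.test(message);

if (!hasIssueKey) {
  console.error('Commit message must reference a Jira issue key, e.g. QA-123.');
  process.exit(1);
}
&lt;/code&gt;&lt;/pre&gt;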

&lt;p&gt;From there, CI/CD becomes the execution engine. Tools like Jenkins or GitHub Actions take over whenever code is pushed. They build your application, run validations, and determine whether the current state of the code is reliable. At this point, your application is indirectly part of the integration: every change to it triggers a pipeline that evaluates its health.&lt;/p&gt;

&lt;p&gt;The real integration happens when you close the loop between pipelines and Jira. A CI/CD system that only runs builds is useful, but not enough. The moment it starts sending results back—marking tickets as ready, blocked, or completed—you move from automation to coordination. This is where your application lifecycle becomes visible to the entire team.&lt;/p&gt;

&lt;p&gt;In practice, this often means configuring your pipeline to communicate with Jira through existing integrations or APIs. For example, after a successful build, a ticket can automatically move to a “Ready for Testing” state. If something fails, the same ticket can be flagged or annotated with the failure context. None of this requires your web application to change—but it fundamentally changes how your application is delivered and validated.&lt;/p&gt;
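
&lt;p&gt;A sketch of such a step, assuming Jira Cloud: the domain, credentials, and transition id below are placeholders you would replace with values from your own instance and workflow.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Illustrative pipeline step: move the linked Jira issue after a successful build.
async function moveToReadyForTesting(issueKey) {
  const auth = Buffer.from(process.env.JIRA_USER + ':' + process.env.JIRA_TOKEN).toString('base64');

  const res = await fetch('https://your-domain.atlassian.net/rest/api/3/issue/' + issueKey + '/transitions', {
    method: 'POST',
    headers: { Authorization: 'Basic ' + auth, 'Content-Type': 'application/json' },
    body: JSON.stringify({ transition: { id: '31' } }) // id of the "Ready for Testing" transition in your workflow
  });

  if (!res.ok) throw new Error('Jira transition failed: ' + res.status);
}
&lt;/code&gt;&lt;/pre&gt;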

&lt;p&gt;While implementing this for a QA-focused platform, I went a step further and introduced a few key capabilities to make the integration truly practical in real-world scenarios. One of them was &lt;strong&gt;personal access tokens&lt;/strong&gt;, allowing users to securely authenticate API requests and integrate the platform with CI/CD pipelines, scripts, and internal tools—without exposing credentials. This made automation much safer and easier to adopt.&lt;/p&gt;

&lt;p&gt;Another important piece was the ability to &lt;strong&gt;push defects directly to Jira&lt;/strong&gt;, including detailed information and reproduction steps. Instead of manually copying bugs, test failures could be turned into structured Jira issues instantly, improving both speed and consistency in defect tracking.&lt;/p&gt;
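
&lt;p&gt;The pipeline (or platform) side of that can be a single call to Jira's create-issue endpoint. A sketch with placeholder fields, using the v2 API because it accepts a plain-text description:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Illustrative defect push: turn a failed test into a structured Jira issue.
async function pushDefect(summary, stepsToReproduce) {
  const auth = Buffer.from(process.env.JIRA_USER + ':' + process.env.JIRA_TOKEN).toString('base64');

  const res = await fetch('https://your-domain.atlassian.net/rest/api/2/issue', {
    method: 'POST',
    headers: { Authorization: 'Basic ' + auth, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      fields: {
        project: { key: 'QA' },          // placeholder project key
        issuetype: { name: 'Bug' },
        summary: summary,
        description: 'Steps to reproduce:\n' + stepsToReproduce
      }
    })
  });

  if (!res.ok) throw new Error('Jira issue creation failed: ' + res.status);
  return res.json();
}
&lt;/code&gt;&lt;/pre&gt;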

&lt;p&gt;Finally, I implemented &lt;strong&gt;CI/CD-triggered Test Runs&lt;/strong&gt;, where pipelines can automatically create test runs as part of the delivery process. This ensures that every build is not just compiled, but also prepared for structured and traceable manual testing, fully connected back to Jira.&lt;/p&gt;
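
&lt;p&gt;To make the idea concrete, here is the rough shape of such a step. The endpoint and payload are hypothetical placeholders rather than the platform's actual API, and authentication uses the kind of personal access token described above.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Hypothetical pipeline step: create a test run for the build that just passed.
async function createTestRun(buildId, jiraIssueKey) {
  const res = await fetch('https://qa-platform.example.com/api/test-runs', {
    method: 'POST',
    headers: {
      Authorization: 'Bearer ' + process.env.QA_PLATFORM_TOKEN,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ buildId: buildId, linkedIssue: jiraIssueKey })
  });

  if (!res.ok) throw new Error('Test run creation failed: ' + res.status);
}
&lt;/code&gt;&lt;/pre&gt;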

&lt;p&gt;One subtle but important realization is that your application’s structure influences how effective this integration can be. If your project lacks clear environments, consistent build steps, or reliable test execution, even the best Jira integration will feel unreliable. In other words, CI/CD doesn’t fix chaos—it exposes it.&lt;/p&gt;

&lt;p&gt;What truly defines a good integration is not how many tools you connect, but how well they communicate. A well-integrated setup creates a powerful effect: your Jira board becomes a real-time reflection of your application’s state. You no longer rely on manual updates or status meetings, because the system itself tells the story.&lt;/p&gt;

&lt;p&gt;In the end, integrating Jira and CI/CD into a web application is not about embedding APIs into your frontend or backend. It’s about connecting the lifecycle around your application so tightly that every change is tracked, validated, and visible.&lt;/p&gt;

&lt;p&gt;And once that happens, your application is no longer just code—it becomes part of a system that continuously proves its own quality.&lt;/p&gt;

&lt;p&gt;So the real question is not whether you can integrate Jira and CI/CD…&lt;/p&gt;

&lt;p&gt;…but whether your application lifecycle is structured well enough to support it.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>automation</category>
      <category>testing</category>
      <category>javascript</category>
    </item>
    <item>
      <title>What happens when you give an AI your acceptance criteria and ask it to write test cases?</title>
      <dc:creator>Alexandru A</dc:creator>
      <pubDate>Mon, 06 Apr 2026 06:13:08 +0000</pubDate>
      <link>https://dev.to/programmer4web/what-happens-when-you-give-an-ai-your-acceptance-criteria-and-ask-it-to-write-test-cases-1d3</link>
      <guid>https://dev.to/programmer4web/what-happens-when-you-give-an-ai-your-acceptance-criteria-and-ask-it-to-write-test-cases-1d3</guid>
      <description>&lt;p&gt;After years of building frontend applications across e-health and e-learning products, I've sat in enough sprint reviews to notice a pattern: &lt;em&gt;QA test cases&lt;/em&gt; are written the same way every time. Happy path first, a handful of negative cases if the deadline allows, edge cases if the tester has seen that bug before.&lt;/p&gt;

&lt;p&gt;The process is repetitive, experience-dependent, and the first thing to get cut when a release is running late.&lt;/p&gt;

&lt;p&gt;So I started experimenting — feeding acceptance criteria directly to an AI and asking for a complete test suite. Here's an honest account of what works, what doesn't, and what it actually changes about the process.&lt;/p&gt;

&lt;p&gt;What the AI gets right immediately&lt;/p&gt;

&lt;p&gt;The output quality on structured coverage is genuinely impressive. Given clear acceptance criteria, the AI will produce happy path cases, negative scenarios, boundary conditions, and precondition states faster than any manual process — and it won't skip the boring ones.&lt;/p&gt;

&lt;p&gt;It also structures the output consistently: steps, expected results, preconditions. That consistency alone has value when you're maintaining a growing test library across releases.&lt;/p&gt;
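
&lt;p&gt;For illustration only, an invented case in that shape (the content doesn't matter here, the structure does):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Invented example of the consistent structure: preconditions, steps, expected result.
const testCase = {
  id: 'TC-042',
  title: 'Email field rejects an address without a domain',
  preconditions: ['User is on the registration page', 'User is not authenticated'],
  steps: ['Enter "user@" in the email field', 'Submit the form'],
  expectedResult: 'An inline validation error is shown and the form is not submitted'
};
&lt;/code&gt;&lt;/pre&gt;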

&lt;p&gt;Where it falls short&lt;/p&gt;

&lt;p&gt;The AI has no knowledge of your system beyond what you give it. It doesn't know that your application handles an unauthenticated empty cart differently from an authenticated one, or that a particular field has a known edge case from three sprints ago.&lt;/p&gt;

&lt;p&gt;More critically: vague acceptance criteria produce vague test cases. With a human tester, ambiguity triggers a question. With an AI, it triggers a confident but incorrect assumption. If your requirements only describe the happy path, the generated test suite will skew heavily toward the happy path.&lt;/p&gt;

&lt;p&gt;What actually determines the output quality&lt;/p&gt;

&lt;p&gt;After enough iterations, the pattern is consistent: the quality of the generated tests is almost entirely determined by the quality of the input. A few things that made a measurable difference:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Write constraints explicitly. "The form should validate correctly" is not a requirement. "The email field must reject inputs without an @ symbol and a valid domain" is.&lt;/li&gt;
  &lt;li&gt;Include failure conditions in your acceptance criteria. If you only document what should succeed, the AI will generate tests for success.&lt;/li&gt;
  &lt;li&gt;Specify the user role and context. "As an admin" and "as a guest" produce meaningfully different test suites for the same feature.&lt;/li&gt;
  &lt;li&gt;Add environment context. First-time user vs returning user, mobile vs desktop, authenticated vs unauthenticated — these details shape coverage significantly.&lt;/li&gt;
&lt;/ul&gt;
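
&lt;p&gt;Put together, a criterion that carries all of that context might look something like this (an invented example, not from a real spec):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Invented example of an acceptance criterion with enough context for useful test generation.
const criterion = `
  Feature: registration form, email field
  Role: guest user (not authenticated), desktop browser
  Must accept: addresses with a local part, an @ symbol, and a valid domain
  Must reject: addresses missing the @ symbol or the domain
  On rejection: show an inline validation error and do not submit the form
`;
&lt;/code&gt;&lt;/pre&gt;
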
&lt;p&gt;An honest assessment&lt;/p&gt;

&lt;p&gt;AI doesn't replace a QA engineer. It replaces the first draft.&lt;/p&gt;

&lt;p&gt;A good tester still needs to review the output, discard cases that don't apply to the actual system, and add scenarios based on knowledge no requirements document captures. That judgment isn't going away.&lt;/p&gt;

&lt;p&gt;But the shift from writing to reviewing is more significant than it sounds. Starting with 80% of the test suite already structured means your QA effort goes toward the cases that actually require expertise — the ones that come from understanding the system, not from reading the spec.&lt;/p&gt;

&lt;p&gt;That's a different kind of QA work. Arguably a more valuable one.&lt;/p&gt;

&lt;p&gt;I cover this in more depth in a free QA handbook — link in my profile if you're interested.&lt;/p&gt;

&lt;p&gt;Has anyone else been experimenting with AI-generated test cases? Curious whether the input quality pattern holds across different approaches — and what you've found the AI consistently gets wrong.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>productivity</category>
      <category>testing</category>
    </item>
  </channel>
</rss>
