<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 137Foundry</title>
    <description>The latest articles on DEV Community by 137Foundry (@137foundry).</description>
    <link>https://dev.to/137foundry</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3856342%2F39ac4be7-399f-4f6e-9a32-60abf8a8a324.png</url>
      <title>DEV Community: 137Foundry</title>
      <link>https://dev.to/137foundry</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/137foundry"/>
    <language>en</language>
    <item>
      <title>7 Free UX Tools for Researching and Testing Web Form Design</title>
      <dc:creator>137Foundry</dc:creator>
      <pubDate>Thu, 23 Apr 2026 15:30:19 +0000</pubDate>
      <link>https://dev.to/137foundry/7-free-ux-tools-for-researching-and-testing-web-form-design-35d2</link>
      <guid>https://dev.to/137foundry/7-free-ux-tools-for-researching-and-testing-web-form-design-35d2</guid>
      <description>&lt;p&gt;Designing better forms requires data about how users interact with them. These seven tools help with different parts of that process: understanding where forms fail, testing how users experience them, checking accessibility, and researching what evidence-based form design looks like across products and contexts. All have a meaningful free tier.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Microsoft Clarity
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://clarity.microsoft.com" rel="noopener noreferrer"&gt;Microsoft Clarity&lt;/a&gt; is a free behavioral analytics tool that records user sessions and generates heatmaps of click, scroll, and interaction patterns. For form design, the session recording feature is particularly valuable: you can watch how users interact with specific form fields, where they pause, which fields they re-enter, and at which point they abandon the form.&lt;/p&gt;

&lt;p&gt;Clarity's "rage click" and "dead click" detection automatically flags interactions where users appear frustrated (rapid repeated clicks) or where clicks are not triggering expected responses. Both of these patterns frequently appear in form interaction data and can surface problems with small touch targets, confusing validation states, and non-interactive-looking submit buttons.&lt;/p&gt;

&lt;p&gt;The session recording capability does not capture personally identifiable information or form field contents by default, which makes it safer to use on forms without additional configuration. The free tier includes unlimited session recordings and heatmaps.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Google Analytics 4 (with Event Tracking)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://analytics.google.com" rel="noopener noreferrer"&gt;Google Analytics 4&lt;/a&gt; tracks user behavior across your site and can be configured with custom events to measure form-specific metrics: how many users viewed a form, how many started it, how many completed it, and what percentage abandoned at each step of a multi-step form.&lt;/p&gt;

&lt;p&gt;The funnel analysis feature in GA4 allows you to define a sequence of steps and see the dropout rate at each point. For multi-step forms, this reveals exactly which step drives the most abandonment. For single-page forms with multiple fields, field-level tracking requires implementing custom events manually, but the resulting data is highly specific to your actual form and users.&lt;/p&gt;

&lt;p&gt;GA4 is free at standard traffic volumes. The event tracking setup for forms requires some JavaScript implementation, but the payoff in diagnostic specificity is significant.&lt;/p&gt;
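&lt;p&gt;As a sketch of what that setup involves, each funnel step can be modeled as a custom event with the form's id attached as a parameter. The step names and the &lt;code&gt;formFunnelEvent&lt;/code&gt; helper below are illustrative assumptions, not GA4 built-ins:&lt;/p&gt;

```javascript
// Illustrative helper: build the payload for a GA4 form-funnel event.
// The step names and parameter keys here are assumptions for this
// sketch, not reserved GA4 event names.
function formFunnelEvent(step, formId) {
  const allowed = ['form_view', 'form_start', 'form_step', 'form_submit'];
  if (allowed.indexOf(step) === -1) {
    throw new Error('Unknown funnel step: ' + step);
  }
  return { name: step, params: { form_id: formId } };
}

// In the page, each payload would be forwarded to gtag.js, e.g.:
//   const e = formFunnelEvent('form_start', 'checkout');
//   gtag('event', e.name, e.params);
```

&lt;p&gt;Those events can then be assembled into a GA4 funnel exploration to see step-by-step dropout for the form.&lt;/p&gt;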

&lt;h2&gt;
  
  
  3. Maze (Free Tier)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://maze.design" rel="noopener noreferrer"&gt;Maze&lt;/a&gt; is an unmoderated user testing platform that lets you create tasks for users to complete, including filling out a prototype or live form, and then analyzes where users get stuck or fail. The free tier includes a limited number of tests per month and access to the core path and mission metrics.&lt;/p&gt;

&lt;p&gt;For form testing, Maze is useful for discovering usability problems before launch by having representative users attempt to complete the form while recording where they hesitate, fail, or succeed. The platform aggregates results across multiple participants and shows paths through the form as a visual flow.&lt;/p&gt;

&lt;p&gt;The unmoderated format means testing can happen asynchronously without requiring you to be present, which makes it practical to run a quick test before shipping a form change.&lt;/p&gt;

&lt;p&gt;For the principles behind what these tools help you identify, the guide at &lt;a href="https://137foundry.com/articles/how-to-design-web-forms-users-complete" rel="noopener noreferrer"&gt;137foundry.com/articles/how-to-design-web-forms-users-complete&lt;/a&gt; covers validation patterns, field count, mobile layout, and error message design in detail.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. WAVE Accessibility Checker
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://webaim.org" rel="noopener noreferrer"&gt;WebAIM&lt;/a&gt; produces WAVE, a browser-based accessibility evaluation tool that checks web pages including forms for accessibility errors and warnings. Running WAVE on a form reveals missing labels, insufficient color contrast, unlabeled form controls, and missing ARIA attributes that would make the form inaccessible to users of assistive technology.&lt;/p&gt;

&lt;p&gt;The browser extension version evaluates pages in their current state, including dynamic states like validation errors, which makes it more useful for form accessibility testing than crawling-based tools that only see the initial page state.&lt;/p&gt;

&lt;p&gt;WAVE is free as both a browser extension and a web-based tool. For teams embedding accessibility checks in a development workflow, the API version allows automated scanning as part of a CI pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Axe DevTools (Free Browser Extension)
&lt;/h2&gt;

&lt;p&gt;The axe DevTools browser extension from &lt;a href="https://www.deque.com" rel="noopener noreferrer"&gt;Deque Systems&lt;/a&gt; performs automated accessibility audits on web pages. Like WAVE, it identifies accessibility violations and provides specific guidance on how to fix them.&lt;/p&gt;

&lt;p&gt;Where axe differentiates itself for development teams is in its integration with the browser DevTools panel, making it easy to inspect specific elements alongside their accessibility issues. The extension is built on the same axe-core rules used by tools like Jest-axe and Playwright's accessibility testing APIs, which means issues found in browser testing with axe are consistent with what automated testing will catch.&lt;/p&gt;

&lt;p&gt;The free extension covers a substantial portion of WCAG 2.1 violations. The paid DevTools Pro version adds guided testing and more comprehensive rule sets.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. The A11y Project Checklist
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.a11yproject.com" rel="noopener noreferrer"&gt;A11y Project&lt;/a&gt; maintains a comprehensive checklist of web accessibility requirements organized by WCAG criteria. For form design specifically, the checklist covers labels, error identification, keyboard navigation, focus management, and timeout notifications, all in plain language that is more actionable than reading the WCAG specification directly.&lt;/p&gt;

&lt;p&gt;This is a reference tool rather than a testing tool, but using it as a design checklist before building a form reduces the number of accessibility fixes required after testing. It is particularly useful for designers and developers who are not accessibility specialists and need a clear, prioritized list of what to check.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Nielsen Norman Group Research Reports (Free Articles)
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.nngroup.com" rel="noopener noreferrer"&gt;Nielsen Norman Group&lt;/a&gt; makes a substantial portion of its UX research findings freely available in article form. For form design, the NNG article archive covers field ordering, label placement, error message design, mobile form patterns, multi-step form design, and checkout UX in detail backed by usability studies.&lt;/p&gt;

&lt;p&gt;While the full research reports require a subscription or purchase, the free articles provide enough evidence-based guidance to inform most form design decisions. Searching the NNG archive for "form design" or "form usability" returns a large set of relevant articles that can be used as a reference layer alongside your own testing data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ag5g8e9yh2hv7vn2dyf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ag5g8e9yh2hv7vn2dyf.jpg" alt="person laptop testing interface design form ux" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by &lt;a href="https://pixabay.com/users/Pexels-2286921/" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt; on &lt;a href="https://pixabay.com" rel="noopener noreferrer"&gt;Pixabay&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How These Tools Work Together
&lt;/h2&gt;

&lt;p&gt;Using these tools together covers the full form design and validation cycle. Clarity and Google Analytics provide behavioral data from real users on your live forms. Maze lets you test with representative users before or alongside launch. WAVE and Axe check accessibility compliance at the implementation level. The A11y Project gives you a reference checklist for design decisions. NNG research provides the evidence base for why certain patterns work and others do not.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;UX and web studio 137Foundry&lt;/a&gt; builds and tests forms as part of broader web design and development projects. The &lt;a href="https://137foundry.com/services/web-development" rel="noopener noreferrer"&gt;web development services page&lt;/a&gt; describes how form design and UX testing fit into our project process.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.w3.org" rel="noopener noreferrer"&gt;World Wide Web Consortium&lt;/a&gt; maintains the WCAG accessibility standards that WAVE, Axe, and the A11y Project checklist are built around, and provides the authoritative reference for understanding accessibility requirements at a specification level.&lt;/p&gt;

&lt;p&gt;The most effective approach to form improvement combines at least two of these tools: one that provides behavioral data from real users (Clarity, GA4) and one that provides a way to understand the why behind that behavior (Maze user testing, NNG research). Behavioral data tells you where users stop. User testing and research tell you why. Acting on behavioral data without understanding why the abandonment is happening can lead to fixing symptoms rather than the underlying design problem. The combination of quantitative data and qualitative insight is what produces form improvements that hold up over time rather than winning a single A/B test and then plateauing.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ux</category>
      <category>tools</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How Inline Validation Reduces Form Abandonment and Errors</title>
      <dc:creator>137Foundry</dc:creator>
      <pubDate>Thu, 23 Apr 2026 15:28:31 +0000</pubDate>
      <link>https://dev.to/137foundry/how-inline-validation-reduces-form-abandonment-and-errors-5258</link>
      <guid>https://dev.to/137foundry/how-inline-validation-reduces-form-abandonment-and-errors-5258</guid>
      <description>&lt;p&gt;Form validation is one of the most consequential UX decisions in web development. The same set of validation rules, implemented with two different timing strategies, can produce meaningfully different completion rates. Inline validation, where feedback appears field-by-field as users progress through a form, consistently outperforms submit-and-validate-all patterns for user experience and completion.&lt;/p&gt;

&lt;p&gt;This article covers how inline validation works, when to use it, how to implement it correctly, and the specific patterns that make it effective versus counterproductive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Submit-Time Validation Creates a Poor Experience
&lt;/h2&gt;

&lt;p&gt;The traditional validation pattern, validating every field only when the user clicks submit, creates several compounding problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error discovery is deferred.&lt;/strong&gt; The user completes the entire form before learning anything is wrong. At that point they have the most invested in the task and the most to lose psychologically if they have to redo work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error location requires searching.&lt;/strong&gt; Validation errors returned after submit are typically shown at the top of the form or highlighted inline, but the user must scroll back through the form to find each highlighted field. On a long form, this requires significant navigation. On mobile, it can feel like starting over.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multiple errors appear simultaneously.&lt;/strong&gt; When several fields fail validation at once, users face a list of errors to work through. Each one requires re-reading the instructions, locating the field, and correcting it. The cognitive and emotional cost compounds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;False success signals occur.&lt;/strong&gt; A user who fills in a field incorrectly but receives no feedback until submitting believes the field is fine until the error appears. The correction feels like a reversal rather than a natural part of the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Inline Validation Changes
&lt;/h2&gt;

&lt;p&gt;Inline validation checks each field individually after the user leaves it (on blur). Feedback appears immediately below the field while the user is still in the context of that section of the form. Errors are corrected one at a time, at the moment of lowest cost.&lt;/p&gt;

&lt;p&gt;The research on this is consistent. A widely cited usability study by Luke Wroblewski, published in A List Apart ("Inline Validation in Web Forms"), found that inline validation reduced errors by 22%, cut completion time by 42%, and increased satisfaction scores compared with after-submit validation of the same form content. The gains are largest for long forms and forms with complex field requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;Web design agency 137Foundry&lt;/a&gt; implements inline validation as the default validation pattern on forms built for client projects. The principle is covered in our broader form design guide at &lt;a href="https://137foundry.com/articles/how-to-design-web-forms-users-complete" rel="noopener noreferrer"&gt;137foundry.com/articles/how-to-design-web-forms-users-complete&lt;/a&gt;, which covers field count, input types, mobile layout, and confirmation experience alongside validation strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Critical Implementation Detail: Validate on Blur, Not on Input
&lt;/h2&gt;

&lt;p&gt;The most common inline validation mistake is triggering validation while the user is still typing (on the &lt;code&gt;input&lt;/code&gt; event). This produces false errors constantly.&lt;/p&gt;

&lt;p&gt;An email field checked on input will show "invalid email" the moment the user types a single character. A user who has not yet typed the @ symbol is not making an error; they are in the middle of typing. Checking at this point creates visual noise and anxiety without providing useful feedback.&lt;/p&gt;

&lt;p&gt;The correct event to validate on is &lt;code&gt;blur&lt;/code&gt;, which fires when the user moves focus out of the field. At that point, the user has finished entering their input and validation feedback is appropriate and timely.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#email&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;blur&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;function &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;validateField&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
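&lt;p&gt;A useful refinement on top of the blur handler: once a field has been flagged as invalid, re-check it on every keystroke so the error clears the moment the input becomes valid ("validate late, re-validate early"). A minimal sketch, where the &lt;code&gt;showError&lt;/code&gt; and &lt;code&gt;clearError&lt;/code&gt; callbacks and the email check are illustrative assumptions:&lt;/p&gt;

```javascript
// Deliberately simple email check, for illustration only.
function isValidEmail(value) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value.trim());
}

// Validate on blur; while the field is in an error state, also
// re-validate on input so the error clears as soon as it is fixed.
function attachHybridValidation(field, validate, showError, clearError) {
  let hasError = false;

  field.addEventListener('blur', () => {
    hasError = !validate(field.value);
    if (hasError) {
      showError(field);
    } else {
      clearError(field);
    }
  });

  field.addEventListener('input', () => {
    // Only react while the field is already flagged as invalid
    if (hasError) {
      if (validate(field.value)) {
        hasError = false;
        clearError(field);
      }
    }
  });
}
```

&lt;p&gt;This keeps the first feedback on blur, so there is no noise while typing, while making error recovery feel immediate.&lt;/p&gt;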



&lt;p&gt;For interdependent fields, where one field's validity depends on another's value, re-validate the dependent field whenever either one changes. A "confirm password" field, for example, should be re-checked when the password field changes, not only when the confirmation field loses focus.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#password&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;confirm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#confirm-password&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;confirm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;blur&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;validateMatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;confirm&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;blur&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;confirm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;validateMatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;confirm&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Error Message Placement and Content
&lt;/h2&gt;

&lt;p&gt;Error messages should appear immediately below the field they describe, in the reading flow between the field and the next element. They should be visible without scrolling, associated with the field via &lt;code&gt;aria-describedby&lt;/code&gt; for screen reader accessibility, and dismissed automatically when the user corrects the error.&lt;/p&gt;

&lt;p&gt;Message content should be specific about what is wrong and what the correct format is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;label&lt;/span&gt; &lt;span class="na"&gt;for=&lt;/span&gt;&lt;span class="s"&gt;"phone"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;Phone number&lt;span class="nt"&gt;&amp;lt;/label&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt;
  &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"tel"&lt;/span&gt;
  &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"phone"&lt;/span&gt;
  &lt;span class="na"&gt;aria-describedby=&lt;/span&gt;&lt;span class="s"&gt;"phone-error"&lt;/span&gt;
  &lt;span class="na"&gt;aria-invalid=&lt;/span&gt;&lt;span class="s"&gt;"true"&lt;/span&gt;
&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;p&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"phone-error"&lt;/span&gt; &lt;span class="na"&gt;role=&lt;/span&gt;&lt;span class="s"&gt;"alert"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  Enter a phone number with 10 digits, like 5551234567
&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;aria-invalid="true"&lt;/code&gt; attribute signals to screen readers that the field has an error. The &lt;code&gt;role="alert"&lt;/code&gt; on the error paragraph causes screen readers to announce the message when it appears, without requiring the user to navigate to it. The &lt;a href="https://developer.mozilla.org" rel="noopener noreferrer"&gt;Mozilla Developer Network&lt;/a&gt; provides the full reference for ARIA form patterns, and the &lt;a href="https://www.w3.org/WAI" rel="noopener noreferrer"&gt;Web Accessibility Initiative&lt;/a&gt; documents the accessibility requirements for form error identification.&lt;/p&gt;
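&lt;p&gt;Wiring those attributes up from JavaScript is a small amount of code. A sketch, assuming an input and its error container like the &lt;code&gt;#phone&lt;/code&gt; and &lt;code&gt;#phone-error&lt;/code&gt; elements above (the function names are illustrative):&lt;/p&gt;

```javascript
// Apply the error state: mark the field invalid and populate the
// role="alert" container so screen readers announce the message.
function setFieldError(field, errorEl, message) {
  field.setAttribute('aria-invalid', 'true');
  errorEl.textContent = message;
  errorEl.hidden = false;
}

// Clear the error state once the user corrects the input.
function clearFieldError(field, errorEl) {
  field.removeAttribute('aria-invalid');
  errorEl.textContent = '';
  errorEl.hidden = true;
}
```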

&lt;h2&gt;
  
  
  Visual Design of Inline Validation States
&lt;/h2&gt;

&lt;p&gt;Each field should have three visible states beyond the default: active/focused, valid, and error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Active/focused:&lt;/strong&gt; A clear focus ring that meets WCAG 2.1 contrast requirements. Do not remove the native focus ring without providing a visible alternative.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Valid:&lt;/strong&gt; A subtle success indicator, typically a green checkmark or border color change, that appears when the user leaves a field after entering acceptable input. Keep this understated; a form that aggressively celebrates each correct field becomes noisy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error:&lt;/strong&gt; A red border, error icon, and the error message. Red should not be the only indicator (for color-blind users); combine it with an icon and the text message.&lt;/p&gt;

&lt;p&gt;Avoid using placeholder text to communicate required format or examples. Placeholder text disappears when the user starts typing, which means they cannot reference it if they are unsure what to enter. Visible hint text below the label, present before the user interacts with the field, is the correct pattern.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing Inline Validation With Automated Tools
&lt;/h2&gt;

&lt;p&gt;Inline validation introduces dynamic content changes to the DOM, which means your standard HTML validation pass may not catch all issues. Testing should cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Keyboard navigation:&lt;/strong&gt; Tab through all fields and verify that every field can be reached, error messages appear in the right place, and the form can be corrected and submitted without a mouse.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Screen reader testing:&lt;/strong&gt; Use NVDA (on Windows) or VoiceOver (on macOS and iOS) to verify that errors are announced at the right moment and associated correctly with their fields.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated accessibility checks:&lt;/strong&gt; Tools like Axe (from &lt;a href="https://www.deque.com" rel="noopener noreferrer"&gt;deque.com&lt;/a&gt;) and the built-in browser DevTools accessibility panel catch missing ARIA attributes, insufficient color contrast, and unlabeled fields.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Example: programmatically triggering validation for testing&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;runValidationTests&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fields&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelectorAll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[data-validate]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;fields&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;field&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;field&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dispatchEvent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;blur&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://www.a11yproject.com" rel="noopener noreferrer"&gt;A11y Project&lt;/a&gt; maintains a checklist that covers the accessibility requirements for form validation states. &lt;a href="https://webaim.org" rel="noopener noreferrer"&gt;WebAIM&lt;/a&gt; provides additional documentation on accessible form design and testing approaches.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Show Positive Confirmation
&lt;/h2&gt;

&lt;p&gt;Not every field needs a success state. For fields where the validity criteria are simple and familiar (email, phone number, date), a success indicator after the user leaves the field provides reassurance that the input was accepted. For fields with complex or unusual requirements (password strength, specific numeric ranges), the success state after validation is more valuable because it confirms that the requirements were met.&lt;/p&gt;

&lt;p&gt;For password fields, showing strength feedback while the user is typing (on the &lt;code&gt;input&lt;/code&gt; event) is one of the few legitimate exceptions to the blur-validation rule, because the feedback is progressive and genuinely useful during input, not a false error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;input&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;function &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;updateStrengthMeter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
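&lt;p&gt;What &lt;code&gt;updateStrengthMeter&lt;/code&gt; does internally is left open above, but the scoring half of it can be a small pure function. A sketch under assumed scoring rules (length plus character variety); production implementations often use a dedicated library such as zxcvbn instead:&lt;/p&gt;

```javascript
// Illustrative strength scorer: 0 (weak) to 4 (strong), based on
// length and character variety. The thresholds are assumptions.
function passwordStrength(value) {
  let score = 0;
  if (value.length >= 8) score += 1;
  if (value.length >= 12) score += 1;
  if (/[A-Z]/.test(value)) {
    if (/[a-z]/.test(value)) score += 1; // mixed case
  }
  if (/[0-9]|[^A-Za-z0-9]/.test(value)) score += 1; // digit or symbol
  return score;
}

// updateStrengthMeter(value) would then map this score to the meter UI,
// e.g. ['weak', 'weak', 'fair', 'good', 'strong'][passwordStrength(value)]
```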



&lt;p&gt;The &lt;a href="https://www.nngroup.com" rel="noopener noreferrer"&gt;Nielsen Norman Group&lt;/a&gt; has published specific research on password field design and strength meter usability that provides useful reference for this specific case.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Inline validation on the blur event, with specific error messages, correct ARIA attributes, and clear visual states, is consistently better than submit-time validation for both users and completion rates. The implementation is straightforward in vanilla JavaScript and can be adapted to any front-end framework. The gains in completion rate, user satisfaction, and error reduction are well-documented and reliably reproducible by applying the pattern correctly.&lt;/p&gt;

&lt;p&gt;For the broader context on how validation fits into a complete form UX strategy, the &lt;a href="https://137foundry.com/services" rel="noopener noreferrer"&gt;137Foundry services page&lt;/a&gt; covers the web design and development work where these patterns are applied in production.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ux</category>
      <category>javascript</category>
      <category>productivity</category>
    </item>
    <item>
      <title>7 Free Tools for Testing and Analyzing HTTP Caching Behavior</title>
      <dc:creator>137Foundry</dc:creator>
      <pubDate>Wed, 22 Apr 2026 11:15:31 +0000</pubDate>
      <link>https://dev.to/137foundry/7-free-tools-for-testing-and-analyzing-http-caching-behavior-2n29</link>
      <guid>https://dev.to/137foundry/7-free-tools-for-testing-and-analyzing-http-caching-behavior-2n29</guid>
      <description>&lt;p&gt;Getting HTTP caching right is mostly a matter of setting the correct headers. But knowing whether you set them correctly requires being able to inspect actual response headers, simulate cache behavior, and verify that resources are being cached and invalidated the way you intend.&lt;/p&gt;

&lt;p&gt;These seven tools let you do that without paying for anything. They cover browser-level inspection, command-line header analysis, performance auditing, and CDN-layer caching behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Chrome DevTools Network Panel
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.google.com/chrome" rel="noopener noreferrer"&gt;Chrome DevTools&lt;/a&gt; is the fastest way to inspect cache headers for any resource a page loads. Open the Network panel, load a page with cache disabled, and click any request to see its response headers.&lt;/p&gt;

&lt;p&gt;The panel shows &lt;code&gt;Cache-Control&lt;/code&gt;, &lt;code&gt;ETag&lt;/code&gt;, &lt;code&gt;Last-Modified&lt;/code&gt;, &lt;code&gt;Expires&lt;/code&gt;, and &lt;code&gt;Vary&lt;/code&gt; headers directly. On subsequent loads, the Size column displays "disk cache" or "memory cache" for resources served from cache. The status column shows 304 for revalidated resources.&lt;/p&gt;

&lt;p&gt;The Lighthouse tab in DevTools includes a "Serve static assets with an efficient cache policy" audit that lists every resource with a TTL under one week and estimates the bandwidth savings from extending it.&lt;/p&gt;

&lt;p&gt;This should be the first tool in any caching audit workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. curl
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://curl.se" rel="noopener noreferrer"&gt;curl&lt;/a&gt; is the most reliable way to inspect HTTP headers from the command line. It makes actual HTTP requests to your server or CDN and displays the raw response headers.&lt;/p&gt;

&lt;p&gt;To see just the response headers without downloading the body:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-I&lt;/span&gt; https://example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To follow redirects and see all response headers along the way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-IL&lt;/span&gt; https://example.com/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-I&lt;/code&gt; flag sends a HEAD request. Some servers handle HEAD differently from GET; for those resources, run &lt;code&gt;curl -s -D - -o /dev/null https://example.com/&lt;/code&gt; instead, which issues a normal GET, prints the response headers, and discards the body.&lt;/p&gt;

&lt;p&gt;curl is particularly useful for checking how your CDN modifies cache headers relative to what your origin server sends. Run the same command against the CDN URL and the origin URL directly and compare the output. Differences between the two often explain why a resource appears to cache correctly at the origin but fails to cache at the edge.&lt;/p&gt;
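
&lt;p&gt;As a sketch, a small helper makes that comparison repeatable. The hostnames below are placeholders for your own CDN and origin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Print only the caching-relevant headers for a URL (hypothetical helper).
cache_headers() {
  curl -sI "$1" | grep -iE '^(cache-control|etag|age|x-cache|vary):'
}

# Run the same check against the CDN URL and the origin URL, then diff by eye.
cache_headers https://cdn.example.com/styles.css
cache_headers https://origin.example.com/styles.css
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;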

&lt;h2&gt;
  
  
  3. WebPageTest
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/WPO-Foundation/webpagetest" rel="noopener noreferrer"&gt;WebPageTest&lt;/a&gt; is a free, open-source performance testing tool that runs synthetic tests from multiple geographic locations. It measures real page load times including the effect of caching on repeat visits.&lt;/p&gt;

&lt;p&gt;The "Repeat View" feature runs the same test twice: once for a first-time visitor and once for a returning visitor who has cached resources. The difference between the two load times tells you how much your current cache configuration is helping for repeat visitors.&lt;/p&gt;

&lt;p&gt;WebPageTest also produces a waterfall chart that shows when each resource was requested, whether it was cached, and what the response headers contained. This is useful for identifying resources that should be cached but are not.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. PageSpeed Insights
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://pagespeed.web.dev" rel="noopener noreferrer"&gt;PageSpeed Insights&lt;/a&gt; is Google's free performance analysis tool that runs &lt;a href="https://developer.chrome.com/docs/lighthouse" rel="noopener noreferrer"&gt;Lighthouse&lt;/a&gt; audits against any public URL. Its "Serve static assets with an efficient cache policy" audit surfaces resources with short or missing cache TTLs and estimates the bandwidth savings from extending them.&lt;/p&gt;

&lt;p&gt;Because PageSpeed Insights runs Lighthouse server-side, results are consistent and reproducible regardless of which device or browser you are testing from. This makes it useful for confirming that a cache configuration change had the intended effect after deployment without relying on local browser state.&lt;/p&gt;

&lt;p&gt;The tool separates lab data from field data. Lab data shows what Lighthouse measured in a controlled test run. Field data draws from the Chrome User Experience Report, giving you a sense of real-world caching performance across actual user visits. For cache header auditing, the lab data section is most directly relevant because it shows exactly which headers each resource returned during the test.&lt;/p&gt;

&lt;p&gt;For teams that ship frequently, running PageSpeed Insights against a production URL after each deployment is a low-cost check that cache header regressions have not crept in through new asset types or updated server configurations. The audit output names each offending resource alongside its current TTL and a recommended minimum, which maps directly to the &lt;code&gt;Cache-Control&lt;/code&gt; directives you need to adjust.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Redbot
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://redbot.org" rel="noopener noreferrer"&gt;Redbot&lt;/a&gt; is a purpose-built HTTP header analysis tool maintained by the HTTP working group community. You enter a URL and it fetches the resource and analyzes the response headers in detail, explaining what each directive means and flagging problems.&lt;/p&gt;

&lt;p&gt;Redbot explains why a header is or is not correct, not just whether it is present. For developers learning caching behavior, this explanatory output is more useful than a binary pass/fail.&lt;/p&gt;

&lt;p&gt;It checks &lt;code&gt;Cache-Control&lt;/code&gt;, &lt;code&gt;ETag&lt;/code&gt;, &lt;code&gt;Last-Modified&lt;/code&gt;, &lt;code&gt;Vary&lt;/code&gt;, &lt;code&gt;Content-Encoding&lt;/code&gt;, and several other headers, and it follows redirects to check the headers at the final destination.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Fastly's Cache Simulator
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.fastly.com" rel="noopener noreferrer"&gt;Fastly&lt;/a&gt; provides a free cache behavior simulator as part of their developer documentation. It lets you input response headers and see how a CDN interprets them, including which directives control what behavior at the shared cache layer.&lt;/p&gt;

&lt;p&gt;While Fastly is a paid CDN service, the simulator itself is free and useful for understanding CDN-specific behavior independently of which CDN you actually use. Different CDNs have different default behaviors for responses without explicit &lt;code&gt;public&lt;/code&gt; directives or for responses that include &lt;code&gt;Set-Cookie&lt;/code&gt; headers.&lt;/p&gt;

&lt;p&gt;The simulator is particularly useful for verifying how &lt;code&gt;s-maxage&lt;/code&gt; and &lt;code&gt;stale-while-revalidate&lt;/code&gt; behave at the CDN layer before you deploy.&lt;/p&gt;
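
&lt;p&gt;For example, a header like the following (an illustrative policy, not a recommendation) is the kind of input worth testing in the simulator: a shared cache may serve the response for 10 minutes, then continue serving the stale copy for up to 60 seconds while it revalidates in the background:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cache-Control: public, s-maxage=600, stale-while-revalidate=60
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;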

&lt;blockquote&gt;
&lt;p&gt;"The most common caching audit finding we see is long-lived HTML pages referencing short-lived assets. Two header changes fix the whole pattern." - Dennis Traina, &lt;a href="https://137foundry.com/services" rel="noopener noreferrer"&gt;founder of 137Foundry&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  7. Nginx Logs With Cache Hit Analysis
&lt;/h2&gt;

&lt;p&gt;If you are running &lt;a href="https://www.nginx.com" rel="noopener noreferrer"&gt;Nginx&lt;/a&gt; as a reverse proxy or CDN equivalent, its proxy cache module logs include a &lt;code&gt;$upstream_cache_status&lt;/code&gt; variable that reports whether each request was a HIT, MISS, BYPASS, or EXPIRED in the cache.&lt;/p&gt;

&lt;p&gt;Adding this variable to your access log format gives you a real-time cache hit rate breakdown without any additional tooling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;log_format&lt;/span&gt; &lt;span class="s"&gt;cache_log&lt;/span&gt; &lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="nv"&gt;$remote_addr&lt;/span&gt; &lt;span class="s"&gt;-&lt;/span&gt; &lt;span class="nv"&gt;$upstream_cache_status&lt;/span&gt; &lt;span class="s"&gt;-&lt;/span&gt; &lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$request&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;$status&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;access_log&lt;/span&gt; &lt;span class="n"&gt;/var/log/nginx/cache.log&lt;/span&gt; &lt;span class="s"&gt;cache_log&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After collecting a few thousand requests, parsing the log for cache status gives you a practical hit rate for each URL pattern. A consistently low hit rate on resources that should be cacheable points to a configuration problem.&lt;/p&gt;
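
&lt;p&gt;With the &lt;code&gt;cache_log&lt;/code&gt; format above, the status is the third whitespace-separated field, so a short awk pass produces the breakdown. The sample lines below stand in for real log entries; on a live server you would point awk at &lt;code&gt;/var/log/nginx/cache.log&lt;/code&gt; instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Count each cache status and print its share of total requests.
printf '%s\n' \
  '203.0.113.5 - HIT - "GET /app.js HTTP/1.1" 200' \
  '203.0.113.5 - MISS - "GET /index.html HTTP/1.1" 200' \
  '203.0.113.5 - HIT - "GET /app.js HTTP/1.1" 200' |
awk '{ count[$3]++; total++ }
     END { for (s in count) printf "%s %d (%.0f%%)\n", s, count[s], 100 * count[s] / total }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;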

&lt;p&gt;This approach works for any Nginx-based cache, including Nginx configured as a local caching proxy in front of a Node.js or Python application.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Use These Tools Together
&lt;/h2&gt;

&lt;p&gt;A typical caching audit workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use Chrome DevTools to identify resources with missing or short &lt;code&gt;Cache-Control&lt;/code&gt; headers&lt;/li&gt;
&lt;li&gt;Verify the exact headers with curl from the command line to confirm what the CDN is passing through&lt;/li&gt;
&lt;li&gt;Run WebPageTest or PageSpeed Insights to see the repeat-visit improvement from fixing the headers&lt;/li&gt;
&lt;li&gt;Use Redbot on individual URLs to get detailed explanations for anything unclear&lt;/li&gt;
&lt;li&gt;Use Nginx logs or Fastly's simulator to verify CDN-layer behavior&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The audit is iterative rather than one-time. New resources added after a deployment often inherit whatever default header configuration is in place, which may not match the correct TTL for their type. Reviewing caching headers after each major deployment is a low-effort way to catch these regressions before they compound into a significant performance difference. Automating the check with curl against a known resource list as part of your deployment verification process catches most of these regressions without a manual audit.&lt;/p&gt;
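
&lt;p&gt;A minimal sketch of that automated check, assuming a hand-maintained list of resources (the URLs here are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Fail the deployment check if any listed resource is missing a
# Cache-Control header (hypothetical URLs; substitute your own).
urls="https://example.com/assets/app.js
https://example.com/assets/app.css"

for url in $urls; do
  if ! curl -sI "$url" | grep -qi '^cache-control:'; then
    echo "missing Cache-Control: $url"
    exit 1
  fi
done
echo "cache headers present on all listed resources"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Extending the grep to require a minimum &lt;code&gt;max-age&lt;/code&gt; per resource type is a natural next step once the basic check is in place.&lt;/p&gt;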

&lt;p&gt;For a deeper look at the caching patterns behind these checks, the article &lt;a href="https://137foundry.com/articles/http-caching-web-application-guide" rel="noopener noreferrer"&gt;HTTP Caching: A Practical Guide for Web Developers&lt;/a&gt; covers the strategy behind what the tools surface. &lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;137Foundry&lt;/a&gt; includes caching configuration as a standard part of web application delivery. &lt;a href="https://developer.mozilla.org" rel="noopener noreferrer"&gt;MDN's HTTP caching documentation&lt;/a&gt; provides the canonical reference for every header and directive these tools analyze.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw95tc4yvbg8cmi0cfujk.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw95tc4yvbg8cmi0cfujk.jpeg" alt="developer tools browser showing http request headers and cache status" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by svetlana photographer on &lt;a href="https://www.pexels.com" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How HTTP Caching Works at the Browser, CDN, and Proxy Layer</title>
      <dc:creator>137Foundry</dc:creator>
      <pubDate>Wed, 22 Apr 2026 11:09:56 +0000</pubDate>
      <link>https://dev.to/137foundry/how-http-caching-works-at-the-browser-cdn-and-proxy-layer-j5h</link>
      <guid>https://dev.to/137foundry/how-http-caching-works-at-the-browser-cdn-and-proxy-layer-j5h</guid>
      <description>&lt;p&gt;HTTP caching is not one thing. It is a set of behaviors that happen at different layers of the network stack, each governed by the same response headers but producing different effects depending on which layer is doing the caching.&lt;/p&gt;

&lt;p&gt;Understanding each layer separately makes it much easier to diagnose caching problems, because a bug that looks like "the browser is not caching this" is often actually "the CDN is stripping the header before it reaches the browser." Treating all caching as one system obscures the actual source of the problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Three Cache Layers
&lt;/h2&gt;

&lt;p&gt;A typical web request passes through three caches on its way from origin server to user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The browser cache&lt;/strong&gt; is local to the user's device. It stores responses that the server marks as cacheable, keyed by URL. Subsequent requests for the same URL check this cache first. If the stored response is still fresh, the request never leaves the device.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The CDN edge cache&lt;/strong&gt; is a shared cache maintained by your CDN provider at geographic nodes distributed around the world. When a user requests a resource, the CDN node closest to them checks whether it has a cached copy. If it does, it serves the response directly. If not, it fetches from the origin and caches the response for future requests in that region.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Intermediate proxies&lt;/strong&gt; sit between the user and the CDN, or between the CDN and origin, depending on network topology. Corporate networks often include forward proxies that cache responses on behalf of internal users. These proxies also consult Cache-Control directives.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Freshness Works
&lt;/h2&gt;

&lt;p&gt;A cached response has a lifetime. The server signals how long the response should be considered fresh via the &lt;code&gt;max-age&lt;/code&gt; directive in the &lt;code&gt;Cache-Control&lt;/code&gt; header.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Cache-Control: max-age=3600&lt;/code&gt; means the response is fresh for 3,600 seconds after it was received. During that window, caches at any layer can serve the stored response without consulting the server.&lt;/p&gt;
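
&lt;p&gt;For scripting, the directive is easy to pull out of a header line. A minimal sketch, not a full &lt;code&gt;Cache-Control&lt;/code&gt; parser; it assumes a single &lt;code&gt;max-age&lt;/code&gt; directive is present:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Extract the max-age value from a Cache-Control header line.
max_age() {
  printf '%s\n' "$1" | sed -n 's/.*max-age=\([0-9][0-9]*\).*/\1/p'
}

max_age 'Cache-Control: max-age=3600'
# prints 3600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;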

&lt;p&gt;After the window expires, the response is stale. A stale response can still be served in some cases, but the cache should attempt to revalidate it first. Revalidation sends a conditional request to the server: "I have this response from earlier. Has anything changed?"&lt;/p&gt;

&lt;p&gt;The server responds either with &lt;code&gt;304 Not Modified&lt;/code&gt;, which means the cached copy is still valid and can be served, or with a full &lt;code&gt;200 OK&lt;/code&gt; response containing the updated content.&lt;/p&gt;

&lt;p&gt;Freshness applies independently at each cache layer. A browser cache entry might expire before a CDN cache entry for the same resource, or vice versa, depending on how long the resource has been cached at each layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Browser Cache Decides What to Store
&lt;/h2&gt;

&lt;p&gt;The browser caches a response if the response headers permit it. The decision involves several rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The request method must be GET or HEAD. POST responses are generally not cached.&lt;/li&gt;
&lt;li&gt;The response status must be cacheable (200, 301, 302, 404, and a few others are cacheable by default).&lt;/li&gt;
&lt;li&gt;The response must not include &lt;code&gt;Cache-Control: no-store&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If &lt;code&gt;Cache-Control: private&lt;/code&gt; is present, the response is cached only in the browser, not in shared caches.&lt;/li&gt;
&lt;li&gt;If no Cache-Control header is present, the browser may cache heuristically based on &lt;code&gt;Last-Modified&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a cached response becomes stale, the browser sends a conditional request. If the response included an &lt;code&gt;ETag&lt;/code&gt; header, the browser sends &lt;code&gt;If-None-Match: "etag-value"&lt;/code&gt;. If the response included &lt;code&gt;Last-Modified&lt;/code&gt;, the browser sends &lt;code&gt;If-Modified-Since: timestamp&lt;/code&gt;. The server responds with 304 if the resource has not changed, allowing the browser to extend the life of its cached copy.&lt;/p&gt;
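
&lt;p&gt;You can reproduce this revalidation exchange by hand with curl. A sketch, assuming the resource returns an &lt;code&gt;ETag&lt;/code&gt; (example.com is a placeholder; a resource using only &lt;code&gt;Last-Modified&lt;/code&gt; would use &lt;code&gt;If-Modified-Since&lt;/code&gt; instead):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Capture the ETag from a first response, then revalidate with
# If-None-Match; a 304 status means the cached copy is still valid.
etag=$(curl -sI https://example.com/ | awk -F': ' 'tolower($1) == "etag" { print $2 }' | tr -d '\r')
curl -s -o /dev/null -w '%{http_code}\n' -H "If-None-Match: $etag" https://example.com/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;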

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zs0vfqhuj9g6bia8jsg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zs0vfqhuj9g6bia8jsg.jpeg" alt="browser network waterfall showing cached and uncached resources" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Markus Spiske on &lt;a href="https://www.pexels.com" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How the CDN Cache Differs From the Browser Cache
&lt;/h2&gt;

&lt;p&gt;CDN caches are shared: they store responses that are served to many different users. This introduces considerations that browser caches do not have.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Personalization.&lt;/strong&gt; The CDN must not store responses that differ by user. &lt;code&gt;Cache-Control: private&lt;/code&gt; tells shared caches, including CDNs, not to store the response. Without this directive, a CDN might cache a personalized response and serve it to the wrong user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cache keys.&lt;/strong&gt; CDNs cache by URL by default. If a response varies by request header (e.g., different responses for mobile vs. desktop based on &lt;code&gt;User-Agent&lt;/code&gt;), the &lt;code&gt;Vary&lt;/code&gt; header must be included so the CDN stores separate entries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TTL differentiation.&lt;/strong&gt; &lt;code&gt;s-maxage&lt;/code&gt; lets you specify a TTL for shared caches independently of the browser TTL. &lt;code&gt;Cache-Control: max-age=60, s-maxage=3600&lt;/code&gt; gives the browser a 1-minute freshness window and the CDN a 1-hour window. This pattern is useful for resources where you want the CDN to cache aggressively but the browser to check for updates more often.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purging.&lt;/strong&gt; Unlike browser caches, CDN caches can be cleared server-side. Most CDNs offer an API to purge cached entries by URL, path prefix, or tag. This enables cache invalidation as part of a deployment pipeline.&lt;/p&gt;
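
&lt;p&gt;As one concrete example, Cloudflare exposes purging through a zone-level endpoint; the zone ID and API token below are placeholders, and other CDNs offer similar but differently shaped APIs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Purge a single URL from the CDN cache (placeholder credentials).
curl -s -X POST \
  "https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/purge_cache" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"files":["https://example.com/app.js"]}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;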

&lt;h2&gt;
  
  
  How Intermediate Proxies Behave
&lt;/h2&gt;

&lt;p&gt;Intermediate proxies follow the same HTTP caching spec as CDNs. They respect &lt;code&gt;Cache-Control: private&lt;/code&gt; to avoid storing personalized responses, and they honor &lt;code&gt;no-store&lt;/code&gt; to skip caching entirely.&lt;/p&gt;

&lt;p&gt;The main practical difference is that you typically cannot predict which proxies a request might pass through, and you cannot purge their caches. A corporate proxy that caches a response with a long TTL will continue serving that response until the TTL expires, regardless of what happens on your origin.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Cache-Control: no-cache&lt;/code&gt; directive is useful here. It allows responses to be stored in intermediate caches but requires revalidation before serving. This means even a proxy that has cached the response for a long time will check with the server before serving it to a new request.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Service Workers Interact With HTTP Caching
&lt;/h2&gt;

&lt;p&gt;Service workers add a programmable cache layer between the browser and the network. They can intercept fetch requests, serve responses from their own cache, and bypass HTTP caching entirely.&lt;/p&gt;

&lt;p&gt;A service worker's cache is independent of the browser's HTTP cache. A resource cached by a service worker may be served even when the HTTP cache would have revalidated or rejected the cached copy.&lt;/p&gt;

&lt;p&gt;This means HTTP caching behavior and service worker behavior can conflict. If you are debugging a caching issue in an application that uses a service worker, check whether the service worker is intercepting the request before checking the HTTP cache configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymuc3pr26h8k4pg3qfp0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymuc3pr26h8k4pg3qfp0.jpeg" alt="server and CDN cache architecture diagram concept" width="800" height="532"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Brett Sayles on &lt;a href="https://www.pexels.com" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Practical Takeaway
&lt;/h2&gt;

&lt;p&gt;Each cache layer operates independently but responds to the same response headers. &lt;code&gt;Cache-Control&lt;/code&gt; is the single header that controls all of them, with directives that target different layers specifically.&lt;/p&gt;

&lt;p&gt;The most reliable approach to multi-layer caching:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;public, max-age=31536000, immutable&lt;/code&gt; for static assets with content-addressed URLs&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;no-cache&lt;/code&gt; for HTML pages so every cache stores them but revalidates with the origin before serving&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;private, no-store&lt;/code&gt; for personalized API responses&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;s-maxage&lt;/code&gt; to differentiate CDN and browser TTLs when needed&lt;/li&gt;
&lt;li&gt;Add &lt;code&gt;Vary&lt;/code&gt; for any response where content differs by request header&lt;/li&gt;
&lt;/ul&gt;
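
&lt;p&gt;A quick way to confirm the policy above survived deployment is to spot-check the headers each URL actually returns; the URLs here are placeholders for your own assets and pages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# A content-addressed asset should return the long-lived immutable policy.
curl -sI https://example.com/assets/app.abc123.js | grep -i '^cache-control:'

# An HTML page should return the revalidation policy.
curl -sI https://example.com/ | grep -i '^cache-control:'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;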

&lt;p&gt;For more detail on how these directives interact with CDN behavior and deployment workflows, &lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;137Foundry&lt;/a&gt; covers caching configuration as part of web application delivery at &lt;a href="https://137foundry.com/services" rel="noopener noreferrer"&gt;our services page&lt;/a&gt;. The full article &lt;a href="https://137foundry.com/articles/http-caching-web-application-guide" rel="noopener noreferrer"&gt;HTTP Caching: A Practical Guide for Web Developers&lt;/a&gt; covers each directive in depth. The &lt;a href="https://httpwg.org" rel="noopener noreferrer"&gt;HTTP caching RFC at the IETF&lt;/a&gt; is the authoritative specification if you need to resolve ambiguous behavior. &lt;a href="https://web.dev" rel="noopener noreferrer"&gt;Google's web.dev caching guide&lt;/a&gt; is the most accessible reference for practical application. For CDN-specific caching behavior and how edge nodes modify response headers, &lt;a href="https://www.cloudflare.com" rel="noopener noreferrer"&gt;Cloudflare's caching documentation&lt;/a&gt; is the most detailed public reference for a widely deployed CDN.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizj2fdot9vp0lfx5ys2o.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizj2fdot9vp0lfx5ys2o.jpeg" alt="developer reviewing web performance metrics on dashboard" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Daniil Komov on &lt;a href="https://www.pexels.com" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>7 Free AI Coding Tools Worth Adding to Your Development Workflow</title>
      <dc:creator>137Foundry</dc:creator>
      <pubDate>Tue, 21 Apr 2026 11:10:42 +0000</pubDate>
      <link>https://dev.to/137foundry/7-free-ai-coding-tools-worth-adding-to-your-development-workflow-130k</link>
      <guid>https://dev.to/137foundry/7-free-ai-coding-tools-worth-adding-to-your-development-workflow-130k</guid>
      <description>&lt;p&gt;The AI coding tool landscape has expanded quickly. Most developers know about the major offerings, but fewer have a clear sense of what each one is actually best for, or how they complement each other in a real development workflow.&lt;/p&gt;

&lt;p&gt;These seven tools are either free or have a meaningful free tier, and each addresses a different part of the development cycle. Knowing what each one does well - and what it does not - helps you choose where to introduce AI assistance without creating new overhead or unnecessary tool sprawl.&lt;/p&gt;

&lt;p&gt;The quality of output from all of these tools improves significantly with better prompting. The guide on &lt;a href="https://137foundry.com/articles/effective-prompts-ai-coding-assistants-production-code" rel="noopener noreferrer"&gt;how to write effective prompts for AI coding assistants&lt;/a&gt; covers the prompting patterns that apply regardless of which tool you are using. The tool is not the limiting factor in most cases - the quality of the context you provide is.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. GitHub Copilot (Free for Students and Open Source)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub Copilot&lt;/a&gt; is the most widely deployed AI coding assistant in professional settings. It integrates directly into &lt;a href="https://code.visualstudio.com/" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt;, &lt;a href="https://www.jetbrains.com/" rel="noopener noreferrer"&gt;JetBrains&lt;/a&gt; IDEs, and several other editors. The free tier is available for students and open source maintainers; paid plans cover professional use.&lt;/p&gt;

&lt;p&gt;Copilot is strongest on completion tasks - finishing functions where the pattern is already established by the surrounding code. It indexes your open files and provides inline suggestions that follow the style and conventions of what it can see. For codebases with consistent patterns, Copilot's suggestions align well with existing conventions without requiring much explicit prompting.&lt;/p&gt;

&lt;p&gt;Where Copilot is weaker: tasks that require integrating context from many different parts of the codebase, or implementations that require knowledge of your team's specific architectural decisions. These tasks benefit from more explicit prompting than Copilot's inline completion interface naturally supports.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Cursor (Free Tier Available)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://cursor.com/" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt; is an AI-native code editor built on VS Code. The key differentiator from Copilot is the multi-file context awareness and the ability to reference the entire codebase when generating code. Cursor indexes your repository and can use that index when generating implementations, which means it can follow your project's existing patterns more reliably than tools that only see open files.&lt;/p&gt;

&lt;p&gt;Cursor also exposes a chat interface for longer, more complex prompts where you describe the task and provide context explicitly. This is where the prompting patterns covered in the full article apply directly - the more specific context you provide, the more accurately Cursor uses its codebase index to produce output that fits your architecture.&lt;/p&gt;

&lt;p&gt;The free tier has meaningful usage limits. For development work that requires frequent multi-file generation, the paid tier is more practical.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Tabnine (Free Tier Available)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.tabnine.com/" rel="noopener noreferrer"&gt;Tabnine&lt;/a&gt; provides AI code completion with a focus on privacy. Unlike tools that send your code to external servers, Tabnine offers a local model option that runs entirely on your machine. This matters for teams working on proprietary codebases that cannot be shared with external APIs.&lt;/p&gt;

&lt;p&gt;Tabnine integrates with most major editors and supports a wide range of languages. The local model is smaller and less capable than the cloud models powering Copilot or Cursor, but it shares no code with external services - a tradeoff that is the right call in many professional settings.&lt;/p&gt;

&lt;p&gt;For teams where data residency or code confidentiality is a hard requirement, Tabnine's local option provides AI-assisted completion without the compliance concerns of cloud-based tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Claude via the API (Free Trial, Then Paid)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.anthropic.com/" rel="noopener noreferrer"&gt;Anthropic&lt;/a&gt; offers access to Claude through their API, which has a free trial tier. Claude handles complex, multi-turn prompting conversations particularly well - which makes it useful for the kind of structured, context-rich prompting that produces the best AI-generated code.&lt;/p&gt;

&lt;p&gt;Unlike editor-integrated tools, using Claude via the API or the web interface requires more manual context management - you paste in the relevant code, types, and constraints yourself. But this manual approach also means you have full control over what context the model receives, which is the key variable in prompt quality. When the prompting is done well, Claude produces function-level and class-level code that is often closer to production-ready than what editor-integrated tools produce for the same task.&lt;/p&gt;

&lt;p&gt;For teams at &lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;137Foundry&lt;/a&gt; that need reliable AI-generated code for complex features, using Claude for the more complex prompting sessions and an editor-integrated tool for inline completion is a common and effective combination.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Codeium (Free for Individuals)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://codeium.com/" rel="noopener noreferrer"&gt;Codeium&lt;/a&gt; provides free AI code completion and chat for individual developers. It integrates with VS Code, JetBrains, Vim, Emacs, and a growing list of other editors. The completions are competitive with Copilot in quality for most standard tasks, and the free tier has no meaningful usage limits for individual use.&lt;/p&gt;

&lt;p&gt;Codeium also includes a chat interface for longer prompts, similar to Cursor's. For developers who want editor-integrated AI assistance without a subscription cost, Codeium is currently the strongest free alternative to Copilot. The same structured prompting patterns that improve output from Copilot or Cursor apply here: include the function signature, the types, and the constraints, and the completion quality improves significantly regardless of which underlying model powers the suggestions.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. OpenAI Codex / ChatGPT (Free Tier Available)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://openai.com/" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt; provides access to code generation through ChatGPT and the API. The free tier of ChatGPT allows code generation conversations with GPT-4o, which handles code tasks competently across most standard languages and frameworks.&lt;/p&gt;

&lt;p&gt;ChatGPT is particularly useful for one-off code generation tasks where you need to describe a problem in natural language and see a plausible implementation. The chat format supports iterative refinement - you can provide feedback on the initial output and ask for specific changes without rewriting the entire prompt.&lt;/p&gt;

&lt;p&gt;Like Claude, ChatGPT requires manual context management. The quality of the output is directly proportional to the quality of the context you provide.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. ESLint with AI Rules (Free, Open Source)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://eslint.org/" rel="noopener noreferrer"&gt;ESLint&lt;/a&gt; is not an AI coding tool in the conventional sense, but it belongs in any AI-assisted development workflow for a practical reason: it catches the most common issues in AI-generated code automatically.&lt;/p&gt;

&lt;p&gt;AI coding assistants regularly produce code that passes type checking but violates your team's style rules, relies on deprecated patterns, or fails your configured lint checks. Running ESLint on AI-generated code before review catches a significant portion of these issues without manual inspection.&lt;/p&gt;

&lt;p&gt;Pairing AI code generation with automated linting provides a quality gate that does not consume developer time on the most mechanical review tasks. The combination - AI for generation, ESLint for style and pattern enforcement - produces more reliable output than either alone.&lt;/p&gt;

&lt;p&gt;For teams using &lt;a href="https://www.typescriptlang.org/" rel="noopener noreferrer"&gt;TypeScript&lt;/a&gt;, adding strict TypeScript checks alongside ESLint catches the type-related issues that AI-generated code is most prone to, particularly around null handling and return type precision.&lt;/p&gt;
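&lt;p&gt;A minimal sketch of the compiler options involved (an illustrative starting point, not a complete &lt;code&gt;tsconfig.json&lt;/code&gt;; tsc accepts the JSONC comments shown):&lt;/p&gt;

```json
{
  "compilerOptions": {
    // "strict" enables strictNullChecks, which catches the null-handling
    // gaps AI-generated TypeScript is most prone to.
    "strict": true,
    // Indexed access (obj[key]) yields T | undefined instead of T.
    "noUncheckedIndexedAccess": true,
    // Flags functions where some code paths silently return undefined.
    "noImplicitReturns": true
  }
}
```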

&lt;p&gt;The guide on &lt;a href="https://137foundry.com/articles/effective-prompts-ai-coding-assistants-production-code" rel="noopener noreferrer"&gt;how to write effective prompts for AI coding assistants&lt;/a&gt; explains how these tools fit into a structured prompting workflow. The quality gate provided by ESLint and TypeScript catches issues in AI-generated output automatically, which reduces the manual review time needed before code is ready to ship. The &lt;a href="https://137foundry.com/services/ai-automation" rel="noopener noreferrer"&gt;AI automation services&lt;/a&gt; at &lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;137Foundry&lt;/a&gt; support teams in integrating these tools into a cohesive development workflow that includes AI generation, automated validation, and structured code review.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to Structure Your Prompts for AI Coding Assistants When Building Complex Features</title>
      <dc:creator>137Foundry</dc:creator>
      <pubDate>Tue, 21 Apr 2026 11:08:36 +0000</pubDate>
      <link>https://dev.to/137foundry/how-to-structure-your-prompts-for-ai-coding-assistants-when-building-complex-features-1ooi</link>
      <guid>https://dev.to/137foundry/how-to-structure-your-prompts-for-ai-coding-assistants-when-building-complex-features-1ooi</guid>
      <description>&lt;p&gt;AI coding assistants perform well on small, bounded tasks. Ask for a utility function with clear inputs and outputs, and you will usually get something close to what you need. The challenge is complex features - multi-step implementations that span multiple components, involve existing infrastructure, and require coordination across parts of the codebase the model has not seen.&lt;/p&gt;

&lt;p&gt;For complex features, the prompting approach that works for simple tasks breaks down. The model does not have enough context to make good architectural decisions, and a single generic prompt produces output that needs to be completely restructured.&lt;/p&gt;

&lt;p&gt;This guide covers how to decompose complex features into a sequence of bounded prompts, how to carry context between them, and how to validate each step before moving to the next.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Map the Feature Into Bounded Subtasks Before Prompting
&lt;/h2&gt;

&lt;p&gt;The single most effective thing you can do before prompting for a complex feature is to write out the implementation steps as you would plan them yourself. List the components that need to change, the new code that needs to be written, and the integration points between them.&lt;/p&gt;

&lt;p&gt;This is planning work you would do regardless of whether you are using an AI coding assistant. The difference is that this plan becomes the structure for your prompts. Each discrete implementation step becomes its own prompt, with explicit context about where it fits in the larger feature.&lt;/p&gt;

&lt;p&gt;A feature that touches five files and requires three new functions should be approached as five separate prompting sessions, not one. Each session focuses on one piece of the implementation, includes context about the adjacent pieces, and produces output you can verify before moving to the next step.&lt;/p&gt;

&lt;p&gt;The time to do this decomposition is before you open your AI coding tool, not after you have received generic output and are trying to figure out what went wrong.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Write the Data Layer Prompt First
&lt;/h2&gt;

&lt;p&gt;For features that require new data access or transformation, start with the data layer. This gives you real type definitions and function signatures to include in all subsequent prompts.&lt;/p&gt;

&lt;p&gt;Prompt for the data schema changes, the query functions, and the repository interface. Include your existing database schema context and your ORM conventions. Specify return types explicitly, including the null/error cases.&lt;/p&gt;

&lt;p&gt;Once you have working data layer code - verified against your test suite - you have concrete interfaces to reference in every subsequent prompt. You can paste in the actual TypeScript types or Python dataclasses that were generated, and the model will use them accurately in the next layer.&lt;/p&gt;
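&lt;p&gt;As an illustration of the kind of verified data-layer output worth pasting forward (the &lt;code&gt;User&lt;/code&gt; and &lt;code&gt;UserRepository&lt;/code&gt; names here are hypothetical), in Python:&lt;/p&gt;

```python
from dataclasses import dataclass
from typing import Optional, Protocol

# Hypothetical output of the data-layer step. Pasting these verified
# definitions into the business-logic prompt anchors the model to real
# types instead of letting it imagine the interface.

@dataclass(frozen=True)
class User:
    id: int
    email: str
    active_session_id: Optional[str]  # None when the user is logged out

class UserRepository(Protocol):
    def get_by_id(self, user_id: int) -> Optional[User]:
        """Returns None when no user exists - the null case is explicit."""
        ...

    def save(self, user: User) -> User:
        ...
```

&lt;p&gt;Note that the null and error cases appear directly in the signatures, so the next prompt cannot silently assume a user always exists.&lt;/p&gt;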

&lt;p&gt;This is the key principle behind sequential prompting for complex features: each completed step provides real, verified context for the next one. You are not asking the model to imagine what the data layer might look like - you are showing it exactly what it produced.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cursor.com/" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt; and similar tools that index your codebase can surface some of this automatically. But for non-trivial features, explicitly including the verified output from earlier steps is more reliable than trusting automatic context retrieval.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Write the Business Logic Prompt With the Data Types Included
&lt;/h2&gt;

&lt;p&gt;With verified data layer types in hand, write the business logic prompt. Paste in the relevant data types and repository interface from the previous step. This is explicit context the model cannot generate on its own.&lt;/p&gt;

&lt;p&gt;Specify the input to the business logic function (probably an event or user action) and the output (the resulting state change or response). Include any domain rules that are not obvious from the data types: "a user can only have one active session at a time," "the price must be recalculated whenever quantity changes," "admin users bypass the rate limit check."&lt;/p&gt;

&lt;p&gt;These domain rules are the things the model cannot infer from your data schema. They are also the things most likely to be missing from the output if you do not specify them. Stating them explicitly produces code that enforces them correctly from the first iteration.&lt;/p&gt;
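&lt;p&gt;A small sketch of what stating the rule buys you, using the one-active-session example (names are hypothetical):&lt;/p&gt;

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class User:
    id: int
    active_session_id: Optional[str]

class SessionConflictError(Exception):
    """Raised when the one-active-session rule would be violated."""

def start_session(user: User, new_session_id: str) -> User:
    # Domain rule stated explicitly in the prompt: a user can only have
    # one active session at a time. Nothing in the data schema implies
    # this check, so the model will not add it unprompted.
    if user.active_session_id is not None:
        raise SessionConflictError(f"user {user.id} already has a session")
    return replace(user, active_session_id=new_session_id)
```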

&lt;p&gt;At &lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;137Foundry&lt;/a&gt;, we have found that domain rule specification is the most commonly skipped element in developer prompts for business logic, and it is the most expensive omission - because domain rule errors often require significant restructuring to fix rather than simple edits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Write the API or Handler Layer With the Business Logic Interface Included
&lt;/h2&gt;

&lt;p&gt;With verified business logic complete, prompt for the API layer or handler. Include the business logic function signature and return type from the previous step.&lt;/p&gt;

&lt;p&gt;Specify the request shape (route parameters, request body, headers that matter), the response shapes for success and each error case, and the authentication or authorization requirements. If you have existing handler code in the project, include a short example of how your handlers are structured to establish the pattern.&lt;/p&gt;

&lt;p&gt;The output at this layer should be structurally consistent with your existing handlers. Pasting in one similar handler from your codebase as an example is usually sufficient to establish the pattern - the model will follow it.&lt;/p&gt;
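&lt;p&gt;A hypothetical handler sketch showing the shape of what this prompt should produce - the request fields, each error response, and the business-logic signature all come from the prompt, not from the model's imagination:&lt;/p&gt;

```python
from typing import Any, Callable

def handle_start_session(
    request: dict[str, Any],
    start_session: Callable[[int, str], str],
) -> dict[str, Any]:
    # Validate the request shape specified in the prompt.
    user_id = request.get("user_id")
    session_id = request.get("session_id")
    if not isinstance(user_id, int) or not isinstance(session_id, str):
        return {"status": 400, "error": "user_id (int) and session_id (str) are required"}
    try:
        # Call the verified business-logic function from the previous step.
        session = start_session(user_id, session_id)
    except KeyError:
        # Each error case maps to an explicit response shape.
        return {"status": 404, "error": "user not found"}
    return {"status": 200, "session": session}
```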

&lt;p&gt;&lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub Copilot&lt;/a&gt; handles this kind of pattern-following well when the context is visible. For handlers in languages like TypeScript, including the relevant framework documentation or type imports in the prompt also helps anchor the output to your specific framework's conventions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Write Integration Points One at a Time
&lt;/h2&gt;

&lt;p&gt;If the feature requires integration with external services - a payment processor, an email provider, an analytics platform - prompt for each integration separately. Do not combine multiple external integrations into a single prompt.&lt;/p&gt;

&lt;p&gt;For each integration, include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The external service's interface or SDK type definitions (paste in the relevant parts)&lt;/li&gt;
&lt;li&gt;Your existing wrapper or client for the service, if one exists&lt;/li&gt;
&lt;li&gt;The specific method calls you need&lt;/li&gt;
&lt;li&gt;How errors from the external service should be handled and propagated&lt;/li&gt;
&lt;/ul&gt;
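&lt;p&gt;The elements above combine into a wrapper like this sketch (the SDK and method names are hypothetical; in a real prompt you would paste the vendor's actual types):&lt;/p&gt;

```python
class EmailDeliveryError(Exception):
    """Internal error type - SDK exceptions never leak past the wrapper."""

class EmailClientWrapper:
    """Hypothetical wrapper: the prompt names the exact SDK method needed
    and specifies how its errors map to internal errors."""

    def __init__(self, sdk_client):
        # sdk_client: the vendor's SDK object, whose real interface was
        # pasted into the prompt rather than recalled from training data.
        self._client = sdk_client

    def send(self, to: str, subject: str, body: str) -> str:
        try:
            response = self._client.send_message(to=to, subject=subject, body=body)
        except Exception as exc:  # the prompt lists the SDK's real exception types
            raise EmailDeliveryError(str(exc)) from exc
        return response["message_id"]
```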

&lt;p&gt;External service integrations are a common source of hallucinated APIs in AI-generated code. The model knows about many external APIs from its training, but API details change and specific method signatures can be wrong. Including the actual SDK types in your prompt gives the model the correct interface to work against.&lt;/p&gt;

&lt;p&gt;If you do not have the SDK types available to paste, reference the official documentation homepage (&lt;a href="https://www.anthropic.com/" rel="noopener noreferrer"&gt;Anthropic&lt;/a&gt;, for example, publishes SDK documentation) and avoid relying on the model to know the exact method signatures from memory.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Validate Each Layer Before Moving to the Next
&lt;/h2&gt;

&lt;p&gt;Sequential prompting only works if each layer is actually validated before you proceed. Running the tests for the data layer before prompting for business logic catches type mismatches and schema issues before they cascade into higher layers.&lt;/p&gt;

&lt;p&gt;If you move to the next step before validating the previous one, errors compound. A wrong return type in the data layer produces wrong types in the business logic, which produces wrong types in the handler, and you end up debugging a cascade of type errors that all trace back to one early mistake.&lt;/p&gt;

&lt;p&gt;The validation step does not need to be thorough - it needs to confirm that the interface is correct, the types align, and the basic behavior works. A minimal test covering the happy path and one error case is enough to verify the contract before moving on.&lt;/p&gt;
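&lt;p&gt;A minimal contract check might look like this sketch (shown with plain asserts against a stand-in data-layer function; pytest works the same way):&lt;/p&gt;

```python
def get_user(user_id: int):
    # Stand-in for the generated data-layer function under test.
    users = {1: {"id": 1, "email": "a@example.com"}}
    return users.get(user_id)  # returns None for the missing-user case

def test_returns_user_for_known_id():
    # Happy path: the contract's success shape.
    user = get_user(1)
    assert user is not None and user["email"] == "a@example.com"

def test_returns_none_for_unknown_id():
    # One error case: the null contract is honored.
    assert get_user(999) is None

test_returns_user_for_known_id()
test_returns_none_for_unknown_id()
```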

&lt;p&gt;This discipline is what makes sequential prompting for complex features reliable. Each step builds on verified output from the previous one rather than assuming it. The output at each stage is real code you can read and test, not an optimistic assumption about what the model might have produced.&lt;/p&gt;

&lt;p&gt;The full approach to structured prompting, including how to handle cases where the model produces output that does not align with your expectations, is covered in the guide on &lt;a href="https://137foundry.com/articles/effective-prompts-ai-coding-assistants-production-code" rel="noopener noreferrer"&gt;how to write effective prompts for AI coding assistants&lt;/a&gt;. The &lt;a href="https://137foundry.com/services/ai-automation" rel="noopener noreferrer"&gt;AI automation services&lt;/a&gt; at 137Foundry also include workflow support for teams building this kind of sequential prompting practice into their development process.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>programming</category>
    </item>
    <item>
      <title>7 Tools That Help You Review and Validate AI-Generated Code in Your Pipeline</title>
      <dc:creator>137Foundry</dc:creator>
      <pubDate>Mon, 20 Apr 2026 04:53:08 +0000</pubDate>
      <link>https://dev.to/137foundry/7-tools-that-help-you-review-and-validate-ai-generated-code-in-your-pipeline-22ci</link>
      <guid>https://dev.to/137foundry/7-tools-that-help-you-review-and-validate-ai-generated-code-in-your-pipeline-22ci</guid>
      <description>&lt;p&gt;AI coding assistants are fast. Code review is slow. The gap between those two speeds is where problems accumulate.&lt;/p&gt;

&lt;p&gt;These seven tools address different parts of the review and validation problem. Some run in CI, some at commit time, some in your editor. Together, they form a reasonable stack for teams that are shipping a meaningful share of AI-generated code and want systematic quality gates rather than relying entirely on reviewer attention.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma0pc9y5snd361bn41e9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma0pc9y5snd361bn41e9.jpeg" alt="developer reviewing code in an IDE on a large monitor" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Digital Buggu on &lt;a href="https://www.pexels.com" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. pre-commit
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://pre-commit.com" rel="noopener noreferrer"&gt;pre-commit&lt;/a&gt; is a framework for managing Git hooks. You configure it with a YAML file that specifies which linters, formatters, and checks run before each commit. For AI-assisted codebases, it catches style drift and convention violations before they reach a pull request.&lt;/p&gt;

&lt;p&gt;The value is automation at the earliest possible point. By the time a reviewer sees the code, pre-commit has already enforced your import conventions, formatting rules, and any other static checks you've configured. That frees up reviewer time for logic and correctness rather than style.&lt;/p&gt;

&lt;p&gt;Setup is a one-time investment. Configuration is declarative and version-controlled. Every developer installs the hooks once with &lt;code&gt;pre-commit install&lt;/code&gt;.&lt;/p&gt;
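&lt;p&gt;A minimal &lt;code&gt;.pre-commit-config.yaml&lt;/code&gt; sketch to start from (the hooks shown are illustrative; pin &lt;code&gt;rev&lt;/code&gt; to versions you have reviewed):&lt;/p&gt;

```yaml
# .pre-commit-config.yaml - minimal starting point, extend per project
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-merge-conflict
```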

&lt;h2&gt;
  
  
  2. ESLint
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://eslint.org" rel="noopener noreferrer"&gt;ESLint&lt;/a&gt; is the standard static analysis tool for JavaScript and TypeScript. For AI-assisted TypeScript codebases specifically, it is worth configuring with strict rules for type narrowing and explicit return types.&lt;/p&gt;

&lt;p&gt;AI tools frequently generate TypeScript that compiles but leaves type assertions implicit or relies on type inference in ways that produce unexpected behavior at runtime. A strict ESLint configuration surfaces these patterns during development rather than in production.&lt;/p&gt;
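&lt;p&gt;An illustrative &lt;code&gt;.eslintrc&lt;/code&gt; fragment along these lines (assuming the typescript-eslint plugin is installed; adjust severities to taste):&lt;/p&gt;

```json
{
  "extends": ["plugin:@typescript-eslint/strict-type-checked"],
  "rules": {
    "@typescript-eslint/explicit-function-return-type": "error",
    "@typescript-eslint/no-non-null-assertion": "error",
    "@typescript-eslint/strict-boolean-expressions": "error"
  }
}
```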

&lt;p&gt;ESLint integrates with pre-commit (to run at commit time) and with &lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt; (to run on every pull request). Run it in both places: locally for fast feedback, in CI to enforce it independently of local hook installation.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. mypy
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://mypy-lang.org" rel="noopener noreferrer"&gt;mypy&lt;/a&gt; is the standard static type checker for Python. For teams using AI tools to generate Python code, mypy catches a specific and common failure mode: method calls that do not exist on the inferred type.&lt;/p&gt;

&lt;p&gt;AI tools learn from large corpora of Python code and sometimes suggest methods that existed in an older version of a library, belong to a different class, or were fabricated entirely. mypy catches these before they ship.&lt;/p&gt;
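&lt;p&gt;A small sketch of the null-handling variant of this failure mode (function and keys are hypothetical):&lt;/p&gt;

```python
from typing import Optional

def find_port(config: dict[str, str]) -> Optional[int]:
    raw = config.get("port")  # Optional[str]: .get can return None
    if raw is None:
        return None
    return int(raw)

# An AI suggestion that skips the None check - e.g. `int(config.get("port"))` -
# works whenever the key is present, but mypy rejects it because int() does
# not accept Optional[str]. The error surfaces before the missing-key case
# crashes in production.
```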

&lt;p&gt;Configure mypy with &lt;code&gt;--strict&lt;/code&gt; for new codebases or add it incrementally to existing ones with &lt;code&gt;--ignore-missing-imports&lt;/code&gt; as a starting point. Integrate with pre-commit for local checks and add it to your CI pipeline for PR enforcement.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Semgrep
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://semgrep.dev" rel="noopener noreferrer"&gt;Semgrep&lt;/a&gt; is an open-source static analysis tool that supports custom rules. For AI-generated code, it is particularly useful for enforcing patterns that ESLint and mypy don't cover: business logic rules, security patterns, or project-specific conventions.&lt;/p&gt;

&lt;p&gt;Examples of rules Semgrep handles well: "never call this deprecated internal API directly," "always use our wrapper around the authentication library," "external HTTP requests must go through our rate limiter." These are the kinds of constraints AI tools have no way of knowing about and that reviewers frequently need to catch manually.&lt;/p&gt;

&lt;p&gt;You can write custom Semgrep rules for your specific codebase conventions, or use the community-maintained &lt;a href="https://semgrep.dev/explore" rel="noopener noreferrer"&gt;Semgrep Registry&lt;/a&gt; for common security and quality checks.&lt;/p&gt;
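&lt;p&gt;A sketch of a custom rule for one of the examples above (rule id, message, and the internal wrapper name are hypothetical):&lt;/p&gt;

```yaml
rules:
  - id: use-rate-limited-http-wrapper
    # Hypothetical project convention: external HTTP requests must go
    # through the internal wrapper so they pass the rate limiter.
    pattern: requests.get(...)
    message: Use internal_http.get() so requests pass through the rate limiter.
    languages: [python]
    severity: ERROR
```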

&lt;h2&gt;
  
  
  5. Codecov
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://codecov.io" rel="noopener noreferrer"&gt;Codecov&lt;/a&gt; tracks test coverage and shows coverage changes per pull request. For AI-assisted workflows, it answers a specific question reviewers often have: is this AI-generated code actually tested?&lt;/p&gt;

&lt;p&gt;AI tools generate code that looks correct but may have untested branches. Codecov's PR comments highlight exactly which lines were added but not covered by the test suite. A coverage threshold requirement in CI (blocking PRs that drop coverage below a certain percentage) creates a forcing function for testing AI-generated logic.&lt;/p&gt;

&lt;p&gt;Codecov integrates with GitHub Actions and most major CI platforms. Configuration is a YAML file and a CI step.&lt;/p&gt;
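&lt;p&gt;An illustrative &lt;code&gt;codecov.yml&lt;/code&gt; showing both kinds of threshold (targets are examples to adapt, not recommendations):&lt;/p&gt;

```yaml
# codecov.yml
coverage:
  status:
    project:
      default:
        target: auto
        threshold: 1%   # fail if overall coverage drops more than 1%
    patch:
      default:
        target: 80%     # new/changed lines must be at least 80% covered
```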

&lt;h2&gt;
  
  
  6. Snyk
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://snyk.io" rel="noopener noreferrer"&gt;Snyk&lt;/a&gt; scans code for security vulnerabilities, focusing on dependencies and known vulnerability patterns. For AI-generated code, it catches a common problem: suggestions that import vulnerable package versions or use patterns with known security implications.&lt;/p&gt;

&lt;p&gt;AI tools suggest packages based on training data that may predate a known vulnerability. They also sometimes suggest patterns (string interpolation in SQL queries, eval with user input) that appear in training data and are known to be problematic.&lt;/p&gt;

&lt;p&gt;Snyk runs as a CI check and integrates with pull request workflows. It produces actionable output: the specific vulnerability, the affected line, and a suggested fix.&lt;/p&gt;
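&lt;p&gt;An illustrative GitHub Actions step (the &lt;code&gt;snyk/actions&lt;/code&gt; repository provides per-language variants; check Snyk's docs for the one matching your stack):&lt;/p&gt;

```yaml
- name: Snyk vulnerability scan
  uses: snyk/actions/node@master
  env:
    SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
  with:
    args: --severity-threshold=high
```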

&lt;h2&gt;
  
  
  7. SonarCloud
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://sonarcloud.io" rel="noopener noreferrer"&gt;SonarCloud&lt;/a&gt; provides code quality analysis across multiple dimensions: bugs, code smells, security hotspots, and maintainability ratings. For AI-assisted codebases, the "code smells" and "maintainability" dimensions are particularly relevant.&lt;/p&gt;

&lt;p&gt;AI tools sometimes generate code that is technically correct but structured in ways that will create maintenance problems: deeply nested conditionals, duplicated logic, methods that do too much. SonarCloud surfaces these patterns on every pull request with context about why they are flagged.&lt;/p&gt;

&lt;p&gt;SonarCloud's free tier covers public repositories and integrates with GitHub Actions. The setup is a workflow YAML file and a project token.&lt;/p&gt;
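&lt;p&gt;An illustrative workflow step (project key and organization come from your SonarCloud project settings):&lt;/p&gt;

```yaml
- name: SonarCloud scan
  uses: SonarSource/sonarcloud-github-action@master
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}  # for PR decoration
    SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}    # the project token
```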




&lt;p&gt;For a broader guide on building the Git workflow that connects these tools together, read &lt;a href="https://137foundry.com/articles/integrate-ai-coding-tools-git-workflow" rel="noopener noreferrer"&gt;How to Integrate AI Coding Tools Into Your Git Workflow Without Losing Control&lt;/a&gt;. &lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;137Foundry&lt;/a&gt; works with development teams on AI tool integration, including the toolchain configuration described above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frud577cl9rq10xset5c4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frud577cl9rq10xset5c4.jpeg" alt="software quality analysis dashboard on screen" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Daniil Komov on &lt;a href="https://www.pexels.com" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing Your Stack
&lt;/h2&gt;

&lt;p&gt;You don't need all seven tools to start. A useful starting configuration is: pre-commit for local enforcement, your language's type checker in CI, and Codecov for coverage tracking. That covers the three most common failure modes in AI-generated code reviews: style drift, type errors, and untested paths.&lt;/p&gt;

&lt;p&gt;Add Semgrep when you have project-specific patterns you want to enforce systematically. Add Snyk when your project has a significant dependency surface area. Add SonarCloud when code maintainability is a priority and you want systematic tracking over time.&lt;/p&gt;

&lt;p&gt;The goal is a review process where automated tools handle the detectable, repetitive checks and human reviewers focus on correctness, intent, and the edge cases no tool can know about.&lt;/p&gt;

&lt;h2&gt;
  
  
  What These Tools Don't Replace
&lt;/h2&gt;

&lt;p&gt;Automated tooling handles the checks that are formulaic and repeatable. It does not replace the review question that matters most: did the AI-generated code solve the actual problem it was supposed to solve?&lt;/p&gt;

&lt;p&gt;That question requires a reviewer who understands the intent behind the change, knows the relevant system, and is reading the code actively rather than looking for a green CI badge. The tools in this list automate the mechanical checks. The human review is still the part that catches logic errors, misunderstood requirements, and assumptions the AI made that don't match your system's reality.&lt;/p&gt;

&lt;p&gt;Configure the tools, run them consistently, and use the time they save for the review work that requires judgment. That combination - automated gates plus focused human review - is what makes AI-assisted development sustainable at scale rather than a liability that grows with team size.&lt;/p&gt;

&lt;p&gt;Good tooling and good review habits reinforce each other. When developers know that pre-commit, CI, and coverage checks will catch the mechanical issues, they write more focused review comments about the things that actually require their knowledge of the system. And when reviewers consistently catch the intent-level problems that no tool can see, the overall quality of AI-assisted output improves across the team.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to Set Up Pre-Commit Hooks for Teams Using AI Coding Assistants</title>
      <dc:creator>137Foundry</dc:creator>
      <pubDate>Mon, 20 Apr 2026 04:52:18 +0000</pubDate>
      <link>https://dev.to/137foundry/how-to-set-up-pre-commit-hooks-for-teams-using-ai-coding-assistants-i87</link>
      <guid>https://dev.to/137foundry/how-to-set-up-pre-commit-hooks-for-teams-using-ai-coding-assistants-i87</guid>
      <description>&lt;p&gt;AI coding assistants write syntactically correct code most of the time. They are considerably less reliable about respecting your project's import conventions, naming patterns, or type contracts. That gap is exactly what pre-commit hooks close.&lt;/p&gt;

&lt;p&gt;This is a step-by-step guide to setting up pre-commit hooks for a team that uses AI coding tools. The setup applies to both Python and JavaScript/TypeScript projects. The concepts transfer to any language with a linter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8j2ov1lpl3ae5zyj80q.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8j2ov1lpl3ae5zyj80q.jpeg" alt="terminal window showing pre-commit hook running" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Nemuel Sereti on &lt;a href="https://www.pexels.com" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Pre-Commit Hooks Matter More With AI Tools
&lt;/h2&gt;

&lt;p&gt;Without AI tools, style and convention drift usually happens slowly. One developer's personal habits creep in here and there, and code review catches most of it.&lt;/p&gt;

&lt;p&gt;With AI tools, convention drift can happen fast. A developer working with &lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub Copilot&lt;/a&gt; or &lt;a href="https://cursor.sh" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt; accepts 20 suggestions in an afternoon. Each suggestion is syntactically valid and logically plausible. But several of them use import paths that differ from your project's convention, name variables using a pattern the tool learned from a different codebase, or skip error handling that your team has agreed is mandatory.&lt;/p&gt;

&lt;p&gt;By the time this reaches code review, the reviewer has to evaluate each deviation individually: is this intentional, or is it AI drift? The answer is usually "AI drift," but confirming that takes time. Pre-commit hooks catch the detectable violations automatically, so reviewers can focus on logic rather than style.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Install the pre-commit Framework
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://pre-commit.com" rel="noopener noreferrer"&gt;pre-commit&lt;/a&gt; framework is the standard tool for managing Git hooks. Install it with pip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;pre-commit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or if you are managing Python tool dependencies with a dev dependencies group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--group&lt;/span&gt; dev pre-commit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For teams on Node.js projects that prefer keeping everything in &lt;code&gt;package.json&lt;/code&gt;, &lt;a href="https://typicode.github.io/husky" rel="noopener noreferrer"&gt;Husky&lt;/a&gt; is the equivalent. The configuration differs slightly, but the concept is the same: a hook runs before each commit and can block the commit if checks fail.&lt;/p&gt;

&lt;p&gt;Add a &lt;code&gt;.pre-commit-config.yaml&lt;/code&gt; file to the root of your repository. This file defines which hooks run and in what order.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Configure Your Linter
&lt;/h2&gt;

&lt;p&gt;For Python projects, add ruff as your primary lint and format check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;repos&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/astral-sh/ruff-pre-commit&lt;/span&gt;
    &lt;span class="na"&gt;rev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v0.4.4&lt;/span&gt;
    &lt;span class="na"&gt;hooks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ruff&lt;/span&gt;
        &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;--fix&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ruff-format&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://astral.sh/ruff" rel="noopener noreferrer"&gt;Ruff&lt;/a&gt; is a fast Python linter that combines flake8, isort, and several other tools. For teams on older setups that prefer flake8 directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;repos&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/PyCQA/flake8&lt;/span&gt;
    &lt;span class="na"&gt;rev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;7.0.0&lt;/span&gt;
    &lt;span class="na"&gt;hooks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flake8&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For TypeScript/JavaScript projects, add ESLint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;repos&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/pre-commit/mirrors-eslint&lt;/span&gt;
    &lt;span class="na"&gt;rev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v8.57.0&lt;/span&gt;
    &lt;span class="na"&gt;hooks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eslint&lt;/span&gt;
        &lt;span class="na"&gt;files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;\.[jt]sx?$&lt;/span&gt;
        &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://eslint.org" rel="noopener noreferrer"&gt;ESLint&lt;/a&gt; is the standard linter for JavaScript and TypeScript. Configure your &lt;code&gt;.eslintrc&lt;/code&gt; as you normally would; the pre-commit hook runs it on staged files only, which keeps it fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Add a Type Checker
&lt;/h2&gt;

&lt;p&gt;Type checking is where AI tools tend to introduce the most subtle errors. A suggestion might use a method that doesn't exist on the type the tool inferred, or assume a nullable field is always present.&lt;/p&gt;

&lt;p&gt;For Python projects, add mypy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/pre-commit/mirrors-mypy&lt;/span&gt;
    &lt;span class="na"&gt;rev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1.10.0&lt;/span&gt;
    &lt;span class="na"&gt;hooks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mypy&lt;/span&gt;
        &lt;span class="na"&gt;additional_dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;types-requests&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;types-PyYAML&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adjust &lt;code&gt;additional_dependencies&lt;/code&gt; to include the type stubs your project uses. &lt;a href="https://mypy-lang.org" rel="noopener noreferrer"&gt;mypy&lt;/a&gt; will catch cases where AI-generated code calls methods that do not exist on a type, passes arguments in the wrong order, or skips null checks.&lt;/p&gt;
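&lt;p&gt;As a concrete illustration of the missed-null-check pattern (the function and values here are invented for the example), the explicit &lt;code&gt;None&lt;/code&gt; check is what separates code mypy accepts from code it rejects:&lt;/p&gt;

```python
from typing import Optional

def lookup(cache: dict, key: str) -> Optional[str]:
    """Return the cached value, or None when the key is absent."""
    return cache.get(key)

# An AI suggestion might call len(lookup(cache, key)) directly; mypy
# rejects that, because the declared return type is Optional[str].
# The explicit None check below is what satisfies the checker:
value = lookup({"a": "hit"}, "b")
length = len(value) if value is not None else 0
print(length)  # 0: the key is absent, so the None branch is taken
```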

&lt;p&gt;For TypeScript projects, tsc handles type checking as part of the normal build. You can add it to pre-commit directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
    &lt;span class="na"&gt;hooks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tsc&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TypeScript type check&lt;/span&gt;
        &lt;span class="na"&gt;language&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node&lt;/span&gt;
        &lt;span class="na"&gt;entry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npx tsc --noEmit&lt;/span&gt;
        &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;ts&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;tsx&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;pass_filenames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://www.typescriptlang.org" rel="noopener noreferrer"&gt;TypeScript&lt;/a&gt; type errors from AI suggestions are common, especially when the tool generates code that interfaces with an existing typed module it did not see in full context.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Install the Hooks
&lt;/h2&gt;

&lt;p&gt;Once &lt;code&gt;.pre-commit-config.yaml&lt;/code&gt; is configured, install the hooks into your local Git repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pre-commit &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This adds a &lt;code&gt;.git/hooks/pre-commit&lt;/code&gt; script that runs automatically before each commit. Every developer on the team needs to run &lt;code&gt;pre-commit install&lt;/code&gt; once after cloning the repository. Add this to your contributing guide or your repository's &lt;code&gt;Makefile&lt;/code&gt; setup target.&lt;/p&gt;

&lt;p&gt;To verify the hooks run correctly, run them against every file in the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pre-commit run &lt;span class="nt"&gt;--all-files&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This runs all configured hooks against every file in the repository. Expect some failures on first run in an existing codebase; fix them or add exceptions as needed before requiring the hooks in CI.&lt;/p&gt;
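&lt;p&gt;If legacy or generated paths produce too much noise on that first run, a top-level &lt;code&gt;exclude&lt;/code&gt; pattern scopes them out while you clean them up (the directory names here are illustrative):&lt;/p&gt;

```yaml
# .pre-commit-config.yaml - skip vendored and generated code (example paths)
exclude: '^(vendor/|migrations/|dist/)'
```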

&lt;h2&gt;
  
  
  Step 5: Add the Check to CI
&lt;/h2&gt;

&lt;p&gt;Pre-commit hooks run locally by default. If a developer bypasses them (intentionally or by committing directly without the hooks installed), violations can still reach your pull request.&lt;/p&gt;

&lt;p&gt;Adding a CI check ensures the hooks run on every PR regardless of local setup. In GitHub Actions, add a job like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pre-commit&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pre-commit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-python@v5&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;python-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.12'&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pre-commit/action@v3.0.1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This uses the official &lt;a href="https://github.com/pre-commit/action" rel="noopener noreferrer"&gt;pre-commit/action&lt;/a&gt; to run the same hooks your developers run locally. Failed checks block the PR. Passed checks mean the linting and type checks have been verified end to end.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keeping Hooks Fast
&lt;/h2&gt;

&lt;p&gt;One failure mode with pre-commit setups is hooks that run too slowly. If committing takes 20-30 seconds every time, developers start using &lt;code&gt;git commit --no-verify&lt;/code&gt; to skip the hooks. That defeats the purpose entirely.&lt;/p&gt;

&lt;p&gt;Configure hooks to check only staged files where possible (most hooks do this by default). Avoid running expensive operations like full test suites in pre-commit. Those belong in CI. Pre-commit should run in under 5 seconds on a typical commit, which means linting and type checking staged files only.&lt;/p&gt;

&lt;p&gt;If a specific hook is slow, check whether it supports a &lt;code&gt;files&lt;/code&gt; or &lt;code&gt;types&lt;/code&gt; filter to narrow its scope. Running ESLint on only &lt;code&gt;.ts&lt;/code&gt; and &lt;code&gt;.tsx&lt;/code&gt; files is faster than running it on every file in the repository.&lt;/p&gt;

&lt;p&gt;For a broader picture of how pre-commit hooks fit into a full Git workflow for AI-assisted teams, see &lt;a href="https://137foundry.com/articles/integrate-ai-coding-tools-git-workflow" rel="noopener noreferrer"&gt;How to Integrate AI Coding Tools Into Your Git Workflow Without Losing Control&lt;/a&gt;. The &lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;AI automation agency 137Foundry&lt;/a&gt; helps engineering teams set this up alongside the rest of their AI tooling integration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjt2aaltwfcjxoktd7cx7.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjt2aaltwfcjxoktd7cx7.jpeg" alt="developer setting up automation on a terminal" width="800" height="1200"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by cottonbro studio on &lt;a href="https://www.pexels.com" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Pre-commit hooks are a practical, low-overhead guardrail for AI-assisted codebases. The setup takes an hour and runs silently after that. The configuration described here covers the most common failure modes in AI tool output: style drift, import convention violations, and type errors.&lt;/p&gt;

&lt;p&gt;The investment is a &lt;code&gt;.pre-commit-config.yaml&lt;/code&gt;, a &lt;code&gt;pre-commit install&lt;/code&gt; in your setup instructions, and a CI job. The return is a class of review comments that stops appearing because the automated check catches them first.&lt;/p&gt;

&lt;p&gt;One last note on team adoption: pre-commit works best when it is part of your repository setup documentation rather than something developers discover on their own. Add it to your &lt;code&gt;CONTRIBUTING.md&lt;/code&gt;, include it in your onboarding checklist, and add a &lt;code&gt;make setup&lt;/code&gt; target that runs &lt;code&gt;pre-commit install&lt;/code&gt; automatically. When every developer has the hooks installed from day one, the coverage is consistent and the review benefits are immediate for the entire team rather than just the developers who remembered to run the install command.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to Build a Code Quality Gate for AI-Assisted Pull Requests</title>
      <dc:creator>137Foundry</dc:creator>
      <pubDate>Fri, 17 Apr 2026 16:25:31 +0000</pubDate>
      <link>https://dev.to/137foundry/how-to-build-a-code-quality-gate-for-ai-assisted-pull-requests-2kbg</link>
      <guid>https://dev.to/137foundry/how-to-build-a-code-quality-gate-for-ai-assisted-pull-requests-2kbg</guid>
      <description>&lt;p&gt;Code quality gates exist to automate the mechanical checks so reviewers can focus on judgment calls. That premise becomes more valuable when a significant portion of the code is AI-generated, because AI tools produce more code per developer than before, and the failure modes are different from what reviewers are trained to look for.&lt;/p&gt;

&lt;p&gt;This guide covers how to build a quality gate pipeline specifically calibrated to AI-assisted development: what to automate, what to leave to human review, and how to sequence the checks to keep feedback loops fast. The goal is a process that scales with increased PR volume without requiring proportionally more review time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Define What AI-Assisted Means for Your Team
&lt;/h2&gt;

&lt;p&gt;Before building anything, agree on what counts as AI-generated or AI-assisted code in your workflow. The practical definition matters for deciding which checks to apply at which thresholds, and it creates accountability that wouldn't otherwise exist.&lt;/p&gt;

&lt;p&gt;Some teams require authors to tag PRs as AI-assisted when more than 50% of the diff is AI-generated. Others apply the same checks to all PRs. The labeling approach has a useful side effect: it makes explicit what was AI-generated, which changes how reviewers approach the diff.&lt;/p&gt;

&lt;p&gt;A simple PR template addition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## AI Assistance&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [ ] This PR contains significant AI-generated code (&amp;gt;25% of diff)
&lt;span class="p"&gt;-&lt;/span&gt; [ ] I have verified all external library method calls against the installed version
&lt;span class="p"&gt;-&lt;/span&gt; [ ] I have run all tests locally and reviewed test output, not just pass/fail status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The checkboxes can be enforced as required gates before merging: add a CI status check that fails while any template item is unchecked, then mark that check as required in branch protection. This creates a lightweight but meaningful author accountability checkpoint.&lt;/p&gt;
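&lt;p&gt;One way to enforce the checkboxes in CI is a small workflow that fails while the PR body still contains an unchecked item. This is a sketch, not an official action; the workflow and job names are invented:&lt;/p&gt;

```yaml
# .github/workflows/pr-checklist.yml (hypothetical name)
name: pr-checklist
on:
  pull_request:
    types: [opened, edited, synchronize]
jobs:
  checklist:
    runs-on: ubuntu-latest
    steps:
      - name: Fail on unchecked template items
        env:
          BODY: ${{ github.event.pull_request.body }}
        run: |
          # An unchecked markdown checkbox renders as "- [ ]" in the body
          if printf '%s' "$BODY" | grep -q -- '- \[ \]'; then
            echo "Unchecked checklist items remain in the PR description."
            exit 1
          fi
```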

&lt;h2&gt;
  
  
  Step 2: Set Up Static Analysis in CI
&lt;/h2&gt;

&lt;p&gt;Static analysis should run on every PR. The rule configuration can be tuned for AI-generated failure modes specifically, beyond general code quality checks.&lt;/p&gt;

&lt;p&gt;For JavaScript/TypeScript projects, combine &lt;a href="https://eslint.org" rel="noopener noreferrer"&gt;ESLint&lt;/a&gt; with TypeScript's type-aware rules. Type-aware rules catch method calls on incorrect types - a common AI generation error. Run on changed files only to keep CI time under two minutes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/workflows/lint.yml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Lint changed files&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;CHANGED=$(git diff --name-only origin/main...HEAD -- '*.ts' '*.tsx')&lt;/span&gt;
    &lt;span class="s"&gt;if [ -n "$CHANGED" ]; then&lt;/span&gt;
      &lt;span class="s"&gt;npx eslint --parser-options project:./tsconfig.json $CHANGED&lt;/span&gt;
    &lt;span class="s"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Python projects, add &lt;a href="https://semgrep.dev" rel="noopener noreferrer"&gt;Semgrep&lt;/a&gt; alongside flake8 or pylint. Semgrep's community rules include checks for common AI-generated patterns like deprecated API usage and security antipatterns. The configuration is minimal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Semgrep&lt;/span&gt;
  &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;returntocorp/semgrep-action@v1&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;p/default p/security-audit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Require Branch Coverage in Tests
&lt;/h2&gt;

&lt;p&gt;Line coverage misses a category of behavioral errors that AI-generated code commonly contains: correct handling of one branch but absent handling of another. Switching to branch coverage requirements catches these gaps automatically.&lt;/p&gt;
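&lt;p&gt;A minimal invented example makes the gap concrete: a single test can execute every line of this function while never taking the &lt;code&gt;False&lt;/code&gt; path, so line coverage reads 100% while branch coverage does not:&lt;/p&gt;

```python
def apply_discount(price: float, is_member: bool) -> float:
    """Members get 10% off; everyone else pays full price."""
    if is_member:
        price = price * 0.9
    return price

# This single test runs every line (100% line coverage) but only the
# True branch of the conditional:
assert apply_discount(100, True) == 90.0

# Branch coverage stays below 100% until the False path also runs:
assert apply_discount(100, False) == 100
```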

&lt;p&gt;For Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pytest &lt;span class="nt"&gt;--cov&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;src &lt;span class="nt"&gt;--cov-branch&lt;/span&gt; &lt;span class="nt"&gt;--cov-report&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;term-missing &lt;span class="nt"&gt;--cov-fail-under&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;85 tests/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For JavaScript with Jest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;jest.config.js&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;coverage&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;thresholds&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;coverageThreshold:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;global:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;branches:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;functions:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;85&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;lines:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;statements:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set the threshold at what your current codebase achieves, then enforce it as a minimum. AI-generated code that significantly drops coverage metrics is a signal that the tests don't exercise the new branches. The branch coverage report also shows which specific conditions aren't tested, making it actionable for reviewers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Automate Dependency Verification
&lt;/h2&gt;

&lt;p&gt;AI coding tools sometimes generate import statements for library versions that differ from what's pinned in your dependency file, or for packages that are similar-sounding but incorrect. Add dependency audit steps to CI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# For Node.js projects&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dependency audit&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm audit --audit-level=moderate&lt;/span&gt;

&lt;span class="c1"&gt;# For Python projects  &lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pip-audit&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pip install pip-audit &amp;amp;&amp;amp; pip-audit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, it is worth verifying the dependency tree itself. &lt;code&gt;pip check&lt;/code&gt; confirms that installed packages declare mutually compatible requirements; it does not compare import statements against &lt;code&gt;requirements.txt&lt;/code&gt;, so pair it with a review of new imports (or a dedicated import checker) to catch transitive dependencies that AI tools sometimes generate as if they were direct:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Detect imports not in requirements.txt (Python - simplified check)&lt;/span&gt;
python &lt;span class="nt"&gt;-m&lt;/span&gt; pip check
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A package that appears in code but not in the dependency file is either a transitive dependency the AI incorrectly treated as direct, or a package name that doesn't exist under that name.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Add Integration Test Requirements for System Boundaries
&lt;/h2&gt;

&lt;p&gt;Static analysis and unit tests verify code in isolation. The highest-value checks for AI-generated code verify behavior at system boundaries, where the new code interacts with a database, an external API, or another service. AI models consistently miss assumptions about system state, concurrent access, and error propagation across boundaries.&lt;/p&gt;

&lt;p&gt;Add a label-triggered CI workflow for integration tests on code touching system boundaries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Label-based CI trigger for integration tests&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run integration tests if needed&lt;/span&gt;
  &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;contains(github.event.pull_request.labels.*.name, 'touches-system-boundary')&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pytest tests/integration/ -v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Require that PRs touching database models, API clients, or message queue producers and consumers carry the label. Reviewers add it during the review process when they identify that a system boundary is involved. The label triggers the integration test suite for that PR.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Set Up Code Complexity Tracking
&lt;/h2&gt;

&lt;p&gt;AI models often generate higher-complexity code than the problem requires, because they optimize for completeness rather than simplicity. Tracking cognitive complexity over time reveals whether AI adoption is increasing technical debt at the function level.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.sonarsource.com" rel="noopener noreferrer"&gt;SonarSource&lt;/a&gt; community edition provides cognitive complexity tracking as part of its free tier. For smaller teams, radon for Python is a lightweight alternative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Flag functions with high cognitive complexity&lt;/span&gt;
radon cc src/ &lt;span class="nt"&gt;--min&lt;/span&gt; B
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The goal isn't to block PRs on complexity - it's to track whether average complexity is trending upward as AI-generated code accumulates. Establishing a baseline before AI adoption and reviewing the trend quarterly provides early warning before complexity becomes a maintenance problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 7: Write a Focused Pre-Merge Checklist for Reviewers
&lt;/h2&gt;

&lt;p&gt;Automation handles the mechanical checks. Human reviewers handle the things automation can't: system-level context, business rule correctness, and whether the code does what the system actually needs. A focused checklist directs reviewer attention to these categories specifically.&lt;/p&gt;

&lt;p&gt;For AI-assisted PRs, a five-item checklist covers the high-value review work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Pre-merge checklist for AI-assisted code:
[ ] Verified all new external library calls against installed versions
[ ] Traced the primary error path from start to finish
[ ] Confirmed test names describe behavior, not implementation
[ ] Checked integration points: what calls this? What does this call?
[ ] Read the PR description in the author's own words (not AI-generated)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last item - requiring a PR description in the author's own words - is a lightweight accountability check. An engineer who can't explain AI-generated code in a paragraph is merging code they don't understand. That accountability gap surfaces as expensive debugging work later.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Quality gates work when they redirect attention, not just add gates. The checklist should tell reviewers where to look, not just give them more boxes to check." - Dennis Traina, &lt;a href="https://137foundry.com/services" rel="noopener noreferrer"&gt;founder of 137Foundry&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Putting It Together
&lt;/h2&gt;

&lt;p&gt;A complete quality gate for AI-assisted PRs includes: PR template with author confirmation checkboxes, static analysis on changed files in CI, branch coverage thresholds, dependency auditing, label-triggered integration tests for system boundary changes, complexity trend tracking, and a focused five-item reviewer checklist.&lt;/p&gt;

&lt;p&gt;The automation handles the mechanical verification. The human checklist handles the judgment calls. Together they address the specific categories of issues that AI-generated code introduces without adding significant overhead to the review process.&lt;/p&gt;

&lt;p&gt;For the broader organizational and process questions around AI coding tools in production - how to set team norms, handle AI-generated code in security-sensitive areas, and measure whether tools are improving or degrading quality over time - see &lt;a href="https://137foundry.com/articles/practical-framework-ai-coding-production-codebases" rel="noopener noreferrer"&gt;A Practical Framework for Using AI Coding Tools in Production Codebases&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;137Foundry&lt;/a&gt; helps engineering teams design and implement processes for AI-assisted development that maintain production quality.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>7 Free Tools for Managing AI Code Output in Production Engineering Teams</title>
      <dc:creator>137Foundry</dc:creator>
      <pubDate>Fri, 17 Apr 2026 16:25:30 +0000</pubDate>
      <link>https://dev.to/137foundry/7-free-tools-for-managing-ai-code-output-in-production-engineering-teams-1d9e</link>
      <guid>https://dev.to/137foundry/7-free-tools-for-managing-ai-code-output-in-production-engineering-teams-1d9e</guid>
      <description>&lt;p&gt;AI coding assistants generate code faster than most review processes were designed to handle. The backlog doesn't come from slow reviewers - it comes from a mismatch between generation speed and the verification work that responsible production deployment requires.&lt;/p&gt;

&lt;p&gt;Several categories of tooling help manage this gap: static analysis, dependency verification, test quality checking, and integration testing. Most of the best options in each category are free or open-source. Here's a practical list of tools worth integrating into a workflow that includes significant AI-generated code, with specific notes on how they address AI-specific failure modes rather than general code quality.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Semgrep - Pattern-Based Static Analysis
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://semgrep.dev" rel="noopener noreferrer"&gt;Semgrep&lt;/a&gt; runs static analysis using rules that match code patterns across many languages. For AI-generated code specifically, it's useful for catching common hallucination patterns: calls to deprecated API methods, uses of removed library functions, or security antipatterns that appear in training data because they were widespread in code before security guidance was widely adopted.&lt;/p&gt;

&lt;p&gt;The community rule registry has thousands of pre-built rules covering security, correctness, and performance. Running Semgrep in CI means every PR gets screened for known-bad patterns before a human reads it. Custom rules can target patterns specific to your codebase that AI tools frequently get wrong. A team using a specific internal API can write Semgrep rules to catch incorrect usage patterns before they reach review.&lt;/p&gt;
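&lt;p&gt;A custom rule is a short YAML file. This hypothetical example flags direct &lt;code&gt;session.execute()&lt;/code&gt; calls in a codebase whose convention is a wrapper helper; the rule id, message, and API names are invented:&lt;/p&gt;

```yaml
# semgrep-rules/internal-api.yml (hypothetical rule)
rules:
  - id: no-raw-session-execute
    languages: [python]
    severity: ERROR
    message: Call db.run_query() instead of session.execute() directly
    pattern: session.execute(...)
```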

&lt;p&gt;The installation is a single pip package. The CI integration is a GitHub Action that runs on PRs and reports findings as comments. It takes about 30 minutes to set up and catches issues that standard linters miss because they operate on semantics rather than syntax.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. ESLint with Type-Aware Rules - JavaScript/TypeScript Linting
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://eslint.org" rel="noopener noreferrer"&gt;ESLint&lt;/a&gt; with TypeScript's type-aware rules (&lt;code&gt;@typescript-eslint/parser&lt;/code&gt;) catches a category of error that AI models produce frequently: type mismatches that aren't obvious from the function signature alone, incorrect null handling, and calls to methods that don't exist on the inferred type.&lt;/p&gt;

&lt;p&gt;Type-aware linting is slower than standard linting because it requires running the TypeScript compiler to infer types before checking rules. For most codebases, running it on changed files only keeps CI time under two minutes. The &lt;code&gt;@typescript-eslint&lt;/code&gt; plugin extends ESLint with rules that require type information, including detecting when a method is called on a type that doesn't define it - a common AI generation error that's hard to catch otherwise because the method name looks plausible.&lt;/p&gt;

&lt;p&gt;The most valuable rules for AI-generated code: &lt;code&gt;no-unsafe-call&lt;/code&gt;, &lt;code&gt;no-unsafe-member-access&lt;/code&gt;, &lt;code&gt;strict-boolean-expressions&lt;/code&gt;, and &lt;code&gt;no-floating-promises&lt;/code&gt;. These catch the specific patterns that appear when an AI model writes code that's structurally correct but makes wrong assumptions about the types it's working with.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. pytest with Branch Coverage - Python Test Quality
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.pytest.org" rel="noopener noreferrer"&gt;pytest&lt;/a&gt; is Python's standard testing framework. Its value for AI-generated code specifically comes from using branch coverage requirements rather than line coverage. AI-generated tests frequently achieve high line coverage while missing behavioral coverage: they run every line but don't test every conditional branch, meaning they pass while missing scenarios that fail in production.&lt;/p&gt;

&lt;p&gt;Setting a branch coverage threshold of 85% forces tests to cover both branches of conditional logic. The difference between line coverage and branch coverage on AI-generated test suites is often 10 to 20 percentage points - the tests look solid on line metrics but have significant gaps on behavioral paths. Switching the threshold catches those gaps automatically.&lt;/p&gt;

&lt;p&gt;Branch coverage reports also show which specific branches aren't tested, making it straightforward for reviewers to ask "why wasn't this case tested?" rather than just noting that coverage is sufficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Playwright - End-to-End and Component Testing
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://playwright.dev" rel="noopener noreferrer"&gt;Playwright&lt;/a&gt; runs browser-based end-to-end tests and is particularly useful for verifying AI-generated UI code. AI tools produce visually plausible UI components that sometimes have interaction bugs: forms that submit but don't handle validation state correctly, modals that open but can't be closed via keyboard, elements that appear correct visually but have the wrong ARIA roles for accessibility, or buttons that trigger the right action in isolation but create state conflicts when combined with other components.&lt;/p&gt;

&lt;p&gt;Playwright's component testing mode allows testing components in isolation without a full application stack, which makes it fast enough to run in CI on PRs that touch UI code. The API is expressive enough to test keyboard navigation, focus management, and responsive behavior - the categories of UI behavior that AI tools miss most often because they're not represented clearly in the prompt.&lt;/p&gt;
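
&lt;p&gt;One way to run those component tests only on PRs that touch UI code is a path-filtered workflow (GitHub Actions syntax; the paths, config file name, and browser choice are assumptions about the repository):&lt;/p&gt;

```yaml
name: ui-component-tests
on:
  pull_request:
    paths:
      - "src/components/**"
jobs:
  component-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npx playwright install --with-deps chromium
      - run: npx playwright test -c playwright-ct.config.ts
```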

&lt;h2&gt;
  
  
  5. SonarQube Community Edition - Code Quality Trend Tracking
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.sonarsource.com" rel="noopener noreferrer"&gt;SonarSource&lt;/a&gt; offers a community edition of SonarQube that tracks code quality metrics over time. For teams with AI-generated code in the codebase, the trend lines matter more than any individual metric: are complexity metrics increasing as AI adoption scales? Is test coverage trending down as PR volume increases? Are code smells accumulating faster than they're being resolved?&lt;/p&gt;

&lt;p&gt;AI tools tend to produce high-cognitive-complexity code for tasks that could be simpler, because they optimize for completeness given the prompt rather than for simplicity in the broader codebase context. SonarQube's cognitive complexity metric flags functions that are harder to understand than they need to be. Establishing a baseline before AI adoption and tracking against it provides objective data on whether AI tools are improving or degrading code maintainability.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. pre-commit - Hook-Based Local Checks
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://pre-commit.com" rel="noopener noreferrer"&gt;pre-commit&lt;/a&gt; runs checks before a commit is made locally, which for AI-generated code means catching obvious problems before they enter the review queue. A useful pre-commit configuration for AI-assisted development includes trailing whitespace detection, YAML and JSON validity checking, secrets detection (particularly important since AI tools sometimes generate code with hardcoded credentials from training data patterns), and a fast subset of linting rules.&lt;/p&gt;

&lt;p&gt;The value of pre-commit for AI-generated code is in reducing noise from the review queue. When reviewers see that trivial issues are already handled automatically, they can direct attention to the non-trivial ones: context blindness, wrong library versions, missing error paths, incorrect test assertions. Noise reduction makes the human review faster and more focused.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. npm audit and pip-audit - Dependency Security Scanning
&lt;/h2&gt;

&lt;p&gt;AI coding tools sometimes generate import statements for packages that are similar-sounding to the intended dependency but are actually different packages (a variant of the hallucination problem that extends to package names), for versions that have known security vulnerabilities, or for packages that exist in documentation but aren't actually available as stable releases.&lt;/p&gt;

&lt;p&gt;Running &lt;code&gt;npm audit&lt;/code&gt; for Node.js projects and &lt;code&gt;pip-audit&lt;/code&gt; for Python projects on every PR catches dependency security issues and can flag packages that are unusually new, have low download counts, or have known CVEs. For teams with significant AI-generated code, adding dependency auditing to CI takes about 30 minutes to set up and runs in under 10 seconds per PR.&lt;/p&gt;
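
&lt;p&gt;The CI steps themselves are short (GitHub Actions syntax shown; the &lt;code&gt;high&lt;/code&gt; audit level is a judgment call about when the build should fail):&lt;/p&gt;

```yaml
- name: Audit Node dependencies
  run: npm audit --audit-level=high
- name: Audit Python dependencies
  run: |
    pip install pip-audit
    pip-audit
```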

&lt;h2&gt;
  
  
  How These Tools Work Together
&lt;/h2&gt;

&lt;p&gt;These seven tools address different parts of the AI code management problem. Semgrep and ESLint catch pattern-level issues in static analysis. pytest and Playwright verify behavioral correctness. SonarQube tracks quality trends over time. pre-commit reduces review noise. Dependency auditing handles supply chain and version risk.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The tooling layer isn't about distrust - it's about redirecting human attention. Linters and static analysis handle the mechanical checks so reviewers can focus on the things only a human who knows the system can evaluate." - Dennis Traina, &lt;a href="https://137foundry.com/services" rel="noopener noreferrer"&gt;founder of 137Foundry&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A minimal starting configuration for a team new to AI-assisted development: pre-commit for local hygiene, ESLint or Semgrep in CI, branch coverage requirements in the test suite, and dependency auditing on every PR. Add SonarQube tracking once you want visibility into trends.&lt;/p&gt;

&lt;p&gt;No combination of tooling substitutes for human review of integration logic, business rule correctness, and system-level behavior. But this stack makes that human review more targeted and significantly more effective.&lt;/p&gt;

&lt;p&gt;For a complete framework on how AI coding tools fit into production engineering workflows - including governance, team process design, and quality standards - see &lt;a href="https://137foundry.com/articles/practical-framework-ai-coding-production-codebases" rel="noopener noreferrer"&gt;A Practical Framework for Using AI Coding Tools in Production Codebases&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;137Foundry&lt;/a&gt; helps engineering teams adopt AI tooling without compromising code quality or delivery velocity.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>5 UX Patterns That Reduce Form Abandonment Instantly</title>
      <dc:creator>137Foundry</dc:creator>
      <pubDate>Tue, 14 Apr 2026 12:23:12 +0000</pubDate>
      <link>https://dev.to/137foundry/5-ux-patterns-that-reduce-form-abandonment-instantly-2com</link>
      <guid>https://dev.to/137foundry/5-ux-patterns-that-reduce-form-abandonment-instantly-2com</guid>
<description>&lt;p&gt;Form abandonment is one of those metrics that feels abstract until you connect it to revenue. A signup form with a 40 percent completion rate sounds okay until you realize it means 60 percent of your acquisition spend is walking away after they were interested enough to click the CTA. A checkout form with 70 percent abandonment means your marketing team is spending ten dollars to deliver three dollars of completed purchases. The difference between a well-designed form and a poorly designed one is often measured in significant conversion percentage points.&lt;/p&gt;

&lt;p&gt;The encouraging part is that form UX is largely a solved problem. The patterns that work are well-documented, tested across thousands of implementations, and easy to adopt. These five specific patterns can make a measurable difference in completion rates when implemented correctly. Most of them can be added to an existing form without a ground-up rebuild.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Multi-Step Forms With Progress Indication
&lt;/h2&gt;

&lt;p&gt;Long forms create visual overwhelm. A signup flow that displays 12 fields at once feels daunting. The same 12 fields spread across 4 steps of 3 fields each feel manageable, even though the total amount of typing is identical. The cognitive trick is that users commit to finishing what they started, so breaking a form into steps increases perceived progress and reduces the apparent effort at any given moment.&lt;/p&gt;

&lt;p&gt;The implementation is straightforward but requires attention to specific details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always show the user where they are in the flow (Step 2 of 4)&lt;/li&gt;
&lt;li&gt;Allow users to go back to previous steps to correct information&lt;/li&gt;
&lt;li&gt;Save their progress automatically so a page refresh does not lose their work&lt;/li&gt;
&lt;li&gt;Validate each step before allowing progression, but do not aggressively block on fields that are not yet required&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Libraries like &lt;a href="https://react-hook-form.com/advanced-usage#WizardFormFunnel" rel="noopener noreferrer"&gt;React Hook Form&lt;/a&gt; include built-in support for multi-step forms with state preservation across steps. The &lt;a href="https://formik.org/docs/examples/wizard" rel="noopener noreferrer"&gt;Formik documentation&lt;/a&gt; covers similar patterns for controlled components.&lt;/p&gt;
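
&lt;p&gt;The progress-saving piece can also be done without a library. A framework-free sketch (the storage key and state shape are illustrative; pass &lt;code&gt;sessionStorage&lt;/code&gt; or &lt;code&gt;localStorage&lt;/code&gt; in the browser):&lt;/p&gt;

```javascript
// Persists the current step and accumulated field values so a page
// refresh restores the user's progress. Not tied to any form library.
function createWizardStore(storage, key) {
  return {
    load() {
      const raw = storage.getItem(key);
      return raw ? JSON.parse(raw) : { step: 1, values: {} };
    },
    save(step, values) {
      const state = this.load();
      state.step = step;
      // Merge new answers into what was already saved.
      state.values = Object.assign({}, state.values, values);
      storage.setItem(key, JSON.stringify(state));
    },
  };
}
```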

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fji5cc66tw1le6ojzfjq0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fji5cc66tw1le6ojzfjq0.jpeg" alt="User interface showing a multi-step form progress bar" width="800" height="1200"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by MART PRODUCTION on &lt;a href="https://www.pexels.com" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The trade-off is that multi-step forms can feel slow for short interactions. A 3-field contact form does not benefit from being split into 3 steps. Use multi-step only when the total field count is high enough that a single page creates visual pressure.&lt;/p&gt;
&lt;h2&gt;
  
  
  2. Inline Validation With Positive Feedback
&lt;/h2&gt;

&lt;p&gt;Validation that fires after submission is too late. By the time the user has filled out 12 fields and clicked submit, their tolerance for errors is minimal. Inline validation that gives feedback as users move through the form keeps the cognitive load per field low and prevents the cascading frustration of discovering multiple errors at once.&lt;/p&gt;

&lt;p&gt;The pattern works best with these specific rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fire validation on blur (when the user leaves a field), not as they type&lt;/li&gt;
&lt;li&gt;Show green checkmarks when a field passes validation, not just red when it fails&lt;/li&gt;
&lt;li&gt;Make error messages specific and actionable, not generic "invalid input" text&lt;/li&gt;
&lt;li&gt;Never turn an empty optional field red just because the user has not filled it in&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Positive validation is often overlooked but matters significantly. When users complete a field and see a green checkmark, they know they can move forward confidently. Without positive feedback, they are uncertain whether their input was accepted. This uncertainty accumulates across fields and contributes to abandonment. The &lt;a href="https://design-system.service.gov.uk/patterns/validation/" rel="noopener noreferrer"&gt;GOV.UK Design System guidance on form validation&lt;/a&gt; provides tested implementations used on high-traffic government services.&lt;/p&gt;
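
&lt;p&gt;Those rules reduce to a small amount of logic in the blur handler. A minimal sketch (the field shapes and messages are illustrative, not from any particular library):&lt;/p&gt;

```javascript
// Returns a display state for a field when the user leaves it:
// 'neutral' (no styling), 'error' (red plus a specific message), or
// 'valid' (green checkmark as positive feedback).
function validateOnBlur(field, value) {
  if (value.trim() === '') {
    // Never mark an empty optional field as an error.
    if (field.required) {
      return { state: 'error', message: field.requiredMessage };
    }
    return { state: 'neutral', message: '' };
  }
  if (field.pattern) {
    if (!field.pattern.test(value)) {
      // Specific, actionable message instead of generic "invalid input".
      return { state: 'error', message: field.errorMessage };
    }
  }
  return { state: 'valid', message: '' };
}
```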
&lt;h2&gt;
  
  
  3. Smart Defaults and Autofill
&lt;/h2&gt;

&lt;p&gt;The fastest field to complete is the one that is already filled in correctly. Smart defaults and autofill can reduce the actual typing required by 50-80 percent on common forms. Yet many forms actively fight autofill by using non-standard field names, incorrect input types, or JavaScript that interferes with browser autofill behavior.&lt;/p&gt;

&lt;p&gt;Getting autofill right requires following specific conventions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; 
  &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"email"&lt;/span&gt; 
  &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"email"&lt;/span&gt; 
  &lt;span class="na"&gt;autocomplete=&lt;/span&gt;&lt;span class="s"&gt;"email"&lt;/span&gt; 
  &lt;span class="na"&gt;required&lt;/span&gt;
&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; 
  &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"tel"&lt;/span&gt; 
  &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"phone"&lt;/span&gt; 
  &lt;span class="na"&gt;autocomplete=&lt;/span&gt;&lt;span class="s"&gt;"tel"&lt;/span&gt; 
&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; 
  &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"text"&lt;/span&gt; 
  &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"address-line1"&lt;/span&gt; 
  &lt;span class="na"&gt;autocomplete=&lt;/span&gt;&lt;span class="s"&gt;"address-line1"&lt;/span&gt;
&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://html.spec.whatwg.org/multipage/form-control-infrastructure.html#autofill" rel="noopener noreferrer"&gt;WHATWG autofill attribute documentation&lt;/a&gt; lists all standard autofill values. Using these correctly lets browsers recognize your fields and offer saved data for one-click completion.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Autofill is the lowest-effort improvement most form developers can make. Just setting the right autocomplete attributes on existing fields gives users the option to fill the entire form with two taps. It takes 15 minutes to implement and measurably improves completion rates." - Dennis Traina, &lt;a href="https://137foundry.com/about" rel="noopener noreferrer"&gt;137Foundry&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Smart defaults go beyond autofill. If you know the user's country from their IP address, pre-select it in the country dropdown. If they are logged in and you already have their name and email, pre-fill those fields. If they are updating their profile, pre-fill every field with existing values so they only need to change what they want to change.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Clear Visual Hierarchy and Field Grouping
&lt;/h2&gt;

&lt;p&gt;Forms feel easier when related fields are visually grouped. The billing section is clearly separated from the shipping section. Contact information is in its own cluster. Payment details are distinct from address details. Without visual grouping, fields blur together and users lose their sense of progress through the form.&lt;/p&gt;

&lt;p&gt;Use these specific techniques:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Group related fields with visual whitespace between groups&lt;/li&gt;
&lt;li&gt;Use subheadings to label each group ("Shipping Address," "Payment Details")&lt;/li&gt;
&lt;li&gt;Align labels and inputs consistently so the eye can scan naturally&lt;/li&gt;
&lt;li&gt;Make the primary action button visually distinct from secondary actions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;a href="https://www.nngroup.com/articles/form-design-white-space/" rel="noopener noreferrer"&gt;Nielsen Norman Group's research on form layouts&lt;/a&gt; found that forms with clear visual hierarchy completed significantly faster than visually dense forms with the same number of fields. Users perceive well-organized forms as less work, even when the actual work is identical.&lt;/p&gt;

&lt;p&gt;Single-column layouts generally outperform multi-column layouts for completion rates. Multi-column layouts look more compact but force the user's eye to jump left-right-left, which disrupts the vertical scanning flow that matches how forms are typically filled out. Exceptions exist for specific field pairs (first name and last name, city and state) where the fields are conceptually related and naturally grouped.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bggvij6crkbsqwavbkz.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bggvij6crkbsqwavbkz.jpeg" alt="Clean form interface with clear sections and visual hierarchy" width="800" height="1198"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Pavel Danilyuk on &lt;a href="https://www.pexels.com" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  5. Conditional Fields That Appear Only When Needed
&lt;/h2&gt;

&lt;p&gt;Forms with conditional logic show fields only when they are relevant. A shipping address form that shows international-specific fields only when the user selects a non-US country. A signup form that shows the "Company Name" field only after the user indicates they are signing up as a business. A checkout form that shows gift-message fields only if the user has indicated the purchase is a gift.&lt;/p&gt;

&lt;p&gt;This pattern reduces visual complexity significantly. Users see fewer fields on first load, which reduces the perceived effort. Fields that do not apply to their situation never appear. The form feels tailored to their specific needs rather than designed for some generic user.&lt;/p&gt;

&lt;p&gt;Implementation requires careful state management. When a user changes the trigger field, dependent fields should appear or disappear smoothly (animated transitions help), previously entered values should be preserved if the same fields reappear later, and validation rules should update to match the current field visibility.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;RegistrationForm&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;accountType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setAccountType&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;personal&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;form&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        Account Type
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;select&lt;/span&gt; 
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;accountType&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; 
          &lt;span class="na"&gt;onChange&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setAccountType&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;option&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"personal"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Personal&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;option&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;option&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"business"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Business&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;option&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;select&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

      &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;accountType&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;business&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            Company Name
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"companyName"&lt;/span&gt; &lt;span class="na"&gt;required&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            Tax ID
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"taxId"&lt;/span&gt; &lt;span class="na"&gt;required&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;form&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://www.w3.org/WAI/tutorials/forms/notifications/" rel="noopener noreferrer"&gt;Web Accessibility Initiative guidelines on dynamic content&lt;/a&gt; cover how to announce field changes to screen reader users so conditional logic remains accessible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Combining the Patterns
&lt;/h2&gt;

&lt;p&gt;These patterns work best in combination. A well-designed form might use multi-step layout for length management, inline validation for real-time feedback, autofill for quick completion of known fields, clear visual grouping for cognitive clarity, and conditional fields to hide irrelevant complexity. No single pattern solves form UX by itself.&lt;/p&gt;

&lt;p&gt;For a broader guide on the principles underlying these patterns, including why forms fail users in the first place, &lt;a href="https://137foundry.com/articles/how-to-design-web-forms-that-users-actually-complete" rel="noopener noreferrer"&gt;this guide on designing web forms that users actually complete&lt;/a&gt; covers the design framework that makes these patterns effective. The &lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;user experience team at 137Foundry&lt;/a&gt; regularly audits client forms using these patterns and helps teams prioritize the specific improvements that will have the biggest impact on completion rates.&lt;/p&gt;

&lt;p&gt;The best forms are not the cleverest or the most technically sophisticated. They are the ones that feel easy to complete, respect the user's time, and guide users toward success without friction. These five patterns are the foundation that makes that possible.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Why Your Checkout Form Is Killing Your Conversion Rate</title>
      <dc:creator>137Foundry</dc:creator>
      <pubDate>Tue, 14 Apr 2026 12:23:10 +0000</pubDate>
      <link>https://dev.to/137foundry/why-your-checkout-form-is-killing-your-conversion-rate-1pck</link>
      <guid>https://dev.to/137foundry/why-your-checkout-form-is-killing-your-conversion-rate-1pck</guid>
      <description>&lt;p&gt;E-commerce teams spend enormous amounts of time and money driving traffic to their product pages. Paid search, content marketing, social campaigns, influencer partnerships, SEO. The funnel looks great until the moment a customer clicks "Buy Now" and hits the checkout form. From that point, industry data shows that most of those hard-won visitors disappear before completing their purchase. The &lt;a href="https://baymard.com/lists/cart-abandonment-rate" rel="noopener noreferrer"&gt;Baymard Institute tracks checkout abandonment rates&lt;/a&gt; across thousands of e-commerce sites, and the average hovers around 70 percent. That is not a typo. Seven out of ten people who add items to their cart never finish buying.&lt;/p&gt;

&lt;p&gt;The uncomfortable truth is that most of this abandonment is preventable. It is not caused by comparison shopping or impulse regret. It is caused by checkout form design that introduces friction where none needs to exist. Understanding the specific causes, and fixing them, is one of the highest-leverage things any e-commerce team can do for revenue.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Reasons People Abandon Checkout
&lt;/h2&gt;

&lt;p&gt;Survey research across multiple studies has identified consistent reasons customers abandon checkout. Some are genuinely outside your control. The majority are not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unexpected costs at checkout.&lt;/strong&gt; Shipping fees, taxes, or additional charges that appear only after the customer has invested time in the checkout process create a sense of bait-and-switch that frequently leads to abandonment. The fix is not hiding these costs elsewhere. It is showing them earlier, on the product page or in the cart, so the customer's expectation matches what they see at checkout.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Forced account creation.&lt;/strong&gt; Many e-commerce platforms default to requiring account creation before checkout. Every additional step in the checkout flow reduces completion rates, and account creation is one of the costliest additional steps. Guest checkout should be the default option, with account creation offered as a benefit ("Save your information for faster checkout next time") rather than a requirement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Slow page loads.&lt;/strong&gt; A checkout flow that feels slow creates anxiety that the payment did not go through. Users click the submit button again, end up with duplicate orders or errors, and abandon the process entirely. &lt;a href="https://web.dev/articles/why-speed-matters" rel="noopener noreferrer"&gt;Google's research on page speed&lt;/a&gt; shows that every second of load delay on mobile reduces conversion rates by around 20 percent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Concerns about security and trust.&lt;/strong&gt; Entering credit card information requires trust. Sites that look outdated, lack visible security indicators, or ask for information that seems unnecessary trigger legitimate concern. Trust signals like security badges, visible customer reviews, and clear return policies reduce this hesitation measurably.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxu5x17ia66ireeffjyqc.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxu5x17ia66ireeffjyqc.jpeg" alt="Frustrated customer looking at a checkout error on a laptop" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Beyzanur K. on &lt;a href="https://www.pexels.com" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Specific Form Problems That Kill Conversion
&lt;/h2&gt;

&lt;p&gt;Beyond these macro issues, specific form design choices compound into conversion loss. Each of these issues is independently fixable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Too Many Fields
&lt;/h3&gt;

&lt;p&gt;A standard checkout form asks for billing address, shipping address, payment method, and contact information. That is already a lot. Every additional field you ask for (phone number, company name, how did you hear about us, birthday) reduces completion rates. Review your checkout form honestly: is each field essential for completing the transaction, or is it there because a marketing team wanted the data?&lt;/p&gt;

&lt;p&gt;If a field is not required to fulfill the order, remove it from checkout. Collect it later through an optional post-purchase survey or a profile completion flow. The checkout form should be focused entirely on completing the purchase.&lt;/p&gt;

&lt;h3&gt;
  
  
  Poor Address Form Design
&lt;/h3&gt;

&lt;p&gt;Address fields are where international customers often give up. A form that demands a US-style state dropdown, a 5-digit zip code, and a specific phone format confuses every customer outside the United States. Smart address forms use a single country dropdown that dynamically adjusts the rest of the address fields to match that country's conventions.&lt;/p&gt;
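
&lt;p&gt;The country-driven adjustment can be sketched as a lookup from country code to the field set the form should render (the field lists here are illustrative, not a complete international address model):&lt;/p&gt;

```javascript
// Maps a country code to the address fields to show; anything not
// listed falls back to a generic international layout.
const ADDRESS_FIELDS = {
  US: ['address-line1', 'address-line2', 'city', 'state', 'zip'],
  GB: ['address-line1', 'address-line2', 'town', 'postcode'],
  DEFAULT: ['address-line1', 'address-line2', 'city', 'postal-code'],
};

function fieldsForCountry(countryCode) {
  if (Object.prototype.hasOwnProperty.call(ADDRESS_FIELDS, countryCode)) {
    return ADDRESS_FIELDS[countryCode];
  }
  return ADDRESS_FIELDS.DEFAULT;
}
```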

&lt;p&gt;Address autocomplete from services like &lt;a href="https://developers.google.com/maps/documentation/places/web-service/autocomplete" rel="noopener noreferrer"&gt;Google Places API&lt;/a&gt; or &lt;a href="https://www.loqate.com/" rel="noopener noreferrer"&gt;Loqate&lt;/a&gt; reduces the typing required and reduces errors in address entry. Users start typing their street address and the system suggests complete, validated addresses. Implementation takes a few hours and can significantly improve checkout completion.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unclear Payment Field Labels
&lt;/h3&gt;

&lt;p&gt;"CVV" means nothing to most customers. "The 3-digit code on the back of your card" makes sense immediately. Payment fields often use industry jargon that confuses users who are not processing payments every day. Replace jargon with plain language explanations, and include a small visual showing where to find each piece of information on a physical card.&lt;/p&gt;

&lt;p&gt;Credit card number fields should format as the user types, breaking the number into groups of 4 digits automatically. The expiration date should use a date picker or dropdown rather than requiring specific text formatting. These micro-improvements reduce entry errors and make the form feel more professional.&lt;/p&gt;
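
&lt;p&gt;The grouping behavior is a small transform on the input value (a sketch; real implementations also vary group sizes by card network, e.g. American Express uses 4-6-5):&lt;/p&gt;

```javascript
// Formats a card number into groups of 4 digits as the user types.
// Strips non-digits first so pasted values with spaces or dashes work.
function formatCardNumber(raw) {
  const digits = raw.replace(/\D/g, '').slice(0, 19); // 19 digits is the longest PAN
  const groups = digits.match(/.{1,4}/g);
  return groups ? groups.join(' ') : '';
}
```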

&lt;h3&gt;
  
  
  Aggressive Validation
&lt;/h3&gt;

&lt;p&gt;Validation that fires as the user types, highlighting their credit card number in red before they have finished typing it, creates anxiety. Wait until they move to the next field (on blur) before showing validation errors. If they leave a field incomplete and try to submit, then show validation errors. The principle is to help users succeed, not to catch them mid-error.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Every checkout form I audit has 3 to 5 specific improvements that would measurably increase conversion. The fixes are small individually, but they compound into significant revenue differences over time. It is almost always worth the engineering effort to get them right." - Dennis Traina, &lt;a href="https://137foundry.com/services/web-development" rel="noopener noreferrer"&gt;137Foundry&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Mobile Checkout: A Special Case
&lt;/h2&gt;

&lt;p&gt;Mobile checkout needs specific attention because the constraints are different from desktop. Thumb-sized tap targets, autofill behavior, keyboard types, and viewport scrolling all affect mobile conversion in ways that do not apply on desktop.&lt;/p&gt;

&lt;p&gt;The mobile keyboard should match the field type. Email fields should trigger the email keyboard. Number fields should trigger the numeric keyboard. Phone number fields should trigger the phone keyboard. This is controlled by the HTML &lt;code&gt;inputmode&lt;/code&gt; and &lt;code&gt;type&lt;/code&gt; attributes, but many production forms use generic text inputs that force users to manually switch keyboards.&lt;/p&gt;
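&lt;p&gt;As a sketch, the pairings look like this (field names are illustrative):&lt;/p&gt;

```html
&lt;!-- Email keyboard with @ and . keys --&gt;
&lt;input type="email" inputmode="email" name="email"&gt;

&lt;!-- Numeric keypad; type="text" avoids the spinner UI of type="number" --&gt;
&lt;input type="text" inputmode="numeric" name="card-number"&gt;

&lt;!-- Telephone keypad --&gt;
&lt;input type="tel" inputmode="tel" name="phone"&gt;
```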

&lt;p&gt;Autofill should work. &lt;a href="https://webkit.org/blog/9958/new-webkit-features-in-safari-13/" rel="noopener noreferrer"&gt;The WebKit guide to autocomplete attributes&lt;/a&gt; covers the specific attributes that enable proper autofill behavior on iOS. A well-configured checkout form on mobile can let the user complete the entire billing address with a single tap on an autofill suggestion.&lt;/p&gt;
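&lt;p&gt;A sketch of the relevant tokens; these &lt;code&gt;autocomplete&lt;/code&gt; values come from the HTML standard, while the &lt;code&gt;name&lt;/code&gt; attributes are illustrative:&lt;/p&gt;

```html
&lt;!-- Standard autocomplete tokens enable one-tap autofill of the whole group --&gt;
&lt;input name="cc-name" autocomplete="cc-name"&gt;
&lt;input name="cc-number" autocomplete="cc-number" inputmode="numeric"&gt;
&lt;input name="cc-exp" autocomplete="cc-exp" placeholder="MM/YY"&gt;
&lt;input name="address" autocomplete="street-address"&gt;
&lt;input name="zip" autocomplete="postal-code" inputmode="numeric"&gt;
```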

&lt;p&gt;Apple Pay, Google Pay, and similar payment methods should be prominently offered on mobile because they eliminate most of the checkout form entirely. Users who have these configured can complete checkout in 2-3 taps instead of filling in every field manually. For many e-commerce sites, making these payment options more visible has a larger impact on mobile conversion than any form optimization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nu4muboqxfj9spsifia.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nu4muboqxfj9spsifia.jpeg" alt="E-commerce dashboard showing conversion analytics and funnel metrics" width="800" height="553"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Negative Space on &lt;a href="https://www.pexels.com" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing Your Way to Better Conversion
&lt;/h2&gt;

&lt;p&gt;The specific fixes that will improve your checkout conversion depend on your specific customers. Instead of guessing, instrument your checkout flow to see where users actually abandon.&lt;/p&gt;

&lt;p&gt;Session replay tools like &lt;a href="https://www.hotjar.com/" rel="noopener noreferrer"&gt;Hotjar&lt;/a&gt; or &lt;a href="https://www.fullstory.com/" rel="noopener noreferrer"&gt;FullStory&lt;/a&gt; let you watch real customers attempt checkout. The patterns become obvious quickly. You will see specific fields where users hesitate, error messages that confuse them, or moments where they get distracted by non-checkout elements on the page. These observations are more valuable than any abstract best practices list because they show exactly where your specific customers get stuck.&lt;/p&gt;

&lt;p&gt;Then A/B test your fixes. Changes that look like obvious improvements sometimes have unexpected effects. A field that seems redundant might actually be important for a segment of your customers. Run experiments, measure results, and ship the changes that measurably improve completion rates.&lt;/p&gt;

&lt;p&gt;For a broader guide on form design principles that apply to checkout forms as well as signup forms, contact forms, and onboarding flows, &lt;a href="https://137foundry.com/articles/how-to-design-web-forms-that-users-actually-complete" rel="noopener noreferrer"&gt;this guide on designing web forms that users actually complete&lt;/a&gt; covers the full design framework. When you need help auditing your checkout flow and implementing improvements, &lt;a href="https://137foundry.com" rel="noopener noreferrer"&gt;137Foundry&lt;/a&gt; works with e-commerce teams to identify the specific conversion killers in their funnels and build solutions that drive measurable revenue improvements.&lt;/p&gt;

&lt;p&gt;The best e-commerce teams do not treat checkout as a solved problem. They continuously test, measure, and improve their forms because even small improvements compound into significant revenue over time. Every percentage point of conversion improvement on a checkout form that handles thousands of orders per month is real money that was previously leaking out of an otherwise well-built funnel.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
