I added a blocked-row remediation loop to my CSV intake console

If you want the broader project context, I wrote about the overall intake console in the previous post: [From CSV Import Demo to CSV Triage Console](https://dev.to/fastapier/from-csv-import-demo-to-csv-triage-console-2047)

In the previous iteration of this project, the CSV flow could already do the important parts:

  • stage a run first
  • show row decisions before writing anything
  • apply valid rows intentionally
  • revert a run safely

That was a solid base.

But it still had one real operational gap.

If a row was blocked, the system could explain why it failed, but the actual fix still lived outside the UI.

That meant the workflow was still too close to this:

download → fix somewhere else → re-upload → hope nothing changed

So in this update, I added a blocked-row remediation loop directly into the console.

Now the flow is:

staged → blocked → edit in place → re-evaluate → ready → apply

That changes the project from a CSV preview screen into something much closer to an actual intake console.
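The blocked → ready transition is the new piece of that lifecycle. As a rough sketch of the legal moves (the state names and transition table here are my own illustration, not the console's actual code):

```python
from enum import Enum

class RowState(str, Enum):
    STAGED = "staged"
    BLOCKED = "blocked"
    READY = "ready"
    APPLIED = "applied"

# Hypothetical transition table: which states a row may move to next.
ALLOWED = {
    RowState.STAGED: {RowState.BLOCKED, RowState.READY},
    RowState.BLOCKED: {RowState.READY, RowState.BLOCKED},  # re-evaluation may fail again
    RowState.READY: {RowState.APPLIED},
    RowState.APPLIED: set(),
}

def transition(current: RowState, target: RowState) -> RowState:
    """Move a row to a new state, rejecting illegal jumps (e.g. blocked -> applied)."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target
```

The key property the real console enforces the same way: a blocked row can never skip straight to applied; it has to pass re-evaluation first.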

What changed

When a CSV preview contains blocked rows, the operator can now:

  • open that row
  • inspect the blocked reason
  • edit the necessary field in place
  • save the fix
  • re-run validation for that row only

If the correction is valid, the row moves from blocked to ready.

At that point, the run can continue without forcing the user back into a manual re-upload loop.

Screenshot 1: staged run with blocked rows

The important difference is small in code, but big in operations:

blocked rows are no longer dead ends.

Why this matters

A lot of CSV tools stop at error reporting.

That is not enough.

The real question is not just:

Can the system detect bad rows?

It is:

Can the operator repair them without losing control of the run?

That is the difference between a helpful intake console and a frustrating upload page.

This update is about recovery, not just detection.

Backend changes

On the backend side, I added row-level remediation for staged runs.

The system can now:

  • patch a single blocked row
  • re-normalize that row
  • re-run validation and intended action detection
  • update the stored row snapshot
  • refresh run-level counts
  • write an audit event

I also added a dedicated audit event for this step:

  • row_remediated

That matters because a blocked row becoming ready is not just a UI state change.

It is an operational event.
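A minimal sketch of that remediation step, assuming an in-memory store and toy `normalize`/`validate` helpers (in the real console this presumably sits behind a row-level PATCH endpoint, with real persistence and run-level count refresh on top):

```python
ROWS: dict = {}       # (run_id, row_id) -> stored row snapshot (stand-in for real storage)
AUDIT_LOG: list = []  # append-only audit events

def normalize(data: dict) -> dict:
    """Toy normalization: trim whitespace on string fields."""
    return {k: v.strip() if isinstance(v, str) else v for k, v in data.items()}

def validate(data: dict) -> list[str]:
    """Toy validation: a row is blocked if company_name is missing."""
    return [] if data.get("company_name") else ["missing company_name"]

def remediate_row(run_id: str, row_id: int, patch: dict) -> dict:
    """Patch one blocked row, re-run normalization and validation for that row
    only, update its snapshot, and write a row_remediated audit event."""
    row = ROWS[(run_id, row_id)]
    row["data"] = normalize({**row["data"], **patch})
    issues = validate(row["data"])
    row["state"] = "blocked" if issues else "ready"
    row["issues"] = issues
    AUDIT_LOG.append({"event": "row_remediated", "run_id": run_id,
                      "row_id": row_id, "new_state": row["state"]})
    return row
```

The audit write happens inside the same operation as the state change, so a row can never become ready without leaving a trace.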

Frontend changes

On the frontend side, blocked rows can now be edited directly inside the row decisions section.

The operator can:

  • inspect the blocked reason
  • open an inline remediation form
  • fix only the relevant field
  • save the correction
  • see the row move to ready

I also adjusted a few practical UI details while doing this:

  • the apply action is labeled more clearly as Apply run
  • audit events are shown in newest-first order
  • the status field is now a short select instead of a long free-text input
  • non-blocking issues are shown as Notes instead of stronger warning language

Screenshot 2: editing a blocked row in place

None of that is flashy.

It just makes the console feel more operational.

What the full loop looks like

With this update, the console now supports a much more practical path:

  1. upload CSV
  2. preview blocked rows
  3. fix only the broken rows
  4. re-evaluate those rows
  5. confirm the summary changed
  6. apply the run

That is a much better shape than “upload a file and hope for the best.”
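Step 5 in that list boils down to comparing run-level counts before and after remediation. A minimal sketch, with illustrative names:

```python
from collections import Counter

def run_summary(rows: list[dict]) -> dict[str, int]:
    """Count rows per state so the operator can confirm the summary changed."""
    return dict(Counter(row["state"] for row in rows))

rows = [{"state": "ready"}, {"state": "blocked"}, {"state": "blocked"}]
before = run_summary(rows)   # {"ready": 1, "blocked": 2}
rows[1]["state"] = "ready"   # one blocked row fixed in place
after = run_summary(rows)    # {"ready": 2, "blocked": 1}
```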

Screenshot 3: run after remediation and apply

Example problems handled in this pass

In this test flow, the blocked rows were caused by common operational issues:

  • missing company_name
  • invalid email format
  • unsupported status input

Those are exactly the kinds of problems that show up in real CSV imports.

The important part is not the individual values.

The important part is that the system can now recover from those issues inside the same staged workflow.
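Those three blocked reasons can be sketched as a single row-level validator. The status set and the email regex below are assumptions for illustration, not the console's actual rules:

```python
import re

SUPPORTED_STATUSES = {"active", "paused", "closed"}   # assumed status dictionary
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # deliberately simple check

def blocked_reasons(row: dict) -> list[str]:
    """Return the reasons a row should be blocked; an empty list means ready."""
    reasons = []
    if not row.get("company_name", "").strip():
        reasons.append("missing company_name")
    if not EMAIL_RE.match(row.get("email", "")):
        reasons.append("invalid email format")
    if row.get("status", "").lower() not in SUPPORTED_STATUSES:
        reasons.append("unsupported status input")
    return reasons
```

Because the result is a list rather than a single flag, the UI can show every problem with a row at once instead of revealing them one re-upload at a time.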

What this project is becoming

I am not trying to build a magical CSV uploader.

I am trying to build a system that can:

  • stage messy operational data
  • explain what will happen
  • let an operator repair what is broken
  • apply the run intentionally
  • preserve an audit trail of what happened

That is a much stronger foundation than a one-shot, all-or-nothing upload.

What comes next

The next meaningful step is company-specific import rules.

That means moving from hardcoded assumptions toward configurable rules like:

  • column mapping profiles
  • status dictionaries
  • required field rules
  • duplicate matching rules

That is the layer that turns a working remediation loop into a more reusable intake engine.
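As a first sketch of what such a per-company profile could look like (the field names and the `apply_profile` helper are hypothetical, not the planned implementation):

```python
from dataclasses import dataclass

@dataclass
class ImportProfile:
    """Hypothetical per-company import configuration."""
    column_mapping: dict[str, str]      # source column -> canonical field
    status_dictionary: dict[str, str]   # raw status input -> canonical status
    required_fields: set[str]
    duplicate_keys: tuple[str, ...] = ("email",)  # fields used for duplicate matching

acme = ImportProfile(
    column_mapping={"Company": "company_name", "E-mail": "email"},
    status_dictionary={"ACT": "active", "CLS": "closed"},
    required_fields={"company_name", "email"},
)

def apply_profile(raw: dict, profile: ImportProfile) -> dict:
    """Rename columns and translate statuses according to the profile."""
    row = {profile.column_mapping.get(k, k): v for k, v in raw.items()}
    if "status" in row:
        row["status"] = profile.status_dictionary.get(row["status"], row["status"])
    return row
```

The point of the shape: everything that is currently hardcoded becomes data, so a second company is a new profile rather than a new code path.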

For now, though, this update solves an important practical problem:

blocked rows can be repaired inside the console instead of being pushed back out into a manual re-upload loop.


If you work on messy CSV onboarding, import remediation, or operational data intake problems, feel free to reach out: fastapienne@gmail.com
