
Xiao Ling

Originally published at dynamsoft.com

How to Generate Custom Dynamsoft Barcode Reader Templates in Python for Hard Barcode Images

Default barcode presets are designed for broad coverage, not for every extreme image you will see in production. When a symbol is tiny, low-contrast, tightly cropped, or surrounded by distracting texture, the right fix is usually not to keep randomizing parameters. A better approach is to let AI follow a strict workflow: inspect the image, validate the current template, probe controlled variants, classify the failure mode, and then generate the smallest focused JSON template that matches the evidence.

What you'll build: A Python workflow that uses AI, OpenCV, and Dynamsoft Barcode Reader helper scripts to diagnose hard barcode images, generate focused JSON templates, and validate them before import.

Build AI-Generated Custom Barcode Reader Templates in Python with Dynamsoft

Follow the Workflow from Diagnosis to Focused Template Generation

  • Why hard barcode images often need custom DBR templates instead of more blind retries.
  • How to turn barcode-tuning knowledge into an AI skill and prompt that produce consistent results.
  • How AI visual analysis and OpenCV-based validation work together in the same tuning loop.
  • How to validate and probe DBR templates with small Python helpers before changing JSON.
  • How to generate a focused single-barcode DataMatrix template for a real tiny-symbol image.

Why Focused DBR Templates Matter for Hard Barcode Images

Dynamsoft Barcode Reader is flexible because it lets you tune scope, localization, decode, and acceptance behavior independently. That flexibility is exactly why custom templates matter. A hard image might fail because the scene contains too much background, because localization is too conservative, because the decoder needs a different grayscale or binarization path, or because the acceptance rules reject a symbol that is otherwise readable.

The problem is not capability. The problem is undirected tuning. Without a disciplined workflow, it is easy to change DeblurModes, LocalizationModes, and MinQuietZoneWidth all at once and learn nothing from the result. In this project, AI is useful because it does not just generate JSON. It follows a falsifiable decision tree encoded in the skill and prompt. It also knows when to stop. For example, the same repo contains an Aztec case where the raw image and its basic variants all keep failing, which proves that missing context pixels are the real blocker and that preprocessing is more justified than further template churn.

Prepare the Inputs for Repeatable Template Tuning

  • Use the default DBR preset as a baseline rather than a guarantee for extreme images.
  • Let AI classify whether scope, localization, decode, or acceptance settings deserve attention first.
  • Treat raw plus basic-variant failure as a stop signal for template-only tuning.
  • Keep focused JSON templates small so they are easier to validate and import into the desktop app.

Prerequisites

  • Python 3.9+
  • Get a 30-day free trial license for Dynamsoft Barcode Reader.
  • The dependencies in template-optimizer/requirements.txt if you only want the optimizer package, or requirements.txt if you also want the full GUI app
  • A difficult barcode image
  • A baseline DBR template file if you already have one

Install the standalone optimizer dependencies:

pip install -r template-optimizer/requirements.txt
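
If you also want the full GUI desktop app from the same repository, install the top-level dependencies instead:

pip install -r requirements.txt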

Step 1: Encode the Tuning Rules into a Reusable AI Workflow

The first step is to stop asking AI for ad hoc parameter guesses. In this project, the canonical workflow now lives inside the self-contained template-optimizer/ directory: template-optimizer/SKILL.md captures the decision logic, template-optimizer/PROMPT.md provides a portable user-facing entrypoint, and template-optimizer/KNOWLEDGE.md stores the tuning reference. The .github/ files are still useful in VS Code Copilot, but only as thin wrappers around that portable package.

This is the copyable prompt contract from template-optimizer/PROMPT.md:

Read `template-optimizer/SKILL.md` and `template-optimizer/KNOWLEDGE.md` first. Follow the Python-only template-optimizer workflow and use the bundled helper scripts under `template-optimizer/tools/` for single-image triage before broad benchmarking.

Task: tune this barcode image.

- Image path: `<image-path>`
- Current template path: `<template-path-or-none>`
- Format hint: `<format-or-none>`

And this discipline now lives in the canonical template-optimizer/SKILL.md workflow, with .github/skills/template-optimizer/SKILL.md acting only as an optional Copilot wrapper:

## Procedure

1. Observe the image before touching the template.
Use the image tool on the provided barcode image and describe visible symptoms in concrete terms: quiet-zone loss, border clipping, blur, low contrast, warping, wrinkles, inversion, reflection, background texture, finder-pattern damage, or perspective skew.

2. Validate the current template directly through the simplest DBR path.
The validator must initialize the DBR license before creating `CaptureVisionRouter`; otherwise treat `NO_RESULT` as invalid evidence.
Run `python template-optimizer/tools/validate_dbr_template.py <image> --template-file <template>`.

3. Establish a broader baseline only after the direct validation result is known.
Run `python template-optimizer/tools/probe_dbr_templates.py <image>`.

That structure matters because AI now has a repeatable way to reason about DBR flexibility. It must classify the problem before it edits JSON. That is what turns AI from a template randomizer into a tuning assistant.

Step 2: Run the Portable Skill and Prompt in Practice

This part is critical. If an article shows the generated skill and prompt but never explains how to use them, many readers still will not know how to make the workflow run in a real tool.

The simplest packaging rule is this: treat template-optimizer/ as the product, and treat .github/ as optional integration glue for VS Code Copilot. In practice, a developer can copy just the template-optimizer/ directory into another workspace, install template-optimizer/requirements.txt, provide a barcode image plus any current template path, and ask the agent to read template-optimizer/PROMPT.md. That is enough to run the same workflow in Copilot, Claude, or another coding agent that can read files and execute local Python commands.
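
As a concrete sketch, assuming a destination folder named my-workspace/ (any path works) and a POSIX shell, the portable package can be copied and prepared like this:

cp -r template-optimizer/ my-workspace/
cd my-workspace
pip install -r template-optimizer/requirements.txt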

Inside VS Code Copilot, the flow becomes:

  1. Open the workspace that contains template-optimizer/.
  2. Optionally keep .github/prompts/ and .github/skills/ if you want native Copilot discovery.
  3. Provide the barcode image, the current template path, and any known format constraints in Copilot Chat.
  4. Use template-optimizer/PROMPT.md directly, or ask Copilot to use the template-optimizer skill if the optional wrapper is present.
  5. Let the agent call the bundled Python scripts under template-optimizer/tools/, such as validate_dbr_template.py, probe_dbr_templates.py, and compare_dbr_template_profiles.py, to turn the visual diagnosis into evidence.

In other words, the portable prompt decides where the workflow starts, the canonical skill decides how it should reason, and the local scripts decide how the conclusions get validated. The .github/ files just make that package easier for Copilot to discover.

What about other tools? They can use the same method, but you need to separate the workflow itself from the file format that stores it.

Tools such as OpenCode and Claude Code can usually reuse the same methodology, but they may not natively understand GitHub Copilot-style files such as .github/prompts/*.prompt.md and .github/skills/*/SKILL.md. That is exactly why the prompt was moved into template-optimizer/PROMPT.md: the portable package no longer depends on .github/ to be usable. In those tools, you usually just tell the agent to read template-optimizer/PROMPT.md, template-optimizer/SKILL.md, and template-optimizer/KNOWLEDGE.md.

That does not mean the logic has to be rewritten. Three parts are portable:

  • the workflow itself: inspect the image, validate, probe, classify, and then generate a focused template
  • the local script capability: template-optimizer/tools/validate_dbr_template.py, template-optimizer/tools/probe_dbr_templates.py, and JSON template editing
  • the image input capability: the AI must be able to actually inspect the image, not just read a text description of it

So if you want to use this approach in OpenCode or Claude Code, at least three conditions should be true:

  1. The tool supports image input or multimodal visual analysis.
  2. The tool can access the workspace files and run local terminal commands.
  3. You are willing to translate the VS Code-style prompt and skill content into whatever instruction format that tool expects.

Once those conditions are met, the approach is not limited to VS Code: GPT-5.4 works, and so does any other multimodal AI system with visual capability. VS Code Copilot Chat can consume this file structure more directly, while OpenCode and Claude Code are better treated as environments where the same rules are adapted and reused. What needs to travel is not one specific file format, but the full loop of visual diagnosis, script-based validation, and focused template generation.

Step 3: Combine AI Vision with OpenCV-Based Validation

This workflow uses both AI and OpenCV, but they do different jobs.

In the first step, the model inspects the image itself and produces a visual diagnosis. That part is not driven by OpenCV code. It comes from the model's own multimodal reasoning: looking at the image and describing visible symptoms such as a tiny symbol, a noisy background, missing quiet zone, clipping, inversion, blur, or perspective distortion. That matters because template tuning is rarely just a brute-force search problem. Someone, or something, has to decide which parameter family deserves attention first.

GPT-5.4 is one option for that role, but it is not the only one. Other multimodal AI systems can also be used to generate focused templates, as long as they can actually inspect images and reason about visible symptoms before touching the JSON. The key requirement is not a specific vendor or model name. The key requirement is visual capability plus the ability to participate in a disciplined validation loop.

After that, OpenCV and DBR take over for the verification step. OpenCV loads the image, generates controlled variants when needed, and feeds those pixels into CaptureVisionRouter. DBR then gives the hard evidence: NO_RESULT, a successful decode, or a change in behavior after a specific template edit. In other words, AI proposes the diagnosis, while OpenCV and DBR test whether that diagnosis is actually true.

That combination is important because each side covers the other's weakness. AI alone is good at spotting visual patterns quickly, but it still needs a deterministic validation path. OpenCV alone is good at repeatable pixel operations, but it has no built-in understanding of which image symptom is most likely tied to scope, localization, decode, or acceptance settings. Put together, the workflow becomes much stronger: AI chooses a plausible next move, and OpenCV plus DBR turn that move into evidence.
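
Here is a minimal sketch of that loop in Python. It assumes the Dynamsoft Capture Vision Python bundle (the exact package name may differ from what the repo pins), the built-in ReadBarcodes preset, and placeholder paths and license key:

from dynamsoft_capture_vision_bundle import *
import cv2

# The license must be initialized before the router is created.
LicenseManager.init_license("YOUR-DBR-LICENSE-KEY")
router = CaptureVisionRouter()

# OpenCV produces the controlled variant: here a 4x nearest-neighbor upscale.
image = cv2.imread("hard-barcode.png")
variant = cv2.resize(image, None, fx=4, fy=4, interpolation=cv2.INTER_NEAREST)

# DBR turns the visual diagnosis into evidence: decoded items or NO_RESULT.
for name, pixels in (("original", image), ("nearest_4x", variant)):
    result = router.capture(pixels, EnumPresetTemplate.PT_READ_BARCODES.value)
    barcodes = result.get_decoded_barcodes_result()
    items = barcodes.get_items() if barcodes else []
    print(name, [item.get_text() for item in items] or "NO_RESULT")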

Step 4: Validate the Current Template Before Editing JSON

The smallest trustworthy DBR check in this repo is template-optimizer/tools/validate_dbr_template.py. It loads one image, resolves one template name, initializes the license, and calls capture() directly. That makes it the right first step before you change anything.

The README already exposes the direct validation command:

python template-optimizer/tools/validate_dbr_template.py path/to/image.png --template-file candidate-template.json

The implementation is deliberately minimal:

def decode(image_path, template_file=None, template_name=None, fallback_profile="none"):
    ensure_dbr_license()
    router = CaptureVisionRouter()
    if template_file:
        err, msg = router.init_settings_from_file(str(template_file))
        if err != 0:
            print(f"[DBR] Template load failed ({err}): {msg}")

    cv_img = cv2.imread(str(image_path))
    if cv_img is None:
        raise RuntimeError(f"Failed to load image with OpenCV: {image_path}")

    result = router.capture(cv_img, template_name)
    return result.get_decoded_barcodes_result()
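
A short usage sketch for that helper; the result accessors below assume the bundle's standard DecodedBarcodesResult API rather than code copied from the repo:

decoded = decode("path/to/image.png", template_file="candidate-template.json")
items = decoded.get_items() if decoded else []
if not items:
    print("NO_RESULT")
for item in items:
    print(item.get_format_string(), item.get_text())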

Original example image with a tiny DataMatrix code on a pink object against a dark textured background

Example input image used throughout Steps 4-6: a tiny DataMatrix sits on a pink object within a large dark textured scene.

On the example image used in this project, that discipline immediately pays off. The raw image is dominated by dark textured background, while the DataMatrix occupies only a tiny region on a pink object. Running the default preset and the built-in templates on the original image produced only NO_RESULT, which is strong evidence that the first problem is not acceptance tuning. It is scene scope and localization.

Step 5: Probe Variants to Classify the Failure Before Editing JSON

Once the direct check fails, the next step is not to guess. It is to probe the image systematically. The helper script does exactly that by trying the preset, every bundled example template, and optional preprocessing variants.

The README exposes the broad evidence pass like this:

python template-optimizer/tools/probe_dbr_templates.py path/to/image.png --variant-set basic --report-json tuning-report.json

The variant generator is intentionally small and transparent:

def build_variants(image_path, variant_set):
    variants = [("original", image_path)]
    if variant_set == "none":
        return variants

    source = cv2.imread(str(image_path), cv2.IMREAD_COLOR)
    if source is None:
        return variants

    variant_dir = Path.cwd() / "_probe_variants" / image_path.stem
    variant_dir.mkdir(parents=True, exist_ok=True)

    def save_variant(name, image):
        path = variant_dir / f"{name}.png"
        cv2.imwrite(str(path), image)
        variants.append((name, path))

    padded = cv2.copyMakeBorder(
        source,
        40,
        40,
        40,
        40,
        cv2.BORDER_CONSTANT,
        value=(255, 255, 255),
    )
    save_variant("padded_40", padded)

    for scale in (2, 4):
        resized = cv2.resize(source, None, fx=scale, fy=scale, interpolation=cv2.INTER_NEAREST)
        save_variant(f"nearest_{scale}x", resized)

For the example image in this project, that probe gave the kind of evidence you want before generating a template:

  • The original image stayed NO_RESULT across the default preset and all built-in templates.
  • Padding alone did not fix the issue.
  • nearest_4x plus ReadBarcodes_ReadRateFirst decoded one DATAMATRIX result: DM-18Batch.
  • The variant results show that the main blocker is tiny effective symbol size in a large noisy scene, not an unrecoverable pixel failure.

That pattern is important. It means the pixels are recoverable, but the raw scene is too hostile for the broad default path. In other words, this is a good candidate for AI-generated focused tuning. In the final focused template, the key fix was to narrow the ROI to the centered symbol region so DBR spends its effort on the relevant area instead of the full textured background.
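
If you prefer to derive those ROI percentages from measurements instead of eyeballing them, a small helper can turn a pixel bounding box into the percentage-based quadrilateral used by the focused template in the next step. The bounding-box numbers in the example are illustrative, not taken from the sample image:

def roi_from_bbox(image_w, image_h, x, y, w, h, margin=0.05):
    """Convert a pixel bounding box into a MeasuredByPercentage ROI quadrilateral."""
    left = round(max(0.0, x / image_w - margin) * 100)
    top = round(max(0.0, y / image_h - margin) * 100)
    right = round(min(1.0, (x + w) / image_w + margin) * 100)
    bottom = round(min(1.0, (y + h) / image_h + margin) * 100)
    return {
        "FirstPoint": [left, top, 1, 1],
        "SecondPoint": [right, top, 1, 1],
        "ThirdPoint": [right, bottom, 1, 1],
        "FourthPoint": [left, bottom, 1, 1],
        "MeasuredByPercentage": 1,
    }

# Example: a 120x120 px symbol near the center of a 1280x960 image.
print(roi_from_bbox(1280, 960, 560, 400, 120, 120))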

Step 6: Generate a Focused Single-Barcode DataMatrix Template

After the probe showed a single tiny DataMatrix and a scale-sensitive recovery path, AI can generate a much smaller template that matches the actual image. In this case, the key decisions are straightforward:

  • limit the format to BF_DATAMATRIX
  • limit the expected count to 1
  • add stronger localization modes for a tiny centered symbol
  • narrow the ROI to the center of the image instead of searching the whole frame
  • keep both normal and mirrored format specs available

AI can generate a focused template like the following when the image evidence clearly points to a single tiny DataMatrix:

{
    "BarcodeFormatSpecificationOptions" :
    [
        {
            "BarcodeFormatIds" :
            [
                "BF_DATAMATRIX"
            ],
            "DataMatrixSizeOptions" :
            [
                "DMS_DEFAULT"
            ],
            "ExpectedBarcodesCount" : 1,
            "FindUnevenModuleBarcode" : 1,
            "HasVerticalQuietZone" : 1,
            "MinQuietZoneWidth" : 4,
            "MinResultConfidence" : 30,
            "MirrorMode" : "MM_BOTH",
            "Name" : "bfs-single-tiny-datamatrix-both",
            "PartitionModes" :
            [
                "PM_WHOLE_BARCODE",
                "PM_ALIGNMENT_PARTITION"
            ],
            "ReturnPartialBarcodeValue" : 1
        }
    ],
    "TargetROIDefOptions" :
    [
        {
            "Name" : "roi-read-single-tiny-datamatrix",
            "Location" :
            {
                "Offset" :
                {
                    "FirstPoint" : [40, 30, 1, 1],
                    "SecondPoint" : [65, 30, 1, 1],
                    "ThirdPoint" : [65, 65, 1, 1],
                    "FourthPoint" : [40, 65, 1, 1],
                    "MeasuredByPercentage" : 1
                }
            }
        }
    ],
    "BarcodeReaderTaskSettingOptions" :
    [
        {
            "BarcodeFormatIds" :
            [
                "BF_DATAMATRIX"
            ],
            "ExpectedBarcodesCount" : 1,
            "Name" : "task-read-single-tiny-datamatrix"
        }
    ]
}

That generated template can then be validated against the target image with the same helper script:

python template-optimizer/tools/validate_dbr_template.py path/to/image.png --template-file path/to/generated-template.json

In this workspace, the custom template decoded the DataMatrix value DM-18Batch directly on the original image. That is the practical payoff of AI-generated template work: not a giant exported preset, but a small file that matches the failure mode you actually observed and solves it through template configuration alone.

Step 7: Load the Focused Template Back into the App

Once the focused template works, the desktop app can import it directly. This is why focused single-template files are especially useful in this project: the GUI explicitly uses the first template name in the JSON file.

Here is the import logic from main.py:

def _import_dbr_template(self):
    """Load a custom DBR template JSON file for all subsequent captures."""
    path, _ = QFileDialog.getOpenFileName(
        self, "Import DBR Template",
        self._active_template_path or "",
        "JSON files (*.json)"
    )
    if not path:
        return
    try:
        # Validate by trying to load into a temporary router
        router = CaptureVisionRouter()
        err, msg = router.init_settings_from_file(path)
        if err != 0:
            QMessageBox.warning(self, "Import Failed",
                                f"DBR error [{err}]: {msg}")
            return

        # Find the first template name inside the JSON so we use the right one
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)
        templates = data.get("CaptureVisionTemplates", [])
        if templates:
            template_name = templates[0]["Name"]
        else:
            # Fallback: keep the default name
            template_name = EnumPresetTemplate.PT_READ_BARCODES.value

That small detail has a big architectural consequence: if you want predictable behavior in the GUI, generate a focused template whose first CaptureVisionTemplate is the one you actually want to run. AI-generated templates are much easier to keep clean than manually edited all-in-one exports.
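
For reference, the CaptureVisionTemplates entry that the GUI picks up typically wires the pipeline together like the fragment below in DBR v10-style templates. The names echo the Step 6 template, but this sketch is a hand-written illustration of the structure, not copied from the repo:

{
    "CaptureVisionTemplates" :
    [
        {
            "Name" : "ReadSingleTinyDataMatrix",
            "ImageROIProcessingNameArray" : [ "roi-read-single-tiny-datamatrix" ]
        }
    ],
    "TargetROIDefOptions" :
    [
        {
            "Name" : "roi-read-single-tiny-datamatrix",
            "TaskSettingNameArray" : [ "task-read-single-tiny-datamatrix" ]
        }
    ]
}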

Watch the Desktop Demo

The sample project already includes a short demo of the Python desktop tool that can import and test custom DBR templates:

Source Code

https://github.com/yushulx/python-barcode-qrcode-sdk/tree/main/examples/official/annotation_tool
