Diogo Heleno

Posted on • Originally published at m21global.com

Building Translation Pipelines: API Integration Strategies for Human vs Machine Translation Workflows


As a developer working with multilingual content, you've probably faced the choice between human translation APIs and machine translation services. But there's a third option that's becoming increasingly common: hybrid workflows that combine both approaches based on content type and quality requirements.

This technical deep-dive covers how to architect translation pipelines that can dynamically route content through different translation processes, inspired by the ISO 17100 (human translation) and ISO 18587 (machine translation post-editing) standards that translation providers use.

Understanding the Translation Workflow Types

Before jumping into implementation, it's worth understanding what these different workflows actually look like from a technical perspective:

Human Translation Workflow (ISO 17100-style):

  • Content → Human Translator → Human Reviewer → Final Output
  • Typical turnaround: 2000-3000 words per day per translator
  • Best for: Legal docs, safety-critical content, marketing copy

Machine Translation Post-Editing (ISO 18587-style):

  • Content → MT Engine → Human Post-Editor → Final Output
  • Typical turnaround: 5000-8000 words per day per post-editor
  • Best for: Technical docs, internal communications, high-volume content

The key insight? You can build systems that automatically route content based on predefined rules.
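
The two workflows above can be modeled as a small enum that keeps their nominal throughput in one place, which makes turnaround estimates trivial. This is a sketch; the per-day figures are the estimates from the list above, and the names are illustrative:

```python
from enum import Enum

class Workflow(Enum):
    # Nominal daily throughput per linguist, from the estimates above
    HUMAN = ('human_translation', 2000, 3000)
    MT_POST_EDIT = ('mt_post_edit', 5000, 8000)

    def __init__(self, label, min_words_per_day, max_words_per_day):
        self.label = label
        self.min_words_per_day = min_words_per_day
        self.max_words_per_day = max_words_per_day

def estimate_turnaround_days(word_count, workflow):
    # Pessimistic estimate: divide by the low end of the throughput range
    return word_count / workflow.min_words_per_day

print(estimate_turnaround_days(10000, Workflow.HUMAN))         # 5.0
print(estimate_turnaround_days(10000, Workflow.MT_POST_EDIT))  # 2.0
```

Estimates like these are useful for setting expectations with stakeholders before a job is even submitted.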

Architecting a Hybrid Translation Pipeline

Here's a basic architecture for a content routing system:

```python
class TranslationRouter:
    def __init__(self):
        self.human_api = HumanTranslationAPI()
        self.mt_api = MachineTranslationAPI()
        self.post_edit_api = PostEditingAPI()

    def route_content(self, content, metadata):
        content_type = self.classify_content(content, metadata)

        if content_type in ('legal', 'safety', 'marketing'):
            return self.human_translation_workflow(content)
        elif content_type in ('technical', 'documentation', 'internal'):
            return self.mt_post_edit_workflow(content)
        else:
            return self.mt_only_workflow(content)

    def classify_content(self, content, metadata):
        # Classification logic could use keyword matching, an ML
        # classifier, or metadata tags. Simplest version: trust metadata.
        return metadata.get('content_type', 'general')
```

Content Classification Strategies

The routing decision depends on accurate content classification. Here are several approaches:

1. Metadata-Based Classification

The simplest approach uses content tags or metadata:

```python
def classify_by_metadata(metadata):
    high_risk_types = {'contract', 'safety_manual', 'legal_notice'}
    medium_risk_types = {'user_manual', 'technical_spec', 'faq'}

    content_type = metadata.get('content_type')
    if content_type in high_risk_types:
        return 'human_only'
    elif content_type in medium_risk_types:
        return 'mt_post_edit'
    else:
        return 'mt_only'
```

2. Keyword-Based Classification

For content without clear metadata, analyze the text itself:

```python
import re

def classify_by_keywords(content):
    safety_keywords = r'\b(warning|danger|caution|hazard|risk)\b'
    legal_keywords = r'\b(shall|liability|agreement|terms|conditions)\b'

    safety_matches = len(re.findall(safety_keywords, content, re.IGNORECASE))
    legal_matches = len(re.findall(legal_keywords, content, re.IGNORECASE))

    content_length = len(content.split())
    if content_length == 0:
        return 'mt_only'

    # Normalize to matches per 1000 words so thresholds are length-independent
    safety_density = safety_matches / content_length * 1000
    legal_density = legal_matches / content_length * 1000

    if safety_density > 5 or legal_density > 10:
        return 'human_only'
    elif safety_density > 1 or legal_density > 3:
        return 'mt_post_edit'
    else:
        return 'mt_only'
```

API Integration Patterns

Human Translation APIs

Most professional translation services offer REST APIs. Here's a generic wrapper:

```python
import requests
import time

class HumanTranslationAPI:
    def __init__(self, api_key, base_url):
        self.api_key = api_key
        self.base_url = base_url

    def _headers(self):
        return {'Authorization': f'Bearer {self.api_key}'}

    def submit_job(self, content, source_lang, target_lang, quality_level='professional'):
        payload = {
            'content': content,
            'source_language': source_lang,
            'target_language': target_lang,
            'quality_level': quality_level,
            'workflow': 'translation_editing_proofreading'  # TEP workflow
        }

        response = requests.post(
            f'{self.base_url}/jobs',
            json=payload,
            headers=self._headers(),
            timeout=30
        )
        response.raise_for_status()
        return response.json()['job_id']

    def get_status(self, job_id):
        response = requests.get(
            f'{self.base_url}/jobs/{job_id}',
            headers=self._headers(),
            timeout=30
        )
        response.raise_for_status()
        return response.json()

    def poll_until_complete(self, job_id, poll_interval=300):
        # Human jobs take hours or days, so poll infrequently
        while True:
            status = self.get_status(job_id)
            if status['status'] == 'completed':
                return status['translated_content']
            elif status['status'] == 'failed':
                raise RuntimeError(f"Translation failed: {status.get('error')}")
            time.sleep(poll_interval)
```
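
Fixed-interval polling works, but for long-running human jobs an exponential backoff with an overall deadline avoids hammering the API while still failing loudly if a job stalls. Here's a generic sketch; the injected `sleep` and `clock` parameters are there purely to make it testable, and the status payload shape is assumed to match the wrapper above:

```python
import time

def poll_with_backoff(get_status, job_id, initial_interval=60,
                      max_interval=1800, deadline_hours=72,
                      sleep=time.sleep, clock=time.monotonic):
    """Poll get_status(job_id) until completion, backing off exponentially."""
    deadline = clock() + deadline_hours * 3600
    interval = initial_interval
    while clock() < deadline:
        status = get_status(job_id)
        if status['status'] == 'completed':
            return status['translated_content']
        if status['status'] == 'failed':
            raise RuntimeError(f"Translation failed: {status.get('error')}")
        sleep(interval)
        interval = min(interval * 2, max_interval)  # cap the backoff
    raise TimeoutError(f"Job {job_id} not finished within {deadline_hours}h")
```

Capping the interval (here at 30 minutes) keeps status updates reasonably fresh even on multi-day jobs.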

Machine Translation with Post-Editing

For MT + post-editing workflows, you'll typically chain two API calls:

```python
class MTPostEditWorkflow:
    def __init__(self, mt_api, post_edit_api):
        self.mt_api = mt_api
        self.post_edit_api = post_edit_api

    def translate_and_post_edit(self, content, source_lang, target_lang, edit_level='full'):
        # Step 1: Machine translation
        mt_output = self.mt_api.translate(content, source_lang, target_lang)

        # Step 2: Post-editing
        pe_job_id = self.post_edit_api.submit_job(
            source_text=content,
            mt_output=mt_output,
            target_language=target_lang,
            edit_level=edit_level  # 'light' or 'full'
        )

        return self.post_edit_api.poll_until_complete(pe_job_id)
```
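
A common extension of this chain is escalation: try the cheaper MT + post-editing path first, and fall back to the full human workflow if the result fails quality checks. A minimal sketch, where all three callables are illustrative stand-ins for the real APIs:

```python
def translate_with_escalation(content, source_lang, target_lang,
                              mt_pe_workflow, human_workflow, qa_check):
    """Try MT + post-editing first; escalate to full human translation
    if the candidate fails QA. The qa_check callable is assumed to
    return a dict with a 'passed' key, like the QA class below."""
    candidate = mt_pe_workflow(content, source_lang, target_lang)
    if qa_check(content, candidate)['passed']:
        return candidate
    # QA failed: pay for the more expensive but safer workflow
    return human_workflow(content, source_lang, target_lang)
```

The trade-off is latency: an escalated job pays for both workflows, so this pattern fits content where occasional delays are acceptable but quality failures are not.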

Quality Assurance Integration

Regardless of the translation method, implement QA checks:

```python
class TranslationQA:
    # Expected target/source word-count ratios per language pair.
    # These defaults are illustrative; calibrate them from your own data.
    EXPECTED_RATIOS = {
        ('en', 'es'): (1.0, 1.35),
        ('en', 'de'): (0.8, 1.1),
    }

    def __init__(self):
        self.checks = [
            self.check_length_ratio,
            self.check_terminology_consistency,
            self.check_formatting_preservation,
        ]

    def validate_translation(self, source, target, metadata):
        issues = []

        for check in self.checks:
            result = check(source, target, metadata)
            if not result['passed']:
                issues.append(result)

        return {
            'passed': len(issues) == 0,
            'issues': issues
        }

    def get_expected_ratio(self, language_pair):
        # Fall back to a generous range for unlisted pairs
        return self.EXPECTED_RATIOS.get(tuple(language_pair), (0.7, 1.4))

    def check_length_ratio(self, source, target, metadata):
        source_length = len(source.split())
        target_length = len(target.split())
        ratio = target_length / source_length if source_length > 0 else 0

        # Expected ratios vary by language pair
        expected_min, expected_max = self.get_expected_ratio(metadata['language_pair'])

        return {
            'passed': expected_min <= ratio <= expected_max,
            'message': f'Length ratio {ratio:.2f}, expected range {expected_min}-{expected_max}'
        }

    def check_terminology_consistency(self, source, target, metadata):
        # Stub: compare glossary terms between source and target
        return {'passed': True}

    def check_formatting_preservation(self, source, target, metadata):
        # Stub: verify placeholders, tags, and numbers survive translation
        return {'passed': True}
```
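
The terminology check mentioned above can be sketched as a glossary lookup: for every source term that appears, verify the approved target term appears in the translation. The glossary format (a plain source-to-target dict) is an assumption for illustration:

```python
import re

def check_terminology_consistency(source, target, glossary):
    """Verify each glossary source term appearing in the source text
    is rendered with the approved target term in the translation.
    `glossary` maps source terms to required target terms."""
    issues = []
    for src_term, tgt_term in glossary.items():
        if re.search(rf'\b{re.escape(src_term)}\b', source, re.IGNORECASE):
            if not re.search(rf'\b{re.escape(tgt_term)}\b', target, re.IGNORECASE):
                issues.append(f"'{src_term}' should be translated as '{tgt_term}'")
    return {'passed': not issues, 'issues': issues}
```

Note this is string-level matching only; inflected languages will need lemmatization or stemming before the comparison to avoid false positives.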

Monitoring and Analytics

Track performance metrics across different workflow types:

```python
class TranslationMetrics:
    def __init__(self, db_connection):
        self.db = db_connection

    def log_job(self, job_id, workflow_type, word_count, duration, cost):
        # Assumes the created_at column defaults to the current timestamp
        self.db.execute(
            "INSERT INTO translation_jobs "
            "(job_id, workflow_type, word_count, duration_hours, cost) "
            "VALUES (?, ?, ?, ?, ?)",
            (job_id, workflow_type, word_count, duration, cost)
        )

    def get_efficiency_report(self, date_range):
        return self.db.execute(
            "SELECT workflow_type, "
            "AVG(word_count / duration_hours) AS words_per_hour, "
            "AVG(cost / word_count) AS cost_per_word "
            "FROM translation_jobs "
            "WHERE created_at BETWEEN ? AND ? "
            "GROUP BY workflow_type",
            date_range
        ).fetchall()
```
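
To see the aggregation in action, here is a self-contained demo against an in-memory SQLite database. The schema and the sample numbers are illustrative; note the `created_at` column gets a `DEFAULT CURRENT_TIMESTAMP` so inserts don't need to supply it:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE translation_jobs (
        job_id TEXT PRIMARY KEY,
        workflow_type TEXT,
        word_count INTEGER,
        duration_hours REAL,
        cost REAL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

# Two sample jobs: one human-translated, one MT + post-edited
jobs = [
    ('j1', 'human', 2500, 10.0, 300.0),
    ('j2', 'mt_post_edit', 6000, 8.0, 240.0),
]
conn.executemany(
    "INSERT INTO translation_jobs "
    "(job_id, workflow_type, word_count, duration_hours, cost) "
    "VALUES (?, ?, ?, ?, ?)",
    jobs
)

report = conn.execute(
    "SELECT workflow_type, "
    "AVG(word_count / duration_hours) AS words_per_hour, "
    "AVG(cost / word_count) AS cost_per_word "
    "FROM translation_jobs GROUP BY workflow_type"
).fetchall()
for row in report:
    print(row)
```

With these sample figures the report shows the expected pattern: post-editing moves roughly three times the words per hour at a third of the per-word cost.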

Language Pair Considerations

Machine translation quality varies significantly by language pair. Build this into your routing logic:

```python
MT_QUALITY_SCORES = {
    ('en', 'es'): 0.85,
    ('en', 'fr'): 0.82,
    ('en', 'de'): 0.78,
    ('en', 'ja'): 0.65,
    ('en', 'ar'): 0.60,
    # Add more pairs based on your experience
}

def should_use_mt_workflow(language_pair, content_type):
    # Unknown pairs get a conservative default score
    mt_score = MT_QUALITY_SCORES.get(language_pair, 0.5)

    if content_type == 'critical' and mt_score < 0.8:
        return False
    elif content_type == 'standard' and mt_score < 0.7:
        return False

    return True
```

Putting It All Together

This approach gives you the flexibility to optimize for cost, speed, and quality based on content requirements. The key is building the classification logic that matches your specific use case and continuously monitoring performance to refine your routing rules.

The original discussion about ISO translation standards provides good background on why these different workflows exist and when to use each approach.

Start with simple metadata-based routing, then gradually add more sophisticated classification as you gather data about what works best for your content types and quality requirements.
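
As a concrete starting point, the simple metadata-based routing can be as small as a lookup table with a safe default. The content-type names here are illustrative:

```python
# Map content types to workflows; anything unrecognized falls back
# to the cheapest path, which you can tighten as you gather data
ROUTING_RULES = {
    'contract': 'human_only',
    'safety_manual': 'human_only',
    'user_manual': 'mt_post_edit',
    'blog_post': 'mt_only',
}

def route(metadata):
    return ROUTING_RULES.get(metadata.get('content_type'), 'mt_only')

print(route({'content_type': 'contract'}))   # human_only
print(route({'content_type': 'blog_post'}))  # mt_only
print(route({}))                             # mt_only
```

From there, swap the dict lookup for keyword density, MT quality scores, or an ML classifier without changing the callers.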
