DEV Community

Cover image for Context Engineering: La Nueva Disciplina que Reduce Alucinaciones un 85%
Abdessamad Ammi
Abdessamad Ammi

Posted on • Originally published at bcloud.consulting

Context Engineering: La Nueva Disciplina que Reduce Alucinaciones un 85%

Publicado originalmente en bcloud.consulting

TL;DR

• Context Engineering reduce alucinaciones 85% vs prompt engineering solo
• No es mejorar prompts, es arquitectura de información inteligente
• Resultados: 76% a 94% accuracy, 40% menos tokens
• Técnicas: memoria jerárquica, chunking semántico, ranking multidimensional
• Código implementable con ejemplos reales


¿Qué es Context Engineering?

Mientras todos optimizan prompts, los equipos avanzados diseñan arquitecturas de contexto.

Context Engineering es la disciplina de estructurar, gestionar y optimizar la información contextual que alimenta a los LLMs.

La Evolución: De Prompts a Context

Prompt Engineering (2022-2023)

prompt = """You are an expert financial advisor.
Answer the following question accurately.
Be concise and precise.
Question: {question}"""
Enter fullscreen mode Exit fullscreen mode

Context Engineering (2024-2025)

class ContextEngineer:
    def __init__(self):
        self.memory_manager = HierarchicalMemory()
        self.chunker = SemanticChunker()
        self.ranker = MultiDimensionalRanker()
        self.validator = CoherenceValidator()

    def build_context(self, query):
        complexity = self.analyze_complexity(query)
        raw_context = self.memory_manager.retrieve(
            query,
            layers=['immediate', 'session', 'persistent'],
            max_tokens=self.dynamic_window(complexity)
        )
        chunks = self.chunker.process(raw_context, preserve_semantic_units=True)
        ranked = self.ranker.rank(chunks, dimensions=['relevance', 'temporal', 'authority'])
        validated = self.validator.validate(ranked)
        return self.format_context(validated)
Enter fullscreen mode Exit fullscreen mode

Técnicas Core de Context Engineering

1. Gestión de Memoria Conversacional Jerárquica

class HierarchicalMemory:
    def __init__(self):
        self.immediate = deque(maxlen=5)
        self.session = []
        self.persistent = VectorDB()

    def store(self, interaction):
        self.immediate.append(interaction)
        self.session.append(self.compress_if_needed(interaction))
        if self.is_valuable(interaction):
            self.persistent.add(text=interaction, metadata={'timestamp': datetime.now()})

    def retrieve(self, query, layers=['immediate'], max_tokens=4000):
        context = []
        token_count = 0
        for layer in layers:
            layer_data = getattr(self, layer)
            for item in self.prioritize(layer_data, query):
                tokens = self.count_tokens(item)
                if token_count + tokens > max_tokens:
                    break
                context.append(item)
                token_count += tokens
        return context
Enter fullscreen mode Exit fullscreen mode

2. Chunking Semántico Inteligente

class SemanticChunker:
    def __init__(self, model='sentence-transformers/all-MiniLM-L6-v2'):
        self.encoder = SentenceTransformer(model)
        self.similarity_threshold = 0.75

    def process(self, text, preserve_semantic_units=True):
        sentences = self.split_sentences(text)
        embeddings = self.encoder.encode(sentences)
        chunks = []
        current_chunk = []
        current_embedding = None

        for i, (sent, emb) in enumerate(zip(sentences, embeddings)):
            if not current_chunk:
                current_chunk = [sent]
                current_embedding = emb
            else:
                similarity = cosine_similarity([current_embedding], [emb])[0][0]
                if similarity > self.similarity_threshold:
                    current_chunk.append(sent)
                    current_embedding = np.mean([current_embedding, emb], axis=0)
                else:
                    chunks.append(' '.join(current_chunk))
                    current_chunk = [sent]
                    current_embedding = emb

        if current_chunk:
            chunks.append(' '.join(current_chunk))

        return chunks
Enter fullscreen mode Exit fullscreen mode

3. Ranking Multi-Dimensional

class MultiDimensionalRanker:
    def __init__(self):
        self.weights = {'relevance': 0.5, 'temporal': 0.2, 'authority': 0.3}

    def rank(self, chunks, query, dimensions=['relevance', 'temporal', 'authority']):
        scores = []
        for chunk in chunks:
            score = 0
            for dim in dimensions:
                dim_score = getattr(self, f'score_{dim}')(chunk, query)
                score += dim_score * self.weights.get(dim, 0.33)
            scores.append((chunk, score))

        ranked = sorted(scores, key=lambda x: x[1], reverse=True)
        return [chunk for chunk, _ in ranked]

    def score_relevance(self, chunk, query):
        chunk_emb = self.encode(chunk)
        query_emb = self.encode(query)
        return cosine_similarity([query_emb], [chunk_emb])[0][0]

    def score_temporal(self, chunk, query):
        chunk_date = self.extract_date(chunk)
        if not chunk_date:
            return 0.5
        days_old = (datetime.now() - chunk_date).days
        return max(0, 1 - (days_old / 365))

    def score_authority(self, chunk, query):
        source = self.extract_source(chunk)
        authority_scores = {'official_docs': 1.0, 'peer_reviewed': 0.9, 'expert': 0.7, 'general': 0.5}
        return authority_scores.get(source, 0.3)
Enter fullscreen mode Exit fullscreen mode

Caso Real: Sistema Q&A Financiero

Resultados medidos:

  • Alucinaciones: 23% → 3.5% (85% reducción)
  • Accuracy: 76% → 94%
  • Tokens: -40% de ahorro
  • Tiempo respuesta: 3.2s → 2.1s
  • Satisfacción usuario: 6.2 → 8.9 NPS

Conclusiones

→ Context Engineering es evolución natural del prompt engineering
→ Reduce alucinaciones 85% con arquitectura correcta
→ No es más complejo, es más estructurado
→ ROI positivo en 2-4 semanas típicamente
→ Esencial para aplicaciones críticas


👉 Lee el artículo completo

¿Ya implementas context engineering? Comparte tu experiencia 👇

Top comments (0)