In the world of Generative AI, we often focus on the future. But what happens when we use the most advanced models of 2026 to look back at 1300?
Today, I’m diving into the technical architecture of Chronicle of Wonders, a React-based application that uses the Google Gemini API to explain modern technology through the lens of a 14th-century scholar.
The Concept: Contextual Transposition
The core technical challenge was Contextual Transposition. We didn't just want a "medieval filter"; we wanted a system that understood the limitations of 1300.
To achieve this, we leveraged System Instructions in Gemini 2.5 Flash. By constraining the model's knowledge base to alchemy, the four elements, and the feudal order, we forced it to generate creative analogies.
The Prompt Engineering
```js
const SYSTEM_INSTRUCTION = `
You are a learned scholar and monk from the year 1300.
Explain modern marvels using ONLY concepts available in the 14th century.
Rules:
1. Use analogies related to farming, the Church, alchemy, and the celestial spheres.
2. Avoid ALL modern scientific terms.
3. Structure the explanation as if it were an illuminated manuscript.
`;
```
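Wiring an instruction like this into a request is straightforward. The sketch below assumes the `@google/genai` JavaScript SDK; the `buildScholarRequest` helper and the user-prompt wording are my own illustrations, not the app's actual client code:

```typescript
// Hypothetical helper: attaches the system instruction to a generateContent
// request for gemini-2.5-flash. The request shape follows the @google/genai
// SDK; the helper name and prompt wording are assumptions.
export function buildScholarRequest(systemInstruction: string, marvel: string) {
  return {
    model: "gemini-2.5-flash",
    contents: `Explain this marvel to your fellow monks: ${marvel}`,
    config: { systemInstruction },
  };
}

// With the SDK, usage would look roughly like:
//   const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
//   const response = await ai.models.generateContent(
//     buildScholarRequest(SYSTEM_INSTRUCTION, "the aeroplane"),
//   );
//   console.log(response.text);
```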
Multi-Modal Integration
Chronicle of Wonders isn't just text. It’s a multi-modal experience combining text, images, and speech.
1. Visualizing the Future (Medievally)
We used gemini-2.5-flash-image to generate "Illuminated Diagrams." The trick here was the prompt weighting. We specifically asked for "symbolic and non-perspective drawing" and "marginalia" to ensure the AI didn't output a modern 3D render.
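The weighting idea can be sketched as a small prompt builder that appends the same style constraints to every subject. The exact constraint phrases below are assumptions based on the description above, not the app's literal prompt:

```typescript
// Sketch of the weighting: identical style constraints ride along with every
// subject so gemini-2.5-flash-image stays in a manuscript register.
const STYLE_CONSTRAINTS = [
  "illuminated manuscript diagram",
  "symbolic and non-perspective drawing",
  "decorative marginalia",
  "no photorealism, no modern 3D rendering",
].join(", ");

export function buildIlluminationPrompt(subject: string): string {
  return `${subject}, in the style of: ${STYLE_CONSTRAINTS}`;
}
```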
2. The Archaic Voice
For the "Hear the Scribe" feature, we utilized the gemini-2.5-flash-preview-tts model.
We didn't just pass the text; we passed a Speech Instruction:
"Speak this chronicle with the steady, resonant, and archaic cadence of a 14th-century English scholar... with a slight West Country lilt."
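Because the TTS preview models take delivery directions as natural-language text, the instruction can simply be prepended to the chronicle before synthesis. The wrapper below is a sketch of that pattern; the helper name is hypothetical:

```typescript
// Assumed composition helper: the delivery direction is sent as plain text
// ahead of the chronicle itself, matching the quote above.
const SPEECH_INSTRUCTION =
  "Speak this chronicle with the steady, resonant, and archaic cadence of a " +
  "14th-century English scholar, with a slight West Country lilt:";

export function buildSpeechPrompt(chronicle: string): string {
  return `${SPEECH_INSTRUCTION}\n\n${chronicle}`;
}
```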
Frontend Architecture: The "Manuscript" UI
The UI was built with React 19 and Tailwind CSS 4. We used Tailwind's new @theme directive to create a custom medieval palette:
```css
@theme {
  --color-parchment: #f4ecd8;
  --color-ink: #2c241e;
  --color-gold: #c5a059;
  --color-blood: #8b0000;
}
```
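Tailwind 4 turns each `--color-*` token in `@theme` into matching utilities (`bg-parchment`, `text-ink`, `border-gold`, and so on). A minimal sketch of composing them, using a hypothetical class-builder rather than the app's real components:

```typescript
// Hypothetical class builder for a "manuscript card". Tailwind 4 generates
// bg-parchment, text-ink, border-gold, and ring-blood utilities from the
// @theme tokens above; this component and its classes are illustrative.
export function manuscriptCardClasses(highlighted: boolean): string {
  const base = "bg-parchment text-ink border-2 border-gold rounded p-6";
  return highlighted ? `${base} ring-2 ring-blood` : base;
}
```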
Interactive Medieval Glossary
One of the best-loved features is the interactive glossary. We built a custom MedievalText component that parses the AI-generated markdown and injects tooltips for archaic terms like tithe or vassal.
```jsx
import Markdown from "react-markdown";

const MedievalText = ({ text }) => {
  return (
    <div className="markdown-body">
      <Markdown
        components={{
          // Wrap every paragraph so archaic terms get tooltip markup
          p: ({ children }) => <p><TextWithTooltips>{children}</TextWithTooltips></p>,
          // ... other components
        }}
      >
        {text}
      </Markdown>
    </div>
  );
};
```
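The matching behind TextWithTooltips can be sketched as a pure segmentation step: split the text on glossary terms, then render each hit inside a tooltip element. The GLOSSARY entries and segment shape below are illustrative assumptions, not the app's actual data:

```typescript
// Assumed glossary data; the real app's definitions may differ.
const GLOSSARY: Record<string, string> = {
  tithe: "A tax of one tenth of one's produce, owed to the Church.",
  vassal: "One who holds land from a lord in exchange for homage and service.",
};

type Segment = { text: string; definition?: string };

// Split a chronicle into plain-text and glossary-term segments. A component
// like TextWithTooltips would render term segments inside a tooltip.
export function segmentGlossaryTerms(text: string): Segment[] {
  const pattern = new RegExp(`\\b(${Object.keys(GLOSSARY).join("|")})\\b`, "gi");
  const segments: Segment[] = [];
  let last = 0;
  let m: RegExpExecArray | null;
  while ((m = pattern.exec(text)) !== null) {
    if (m.index > last) segments.push({ text: text.slice(last, m.index) });
    segments.push({ text: m[0], definition: GLOSSARY[m[0].toLowerCase()] });
    last = m.index + m[0].length;
  }
  if (last < text.length) segments.push({ text: text.slice(last) });
  return segments;
}
```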
Lessons Learned
- Model Stability Matters: During development, we found that gemini-2.5-flash provided the perfect balance of creative "roleplay" and response speed.
- Markdown is King: Even for a medieval app, markdown is the best way to handle structured output from an LLM. Integrating react-markdown was essential for a polished look.
- User Experience in an Iframe: Since the app runs in an iframe, we had to be careful with browser APIs. We implemented a custom modal system for images instead of using window.open.
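That modal can be driven by a small reducer rather than window.open. The state shape below is a sketch of the idea (e.g. for React's useReducer), not the app's actual implementation:

```typescript
// Hypothetical modal state machine: OPEN shows an image in-page, CLOSE
// dismisses it. No new window is ever opened, so it works inside an iframe.
type ModalState = { open: boolean; src: string | null };
type ModalAction = { type: "OPEN"; src: string } | { type: "CLOSE" };

export function modalReducer(state: ModalState, action: ModalAction): ModalState {
  switch (action.type) {
    case "OPEN":
      return { open: true, src: action.src };
    case "CLOSE":
      return { open: false, src: null };
  }
}
```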
"Chronicle of Wonders" demonstrates that AI isn't just a tool for the future—it's a bridge to the past. By combining specialized system instructions with multi-modal outputs, we can create educational experiences that are both informative and deeply immersive.
Check out the code on GitHub and start your journey to 1300!


