<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Goh Chun Lin</title>
    <description>The latest articles on DEV Community by Goh Chun Lin (@gohchunlin).</description>
    <link>https://dev.to/gohchunlin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F149560%2F863869ca-7ac7-40f5-918f-3eee6733cf6d.png</url>
      <title>DEV Community: Goh Chun Lin</title>
      <link>https://dev.to/gohchunlin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gohchunlin"/>
    <language>en</language>
    <item>
      <title>Beyond the Cert: In the Age of AI</title>
      <dc:creator>Goh Chun Lin</dc:creator>
      <pubDate>Sun, 26 Oct 2025 02:55:01 +0000</pubDate>
      <link>https://dev.to/gohchunlin/beyond-the-cert-in-the-age-of-ai-3ko2</link>
      <guid>https://dev.to/gohchunlin/beyond-the-cert-in-the-age-of-ai-3ko2</guid>
      <description>&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-52.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kzaorxyqpm8bex017ab.png" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the fourth consecutive year, I have renewed my &lt;a href="https://learn.microsoft.com/en-us/credentials/certifications/azure-developer/?practice-assessment-type=certification" rel="noopener noreferrer"&gt;Azure Developer Associate certification&lt;/a&gt;. It is a valuable discipline that keeps my knowledge of the Azure ecosystem current and sharp. The performance report I received this year was particularly insightful, highlighting both my strengths in security fundamentals and the expected gaps in platform-specific nuances, given my recent work in AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Objectives
&lt;/h3&gt;

&lt;p&gt;Renewing an Azure certification is a hallmark of the professional craftsman: it sharpens our tools and deepens our knowledge of the trade. For a junior or mid-level engineer, this path of structured learning and certification is the non-negotiable foundation of a solid career. It is the path I walked myself. It builds the grammar of our trade.&lt;/p&gt;

&lt;p&gt;However, for a senior engineer, for an architect, the game has changed. The world is now saturated with competent craftsmen who know the grammar. In the age of AI-assisted coding and brutal corporate “flattening,” simply knowing the tools is no longer a defensible position. It has become table stakes.&lt;/p&gt;

&lt;p&gt;The paradox of the senior cloud software engineer is that the very map that got us here, i.e. the structured curriculum and the certification path, is insufficient to guide us to the next level. The renewal assessment results for Microsoft Certified: Azure Developer Associate I received were a perfect map of the existing territory. However, an architect’s job is not to be a master of the known world. It is to be a cartographer of the unknown. The report correctly identified that I need to master Azure-specific trade-offs, like choosing ‘Session’ consistency over ‘Strong’ for low-latency scenarios in Cosmos DB. The senior engineer learns that rule. The architect must ask a deeper question: “How can I build a model that predicts the precise cost and P99 latency impact of that trade-off for my specific workload, before I write a single line of code?”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-50.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fri0v3q0yy178aneo4boj.png" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Attending AWS Singapore User Group monthly meetup.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  About the Results
&lt;/h3&gt;

&lt;p&gt;Let’s make this concrete by looking at the renewal assessment report itself. It was a gift, not because of the score, but because it is a perfect case study in the difference between the Senior Engineer’s path and the Architect’s.&lt;/p&gt;

&lt;p&gt;Where the report suggests mastering &lt;a href="https://azure.microsoft.com/en-us/products/cosmos-db" rel="noopener noreferrer"&gt;Azure Cosmos DB&lt;/a&gt;’s five consistency levels, it is prescribing an act of knowledge consumption. The architect’s impulse is to ask a different question entirely: “How can I quantify the trade-off?” I do not just want to know that Session is faster than Strong. I want to know, for a given workload, how much faster, at what dollar cost per million requests, and with what measurable impact on data integrity. The architect’s response is to build a model that turns the vendor’s qualitative best practice into a quantitative, predictive economic decision.&lt;/p&gt;
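
&lt;p&gt;To ground that question in code: in the .NET SDK for Azure Cosmos DB, the consistency level is a single client option. The sketch below is illustrative only; the endpoint and key are placeholders, and the benchmark harness that would actually measure the P99 latency and request-unit cost around it is the part the architect has to build.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Microsoft.Azure.Cosmos;

// Minimal sketch: placeholders, not production configuration.
var client = new CosmosClient(
    "https://my-account.documents.azure.com:443/", // placeholder endpoint
    "&amp;lt;account-key&amp;gt;",                                 // placeholder key
    new CosmosClientOptions
    {
        // The client may request a consistency level equal to or weaker
        // than the account default. 'Session' favours latency, while
        // 'Strong' buys global ordering at a latency cost and can also
        // carry a higher request-unit cost.
        ConsistencyLevel = ConsistencyLevel.Session
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;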

&lt;p&gt;This pattern continues with managed services. The report correctly noted my failure to memorise the specific implementation of &lt;a href="https://azure.microsoft.com/en-us/products/container-apps" rel="noopener noreferrer"&gt;Azure Container Apps&lt;/a&gt;. The path it offers is to better learn the abstraction. The architect’s path is to become professionally paranoid about abstractions. The question is not “What is Container Apps?” but “Why does this abstraction exist, and what are its hidden costs and failure modes?” The architect’s response is to design experiments or simulations to stress-test the abstraction and discover its true operational boundaries, not just to read its documentation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-47.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzffhole04qhc75y73xh.png" width="800" height="697"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;DHH has just slain the dragon of Cloud Dependency, the largest, most fearsome dragon in our entire cloud industry. (Twitter Source: DHH)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the new mandate for senior engineers in a world where we keep hearing about senior engineers being out of work: We must evolve from being consumers of complexity to being creators of clarity. We must move beyond mastering the vendor’s pre-defined solutions and begin forging our own instruments to see the future.&lt;/p&gt;

&lt;h3&gt;
  
  
  From Cert to Personal Project
&lt;/h3&gt;

&lt;p&gt;This is why, in parallel to maintaining my certifications, I have embarked on a different kind of professional development. It is a path of deep, first-principles creation. I am building a discrete event simulation engine not as a personal hobby project, but as a way to understand more about the most expensive and unpredictable problems in our industry. My certification proves I can solve problems the “Azure way.” This new work is about discovering the fundamental truths that govern all cloud platforms.&lt;/p&gt;

&lt;p&gt;Certifications are the foundation. They are the bedrock of our shared knowledge. However, they are not the lighthouse. In this new era, we must be both.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-51.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0z42fzf563a1sxr5o8u.png" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;AWS + Azure.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Certifications are an essential foundation. They represent the bedrock of our shared professional knowledge and a commitment to maintaining a common standard of excellence. However, they are not, by themselves, the final destination.&lt;/p&gt;

&lt;p&gt;Therefore, my next major “proof-of-work” will not be another certificate. It will be the first in a series of public, data-driven case studies derived from my personal project.&lt;/p&gt;

&lt;p&gt;Ultimately, a certificate proves that we are qualified and contributing members of our professional ecosystem. This next body of work is intended to prove something more than that. We need to actively solve the complex, high-impact problems that challenge our industry. In this new era, demonstrating both our foundational knowledge and our capacity to create new value is no longer an aspiration. Instead, it is the new standard.&lt;/p&gt;

&lt;p&gt;Together, we learn better.&lt;/p&gt;

</description>
      <category>cloudcomputingmicros</category>
      <category>experience</category>
      <category>microsoftcertified</category>
      <category>ai</category>
    </item>
    <item>
      <title>The Blueprint Fallacy: A Case for Discrete Event Simulation in Modern Systems Architecture</title>
      <dc:creator>Goh Chun Lin</dc:creator>
      <pubDate>Sat, 18 Oct 2025 04:34:46 +0000</pubDate>
      <link>https://dev.to/gohchunlin/the-blueprint-fallacy-a-case-for-discrete-event-simulation-in-modern-systems-architecture-2b4f</link>
      <guid>https://dev.to/gohchunlin/the-blueprint-fallacy-a-case-for-discrete-event-simulation-in-modern-systems-architecture-2b4f</guid>
      <description>&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-46.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cbvbc7i6afbzvhcsd36.png" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Greetings from Taipei!&lt;/p&gt;

&lt;p&gt;I just spent two days at the &lt;a href="https://hwdc.ithome.com.tw/2025" rel="noopener noreferrer"&gt;Hello World Dev Conference 2025 in Taipei&lt;/a&gt;, and beneath the hype around cloud and AI, I observed a single, unifying theme: The industry is desperately building tools to cope with a complexity crisis of its own making.&lt;/p&gt;

&lt;p&gt;The agenda was a catalog of modern systems engineering challenges. The most valuable sessions were the “踩雷經驗” (landmine-stepping experiences), which offered hard-won lessons from the front lines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-41.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvudvtx66v6y5kcdvveh.png" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A 2-day technical conference on AI, Kubernetes, and more!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;However, these talks raised a more fundamental question for me. We are getting exceptionally good at building tools to detect and recover from failure, but are we getting any better at preventing it?&lt;/p&gt;

&lt;p&gt;This post is not a simple translation of a Mandarin-language conference in Taiwan. It is my analysis of the patterns I observed. I have grouped the key talks I attended into three areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Native Infrastructure;&lt;/li&gt;
&lt;li&gt;Reshaping Product Management and Engineering Productivity with AI;&lt;/li&gt;
&lt;li&gt;Deep Dives into Advanced AI Engineering.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Feel free to dive into the section that interests you most.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-43.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3plmpcwwqcd5yr0vz09.png" width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Session: Smart Pizza and Data Observability
&lt;/h3&gt;

&lt;p&gt;This session was led by Shuhsi (林樹熙), a Data Engineering Manager at Micron. Micron needs no introduction: they are a massive player in the semiconductor industry, and their smart manufacturing facilities are a prime example of where data engineering is mission-critical.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-38.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftl81j0kpevbho0hjm8f9.png" width="800" height="389"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Micron in Singapore (Credit: Forbes)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Shuhsi’s talk, “Data Observability by OpenLineage,” started with a simple story he called the “Smart Pizza” anomaly.&lt;/p&gt;

&lt;p&gt;He presented a scenario familiar to anyone in a data-intensive environment: A critical dashboard flatlines, and the next three hours are a chaotic hunt to find out why. In his “Smart Pizza” example, the culprit was a silent, upstream schema change.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/20251015_133237-1.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhtwbw11m45bkfv7r5w5.jpg" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Smart pizza dashboard anomaly.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;His solution, &lt;a href="https://openlineage.io/" rel="noopener noreferrer"&gt;OpenLineage&lt;/a&gt;, is a powerful framework for what we would call digital forensics. It is about building a perfect, queryable map of the crime scene after the crime has been committed. By creating a clear data lineage, it reduces the “Mean Time to Discovery” from hours of panic to minutes of analysis.&lt;/p&gt;

&lt;p&gt;Let’s be clear: This is critical, valuable work. Like OpenTelemetry for applications, OpenLineage brings desperately needed order to the chaos of modern data pipelines.&lt;/p&gt;

&lt;p&gt;However, it is a fundamentally reactive posture. It helps us find the bullet’s path through the body with incredible speed and precision. My point is that our ultimate goal must be to predict the bullet’s trajectory before the trigger is pulled. Data lineage minimises downtime. My work with simulation, which I explain in the next section, aims to prevent downtime entirely by modelling these complex systems to find the breaking points before they break.&lt;/p&gt;

&lt;h3&gt;
  
  
  Session: Automating a .NET Discrete Event Simulation on Kubernetes
&lt;/h3&gt;

&lt;p&gt;My talk, “Simulation Lab on Kubernetes: Automating .NET Parameter Sweeps,” addressed the wall that every complex systems analysis eventually hits: Combinatorial explosion.&lt;/p&gt;

&lt;p&gt;While the industry is focused on understanding past failures, my session is about building the &lt;a href="https://en.wikipedia.org/wiki/Discrete-event_simulation" rel="noopener noreferrer"&gt;Discrete Event Simulation (DES)&lt;/a&gt; engine that can calculate and prevent future ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-32.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7rt5kurya0hv6uub6ye.png" width="686" height="386"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A restaurant simulation game in Honkai Impact 3rd. (Source: 西琳 – YouTube)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To make this concrete, I used the analogy of a restaurant owner asking, “Should I add another table or hire another waiter?” The only way to answer this rigorously is to simulate thousands of possible futures. The math becomes brutal, fast: testing 50 different configurations with 100 statistical runs each requires 5,000 independent simulations. This is not a task for a single machine; it requires a computational army.&lt;/p&gt;

&lt;p&gt;My solution is to treat Kubernetes not as a service host, but as a temporary, on-demand supercomputer. The strategy I presented had three core pillars:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Declarative Orchestration:&lt;/strong&gt;  The entire 5,000-run DES experiment is defined in a single, clean Argo Workflows manifest, transforming a potential scripting nightmare into a manageable, observable process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Radical Isolation:&lt;/strong&gt;  Each DES run is containerised in its own pod, creating a perfectly clean and reproducible experimental environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Controlled Randomness:&lt;/strong&gt;  A robust seeding strategy ensures that “random” events in our DES are statistically valid and comparable across the entire distributed system (a minimal sketch follows this list).&lt;/li&gt;
&lt;/ul&gt;
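
&lt;p&gt;To make the seeding idea concrete, here is a minimal C# sketch of one way to derive deterministic per-run seeds. The method name and the mixing scheme are my illustrative assumptions, not the exact implementation from the talk.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Illustrative sketch: every run derives its seed from the experiment
// seed, the configuration index, and the replication index, so any of
// the 5,000 runs can be reproduced exactly on any pod.
static int DeriveSeed(int experimentSeed, int configIndex, int replication)
{
    unchecked
    {
        int hash = 17;
        hash = hash * 31 + experimentSeed;
        hash = hash * 31 + configIndex;
        hash = hash * 31 + replication;
        return hash;
    }
}

// Example: configuration 7, replication 12, experiment seed 42.
var rng = new Random(DeriveSeed(42, 7, 12));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;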

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-33.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqb5l8nxk5dx7c6orjpbc.png" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The turnout for my DES session confirmed a growing hunger in our industry for proactive, simulation-driven approaches to engineering.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The final takeaway was a strategic re-framing of a tool many of us already use. Kubernetes is more than a platform for web apps. It can also be a general-purpose compute engine capable of solving massive scientific and financial modelling problems. It is time we started using it as such.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-42.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0i33xkawat6ohq24ujv.png" width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Session: AI for BI
&lt;/h3&gt;

&lt;p&gt;Denny’s (監舜儀) session on “AI for BI” illustrated a classic pain point: The bottleneck between business users who need data and the IT teams who provide it. The proposed solution was a natural language interface, &lt;a href="https://www.finebi.com/blog/tag/finechabi" rel="noopener noreferrer"&gt;&lt;strong&gt;FineChatBI&lt;/strong&gt;, a tool designed to sit on top of existing BI platforms&lt;/a&gt; to make querying existing data easier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/20251014_094716.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq6z2413emshibpvkf1a.jpg" width="685" height="513"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Denny is introducing AI for BI.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;His core insight was that the tool is the easy part. The real work is in building the “underground root system” which includes the immense challenge of defining metrics, managing permissions, and untangling data semantics. Without this foundation, any AI is doomed to fail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/20251014_100240.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fft91als0yjvbsnoig8cg.jpg" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Getting the underground root system right is important for building AI projects.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is a crucial step forward in making our organisations more data-driven. However, we must also be clear about what problem is being solved.&lt;/p&gt;

&lt;p&gt;This is a system designed to provide perfect, instantaneous answers to the question, “What happened?”&lt;/p&gt;

&lt;p&gt;My work, and the next category of even more complex AI, begins where this leaves off. It seeks to answer the far harder question: “What will happen if…?” Sharpening our view of the past is essential, but the ultimate strategic advantage lies in the ability to accurately simulate the future.&lt;/p&gt;

&lt;h3&gt;
  
  
  Session: The Impossibility of Modeling Human Productivity
&lt;/h3&gt;

&lt;p&gt;The presenter, Jugg (劉兆恭), is a well-known agile coach and &lt;a href="https://devopsdays.tw/2024/speaker-page/247" rel="noopener noreferrer"&gt;the organiser of Agile Tour Taiwan 2020&lt;/a&gt;. His talk, “An AI-Driven Journey of Agile Product Development – From Inspiration to Delivery,” was a masterclass in moving beyond vanity metrics to understand and truly improve engineering performance.&lt;/p&gt;

&lt;p&gt;Jugg started with a graph that every engineering lead knows in their gut. As a company grows over time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Business grows (purple line, up);&lt;/li&gt;
&lt;li&gt;Software architecture and complexity grow (first blue line, up);&lt;/li&gt;
&lt;li&gt;The number of developers increases (second blue line, up);&lt;/li&gt;
&lt;li&gt;Expected R&amp;amp;D productivity should grow (green line, up);&lt;/li&gt;
&lt;li&gt;But paradoxically, the actual R&amp;amp;D productivity often stagnates or even declines (red line, down).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/20251014_104741.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmaj64cxgwtz99b0tjvkw.jpg" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jugg provided a perfect analogue for the work I do. He tackled the classic productivity paradox: Why does output stagnate even as teams grow? He correctly diagnosed the problem as a failure of measurement and proposed &lt;a href="https://getdx.com/blog/space-metrics/" rel="noopener noreferrer"&gt;the SPACE framework&lt;/a&gt; as a more holistic model for this incredibly complex human system.&lt;/p&gt;

&lt;p&gt;He was, in essence, trying to answer the same class of question I do: “If we change an input variable (team process), how can we predict the output (productivity)?”&lt;/p&gt;

&lt;p&gt;This is where the analogy becomes a powerful contrast. Jugg’s world of human systems is filled with messy, unpredictable variables. His solutions are frameworks and dashboards. They are the best tools we have for a system that resists precise calculation.&lt;/p&gt;

&lt;p&gt;This session reinforced my conviction that simulation is the most powerful tool we have for predicting performance in the systems we can actually control: Our code and our infrastructure. We do not have to settle for dashboards that show us the past because we can build models that calculate the future.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-44.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Froqiekf0wwrjcpzvuabb.png" width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Session: Building a Map of “What Is” with GraphRAG
&lt;/h3&gt;

&lt;p&gt;The most technically demanding session came from Nils (劉岦崱), a Senior Data Scientist at Cathay Financial Holdings. He presented GraphRAG, a significant evolution beyond the “Naive RAG” most of us use today.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/20251014_153602.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7krm8ukp11kfen2jxrn.jpg" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Nils is explaining what a Naive RAG is.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;He argued compellingly that simple vector search fails because it ignores relationships. By chunking documents, we destroy the contextual links between concepts. &lt;a href="https://medium.com/@zilliz_learn/graphrag-explained-enhancing-rag-with-knowledge-graphs-3312065f99e1" rel="noopener noreferrer"&gt;GraphRAG&lt;/a&gt; solves this by transforming unstructured data into a structured knowledge graph: a web of nodes (entities) and edges (their relationships).&lt;/p&gt;
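
&lt;p&gt;In code, the building blocks are surprisingly small. Below is a minimal C# sketch of the node/edge structure; the types and sample data are my own illustration, not from the talk.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Illustrative sketch: a knowledge graph reduced to nodes (entities)
// and edges (relationships). The sample data is made up.
var entities = new[]
{
    new Entity("p1", "Premium Card"),
    new Entity("f1", "Annual Fee Waiver")
};
var edges = new[] { new Edge("p1", "OFFERS", "f1") };
// A graph-aware retriever can follow edges between entities instead of
// matching isolated text chunks, preserving the contextual links that
// naive chunking destroys.

record Entity(string Id, string Label);
record Edge(string SourceId, string Predicate, string TargetId);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;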

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-35.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetb6268c1w2nxxsu5w9m.png" width="720" height="496"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Enhancing RAG-based application accuracy by constructing and leveraging knowledge graphs (Image Credit: LangChain)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In essence, GraphRAG is a sophisticated tool for building a static map of a known world. It answers the question, “How are all the pieces in our universe connected right now?” For AI customer service, this is a game-changer, as it provides a rich, interconnected context for every query.&lt;/p&gt;

&lt;p&gt;This means our data now has an explicit, queryable structure. So, the LLM gets a much richer, more coherent picture of the situation, allowing it to maintain context over long conversations and answer complex, multi-faceted questions.&lt;/p&gt;

&lt;p&gt;This session was a brilliant reminder that all advanced AI is built on a foundation of rigorous data modelling.&lt;/p&gt;

&lt;p&gt;However, a map, no matter how detailed, is still just a snapshot. It shows us the layout of the city, but it cannot tell us how the traffic will flow at 5 PM.&lt;/p&gt;

&lt;p&gt;This is the critical distinction. GraphRAG creates a model of a system at rest, while DES creates a model of a system in motion. One shows us the relationships; the other lets us press play and watch how those relationships evolve and interact over time under stress. GraphRAG is the anatomy chart and simulation is the stress test.&lt;/p&gt;

&lt;h3&gt;
  
  
  Session: Securing the AI Magic Pocket with LLM Guardrails
&lt;/h3&gt;

&lt;p&gt;Nils from Cathay Financial Holdings returned to the stage for Day 2, and this time he tackled one of the most pressing issues in enterprise AI: Security. His talk “Enterprise-Grade LLM Guardrails and Prompt Hardening” was a masterclass in defensive design for AI systems.&lt;/p&gt;

&lt;p&gt;What made the session truly brilliant was his central analogy. As he put it, an LLM is a lot like  &lt;strong&gt;Doraemon&lt;/strong&gt; : a super-intelligent, incredibly powerful assistant with a “magic pocket” of capabilities. It can solve almost any problem you give it. But, just like in the cartoon, if you give it vague, malicious, or poorly thought-out instructions, it can cause absolute chaos. For a bank, preventing that chaos is non-negotiable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/20251015_141419.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsf8ka4z3e145skkyevby.jpg" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Nils grounded the problem in the official OWASP Top 10 for LLM Applications.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There are two lines of defence: Guardrails and Prompt Hardening. The core of the strategy lies in understanding how these two distinct but complementary approaches work together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Guardrails (The Fortress):&lt;/strong&gt; An external firewall of input filters and output validators;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt Hardening (The Armour):&lt;/strong&gt; Internal defences built into the prompt to resist manipulation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is an essential framework for any enterprise deploying LLMs. It represents the state-of-the-art in building static defences.&lt;/p&gt;

&lt;p&gt;While necessary, this defensive posture raises another important question for developers: How does the fortress behave under a full-scale siege?&lt;/p&gt;

&lt;p&gt;A static set of rules can defend against known attack patterns. But what about the unknown unknowns? What about the second-order effects? Specifically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance Under Attack:&lt;/strong&gt;  What is the latency cost of these five layers of validation when we are hit with 10,000 malicious requests per second? At what point does the defence itself become a denial-of-service vector?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Emergent Failures:&lt;/strong&gt;  When the system is under load and memory is constrained, does one of these guardrails fail in an unexpected way that creates a new vulnerability?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not questions a security checklist can answer. They can only be answered by a dynamic stress test. &lt;a href="https://arxiv.org/abs/2504.13203" rel="noopener noreferrer"&gt;The X-Teaming&lt;/a&gt; Nils mentioned is a step in this direction, but a full-scale DES is the ultimate laboratory.&lt;/p&gt;

&lt;p&gt;Nils’s techniques are a static set of rules designed to prevent failure. Simulation is a dynamic engine designed to induce failure in a controlled environment to understand a system’s true breaking points. He is building the armour, while my work with DES is about building the testing grounds to see where that armour will break.&lt;/p&gt;

&lt;h3&gt;
  
  
  Session: Driving Multi-Task AI with a Flowchart in a Single Prompt
&lt;/h3&gt;

&lt;p&gt;The final and most thought-provoking session was delivered by 尹相志, who presented a brilliant hack: Embedding a Mermaid flowchart directly into a prompt to force an LLM to execute a deterministic, multi-step process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-39.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dqauw5c34uf2qc1elhv.png" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;尹相志, CTO (技術長) of 數據決策股份有限公司.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;He offered a way beyond both the chaos of autonomous agents and the rigidity of external orchestrators like LangGraph. By teaching the LLM to read a flowchart, he effectively turns it into a reliable state machine executor. It is a masterful piece of engineering that imposes order on a probabilistic system.&lt;/p&gt;
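
&lt;p&gt;To illustrate the idea with my own minimal example (not a flowchart from his talk), a prompt can embed a Mermaid flowchart like the one below and instruct the LLM to follow it state by state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flowchart TD
    Start[Receive user request] --&amp;gt; Validate{Is the request in scope?}
    Validate -- Yes --&amp;gt; Step1[Extract the order number]
    Validate -- No --&amp;gt; Refuse[Politely decline and stop]
    Step1 --&amp;gt; Step2[Look up the order status]
    Step2 --&amp;gt; Reply[Reply using the status template]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;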

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/20251015_165900.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpln3othstefn9nzygvlc.jpg" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Action Grounding Principles proposed by 相志.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;What he has created is the perfect blueprint. It is a model of a process as it should run in a world with no friction, no delays, and no resource contention.&lt;/p&gt;

&lt;p&gt;And in that, he revealed the final, critical gap in our industry’s thinking.&lt;/p&gt;

&lt;p&gt;A blueprint is not a stress test. A flowchart cannot answer the questions that actually determine the success or failure of a system at scale:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What happens when 10,000 users try to execute this flowchart at once and they all hit the same database lock?&lt;/li&gt;
&lt;li&gt;What is the cascading delay if one step in the flowchart has a 5% chance of timing out?&lt;/li&gt;
&lt;li&gt;Where are the hidden queues and bottlenecks in this process?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;His flowchart is the architect’s beautiful drawing of an airplane. A DES is the wind tunnel. It is the necessary, brutal encounter with reality that shows us where the blueprint will fail under stress.&lt;/p&gt;

&lt;p&gt;The ability to define a process is the beginning. The ability to simulate that process under the chaotic conditions of the real world is the final, necessary step to building systems that don’t just look good on paper, but actually work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Thoughts and Key Takeaways from Taipei
&lt;/h3&gt;

&lt;p&gt;My two days at the Hello World Dev Conference were not just a tour of technologies. They were a confirmation of a dangerous blind spot in our industry.&lt;/p&gt;

&lt;p&gt;From what I observed, we build tools for digital forensics to map past failures. We sharpen our tools with AI to perfectly understand what just happened. We create knowledge graphs to model systems at rest. We design perfect, deterministic blueprints for how AI processes should work.&lt;/p&gt;

&lt;p&gt;These are all necessary and brilliant advancements in the art of mapmaking.&lt;/p&gt;

&lt;p&gt;However, the critical, missing discipline is the one that asks not “What is the map?”, but “What will happen to the city during the hurricane?” The hard questions of latency under load, failures, and bottlenecks are not found on any of these maps.&lt;/p&gt;

&lt;p&gt;Our industry is full of brilliant mapmakers. The next frontier belongs to people who can model, simulate, and predict the behaviour of complex systems under stress, before the hurricane arrives.&lt;/p&gt;

&lt;p&gt;That is why I am building &lt;a href="https://github.com/gcl-team/SNA" rel="noopener noreferrer"&gt;SNA, my .NET-based Discrete Event Simulation engine&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-40.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcqwolv74e0hekjymkcv.png" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Hello, Taipei. Taken from the window of the conference venue.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I am leaving Taipei with a notebook full of ideas, a deeper understanding of the challenges and solutions being pioneered by my peers in the Mandarin-speaking tech community, and a renewed sense of excitement for the future we are all building.&lt;/p&gt;

</description>
      <category>artificialintelligen</category>
      <category>c</category>
      <category>data</category>
      <category>discreteeventsimulat</category>
    </item>
    <item>
      <title>Building a Gacha Bot in Power Automate and MS Teams</title>
      <dc:creator>Goh Chun Lin</dc:creator>
      <pubDate>Tue, 07 Oct 2025 13:47:34 +0000</pubDate>
      <link>https://dev.to/gohchunlin/building-a-gacha-bot-in-power-automate-and-ms-teams-57f8</link>
      <guid>https://dev.to/gohchunlin/building-a-gacha-bot-in-power-automate-and-ms-teams-57f8</guid>
      <description>&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-27.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3efgavssx4vder936y7.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every agile team knows the “Support Hero” role, that one person designated to handle the interruptions of the day, bug reports, and urgent requests. In our team, we used a messy spreadsheet to track the rotation. People forgot whose turn it was, someone would be on leave, and the whole thing was a low-grade, daily friction point.&lt;/p&gt;

&lt;p&gt;One day, a teammate had a brilliant idea: “What if we made it fun? What if we gamified it?”&lt;/p&gt;

&lt;p&gt;He quickly prototyped a &lt;em&gt;gacha&lt;/em&gt; bot using &lt;a href="https://www.microsoft.com/en-us/power-platform/products/power-automate" rel="noopener noreferrer"&gt;Power Automate&lt;/a&gt; that would randomly select the hero of the day. It was a huge hit. It turned a daily chore into a fun moment of team engagement. It was a perfect example of a small automation making a big impact on our culture.&lt;/p&gt;

&lt;p&gt;Over time, as team members changed and responsibilities shifted, that original &lt;em&gt;gacha&lt;/em&gt; bot was lost. The fun morning ritual disappeared, and we went back to the old, boring way. We all felt the difference.&lt;/p&gt;

&lt;p&gt;Recently, I decided it was time to bring that spark back. I took the original, brilliant concept and decided to re-build it from the ground up as a robust, reusable, and shareable solution.&lt;/p&gt;

&lt;p&gt;This post is a tribute to that original idea, and a detailed, step-by-step guide on how you can build a similar &lt;em&gt;gacha&lt;/em&gt; bot for your own team. Let’s make our daily routines fun again.&lt;/p&gt;

&lt;h3&gt;
  
  
  How it Works: The Daily Gacha Ritual
&lt;/h3&gt;

&lt;p&gt;Before we open the hood and look at the Power Automate engine, let me walk you through what my team actually experiences every morning at 10:00 AM.&lt;/p&gt;

&lt;p&gt;It all starts with a message from the bot to our team’s &lt;a href="https://www.microsoft.com/en-sg/microsoft-teams/group-chat-software" rel="noopener noreferrer"&gt;Microsoft Teams&lt;/a&gt; group chat. The message says the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hi, Louisa. You are the lucky Support Hero today. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the moment of suspense. Everyone sees the ping. Louisa, one of our teammates, is now in the spotlight.&lt;/p&gt;

&lt;p&gt;However, what if Louisa is on vacation, sipping a drink on a beach in Bali? The bot is prepared. Immediately following the announcement, &lt;a href="https://learn.microsoft.com/en-us/power-automate/create-adaptive-cards" rel="noopener noreferrer"&gt;it posts a second message which is an interactive Adaptive Card&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Is our teammate mentioned above working today?
[] Yes.
[] No.
[] I volunteer!
[Submit Status]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where the team interaction happens.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If  &lt;strong&gt;Louisa is around&lt;/strong&gt; , she proudly clicks ‘Yes.’ The card updates to say ‘Louisa has accepted the quest!’ and the ritual is over.&lt;/li&gt;
&lt;li&gt;If  &lt;strong&gt;Louisa is on leave&lt;/strong&gt; , anyone on the team can click ‘No.’ This immediately triggers the bot to run the &lt;em&gt;gacha&lt;/em&gt; again, announcing a new hero.&lt;/li&gt;
&lt;li&gt;And my favourite part is that if someone else, for example Austin, is feeling particularly heroic that day, he can click ‘&lt;strong&gt;I volunteer!&lt;/strong&gt;’ This lets him steal the spotlight and take on the role, giving Louisa a day off. The card updates to say ‘A new hero has emerged! Austin has volunteered for the quest!’&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Within a minute, the daily chore is assigned, not through a boring spreadsheet, but through a fun, interactive, and slightly dramatic team ritual. It is a small thing, but it starts our day with a smile and a sense of shared fun.&lt;/p&gt;

&lt;p&gt;Now that you have seen what it does, let’s build it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Define The Trigger
&lt;/h3&gt;

&lt;p&gt;First, I set up a “Scheduled cloud flow” so that every morning at 10am, a message is sent to Teams announcing who the lucky one is.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feg6fe7kh6yc684jhw5bs.png" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Second, I name the flow and define its starting date and time. As shown in the following screenshot, we set the recurrence to every day, starting from 1st Oct 2025, 00:00.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-1.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4w5tt4hui2fs9uym8pi.png" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please take note that in the step above, “12am” is the start time of the recurrence, not the time when this job executes daily. So in the first node of the flow itself, I have to define at what time the &lt;em&gt;gacha&lt;/em&gt; bot will run and in which timezone. Since our daily support needs to be assigned in the morning, we make it run at 10am every day, as shown in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-13.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1jdz31kkfw956bu661c.png" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Define Variables and Controls
&lt;/h3&gt;

&lt;p&gt;After that, we add a new “ &lt;strong&gt;Initialize Variable&lt;/strong&gt; ” node where we define the names of all the teammates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-14.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjx5qma7o2hvkpwuq5yku.png" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We also need another variable to later store the response of the user on the adaptive card, as shown in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-15.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqnbzkk6x5tqytgtf9af.png" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since this &lt;em&gt;gacha&lt;/em&gt; only makes sense on weekdays, I need a “ &lt;strong&gt;Condition&lt;/strong&gt; ” block to check whether the day is a weekday or not. If it is a weekend, the bot will not send any message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-16.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fay49rspb8ggue0q9gh50.png" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown in the screenshot above, what I check is the value of &lt;code&gt;dayOfWeek(convertFromUtc(utcNow(), 'Singapore Standard Time'))&lt;/code&gt;.&lt;/p&gt;
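
&lt;p&gt;For reference, &lt;code&gt;dayOfWeek()&lt;/code&gt; returns 0 for Sunday through 6 for Saturday, so one way to express the whole weekday check as a single expression is the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;and(
    greaterOrEquals(dayOfWeek(convertFromUtc(utcNow(), 'Singapore Standard Time')), 1),
    lessOrEquals(dayOfWeek(convertFromUtc(utcNow(), 'Singapore Standard Time')), 5)
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;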

&lt;p&gt;Since there is nothing to be done on a weekend, we leave the “False” block empty. For the “True” block, we have a “ &lt;strong&gt;Do Until&lt;/strong&gt; ” block because the &lt;em&gt;gacha&lt;/em&gt; bot needs to keep selecting a name until someone clicks “Yes” or “Volunteer”. Hence, as shown in the screenshot below, the loop runs until &lt;code&gt;responseChoice&lt;/code&gt; is not “No”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-17.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figx0v9g7z2xqz903ld6s.png" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Inside the Loop
&lt;/h3&gt;

&lt;p&gt;There are three important “ &lt;strong&gt;Compose&lt;/strong&gt; ” data operations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Generate Random Index&lt;/strong&gt; : To generate a random index from 0 up to, but excluding, the number of team members; the upper bound of &lt;code&gt;rand()&lt;/code&gt; is exclusive, which is exactly what zero-based array indexing needs.
&lt;code&gt;rand(0, length(variables('teamMembers')))&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Select Random Teammate Object&lt;/strong&gt; : The random number is used to pick the hero from the array.
&lt;code&gt;variables('teamMembers')[int(outputs('Compose:_Generate_Random_Index'))]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Get Name of Hero&lt;/strong&gt; : Get the name of the person from the selected object (see the example array shape after this list).
&lt;code&gt;outputs('Compose:_Select_Random_Teammate_Object')['name']&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
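
&lt;p&gt;For these expressions to work, the &lt;code&gt;teamMembers&lt;/code&gt; variable should be an array of objects, each with at least a &lt;code&gt;name&lt;/code&gt; property. A minimal illustrative value (placeholder names):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
    { "name": "Louisa" },
    { "name": "Austin" }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;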

&lt;p&gt;After the three data operations are added, the flow now looks as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-18.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9c9vozc959pncwcn25q.png" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;According to our designed workflow, after a hero is selected, we can send a message with the “ &lt;strong&gt;Post message in a chat or channel&lt;/strong&gt; ” action to inform the team who has been selected by the &lt;em&gt;gacha&lt;/em&gt; bot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-20.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0xrav4j8mgrkv2xn8f1.png" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we need to post an adaptive card to Microsoft Teams and wait for a response. In our case, since the adaptive card is posted to a group chat, we need to put the entire JSON below into the Message field.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "type": "AdaptiveCard",
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "version": "1.4",
    "body": [
        {
            "type": "TextBlock",
            "text": "Daily Check-In",
            "wrap": true,
            "size": "Large",
            "weight": "Bolder"
        },
        {
            "type": "TextBlock",
            "text": "Please pick an option accordingly.",
            "wrap": true
        },
        {
            "type": "Input.ChoiceSet",
            "id": "userChoice",
            "style": "expanded",
            "isMultiSelect": false,
            "label": "Is our teammate mentioned above working today?",
            "choices": [
                {
                    "title": "Yes.",
                    "value": "Yes"
                },
                {
                    "title": "No.",
                    "value": "No"
                },
                {
                    "title": "I volunteer!",
                    "value": "Volunteer"
                }
            ]
        }
    ],
    "actions": [
        {
            "type": "Action.Submit",
            "title": "Submit Status"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In short, the “ &lt;strong&gt;Post adaptive card and wait for a response&lt;/strong&gt; ” action is set up as shown in the following screenshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-21.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fzz1xjvqmz8nz2t9rza.png" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Handle the User’s Response
&lt;/h3&gt;

&lt;p&gt;Right after the adaptive card, I set up a “Switch” control to handle the user’s response.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-23.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xxwcco13lgjubalrppc.png" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the response is “Yes”, a confirmation is sent to the Microsoft Teams group chat. If the response is “Volunteer”, before a confirmation message is sent, the bot needs to know who responded so that it can indicate the volunteer’s name. To do so, I use a “&lt;strong&gt;Get user profile (V2)&lt;/strong&gt;” action with &lt;code&gt;body/responder/userPrincipalName&lt;/code&gt; as the UPN, as shown in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-24.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd30yc1scx4tepll3pvx5.png" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Office 365 Users node will give us the friendly display name of the person who volunteers, as shown in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/10/image-25.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24ytuvo1wj12ansuxtik.png" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Your Turn
&lt;/h3&gt;

&lt;p&gt;So, what have we really built here? On the surface, it is just a simple Power Automate flow. However, the real product is not the bot. Instead, it is the daily moment of shared fun. We did not just automate a chore; we engineered a small spark of joy and human connection into our daily routine. We used technology to solve a human problem, not just a technical one.&lt;/p&gt;

&lt;p&gt;Now, it is your turn.&lt;/p&gt;

&lt;p&gt;Your mission, should you choose to accept it, is to find the single most boring, repetitive chore that your own team has to deal with. Find that small, grey corner of your team’s life, and ask yourself: “How can I make this fun?”&lt;/p&gt;

&lt;p&gt;Together, we learn better.&lt;/p&gt;

</description>
      <category>experience</category>
      <category>powerautomate</category>
      <category>microsoftteams</category>
    </item>
    <item>
      <title>Securing APIs with OAuth2 Introspection</title>
      <dc:creator>Goh Chun Lin</dc:creator>
      <pubDate>Sat, 09 Aug 2025 05:06:51 +0000</pubDate>
      <link>https://dev.to/gohchunlin/securing-apis-with-oauth2-introspection-1lkp</link>
      <guid>https://dev.to/gohchunlin/securing-apis-with-oauth2-introspection-1lkp</guid>
      <description>&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/08/image-2.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y20x5jouksxatput44g.png" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In today’s interconnected world, APIs are the backbone of modern apps. Protecting these APIs and ensuring only authorised users access sensitive data is now more crucial than ever. While many authentication and authorisation methods exist, OAuth2 Introspection stands out as a robust and flexible approach. In this post, we will explore what OAuth2 Introspection is, why we should use it, and how to implement it in our .NET apps.&lt;/p&gt;

&lt;p&gt;Before we dive into the technical details, let’s remind ourselves why API security is so important. Think about it: APIs often handle the most sensitive stuff. If those APIs are not well protected, we are basically opening the door to some nasty consequences. Data breaches? Yep. Regulatory fines (GDPR, HIPAA, you name it)? Potentially. Not to mention, losing the trust of our users. A secure API shows that we value their data and are committed to keeping it safe. And, of course, it helps prevent the bad guys from exploiting vulnerabilities to steal data or cause all sorts of trouble.&lt;/p&gt;

&lt;p&gt;The most common method of securing APIs is using access tokens as proof of authorization. These tokens, typically in the form of &lt;a href="https://www.jwt.io/introduction#what-is-json-web-token" rel="noopener noreferrer"&gt;JWTs (JSON Web Tokens)&lt;/a&gt;, are passed by the client to the API with each request. The API then needs a way to validate these tokens to verify that they are legitimate and haven’t been tampered with. This is where &lt;a href="https://www.oauth.com/oauth2-servers/token-introspection-endpoint/" rel="noopener noreferrer"&gt;OAuth2 Introspection&lt;/a&gt; comes in.&lt;/p&gt;

&lt;h3&gt;
  
  
  OAuth2 Introspection
&lt;/h3&gt;

&lt;p&gt;OAuth2 Introspection is a mechanism for validating bearer tokens in an OAuth2 environment. We can think of it as a secure lookup service for our access tokens. It allows an API to query an auth server, which is also the “issuer” of the token, to determine the validity and attributes of a given token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/08/image-1.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33kaktpk3spmcoaipe5o.png" width="800" height="353"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The workflow of an OAuth2 Introspection request.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To illustrate the process, the diagram above visualises the flow of an OAuth2 Introspection request. The Client sends the bearer token to the Web API, which then forwards it to the auth server via the introspection endpoint. The auth server validates the token and returns a JSON response, which is then processed by the Web API. Finally, the Web API grants (or denies) access to the requested resource based on the token validity.&lt;/p&gt;
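
&lt;p&gt;As defined in &lt;a href="https://datatracker.ietf.org/doc/html/rfc7662" rel="noopener noreferrer"&gt;RFC 7662&lt;/a&gt;, the introspection response is a JSON object whose key field is &lt;code&gt;active&lt;/code&gt;. A typical response looks like the following (field values are illustrative).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "active": true,
    "client_id": "my-client",
    "sub": "248289761001",
    "scope": "api.read",
    "exp": 1754712000,
    "iss": "https://auth.example.com"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
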
&lt;h3&gt;
  
  
  Introspection vs. Direct JWT Validation
&lt;/h3&gt;

&lt;p&gt;You might be thinking, “Isn’t this just how we normally validate a JWT token?” Well, yes… and no. What is the difference, and why is there a special term “Introspection” for this?&lt;/p&gt;

&lt;p&gt;With direct JWT validation, we essentially check the token ourselves, verifying its signature, expiry, and sometimes audience. Introspection takes a different approach because it involves asking the auth server about the token status. This leads to differences in the pros and cons, which we will explore next.&lt;/p&gt;

&lt;p&gt;With OAuth2 Introspection, we gain several key advantages. First, it works with various token formats (JWTs, opaque tokens, etc.) and auth server implementations. Furthermore, because the validation logic resides on the auth server, we get consistency and easier management of token revocation and other security policies. Most importantly, OAuth2 Introspection makes token revocation straightforward (e.g., if a user changes their password or a client is compromised). In contrast, revoking a JWT after it has been issued is significantly more complex.&lt;/p&gt;
&lt;h3&gt;
  
  
  .NET Implementation
&lt;/h3&gt;

&lt;p&gt;Now, let’s see how to implement OAuth2 Introspection in a .NET Web API using the &lt;code&gt;AddOAuth2Introspection&lt;/code&gt; authentication scheme.&lt;/p&gt;

&lt;p&gt;The core configuration lives in our &lt;code&gt;Program.cs&lt;/code&gt; file, where we set up the authentication and authorisation services.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ... (previous code for building the app)

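// AddOAuth2Introspection is provided by the IdentityModel.AspNetCore.OAuth2Introspection NuGet package.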
builder.Services.AddAuthentication("Bearer")
   .AddOAuth2Introspection("Bearer", options =&amp;gt;
   {
       options.IntrospectionEndpoint = "&amp;lt;Auth server base URL&amp;gt;/connect/introspect";
       options.ClientId = "&amp;lt;Client ID&amp;gt;";
       options.ClientSecret = "&amp;lt;Client Secret&amp;gt;";

       options.DiscoveryPolicy = new IdentityModel.Client.DiscoveryPolicy
       {
           RequireHttps = false, 
       };
   });

builder.Services.AddAuthorization();

// ... (rest of the Program.cs)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above configures the authentication service to use the “Bearer” scheme, which is the standard for bearer tokens. &lt;code&gt;AddOAuth2Introspection(…)&lt;/code&gt; is where the magic happens: it adds the OAuth2 Introspection authentication handler and points it to &lt;code&gt;IntrospectionEndpoint&lt;/code&gt;, the URL our API will use to send the token for validation.&lt;/p&gt;

&lt;p&gt;Usually, &lt;code&gt;RequireHttps&lt;/code&gt; needs to be &lt;code&gt;true&lt;/code&gt; in production. However, in situations where the API and the auth server are both deployed to the same &lt;a href="https://aws.amazon.com/ecs/" rel="noopener noreferrer"&gt;Elastic Container Service (ECS)&lt;/a&gt; cluster and communicate internally within the AWS network, we can set it to &lt;code&gt;false&lt;/code&gt;. Because the &lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html" rel="noopener noreferrer"&gt;Application Load Balancer (ALB)&lt;/a&gt; handles the TLS/SSL termination and the internal communication between services happens over HTTP, we can safely disable &lt;code&gt;RequireHttps&lt;/code&gt; in the &lt;code&gt;DiscoveryPolicy&lt;/code&gt; for the introspection endpoint within the ECS cluster. This simplifies the setup without compromising security, as the communication from the outside world to our ALB is already secured by HTTPS.&lt;/p&gt;

&lt;p&gt;Finally, to secure our API endpoints and require authentication, we can simply use the &lt;code&gt;[Authorize]&lt;/code&gt; attribute, as demonstrated below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ApiController]
[Route("[controller]")]
[Authorize]
public class MyController : ControllerBase
{
   [HttpGet("GetData")]
   public IActionResult GetData()
   {
       ...
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Wrap-Up
&lt;/h3&gt;

&lt;p&gt;OAuth2 Introspection is a powerful and flexible approach for securing our APIs, providing a centralised way to validate bearer tokens and manage access. By understanding the process, implementing it correctly, and following best practices, we can significantly improve the security posture of our applications and protect our valuable data.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.oauth.com/oauth2-servers/token-introspection-endpoint/" rel="noopener noreferrer"&gt;Token Introspection Endpoint&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/godspowercuche/complete-guide-on-oauth-20-reference-tokens-in-aspnet-core-7-using-openiddict-2o1g-temp-slug-9029892"&gt;Complete Guide on OAuth 2.0 Reference tokens in Asp.Net Core 7 Using Openiddict&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aspnet</category>
      <category>csharp</category>
      <category>aws</category>
      <category>ecs</category>
    </item>
    <item>
      <title>Observing Orchard Core: Traces with Grafana Tempo and ADOT</title>
      <dc:creator>Goh Chun Lin</dc:creator>
      <pubDate>Mon, 26 May 2025 15:01:07 +0000</pubDate>
      <link>https://dev.to/gohchunlin/observing-orchard-core-traces-with-grafana-tempo-and-adot-p4i</link>
      <guid>https://dev.to/gohchunlin/observing-orchard-core-traces-with-grafana-tempo-and-adot-p4i</guid>
      <description>&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/05/image-15.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwmj6dc5vz5r6okx61cp.png" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/gohchunlin/observing-orchard-core-metrics-and-logs-with-grafana-and-amazon-cloudwatch-5e1b-temp-slug-2240967"&gt;the previous article&lt;/a&gt;, we discussed how we can build a custom monitoring pipeline that has Grafana running on Amazon EC2 to receive metrics and logs, which are two of the observability pillars, sent from the Orchard Core app on Amazon ECS. Today, we will proceed to talk about the third pillar of observability: traces.&lt;/p&gt;

&lt;h3&gt;
  
  
  Source Code
&lt;/h3&gt;

&lt;p&gt;The CloudFormation templates and relevant C# source code discussed in this article are available on GitHub as part of the Orchard Core Basics Companion (OCBC) Project: &lt;a href="https://github.com/gcl-team/Experiment.OrchardCore.Main" rel="noopener noreferrer"&gt;https://github.com/gcl-team/Experiment.OrchardCore.Main&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/05/image-6.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xpmnwqiz21oz56y95hl.png" width="800" height="446"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Lisa Jung, senior developer advocate at Grafana, talks about the three pillars in observability (Image Credit: Grafana Labs)&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  About Grafana Tempo
&lt;/h3&gt;

&lt;p&gt;To capture and visualise traces, we will use &lt;a href="https://grafana.com/oss/tempo/" rel="noopener noreferrer"&gt;Grafana Tempo, an open-source, scalable, and cost-effective tracing backend developed by Grafana Labs&lt;/a&gt;. Unlike other tracing tools, Tempo does not require an index, making it easy to operate and scale.&lt;/p&gt;

&lt;p&gt;We choose Tempo because it is fully compatible with OpenTelemetry, the open standard for collecting distributed traces, which ensures flexibility and vendor neutrality. In addition, Tempo seamlessly integrates with Grafana, allowing us to visualise traces alongside metrics and logs in a single dashboard.&lt;/p&gt;

&lt;p&gt;Finally, being a Grafana Labs project means Tempo has strong community backing and continuous development.&lt;/p&gt;
&lt;h3&gt;
  
  
  About OpenTelemetry
&lt;/h3&gt;

&lt;p&gt;With a solid understanding of why Tempo is our tracing backend of choice, let’s now dive deeper into OpenTelemetry, the open-source framework we use to instrument our Orchard Core app and generate the trace data Tempo collects.&lt;/p&gt;

&lt;p&gt;OpenTelemetry is a &lt;a href="https://www.cncf.io/projects/opentelemetry/" rel="noopener noreferrer"&gt;Cloud Native Computing Foundation (CNCF) project&lt;/a&gt; and a vendor-neutral, open standard for collecting traces, metrics, and logs from our apps. This makes it an ideal choice for building a flexible observability pipeline.&lt;/p&gt;

&lt;p&gt;OpenTelemetry provides SDKs for instrumenting apps across many programming languages, including C# via the .NET SDK, which we use for Orchard Core.&lt;/p&gt;

&lt;p&gt;OpenTelemetry uses the standard &lt;a href="https://opentelemetry.io/docs/specs/otel/protocol/" rel="noopener noreferrer"&gt;OTLP (OpenTelemetry Protocol)&lt;/a&gt; to send telemetry data to any compatible backend, such as Tempo, allowing seamless integration and interoperability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/05/image.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8l1afu6d00uzua4l9jon.png" width="800" height="427"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Both Grafana Tempo and OpenTelemetry are projects under the CNCF umbrella. (Image Source: CNCF Cloud Native Interactive Landscape)&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Setup Tempo on EC2 With CloudFormation
&lt;/h3&gt;

&lt;p&gt;It is straightforward to deploy Tempo on EC2.&lt;/p&gt;

&lt;p&gt;Let’s walk through the EC2 UserData script that installs and configures Tempo on the instance.&lt;/p&gt;

&lt;p&gt;First, we download the Tempo release binary, extract it, move it to a proper system path, and ensure it is executable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://github.com/grafana/tempo/releases/download/v2.7.2/tempo_2.7.2_linux_amd64.tar.gz
tar -xzvf tempo_2.7.2_linux_amd64.tar.gz
mv tempo /usr/local/bin/tempo
chmod +x /usr/local/bin/tempo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we create a basic Tempo configuration file at &lt;code&gt;/etc/tempo.yaml&lt;/code&gt; to define how Tempo listens for traces and where it stores trace data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "
server:
  http_listen_port: 3200
distributor:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
storage:
  trace:
    backend: local
    local:
      path: /tmp/tempo/traces
" &amp;gt; /etc/tempo.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break down the configuration file above.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;http_listen_port&lt;/code&gt; sets the HTTP port (3200) for Tempo’s internal web server. This port is used for health checks and Prometheus metrics.&lt;/p&gt;

&lt;p&gt;After that, we configure where Tempo listens for incoming trace data. In the configuration above, we enabled OTLP receivers via both &lt;a href="https://grpc.io/docs/guides/" rel="noopener noreferrer"&gt;gRPC&lt;/a&gt; and HTTP, the two protocols that OpenTelemetry SDKs and agents use to send data to Tempo. Here, the ports &lt;code&gt;4317&lt;/code&gt; (gRPC) and &lt;code&gt;4318&lt;/code&gt; (HTTP) are standard for OTLP.&lt;/p&gt;

&lt;p&gt;Last but not least, for demonstration purposes, we use the simplest storage option, &lt;code&gt;local&lt;/code&gt;, to write trace data to the EC2 instance disk under &lt;code&gt;/tmp/tempo/traces&lt;/code&gt;. This is fine for testing or small setups, but for production we will likely want to use services like &lt;a href="https://aws.amazon.com/pm/serv-s3/" rel="noopener noreferrer"&gt;Amazon S3&lt;/a&gt;, as sketched below.&lt;/p&gt;
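&lt;p&gt;For reference, switching to S3 is mostly a change in the &lt;code&gt;storage&lt;/code&gt; block. The following is a rough sketch with placeholder values; please consult the Tempo documentation for the full set of S3 options and for how the instance authenticates to the bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;storage:
  trace:
    backend: s3
    s3:
      bucket: &amp;lt;bucket-name&amp;gt;
      endpoint: s3.&amp;lt;aws-region&amp;gt;.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;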

&lt;p&gt;In addition, since we are using local storage on EC2, we can easily SSH into the EC2 instance and directly inspect whether traces are being written. This is incredibly helpful during debugging. All we need to do is run the following command to see whether files are being generated when our Orchard Core app emits traces.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls -R /tmp/tempo/traces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration above is intentionally minimal. As our setup grows, we can explore advanced options like remote storage, multi-tenancy, or even scaling with Tempo components.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/05/image-1.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcligqeglsz1e8wh69of4.png" width="800" height="485"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Each flushed trace block (folder with UUID) contains a data.parquet file, which holds the actual trace data.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Finally, we create a &lt;code&gt;systemd&lt;/code&gt; unit file so that Tempo starts on boot and automatically restarts if it crashes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/systemd/system/tempo.service
[Unit]
Description=Grafana Tempo service
After=network.target

[Service]
ExecStart=/usr/local/bin/tempo -config.file=/etc/tempo.yaml
Restart=always
RestartSec=5
User=root
LimitNOFILE=1048576

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reexec
systemctl daemon-reload
systemctl enable --now tempo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;code&gt;systemd&lt;/code&gt; service ensures that Tempo runs in the background and automatically starts up after a reboot or a crash. This setup is crucial for a resilient observability pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/05/image-3.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69w4b8fc40nhvxutshbi.png" width="800" height="410"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Did You Know: When we SSH into an EC2 instance running Amazon Linux 2023, we will be greeted by a cockatiel in ASCII art! (Image Credit: OMG! Linux)&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Understanding OTLP Transport Protocols
&lt;/h3&gt;

&lt;p&gt;In the previous section, we configured Tempo to receive OTLP data over both gRPC and HTTP. Both transport protocols are supported by OTLP, and each comes with its own strengths and trade-offs. Let’s break them down.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/05/image-8.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwael8voksf68i7k87lhk.png" width="800" height="444"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Ivy Zhuang from Google gave a presentation on gRPC and Protobuf at gRPConf 2024. (Image Credit: gRPC YouTube)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Tempo has native support for gRPC, and many OpenTelemetry SDKs default to using it. gRPC is a modern, high-performance transport protocol built on top of &lt;a href="https://http2.github.io/faq/#who-made-http2" rel="noopener noreferrer"&gt;HTTP/2&lt;/a&gt;. It is the preferred option when performance is critical. gRPC also supports streaming, which makes it ideal for high-throughput scenarios where telemetry data is sent continuously.&lt;/p&gt;

&lt;p&gt;However, gRPC is not natively supported in browsers, so it is not ideal for frontend or web-based telemetry collection unless a proxy or gateway is used. In such scenarios, we will normally choose HTTP which is browser-friendly. HTTP is a more traditional request/response protocol that works well in restricted environments.&lt;/p&gt;

&lt;p&gt;Since we are collecting telemetry from a server-side app like Orchard Core running on ECS, gRPC is typically the better choice due to its performance benefits and native support in Tempo.&lt;/p&gt;

&lt;p&gt;Please take note that gRPC requires HTTP/2, and some environments, for example, IoT devices and embedded systems, might not have mature gRPC client support. In such simpler or constrained systems, OTLP over HTTP is often preferred.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/05/image-7.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ittkq4wv2t2de576wrv.png" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Daniel Stenberg, Senior Network Engineer at Mozilla, sharing about HTTP/2 at GOTO Copenhagen 2015. (Image Credit: GOTO Conferences YouTube)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://learn.microsoft.com/en-us/aspnet/core/grpc/comparison?view=aspnetcore-9.0" rel="noopener noreferrer"&gt;gRPC allows multiplexing over a single connection using HTTP/2&lt;/a&gt;. Hence, in gRPC, all telemetry signals, i.e. logs, metrics, and traces, can be sent concurrently over one connection. However, with HTTP, each telemetry signal needs a separate POST request to its own endpoint as listed below to enforce clean schema boundaries, simplify implementation, and stay aligned with HTTP semantics.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Logs:&lt;/strong&gt; &lt;code&gt;/v1/logs&lt;/code&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metrics:&lt;/strong&gt; &lt;code&gt;/v1/metrics&lt;/code&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Traces:&lt;/strong&gt; &lt;code&gt;/v1/traces&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In HTTP, since each signal has its own POST endpoint with its own protobuf schema in the body, there is no need for the receiver to guess what is in the body. The sketch below shows how this plays out on the client side.&lt;/p&gt;
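&lt;p&gt;As an illustrative sketch (assuming the &lt;code&gt;OpenTelemetry.Exporter.OpenTelemetryProtocol&lt;/code&gt; package), exporting traces over OTLP/HTTP from .NET means pointing the exporter at the signal-specific path on port 4318. Compare this with the gRPC configuration in the next section, where a single base endpoint on port 4317 serves all signals.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.Services
    .AddOpenTelemetry()
    .WithTracing(tracing =&amp;gt; tracing
        .AddAspNetCoreInstrumentation()
        .AddOtlpExporter(options =&amp;gt;
        {
            // With HttpProtobuf, the endpoint includes the per-signal path /v1/traces.
            options.Endpoint = new Uri("http://&amp;lt;tempo-ec2-host&amp;gt;:4318/v1/traces");
            options.Protocol = OpenTelemetry.Exporter.OtlpExportProtocol.HttpProtobuf;
        }));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;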
&lt;h3&gt;
  
  
  AWS Distro for OpenTelemetry (ADOT)
&lt;/h3&gt;

&lt;p&gt;Now that we have Tempo running on EC2 and understand the OTLP protocols it supports, the next step is to instrument our Orchard Core to generate and send trace data.&lt;/p&gt;

&lt;p&gt;The following code snippet shows what a typical direct integration with Tempo might look like in an Orchard Core app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.Services
    .AddOpenTelemetry()
    .ConfigureResource(resource =&amp;gt; resource.AddService(serviceName: "cld-orchard-core"))
    .WithTracing(tracing =&amp;gt; tracing
        .AddAspNetCoreInstrumentation()
        .AddOtlpExporter(options =&amp;gt;
        {
            options.Endpoint = new Uri("http://&amp;lt;tempo-ec2-host&amp;gt;:4317");
            options.Protocol = OpenTelemetry.Exporter.OtlpExportProtocol.Grpc;
        })
        .AddConsoleExporter());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach works well for simple use cases during the development stage, but it comes with trade-offs that are worth considering. Firstly, we couple our app directly to the observability backend, reducing flexibility. Secondly, central management becomes harder when we scale to many services or environments.&lt;/p&gt;

&lt;p&gt;This is where &lt;a href="https://aws.amazon.com/otel/" rel="noopener noreferrer"&gt;AWS Distro for OpenTelemetry (ADOT)&lt;/a&gt; comes into play.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/05/image-14.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi799tte8281y51cxddvh.png" width="675" height="432"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The ADOT collector. (Image credit: ADOT technical docs)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;ADOT is a secure, AWS-supported distribution of the OpenTelemetry project that simplifies collecting and exporting telemetry data from apps running on AWS services, for example, our Orchard Core app on ECS. ADOT decouples our apps from the observability backend, provides centralised configuration, and handles telemetry collection more efficiently.&lt;/p&gt;
&lt;h3&gt;
  
  
  Sidecar Pattern
&lt;/h3&gt;

&lt;p&gt;We can deploy ADOT in several ways, such as running it on a dedicated node or ECS service to receive telemetry from multiple apps. We can also take the sidecar approach, which cleanly separates concerns. Our Orchard Core app will focus on business logic, while a nearby ADOT sidecar handles telemetry collection and forwarding. This mirrors modern cloud-native patterns and gives us more flexibility down the road.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/05/image-11.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8k6zhxwk6yynvspagcgk.png" width="781" height="279"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The sidecar pattern running in Amazon ECS. (Image Credit: AWS Open Source Blog)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The following CloudFormation template shows &lt;a href="https://github.com/gcl-team/Experiment.OrchardCore.Main/blob/main/App.yml" rel="noopener noreferrer"&gt;how we deploy ADOT as a sidecar in ECS using CloudFormation&lt;/a&gt;. The collector config is stored in AWS Systems Manager Parameter Store under &lt;code&gt;otel-collector-config&lt;/code&gt;, and injected via the &lt;code&gt;AOT_CONFIG_CONTENT&lt;/code&gt; environment variable. This keeps our infrastructure clean, decoupled, and secure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ecsTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: !Ref ServiceName
    NetworkMode: awsvpc 
    ExecutionRoleArn: !GetAtt ecsTaskExecutionRole.Arn
    TaskRoleArn: !GetAtt iamRole.Arn
    ContainerDefinitions:
      - Name: !Ref ServiceName
        Image: !Ref OrchardCoreImage
        ...

      - Name: adot-collector
        Image: public.ecr.aws/aws-observability/aws-otel-collector:latest
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-group: !Sub "/ecs/${ServiceName}-log-group"
            awslogs-region: !Ref AWS::Region
            awslogs-stream-prefix: adot
        Essential: false
        Cpu: 128
        Memory: 512
        HealthCheck:
          Command: ["/healthcheck"]
          Interval: 30
          Timeout: 5
          Retries: 3
          StartPeriod: 60
        Secrets:
          - Name: AOT_CONFIG_CONTENT
            ValueFrom: !Sub "arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/otel-collector-config"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/05/image-10.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpk5m6bptsf0n127ljlpl.png" width="668" height="399"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Deploy an ADOT sidecar on ECS to collect observability data from Orchard Core.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There are several interesting and important details in the CloudFormation snippet above that are worth calling out. Let’s break them down one by one.&lt;/p&gt;

&lt;p&gt;Firstly, we choose &lt;code&gt;awsvpc&lt;/code&gt; as the &lt;code&gt;NetworkMode&lt;/code&gt; of the ECS task. In &lt;code&gt;awsvpc&lt;/code&gt;, each ECS task receives its own &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html" rel="noopener noreferrer"&gt;ENI (Elastic Network Interface)&lt;/a&gt;, which is great for network-level isolation. The containers within the task, i.e. our Orchard Core container and the ADOT sidecar, share that ENI and the same network namespace, so Orchard Core can reach the sidecar locally at &lt;code&gt;http://localhost:4317&lt;/code&gt;.&lt;/p&gt;
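&lt;p&gt;This means the only change to the direct-integration snippet from earlier is the exporter endpoint, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// The ADOT sidecar shares the task's network namespace, so it is reachable locally.
options.Endpoint = new Uri("http://localhost:4317");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;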

&lt;p&gt;Secondly, we include a health check for the ADOT container. ECS will use this health check to restart the container if it becomes unhealthy, improving reliability without manual intervention. In November 2022, &lt;a href="https://github.com/PaurushGarg" rel="noopener noreferrer"&gt;Paurush Garg from AWS&lt;/a&gt; added the healthcheck component in &lt;a href="https://github.com/aws-observability/aws-otel-collector/issues/1124#issuecomment-1301416143" rel="noopener noreferrer"&gt;a new ADOT collector release&lt;/a&gt;, so we can simply enable it in the collector configuration that we will discuss next.&lt;/p&gt;

&lt;p&gt;Yes, the configuration! Instead of hardcoding the ADOT configuration into the task definition, we &lt;a href="https://aws-otel.github.io/docs/setup/ecs/config-through-ssm#1-update-task-defintion" rel="noopener noreferrer"&gt;inject it securely at runtime using the &lt;code&gt;AOT_CONFIG_CONTENT&lt;/code&gt; secret&lt;/a&gt;. The &lt;code&gt;AOT_CONFIG_CONTENT&lt;/code&gt; environment variable is designed for configuring the ADOT collector: its value overrides the config file used in the ADOT collector entrypoint command.&lt;/p&gt;
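&lt;p&gt;As a rough sketch, the value stored in the parameter is a standard OpenTelemetry Collector configuration. A minimal pipeline that receives OTLP from Orchard Core and forwards traces to Tempo might look like the following; the Tempo host is a placeholder, and the &lt;code&gt;health_check&lt;/code&gt; extension is assumed here to back the container health check we defined above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlp:
    endpoint: &amp;lt;tempo-ec2-host&amp;gt;:4317
    tls:
      insecure: true

extensions:
  health_check:

service:
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;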

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/05/image-12.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fth9aqsxio1jhjdv1gx1l.png" width="800" height="429"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The SSM Parameter for the environment variable AOT_CONFIG_CONTENT.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrap-Up
&lt;/h3&gt;

&lt;p&gt;By now, we have completed the journey of setting up Grafana Tempo on EC2, exploring how traces flow through OTLP protocols like gRPC and HTTP, and understanding why ADOT is often the better choice in production-grade observability pipelines.&lt;/p&gt;

&lt;p&gt;With everything connected, our Orchard Core app is now able to send traces into Tempo reliably. This will give us end-to-end visibility with OpenTelemetry and AWS-native tooling.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://medium.com/cloud-native-daily/level-up-your-tracing-platform-opentelemetry-grafana-tempo-8db66d7462e2" rel="noopener noreferrer"&gt;Level Up Your Tracing Platform with OpenTelemetry and Grafana Tempo&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Exporter.OpenTelemetryProtocol/README.md#otlpexporteroptions" rel="noopener noreferrer"&gt;OTLP Exporter for OpenTelemetry .NET – OltpExporterOptions&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://daniel.haxx.se/http2/" rel="noopener noreferrer"&gt;http2 explained by Daniel Stenberg&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws-otel.github.io/docs/introduction" rel="noopener noreferrer"&gt;AWS Distro for OpenTelemetry (ADOT) technical docs – Introduction&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/blogs/opensource/deployment-patterns-for-the-aws-distro-for-opentelemetry-collector-with-amazon-elastic-container-service/" rel="noopener noreferrer"&gt;Deployment patterns for the AWS Distro for OpenTelemetry Collector with Amazon Elastic Container Service&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/@balmacedanicolas4/deploying-an-opentelemetry-sidecar-on-ecs-fargate-with-grafana-for-logs-metrics-and-traces-0b213bc9ec38" rel="noopener noreferrer"&gt;Deploying an OpenTelemetry Sidecar on ECS Fargate with Grafana for Logs, Metrics, and Traces&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>amazonwebservices</category>
      <category>aspnet</category>
      <category>c</category>
      <category>cloudcomputingamazon</category>
    </item>
    <item>
      <title>Observing Orchard Core: Metrics and Logs with Grafana and Amazon CloudWatch</title>
      <dc:creator>Goh Chun Lin</dc:creator>
      <pubDate>Sun, 27 Apr 2025 09:02:05 +0000</pubDate>
      <link>https://dev.to/gohchunlin/observing-orchard-core-metrics-and-logs-with-grafana-and-amazon-cloudwatch-e8m</link>
      <guid>https://dev.to/gohchunlin/observing-orchard-core-metrics-and-logs-with-grafana-and-amazon-cloudwatch-e8m</guid>
      <description>&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/04/image-14.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F57wzmociu9ris9e9dlbr.png" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I recently deployed an &lt;a href="https://orchardcore.net/" rel="noopener noreferrer"&gt;Orchard Core&lt;/a&gt; app on &lt;a href="https://aws.amazon.com/ecs/" rel="noopener noreferrer"&gt;Amazon ECS&lt;/a&gt; and wanted to gain better visibility into its performance and health.&lt;/p&gt;

&lt;p&gt;Instead of relying solely on basic &lt;a href="https://aws.amazon.com/cloudwatch/" rel="noopener noreferrer"&gt;Amazon CloudWatch&lt;/a&gt; metrics, I decided to build a custom monitoring pipeline that has Grafana running on &lt;a href="https://aws.amazon.com/ec2/" rel="noopener noreferrer"&gt;Amazon EC2&lt;/a&gt; receiving metrics and EMF (Embedded Metrics Format) logs sent from the Orchard Core on ECS via CloudFormation configuration.&lt;/p&gt;

&lt;p&gt;In this post, I will walk through how I set this up from scratch, what challenges I faced, and how you can do the same.&lt;/p&gt;

&lt;h3&gt;
  
  
  Source Code
&lt;/h3&gt;

&lt;p&gt;The CloudFormation templates and relevant C# source code discussed in this article are available on GitHub as part of the Orchard Core Basics Companion (OCBC) Project: &lt;a href="https://github.com/gcl-team/Experiment.OrchardCore.Main" rel="noopener noreferrer"&gt;https://github.com/gcl-team/Experiment.OrchardCore.Main&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Grafana?
&lt;/h3&gt;

&lt;p&gt;In the previous post where we set up the Orchard Core on ECS, we talked about how we can send metrics and logs to CloudWatch. While it is true that CloudWatch offers us out-of-the-box infrastructure metrics and AWS-native alarms and logs, the dashboards CloudWatch provides are limited and not as customisable. Managing observability with just CloudWatch gets tricky when our apps span multiple AWS regions, accounts, or other cloud environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/04/image-11.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfh5da60yzq7wzz0akmz.png" width="800" height="599"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The GrafanaLive event in Singapore in September 2023. (Event Page)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If we are looking for a solution that is not tied to a single vendor like AWS, Grafana can be one of the options. Grafana is an open-source visualisation platform that lets teams monitor real-time metrics from multiple sources, like CloudWatch, X-Ray, Prometheus and so on, all in unified dashboards. It is lightweight, extensible, and ideal for observability in cloud-native environments.&lt;/p&gt;

&lt;p&gt;Is Grafana the only solution? Definitely not! However, personally I still prefer Grafana because it is open-source and free to start. In this blog post, we will also see how easy it is to host Grafana on EC2 and integrate it directly with CloudWatch, with no extra agents needed.&lt;/p&gt;
&lt;h3&gt;
  
  
  Three Pillars of Observability
&lt;/h3&gt;

&lt;p&gt;In observability, there are three pillars, i.e. logs, metrics, and traces.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/04/image-15.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jmeiyhzfp8qvdhxipen.png" width="800" height="446"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Lisa Jung, senior developer advocate at Grafana, talks about the three pillars in observability (Image Credit: Grafana Labs)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Firstly, logs are text records that capture events happening in the system.&lt;/p&gt;

&lt;p&gt;Secondly, metrics are numeric measurements tracked over time, such as HTTP status code counts, response times, or ECS CPU and memory utilisation rates.&lt;/p&gt;

&lt;p&gt;Finally, traces record the end-to-end journey of a request as it travels through the different components of our system.&lt;/p&gt;

&lt;p&gt;Together, these three pillars form a strong observability foundation which can help us to identify issues faster, reduce downtime, and improve system reliability. This will ultimately support a better user experience for our apps.&lt;/p&gt;

&lt;p&gt;This is where we need a tool like Grafana, because Grafana helps us visualise, analyse, and alert based on our metrics, making observability practical and actionable.&lt;/p&gt;
&lt;h3&gt;
  
  
  Setup Grafana on EC2 with CloudFormation
&lt;/h3&gt;

&lt;p&gt;It is straightforward to install Grafana on EC2.&lt;/p&gt;

&lt;p&gt;Firstly, let’s define the security group that we will be using for the EC2 instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ec2SecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow access to the EC2 instance hosting Grafana
    VpcId: {"Fn::ImportValue": !Sub "${CoreNetworkStackName}-${AWS::Region}-vpcId"}
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 0.0.0.0/0 # Caution: SSH open to public, restrict as needed
      - IpProtocol: tcp
        FromPort: 3000
        ToPort: 3000
        CidrIp: 0.0.0.0/0 # Caution: Grafana open to public, restrict as needed
    Tags:
      - Key: Stack
        Value: !Ref AWS::StackName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The VPC ID is imported from the common network stack, cld-core-network, which we set up separately. Please &lt;a href="https://github.com/gcl-team/Experiment.OrchardCore.Main/blob/main/CoreNetwork.yml" rel="noopener noreferrer"&gt;refer to the stack cld-core-network here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For demo purposes, please notice that &lt;strong&gt;both SSH (port 22) and Grafana (port 3000) are open to the world (&lt;code&gt;0.0.0.0/0&lt;/code&gt;)&lt;/strong&gt;. It is important to protect the access to the EC2 instance by adding a bastion host, VPN, or IP restriction later.&lt;/p&gt;

&lt;p&gt;In addition, SSH should only be opened temporarily. The SSH access is for when we need to log in to the EC2 instance and troubleshoot the Grafana installation manually.&lt;/p&gt;

&lt;p&gt;Now, we can proceed to set up the EC2 instance with Grafana installed using the CloudFormation resource below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ec2Instance:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: !Ref InstanceType
    ImageId: !Ref Ec2Ami
    NetworkInterfaces:
      - AssociatePublicIpAddress: true
        DeviceIndex: 0
        SubnetId: {"Fn::ImportValue": !Sub "${CoreNetworkStackName}-${AWS::Region}-publicSubnet1Id"}
        GroupSet:
          - !Ref ec2SecurityGroup
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        yum update -y
        yum install -y wget unzip
        wget https://dl.grafana.com/oss/release/grafana-10.1.0-1.x86_64.rpm
        yum install -y grafana-10.1.0-1.x86_64.rpm
        systemctl enable --now grafana-server
    Tags:
      - Key: Name
        Value: "Observability-Instance"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the CloudFormation template above, we are expecting our users to access the Grafana dashboard directly over the Internet. Hence, we put the EC2 instance in a public subnet and assign an Elastic IP (EIP) to it, as demonstrated below, so that we have a consistent, publicly accessible static IP for our Grafana.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ecsEip:
  Type: AWS::EC2::EIP

ec2EIPAssociation:
  Type: AWS::EC2::EIPAssociation
  Properties:
    AllocationId: !GetAtt ecsEip.AllocationId
    InstanceId: !Ref ec2Instance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For production systems, placing instances in public subnets and exposing them with a public IP requires us to have strong security measures in place. Otherwise, it is recommended to place our Grafana EC2 instance in a private subnet and access it via an Application Load Balancer (ALB) or NAT Gateway to reduce the attack surface.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pump CloudWatch Metrics to Grafana
&lt;/h3&gt;

&lt;p&gt;Grafana supports CloudWatch as a native data source.&lt;/p&gt;

&lt;p&gt;With the appropriate AWS credentials and region, we can use an Access Key ID and Secret Access Key to grant Grafana access to CloudWatch. The user that the credentials belong to must have the &lt;code&gt;AmazonGrafanaCloudWatchAccess&lt;/code&gt; policy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/04/image.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funm6ndbrga3lesttgkgy.png" width="800" height="503"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The user that Grafana uses to access CloudWatch must have the AmazonGrafanaCloudWatchAccess policy.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;However, using an AWS Access Key/Secret in the Grafana data source connection details is less secure and not ideal for EC2 setups. In addition, &lt;code&gt;AmazonGrafanaCloudWatchAccess&lt;/code&gt; is a managed policy optimised for running Grafana as a managed service within AWS. Thus, it is recommended to create our own custom policy so that we can limit the permissions to only what is needed, as demonstrated with the following CloudFormation template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ec2InstanceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole

    Policies:
      - PolicyName: EC2MetricsAndLogsPolicy
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Sid: AllowReadingMetricsFromCloudWatch
              Effect: Allow
              Action:
                - cloudwatch:ListMetrics
                - cloudwatch:GetMetricData
              Resource: "*"
            - Sid: AllowReadingLogsFromCloudWatch
              Effect: Allow
              Action:
                - logs:DescribeLogGroups
                - logs:GetLogGroupFields
                - logs:StartQuery
                - logs:StopQuery
                - logs:GetQueryResults
                - logs:GetLogEvents
              Resource: "*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, using our custom policy provides better control and follows the best practice of least privilege.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/04/image-13.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rwsu15asqbfx8gbprp3.png" width="800" height="503"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;With IAM role, we do not need to provide AWS Access Key/Secret in Grafana connection details for CloudWatch as a data source.&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Visualising ECS Service Metrics
&lt;/h4&gt;

&lt;p&gt;Now that Grafana is configured to pull data from CloudWatch, ECS metrics, like CPUUtilization and MemoryUtilization, are available. We can proceed to create a dashboard and select the right namespace as well as the right metric name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/04/image-2.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdx5ovapoaarggnd7wjuq.png" width="800" height="503"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Setting up the diagram for memory utilisation of our Orchard Core app in our ECS cluster.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As shown in the following dashboard, we show memory and CPU utilisation rates because they help us ensure that our ECS services are performing within safe limits and not overusing or underutilising resources. By monitoring the utilisation, we ensure our services are using just the right amount of resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/04/image-1.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feckne12wx4la6gd2ihhj.png" width="800" height="503"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Both ECS service metrics and container insights are displayed on Grafana dashboard.&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Visualising ECS Container Insights Metrics
&lt;/h4&gt;

&lt;p&gt;ECS Container Insights Metrics are deeper metrics like task counts, network I/O, storage I/O, and so on.&lt;/p&gt;

&lt;p&gt;In the dashboard above, we can also see the Task Count. Task Count helps us make sure our services are running the right number of instances at all times.&lt;/p&gt;

&lt;p&gt;Task Count by itself is not a cost metric, but if we consistently see high task counts with low CPU/memory usage, it indicates we can potentially consolidate workloads and reduce costs.&lt;/p&gt;
&lt;h3&gt;
  
  
  Instrumenting Orchard Core to Send Custom App Metrics
&lt;/h3&gt;

&lt;p&gt;Now that we have seen how ECS metrics are visualised in Grafana, let’s move on to instrumenting our Orchard Core app to send custom app-level metrics. This will give us deeper visibility into what our app is really doing.&lt;/p&gt;

&lt;p&gt;Metrics should be tied to business objectives. It is crucial that the metrics we collect align with KPIs that can drive decision-making.&lt;/p&gt;

&lt;p&gt;Metrics should be actionable. The collected data should help identify where to optimise, what to improve, and how to make decisions. For example, by tracking app metrics such as response time and HTTP status codes, we gain insight into both the performance and reliability of our Orchard Core. This allows us to catch slowdowns or failures early, improving user satisfaction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/04/image-10.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlet9spds747jgih2pvu.png" width="800" height="432"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;SLA vs SLO vs SLI: Key Differences in Service Metrics (Image Credit: Atlassian)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;By tracking response times and HTTP code counts at the endpoint level, we are measuring SLIs (Service Level Indicators) that are necessary to monitor if we are meeting our SLOs (Service Level Objectives). With clear SLOs and SLIs, we can then focus on what really matters from a performance and reliability perspective. For example, a common SLO could be “99.9% of requests to our Orchard Core API endpoints must be processed within 500ms.”&lt;/p&gt;

&lt;p&gt;In terms of sending custom app-level metrics from our Orchard Core to CloudWatch and then to Grafana, there are many approaches depending on our use case. If we are looking for simplicity and speed, CloudWatch SDK and EMF are definitely the easiest and most straightforward methods we can use to get started with sending custom metrics from Orchard Core to CloudWatch, and then visualising them in Grafana.&lt;/p&gt;
&lt;h4&gt;
  
  
  Using CloudWatch SDK to Send Metrics
&lt;/h4&gt;

&lt;p&gt;We will start with creating &lt;a href="https://github.com/gcl-team/Experiment.OrchardCore.Main/blob/main/OCBC.HeadlessCMS/Middlewares/EndpointStatisticsMiddleware.cs" rel="noopener noreferrer"&gt;a middleware called EndpointStatisticsMiddleware&lt;/a&gt; with &lt;a href="https://www.nuget.org/packages/AWSSDK.CloudWatch" rel="noopener noreferrer"&gt;AWSSDK.CloudWatch NuGet package&lt;/a&gt; referenced. In the middleware, we create a &lt;code&gt;MetricDatum&lt;/code&gt; object to define the metric that we want to send to CloudWatch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var metricData = new MetricDatum
    {
        MetricName = metricName,
        Value = value,
        Unit = StandardUnit.Count,
        Dimensions = new List&amp;lt;Dimension&amp;gt;
        {
            new Dimension
            {
                Name = "Endpoint", 
                Value = endpointPath
            }
        }
    };

var request = new PutMetricDataRequest
    {
        Namespace = "Experiment.OrchardCore.Main/Performance",
        MetricData = new List&amp;lt;MetricDatum&amp;gt; { metricData }
    };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above, we see new concepts like Namespace, Metric, and Dimension. They are foundational in CloudWatch. We can think of them as ways to organise and label our data to make it easy to find, group, and analyse.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Namespace&lt;/strong&gt;: A container or category for our metrics. It helps to group related metrics together;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metric&lt;/strong&gt;: A series of data points that we want to track, i.e. the thing we are measuring. In our example, it could be &lt;code&gt;Http2xxCount&lt;/code&gt; and &lt;code&gt;Http4xxCount&lt;/code&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dimension&lt;/strong&gt;: A key-value pair that adds context to a metric.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If we do not define the Namespace, Metric, and Dimensions carefully when we send data, Grafana later will not find them, or our charts on the dashboards will be very messy and hard to filter or analyse.&lt;/p&gt;

&lt;p&gt;In addition, as shown in the code above, we are capturing the HTTP status codes for our Orchard Core endpoints. We then use &lt;code&gt;PutMetricDataAsync&lt;/code&gt; to send the &lt;code&gt;PutMetricDataRequest&lt;/code&gt; asynchronously to CloudWatch, as sketched below.&lt;/p&gt;
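&lt;p&gt;A minimal sketch of that send step, assuming the CloudWatch client is injected via dependency injection (the &lt;code&gt;AddAWSService&lt;/code&gt; registration comes from the AWSSDK.Extensions.NETCore.Setup package):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// In Program.cs: register the CloudWatch client for dependency injection.
builder.Services.AddAWSService&amp;lt;IAmazonCloudWatch&amp;gt;();

// In the middleware: _cloudWatch is the injected IAmazonCloudWatch instance.
await _cloudWatch.PutMetricDataAsync(request);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;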

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/04/image-3.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshoc16938uvmsbu4wdij.png" width="800" height="503"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The HTTP status codes of each of our Orchard Core endpoints are now captured on CloudWatch.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In Grafana, now when we want to configure a CloudWatch panel to show the HTTP status codes for each of the endpoint, the first thing we select is the Namespace, which is &lt;code&gt;Experiment.OrchardCore.Main/Performance&lt;/code&gt; in our example. Namespace tells Grafana which group of metrics to query.&lt;/p&gt;

&lt;p&gt;After picking the Namespace, Grafana lists the available Metrics inside that Namespace. We pick the Metrics we want to plot, such as &lt;code&gt;Http2xxCount&lt;/code&gt; and &lt;code&gt;Http4xxCount&lt;/code&gt;. Finally, since we are tracking metrics by endpoint, we set the Dimension to &lt;code&gt;Endpoint&lt;/code&gt; and select the specific endpoint we are interested in, as shown in the following screenshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/04/image-4.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1c5az3269xi8j94p5l7r.png" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Using EMF to Send Metrics
&lt;/h4&gt;

&lt;p&gt;While using the CloudWatch SDK works well for sending individual metrics, &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html" rel="noopener noreferrer"&gt;EMF (Embedded Metric Format)&lt;/a&gt; offers a more powerful and scalable way to log structured metrics directly from our app logs.&lt;/p&gt;

&lt;p&gt;Before we can use EMF, we must first ensure that the Orchard Core application logs from our ECS tasks are correctly sent to CloudWatch Logs. This is done by configuring the &lt;code&gt;LogConfiguration&lt;/code&gt; inside the ECS &lt;code&gt;TaskDefinition&lt;/code&gt; &lt;a href="https://dev.to/gohchunlin/automate-orchard-core-deployment-on-aws-ecs-with-cloudformation-4ep4-temp-slug-3845090"&gt;as we discussed last time&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  # Unit 12: ECS Task Definition and Service
  ecsTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ...
      ContainerDefinitions:
        - Name: !Ref ServiceName
          Image: !Ref OrchardCoreImage
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Sub "/ecs/${ServiceName}-log-group"
              awslogs-region: !Ref AWS::Region
              awslogs-stream-prefix: ecs
          ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the ECS task is sending logs to CloudWatch Logs, we can start embedding custom metrics into the logs using EMF.&lt;/p&gt;

&lt;p&gt;Instead of pushing metrics directly using the CloudWatch SDK, we send structured JSON messages into the container logs. CloudWatch then automatically detects these EMF messages and converts them into CloudWatch Metrics.&lt;/p&gt;

&lt;p&gt;The following shows what a simple EMF log message looks like.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "_aws": {
    "Timestamp": 1745653519000,
    "CloudWatchMetrics": [
      {
        "Namespace": "Experiment.OrchardCore.Main/Performance",
        "Dimensions": [["Endpoint"]],
        "Metrics": [
          { "Name": "ResponseTimeMs", "Unit": "Milliseconds" }
        ]
      }
    ]
  },
  "Endpoint": "/api/v1/packages",
  "ResponseTimeMs": 142
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a log message reaches CloudWatch Logs, CloudWatch scans the text and looks for a valid &lt;code&gt;_aws&lt;/code&gt; JSON object anywhere inside the message. Thus, even if our log line has extra text before or after, as long as the EMF JSON is properly formatted, CloudWatch extracts it and publishes the metrics automatically.&lt;/p&gt;
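&lt;p&gt;Emitting such a record from our app is just a matter of serialising the structure above and writing it to standard output. The following is a minimal sketch using &lt;code&gt;System.Text.Json&lt;/code&gt;; the values mirror the example EMF document and are purely illustrative.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System.Text.Json;

var emf = new Dictionary&amp;lt;string, object&amp;gt;
{
    ["_aws"] = new
    {
        Timestamp = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds(),
        CloudWatchMetrics = new[]
        {
            new
            {
                Namespace = "Experiment.OrchardCore.Main/Performance",
                Dimensions = new[] { new[] { "Endpoint" } },
                Metrics = new[] { new { Name = "ResponseTimeMs", Unit = "Milliseconds" } }
            }
        }
    },
    ["Endpoint"] = "/api/v1/packages",
    ["ResponseTimeMs"] = 142
};

// Writing to stdout is enough; the awslogs driver ships the line to CloudWatch Logs.
Console.WriteLine(JsonSerializer.Serialize(emf));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;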

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/04/image-5.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qqhb7634fnj387rh23s.png" width="800" height="503"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;An example of log with EMF JSON in it on CloudWatch.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After CloudWatch extracts the EMF block from our log message, it automatically turns it into a proper CloudWatch Metric. These metrics are then queryable just like any normal CloudWatch metric and thus available inside Grafana too, as shown in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/04/image-6.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foiry1nxcotmhwy30gj8t.png" width="800" height="503"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Metrics extracted from logs containing EMF JSON are automatically turned into metrics that can be visualised in Grafana just like any other metric.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As we can see, using EMF is easier compared to going the CloudWatch SDK route because we do not need to change or add extra AWS infrastructure. With EMF, all our app does is write special JSON-format logs.&lt;/p&gt;

&lt;p&gt;CloudWatch then automatically extracts the metrics from those logs containing EMF JSON. The entire process requires no new service, no special SDK code, and no CloudWatch &lt;code&gt;PutMetricData&lt;/code&gt; API calls.&lt;/p&gt;
&lt;h3&gt;
  
  
  Cost Optimisation with Logs vs Metrics
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/cloudwatch/pricing/" rel="noopener noreferrer"&gt;Logs are more expensive than metrics&lt;/a&gt;, especially when we are storing large amounts of data over time. This is also true when logs are stored at a higher retention rate and are more detailed, which means higher storage costs.&lt;/p&gt;

&lt;p&gt;Metrics are cheaper to store because they are aggregated data points that do not require the same level of detail as logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/04/image-8.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ovu3ep9xvwgck7fm9xu.png" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CloudWatch treats each unique combination of dimensions as a separate metric, even if the metrics have the same metric name. For example, tracking &lt;code&gt;Http2xxCount&lt;/code&gt; across 10 endpoints already produces 10 distinct custom metrics. However, compared to logs, metrics are still usually much cheaper at scale.&lt;/p&gt;

&lt;p&gt;By embedding metrics into our log data via EMF, we are actually piggybacking metrics into logs, and letting CloudWatch extract metrics without duplicating effort. Thus, when using EMF, we will be paying for both, i.e.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log ingestion and storage (for the raw logs);&lt;/li&gt;
&lt;li&gt;The extracted custom metric (for the metric).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Hence, when we are leveraging EMF, we should consider expiring the logs faster if we only need the extracted metrics long-term, for example by setting a retention period on the log group, as sketched below.&lt;/p&gt;
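&lt;p&gt;A hedged CloudFormation sketch: adding &lt;code&gt;RetentionInDays&lt;/code&gt; to the log group resource tells CloudWatch Logs to expire the raw EMF logs automatically. The resource name and the 7-day value here are illustrative choices.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;logGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    LogGroupName: !Sub "/ecs/${ServiceName}-log-group"
    RetentionInDays: 7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;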
&lt;h4&gt;
  
  
  Granularity and Sampling
&lt;/h4&gt;

&lt;p&gt;Granularity refers to how frequently the metric data is collected. Fine granularity provides more detailed insights but can lead to increased data volume and costs.&lt;/p&gt;

&lt;p&gt;Sampling is a technique to reduce the amount of data collected by capturing only a subset of data points (especially helpful in high-traffic systems). However, the challenge is ensuring that we maintain enough data to make informed decisions while keeping storage and processing costs manageable.&lt;/p&gt;

&lt;p&gt;In our Orchard Core app above, the middleware we implemented calls &lt;code&gt;PutMetricDataAsync&lt;/code&gt; immediately for every request, which not only slows down our API but also costs more, because we pay for every custom metric sent to CloudWatch. Thus, we usually “buffer” the metrics first, and then batch-send them periodically. This can be done with, for example, a HostedService, which is an ASP.NET Core background service, to flush metrics at an interval.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Amazon.CloudWatch;
using Amazon.CloudWatch.Model;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Options;
using System.Collections.Concurrent;

public class MetricsPublisher(
        IAmazonCloudWatch cloudWatch, 
        IOptions&amp;lt;MetricsOptions&amp;gt; options,
        ILogger&amp;lt;MetricsPublisher&amp;gt; logger) : BackgroundService
{
    private readonly ConcurrentBag&amp;lt;MetricDatum&amp;gt; _pendingMetrics = new();

    // Called from the request pipeline; ConcurrentBag makes this thread-safe.
    public void TrackMetric(string metricName, double value, string endpointPath)
    {
        _pendingMetrics.Add(new MetricDatum
        {
            MetricName = metricName,
            Value = value,
            Unit = StandardUnit.Count,
            // Stamp the metric now; otherwise CloudWatch assigns the flush time.
            TimestampUtc = DateTime.UtcNow,
            Dimensions = new List&amp;lt;Dimension&amp;gt;
            {
                new Dimension
                {
                    Name = "Endpoint",
                    Value = endpointPath
                }
            }
        });
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        logger.LogInformation("MetricsPublisher started.");
        while (!stoppingToken.IsCancellationRequested)
        {
            await Task.Delay(TimeSpan.FromSeconds(options.FlushIntervalSeconds), stoppingToken);
            await FlushMetricsAsync();
        }
    }

    private async Task FlushMetricsAsync()
    {
        if (_pendingMetrics.IsEmpty) return;

        const int MaxMetricsPerRequest = 1000;

        var metricsToSend = new List&amp;lt;MetricDatum&amp;gt;();
        var metricsCount = 0;
        while (_pendingMetrics.TryTake(out var datum))
        {
            metricsToSend.Add(datum);

            metricsCount += 1;
            if (metricsCount &amp;gt;= MaxMetricsPerRequest) break;
        }

        var request = new PutMetricDataRequest
        {
            Namespace = options.Namespace,
            MetricData = metricsToSend
        };

        int attempt = 0;
        while (attempt &amp;lt; options.MaxRetryAttempts)
        {
            try
            {
                await cloudWatch.PutMetricDataAsync(request);
                logger.LogInformation("Flushed {Count} metrics to CloudWatch.", metricsToSend.Count);
                break;
            }
            catch (Exception ex)
            {
                attempt++;
                logger.LogWarning(ex, "Failed to flush metrics. Attempt {Attempt}/{MaxAttempts}", attempt, options.MaxRetryAttempts);
                if (attempt &amp;lt; options.MaxRetryAttempts)
                    await Task.Delay(TimeSpan.FromSeconds(options.RetryDelaySeconds));
                else
                    logger.LogError("Max retry attempts reached. Dropping {Count} metrics.", metricsToSend.Count);
            }
        }
    }

    public override async Task StopAsync(CancellationToken cancellationToken)
    {
        logger.LogInformation("MetricsPublisher stopping.");
        await FlushMetricsAsync();
        await base.StopAsync(cancellationToken);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our Orchard Core API, each incoming HTTP request may run on a different thread. Hence, we need a thread-safe data structure like &lt;code&gt;ConcurrentBag&lt;/code&gt; for storing the pending metrics.&lt;/p&gt;

&lt;p&gt;Please take note that &lt;code&gt;ConcurrentBag&lt;/code&gt; is designed to be an &lt;strong&gt;unordered collection&lt;/strong&gt;. It &lt;strong&gt;does not maintain the order of insertion&lt;/strong&gt; when items are taken from it. However, since the metrics we are sending are counts of HTTP status codes, the order in which the requests were processed does not matter.&lt;/p&gt;

&lt;p&gt;In addition, &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricData.html#API_PutMetricData_RequestParameters" rel="noopener noreferrer"&gt;the limit of &lt;code&gt;MetricData&lt;/code&gt; that we can send to CloudWatch per request is 1,000&lt;/a&gt;. Thus, we have the constant &lt;code&gt;MaxMetricsPerRequest&lt;/code&gt; to make sure that we &lt;a href="https://learn.microsoft.com/en-us/dotnet/api/system.collections.concurrent.concurrentbag-1.trytake" rel="noopener noreferrer"&gt;retrieve and remove&lt;/a&gt; at most 1,000 metrics from the &lt;code&gt;ConcurrentBag&lt;/code&gt; per flush.&lt;/p&gt;

&lt;p&gt;Finally, we can inject &lt;code&gt;MetricsPublisher&lt;/code&gt; into our middleware &lt;code&gt;EndpointStatisticsMiddleware&lt;/code&gt; so that it automatically tracks every API request.&lt;/p&gt;
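
&lt;p&gt;One registration detail is worth calling out: a class registered only through &lt;code&gt;AddHostedService&lt;/code&gt; cannot be injected into middleware. A common pattern, sketched below with an assumed "Metrics" configuration section, is to register &lt;code&gt;MetricsPublisher&lt;/code&gt; once as a singleton and expose that same instance as the hosted service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Amazon.CloudWatch;

var builder = WebApplication.CreateBuilder(args);

builder.Services.Configure&amp;lt;MetricsOptions&amp;gt;(
    builder.Configuration.GetSection("Metrics")); // assumed section name
builder.Services.AddAWSService&amp;lt;IAmazonCloudWatch&amp;gt;(); // AWSSDK.Extensions.NETCore.Setup

// One shared instance: injectable into the middleware AND running as a
// hosted service so ExecuteAsync flushes the buffer in the background.
builder.Services.AddSingleton&amp;lt;MetricsPublisher&amp;gt;();
builder.Services.AddHostedService(sp =&amp;gt; sp.GetRequiredService&amp;lt;MetricsPublisher&amp;gt;());

var app = builder.Build();
app.UseMiddleware&amp;lt;EndpointStatisticsMiddleware&amp;gt;();
app.Run();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
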

&lt;h3&gt;
  
  
  Wrap-Up
&lt;/h3&gt;

&lt;p&gt;In this post, we started by setting up Grafana on EC2 and connecting it to CloudWatch to visualise ECS metrics. After that, we explored two ways, i.e. the CloudWatch SDK and EMF logs, to send custom app-level metrics from our Orchard Core app.&lt;/p&gt;

&lt;p&gt;Whether we are monitoring system health or reporting on business KPIs, Grafana with CloudWatch offers a powerful observability stack that is both flexible and cost-aware.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=TQur9GJHIIQ" rel="noopener noreferrer"&gt;What is Observability?&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@angusyuen/why-you-should-use-cloudwatch-embedded-metric-format-a44eb821f97e" rel="noopener noreferrer"&gt;Why you should use CloudWatch Embedded Metric Format&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format.html" rel="noopener noreferrer"&gt;Embedding metrics within logs&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.atlassian.com/incident-management/kpis/sla-vs-slo-vs-sli" rel="noopener noreferrer"&gt;SLA vs. SLO vs. SLI: What’s the difference?&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-ServiceLevelObjectives.html#CloudWatch-ServiceLevelObjectives-concepts" rel="noopener noreferrer"&gt;Service level objectives (SLOs)&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricData.html" rel="noopener noreferrer"&gt;Amazon CloudWatch PutMetricData&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/dotnet/api/system.collections.concurrent.concurrentbag-1.trytake" rel="noopener noreferrer"&gt;ConcurrentBag.TryTake(T) Method&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>amazonwebservices</category>
      <category>c</category>
      <category>experience</category>
      <category>grafana</category>
    </item>
    <item>
      <title>From Design to Implementation: Crafting Headless APIs in Orchard Core with Apidog</title>
      <dc:creator>Goh Chun Lin</dc:creator>
      <pubDate>Mon, 31 Mar 2025 10:47:27 +0000</pubDate>
      <link>https://dev.to/gohchunlin/from-design-to-implementation-crafting-headless-apis-in-orchard-core-with-apidog-4g8f</link>
      <guid>https://dev.to/gohchunlin/from-design-to-implementation-crafting-headless-apis-in-orchard-core-with-apidog-4g8f</guid>
      <description>&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-55.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5xvtr4qepfor1ohlutu.png" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Last month, I had the opportunity to attend &lt;a href="https://www.youtube.com/watch?v=TV3OqKtd4qM" rel="noopener noreferrer"&gt;an online meetup&lt;/a&gt; hosted by the local &lt;a href="https://mvp.microsoft.com/en-US/mvp/profile/4a30abe5-708c-e711-811f-3863bb2ed1f8" rel="noopener noreferrer"&gt;Microsoft MVP Dileepa Rajapaksa&lt;/a&gt; from the &lt;a href="https://www.dotnet.sg" rel="noopener noreferrer"&gt;Singapore .NET Developers Community&lt;/a&gt;, where I was introduced to &lt;a href="https://apidog.com/" rel="noopener noreferrer"&gt;Apidog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;During the session, Mohammad L. U. Tanjim, the Product Manager of Apidog, gave a detailed walkthrough of API-First design and how Apidog can be used for this approach.&lt;/p&gt;

&lt;p&gt;Apidog helps us define, test, and document APIs in one place. Instead of manually writing Swagger docs and using separate API tools, Apidog combines everything. This means frontend developers can get mock APIs instantly, while backend developers and QAs get clear API specs with automatic testing support.&lt;/p&gt;

&lt;p&gt;Hence, for the customised headless APIs, we will adopt an API-First design approach. This approach ensures clarity, consistency, and efficient collaboration between backend and frontend teams while reducing future rework.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-22.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8b4n7beh9eks211b32x7.png" width="800" height="471"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Session “Build APIs Faster and Together with Apidog, ASP.NET, and Azure” conducted by Mohammad L. U. Tanjim.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  API-First Design Approach
&lt;/h3&gt;

&lt;p&gt;By designing APIs upfront, we reduce the likelihood of frequent changes that disrupt development. It also ensures consistent API behaviour and better long-term maintainability.&lt;/p&gt;

&lt;p&gt;For our frontend team, with a well-defined API specification, they can begin working with mock APIs, enabling parallel development. This eliminates dependencies where frontend work is blocked by backend completion.&lt;/p&gt;

&lt;p&gt;For the QA team, the API spec is important because it serves as a reference for automated testing. The QA engineers can validate API responses even before the implementation is ready.&lt;/p&gt;
&lt;h3&gt;
  
  
  API Design Journey
&lt;/h3&gt;

&lt;p&gt;In this article, we will embark on an API Design Journey by transforming a traditional travel agency in Singapore into an API-first system. To achieve this, we will use Apidog for API design and testing, and Orchard Core as a CMS to manage travel package information. Along the way, we will explore different considerations in API design, documentation, and integration to create a system that is both practical and scalable.&lt;/p&gt;

&lt;p&gt;Many traditional travel agencies in Singapore still rely on manual processes. They store travel package details in spreadsheets, printed brochures, or even handwritten notes. This makes it challenging to update, search, and distribute information efficiently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-23.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7m1lgc5ulrukx67skgtg.png" width="800" height="532"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The reliance on physical posters and brochures of a travel agency is interesting in today’s digital age.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;By introducing a headless CMS like Orchard Core, we can centralise travel package management while allowing different clients like mobile apps to access the data through APIs. This approach not only modernises the operations in the travel agency but also enables seamless integration with other systems.&lt;/p&gt;
&lt;h3&gt;
  
  
  API Design Journey 01: The Design Phase
&lt;/h3&gt;

&lt;p&gt;Now that we understand the challenges of managing travel packages manually, we will build the API with Orchard Core to enable seamless access to travel package data.&lt;/p&gt;

&lt;p&gt;Instead of jumping straight into coding, we will first focus on the design phase, ensuring that our API meets the business requirements. At this stage, we focus on designing endpoints, such as &lt;code&gt;GET /api/v1/packages&lt;/code&gt;, to manage the travel packages. We also plan how we will structure the response.&lt;/p&gt;

&lt;p&gt;Given the scope and complexity of a full travel package CMS, this article will focus on designing a subset of API endpoints, as shown in the screenshot below. This allows us to highlight essential design principles and approaches that can be applied across the entire API journey with Apidog.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-24.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sqcrgttwl3sh385idfw.png" width="800" height="486"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Let’s start with eight simple endpoints.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For the first endpoint “Get all travel packages”, we design it with the following query parameters to support flexible and efficient result filtering, pagination, sorting, and text search. This approach ensures that users can easily retrieve and navigate through travel packages based on their specific needs and preferences.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /api/v1/packages?page=1&amp;amp;pageSize=20&amp;amp;sortBy=price&amp;amp;sortOrder=asc&amp;amp;destinationId=4&amp;amp;priceRange[min]=500&amp;amp;priceRange[max]=2000&amp;amp;rating=4&amp;amp;searchTerm=spa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-26.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5bulumoh3sxzylt5r0y.png" width="800" height="486"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Pasting the API path with query parameters to the Endpoint field will auto populate the Request Params section in Apidog.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As with the request section, the Response can also be generated from a sample JSON that we expect the endpoint to return, as shown in the following screenshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-27.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4nvdueu5ofa86lk56d6.png" width="800" height="486"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;As shown in the Preview, the response structure can be derived from a sample JSON.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the screenshot above, the field “description” is marked as optional because it is the only property that does not appear in every entry of “data”.&lt;/p&gt;

&lt;p&gt;Besides the success status, we also need the HTTP 400 status code, which tells the client that something is wrong with the request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-28.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3yvq7i9efe5gtc3p25q.png" width="800" height="485"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;By default, for generic error responses like HTTP 400, there are response components that we can directly use in Apidog.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The reason why we need HTTP 400 is that, instead of processing an invalid request and returning incorrect or unexpected results, our API should explicitly reject it, ensuring that the client knows what needs to be fixed. This improves both developer experience and API reliability.&lt;/p&gt;

&lt;p&gt;After completing the endpoint for getting all travel packages, we also have another POST endpoint to search travel packages.&lt;/p&gt;

&lt;p&gt;While GET is the standard method for retrieving data from an API, complex search queries involving multiple parameters, filters, or file uploads might require the use of a POST request. This is particularly true when dealing with advanced search forms or large amounts of data, which cannot be easily represented as URL query parameters. In these cases, POST allows us to send the parameters in the body of the request, ensuring the URL remains manageable and avoiding URL length limits.&lt;/p&gt;

&lt;p&gt;For example, let’s assume this POST endpoint allows us to search for travel packages with the following body.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "destination": "Singapore",
    "priceRange": {
        "min": 500,
        "max": 2000
    },
    "rating": 4,
    "amenities": ["pool", "spa"],
    "files": [
        {
            "fileType": "image",
            "file": "base64-encoded-image-content"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can also easily generate the data schema for the body by pasting this JSON as example into Apidog, as shown in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-29.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsj0dp5t6uamovt5teecz.png" width="800" height="485"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Setting up the data schema for the body of an HTTP POST request.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When making an HTTP POST request, the client sends data to the server. While JSON in the request body is common, there is also another format used in APIs, i.e. &lt;strong&gt;&lt;code&gt;multipart/form-data&lt;/code&gt;&lt;/strong&gt; (also known as &lt;code&gt;form-data&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;form-data&lt;/code&gt; format is used when the &lt;strong&gt;request body contains files, images, or binary data along with text fields&lt;/strong&gt;. So, if our endpoint &lt;code&gt;/api/v1/packages/{id}/reviews&lt;/code&gt; allows users to submit both text (review content and rating) and an image, using &lt;code&gt;form-data&lt;/code&gt; is the best choice, as demonstrated in the following screenshot.&lt;/p&gt;
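
&lt;p&gt;As a rough sketch, this is how a client could build such a &lt;code&gt;multipart/form-data&lt;/code&gt; request in C# with &lt;code&gt;HttpClient&lt;/code&gt;; the field names, file name, and URL here are assumptions for illustration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using var httpClient = new HttpClient();
using var form = new MultipartFormDataContent();

// Text fields of the review (field names are assumed for illustration).
form.Add(new StringContent("Great trip, loved the spa!"), "content");
form.Add(new StringContent("5"), "rating");

// The binary image part, with an explicit content type and file name.
var imageBytes = await File.ReadAllBytesAsync("review-photo.jpg");
var imageContent = new ByteArrayContent(imageBytes);
imageContent.Headers.ContentType =
    new System.Net.Http.Headers.MediaTypeHeaderValue("image/jpeg");
form.Add(imageContent, "photo", "review-photo.jpg");

// Post to the reviews endpoint designed earlier (package id 1 assumed).
var response = await httpClient.PostAsync(
    "https://localhost:5001/api/v1/packages/1/reviews", form);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
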

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-32.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj69vtmcj82xpbowfppq8.png" width="800" height="485"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Setting up a request body which is multipart/form-data in Apidog.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  API Design Journey 02: Prototyping with Mockups
&lt;/h3&gt;

&lt;p&gt;When designing the API, it is common to debate, for example, whether reviews should be nested inside packages or treated as a separate resource. By using Apidog, we can quickly create mock APIs for both versions and test how they would work in different use cases. This helps us make a data-driven decision instead of having endless discussions.&lt;/p&gt;

&lt;p&gt;Once our endpoint is created, &lt;a href="https://apidog.com/articles/mock-api/" rel="noopener noreferrer"&gt;Apidog automatically generates a mock API based on our defined API spec&lt;/a&gt;, as shown in the following screenshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-35.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsk2fqnt0kaslx9q6cjab.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A list of mock API URLs for our “Get all travel packages” endpoint.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Clicking on the “Request” button next to each of the mock API URLs will bring us to the corresponding mock response, as shown in the following screenshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-36.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcixkuwh5igowt28mc0v.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Default mock response for HTTP 200 of our first endpoint “Get all travel packages”.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As shown in the screenshot above, some values in the mock response do not make sense, for example a negative &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;destinationId&lt;/code&gt;, a &lt;code&gt;rating&lt;/code&gt; which is supposed to be between 1 and 5, “East” as a sorting &lt;code&gt;direction&lt;/code&gt;, and so on. How can we fix them?&lt;/p&gt;

&lt;p&gt;Firstly, we will set the &lt;code&gt;id&lt;/code&gt; (and &lt;code&gt;destinationId&lt;/code&gt;) to be any positive integer number starting from 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-39.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwgml4y2wrmykgcpegif.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Setting id to be a positive integer number starting from 1.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Secondly, we update both the &lt;code&gt;price&lt;/code&gt; and &lt;code&gt;rating&lt;/code&gt; to be floats. In the following screenshot, we specify that the &lt;code&gt;rating&lt;/code&gt; can be any float from 1.0 to 5.0 with a single fraction digit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-40.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlac5kqk02n5c8c5ce0l.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Apidog is able to generate an example based on our condition under “Preview”.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Finally, we will indicate that the sorting &lt;code&gt;direction&lt;/code&gt; can only be either &lt;code&gt;ASC&lt;/code&gt; or &lt;code&gt;DESC&lt;/code&gt;, as shown in the following screenshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-42.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrsatxp1pw118vgsfhg2.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Configuring the possible value for the direction field.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With all the necessary mock value configurations in place, if we fetch the mock response again, we should get a response with more reasonable values, as demonstrated in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-43.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzca3vyn2xka17o6xg7f8.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Now the mock response looks more reasonable.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With the mock APIs, our frontend developers will be able to start building UI components without waiting for the backend to be completed. Also, as shown above, a mock API responds instantly, unlike real APIs that depend on database queries, authentication, or network latency. This makes UI development and unit testing faster.&lt;/p&gt;

&lt;p&gt;Speaking of testing, some test cases are difficult to create with a real API. For example, what if an API returns an error (500 Internal Server Error)? What if there are thousands of travel packages? With a mock API, we can control the responses and simulate rare cases easily.&lt;/p&gt;

&lt;p&gt;In addition, &lt;a href="https://docs.apidog.com/mock-expectations-618204m0" rel="noopener noreferrer"&gt;Apidog supports returning different mock data based on different request parameters&lt;/a&gt;. This makes the mock API more realistic and useful for developers. This is because if the mock API returns static data, frontend developers may only test one scenario. A dynamic mock API allows testing of various edge cases.&lt;/p&gt;

&lt;p&gt;For example, our travel package API allows admins to see all packages, including unpublished ones, while regular users only see public packages. We can thus set it up so that different bearer tokens return different sets of mock data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-44.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focn4r0tyyrw3w7a8slug.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;We are setting up the endpoint to return drafts when a correct admin token is provided in the request header with Mock Expectation.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://docs.apidog.com/mock-expectations-618204m0#returning-conditional-data" rel="noopener noreferrer"&gt;Mock Expectation&lt;/a&gt; feature, Apidog can return custom responses based on request parameters as well. For instance, it can return normal packages when the &lt;code&gt;destinationId&lt;/code&gt; is 1 and trigger an error when the &lt;code&gt;destinationId&lt;/code&gt; is 2.&lt;/p&gt;
&lt;h3&gt;
  
  
  API Design Journey 03: Documenting Phase
&lt;/h3&gt;

&lt;p&gt;With the endpoints designed properly in the earlier two phases, we can now proceed to create documentation, which offers a detailed explanation of the endpoints in our API. This documentation will include information such as HTTP methods, request parameters, and response formats.&lt;/p&gt;

&lt;p&gt;Fortunately, Apidog makes the documentation process smooth by integrating well within the API ecosystem. It also makes sharing easy, letting us export the documentation in formats like OpenAPI, HTML, and Markdown.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-45.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flu2lu861o7ft4swzt68n.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Apidog can export API spec in formats like OpenAPI, HTML, and Markdown.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We can also export our documentation on a folder basis to the OpenAPI Specification in Overview, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-47.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0k84uxm8k0u618u1fcg.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Custom export configuration for OpenAPI Specification.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We can also export the data as an offline document. Just click on the “Open URL” or “Permalink” button to view the raw JSON/YAML content directly in the browser. We can then paste the raw content into the Swagger Editor to view the Swagger UI of our API, as demonstrated in the following screenshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-48.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpkhfyrnml1dxxmxvllm.png" width="800" height="487"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The exported content from Apidog can be imported to Swagger Editor directly.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let’s say now we need to share the documentation with our team, stakeholders, or even the public. Our documentation thus needs to be accessible and easy to navigate. That is where exporting to HTML or Markdown comes in handy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-49.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccphygd34867prfcaamo.png" width="800" height="487"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Documentation is Markdown format, generated by Apidog.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Finally, Apidog also allows us to conveniently publish our API documentation as a webpage. There are two options: &lt;strong&gt;Quick Share&lt;/strong&gt;, for sharing parts of the docs with collaborators, and &lt;strong&gt;Publish Docs&lt;/strong&gt;, for making the full documentation publicly available.&lt;/p&gt;

&lt;p&gt;Quick Share is great for API collaborators because we can set a password for access and define an expiration time for the shared documentation. If no expiration is set, the link stays active indefinitely.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-50.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbozahkvn85kqknger432.png" width="800" height="487"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;API spec presented as a website and accessible by the collaborators. It also enables collaborators to generate client code for different languages.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  API Design Journey 04: The Development Phase
&lt;/h3&gt;

&lt;p&gt;With our API fully designed, mocked, and documented, it is time to bring it to life with actual code. Since we have already defined information such as the endpoints, request format, and response formats, implementation becomes much more straightforward. Now, let’s start building the backend to match our API specifications.&lt;/p&gt;

&lt;p&gt;Orchard Core generally supports two main approaches for designing APIs, i.e. Headless and Decoupled.&lt;/p&gt;

&lt;p&gt;In the headless approach, Orchard Core acts purely as a backend CMS, exposing content via APIs without a frontend. The frontend is built separately.&lt;/p&gt;

&lt;p&gt;In the decoupled approach, Orchard Core still provides APIs like in the headless approach, but it also serves some of the frontend rendering. It is a hybrid approach because some parts of the UI are rendered by Orchard using Razor Pages, while others rely on APIs.&lt;/p&gt;

&lt;p&gt;In fact, we can combine the best of both approaches and build customised headless APIs on Orchard Core using services like &lt;code&gt;IOrchardHelper&lt;/code&gt; to fetch content dynamically and &lt;code&gt;IContentManager&lt;/code&gt; to perform full CRUD operations on content items. This is the approach described in &lt;a href="https://gcl.gitbook.io/orchard-core-basics-companion-ocbc/content/headless-cms" rel="noopener noreferrer"&gt;the Orchard Core Basics Companion (OCBC) documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the endpoint of getting a list of travel packages, i.e. &lt;code&gt;/api/v1/packages&lt;/code&gt;, we can define it as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ApiController]
[Route("api/v1/packages")]
public class PackageController(
    IOrchardHelper orchard,
    ...) : Controller
{
    [HttpGet]
    public async Task&amp;lt;IActionResult&amp;gt; GetTravelPackages()
    {
        var travelPackages = await orchard.QueryContentItemsAsync(q =&amp;gt; 
            q.Where(c =&amp;gt; c.ContentType == "TravelPackage"));

        ...

        return Ok(travelPackages);
    }

    ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above, we are using the Orchard Core headless CMS approach and leveraging &lt;code&gt;IOrchardHelper&lt;/code&gt; to query content items of type “TravelPackage”. We then expose a REST API (GET &lt;code&gt;/api/v1/packages&lt;/code&gt;) that returns all travel packages stored as content items in the Orchard Core CMS.&lt;/p&gt;
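
&lt;p&gt;Building on this, the &lt;code&gt;page&lt;/code&gt; and &lt;code&gt;pageSize&lt;/code&gt; parameters from our design phase can be applied to the same query. The following is only a sketch, assuming YesSql’s &lt;code&gt;Skip&lt;/code&gt; and &lt;code&gt;Take&lt;/code&gt; on the underlying query; it is not the full implementation of the designed endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[HttpGet]
public async Task&amp;lt;IActionResult&amp;gt; GetTravelPackages(int page = 1, int pageSize = 20)
{
    // Filter to published TravelPackage items, then page the results.
    var travelPackages = await orchard.QueryContentItemsAsync(q =&amp;gt;
        q.Where(c =&amp;gt; c.ContentType == "TravelPackage" &amp;amp;&amp;amp; c.Published)
         .Skip((page - 1) * pageSize)
         .Take(pageSize));

    return Ok(travelPackages);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
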

&lt;h3&gt;
  
  
  API Design Journey 05: Testing of Actual Implementation
&lt;/h3&gt;

&lt;p&gt;Let’s assume our Dev Server Base URL is &lt;code&gt;localhost&lt;/code&gt;. This URL is set as a variable in the Develop Env, as shown in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-51.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5rm4777hopxpyfovzqc.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Setting Base URL for Develop Env on Apidog.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With the environment setup, we can now proceed to run our endpoint under that environment. As shown in the following screenshot, we are able to immediately validate the implementation of our endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-52.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nuaibkcgcaoqizx8eqg.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Validated the GET endpoint under Develop Env.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The screenshot above shows that through &lt;a href="https://apidog.com/blog/validation-testing/" rel="noopener noreferrer"&gt;API Validation Testing&lt;/a&gt;, the implementation of that endpoint has met all expected requirements.&lt;/p&gt;

&lt;p&gt;API validation tests are not just for simple checks. The feature is great for &lt;a href="https://docs.apidog.com/create-a-test-scenario-599311m0" rel="noopener noreferrer"&gt;handling complex, multi-step API workflows&lt;/a&gt; too. With them, we can chain multiple requests together, simulate real-world scenarios, and even run the same requests with different test data. This makes it easier to catch issues early and keep our API running smoothly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-53.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctnrynwxbha7sem5ggy5.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Populate testing steps based on our API spec in Apidog.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In addition, we can also set up &lt;a href="https://docs.apidog.com/scheduled-tasks-603702m0" rel="noopener noreferrer"&gt;Scheduled Tasks&lt;/a&gt;, a feature still in beta at the time of writing, to automatically run our test scenarios at specific times. This helps us monitor API performance, catch issues early, and ensure everything works as expected. Plus, we can review the execution results to stay on top of any failures.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-54.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpylr4ddddgq57b5baydq.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Result of running one of the endpoints on Develop Env.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrap-Up
&lt;/h3&gt;

&lt;p&gt;Throughout this article, we have walked through the process of designing, mocking, documenting, implementing, and testing a headless API in Orchard Core using Apidog. By following an API-first approach, we ensure that our API is well-structured, easy to maintain, and developer-friendly.&lt;/p&gt;

&lt;p&gt;With this approach, teams can collaborate more effectively and reduce friction in development. Now that the foundation is set, the next step could be integrating this API into a frontend app, optimising our API performance, or automating even more tests.&lt;/p&gt;

&lt;p&gt;Finally, with &lt;a href="https://towardsdev.com/swagger-ui-is-gone-in-net-9-heres-what-you-need-to-do-next-9a13e4fdcd4b" rel="noopener noreferrer"&gt;.NET 9 moving away from the built-in Swagger UI&lt;/a&gt;, developers now have to find alternatives for API documentation. As we can see, Apidog is a strong candidate because it combines API design, testing, and documentation in one tool. It simplifies collaboration while ensuring a smooth API-first design approach.&lt;/p&gt;
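
&lt;p&gt;For reference, here is a minimal sketch of the built-in OpenAPI support in .NET 9, provided by the Microsoft.AspNetCore.OpenApi package. It generates the OpenAPI document that tools like Apidog can import, but it does not ship a UI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var builder = WebApplication.CreateBuilder(args);

// Built-in OpenAPI document generation in .NET 9 (no bundled Swagger UI).
builder.Services.AddOpenApi();

var app = builder.Build();

// Serves the generated document, by default at /openapi/v1.json.
app.MapOpenApi();

app.Run();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
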

</description>
      <category>aspnet</category>
      <category>c</category>
      <category>event</category>
      <category>experience</category>
    </item>
    <item>
      <title>Automate Orchard Core Deployment on AWS ECS with CloudFormation</title>
      <dc:creator>Goh Chun Lin</dc:creator>
      <pubDate>Sun, 09 Mar 2025 08:56:40 +0000</pubDate>
      <link>https://dev.to/gohchunlin/automate-orchard-core-deployment-on-aws-ecs-with-cloudformation-14b8</link>
      <guid>https://dev.to/gohchunlin/automate-orchard-core-deployment-on-aws-ecs-with-cloudformation-14b8</guid>
      <description>&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-16.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6o8ckxwem33koac6ogd.png" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For .NET developers looking for a Content Management System (CMS) solution, Orchard Core presents a compelling, open-source option. Orchard Core is a CMS built on ASP.NET Core. When deploying Orchard Core on AWS, the Elastic Container Service (ECS) provides a solid hosting platform that can handle high traffic, keep costs down, and remain stable.&lt;/p&gt;

&lt;p&gt;However, finding clear end-to-end instructions for deploying Orchard Core to ECS can be difficult. Without them, we may need to do more testing and troubleshooting, and potentially end up with a less efficient or less secure setup. The lack of a standard deployment process can also complicate infrastructure management and hinder the implementation of CI/CD. This is where Infrastructure as Code (IaC) comes in.&lt;/p&gt;

&lt;h3&gt;
  
  
  Source Code
&lt;/h3&gt;

&lt;p&gt;The complete CloudFormation template we built in this article is available on GitHub: &lt;a href="https://github.com/gcl-team/Experiment.OrchardCore.Main/blob/main/Infrastructure.yml" rel="noopener noreferrer"&gt;https://github.com/gcl-team/Experiment.OrchardCore.Main/blob/main/Infrastructure.yml&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  CloudFormation
&lt;/h3&gt;

&lt;p&gt;IaC provides a solution for automating infrastructure management. With IaC, we define our entire infrastructure which hosts Orchard Core setup as code. This code can then be version-controlled, tested, and deployed just like application code.&lt;/p&gt;

&lt;p&gt;CloudFormation is an AWS service that implements IaC. By using CloudFormation, AWS automatically provisions and configures all the necessary resources for our Orchard Core hosting, ensuring consistent and repeatable deployments across different environments.&lt;/p&gt;

&lt;p&gt;This article is for .NET developers who know a bit about AWS concepts such as ECS or CloudFormation. We’ll demonstrate how CloudFormation can help set up the infrastructure for hosting Orchard Core on AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-15.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa920cwq3ct64ycmrbtzg.png" width="800" height="392"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The desired infrastructure of our CloudFormation setup.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now let’s start writing our CloudFormation as follows. We start by defining some useful parameters that we will be using later. Some of the parameters will be discussed in the following relevant sections.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: '2010-09-09'
Description: "Infrastructure for Orchard Core CMS"

Parameters:
  VpcCIDR:
    Type: String
    Description: "VPC CIDR Block"
    Default: 10.0.0.0/16
    AllowedPattern: '((\d{1,3})\.){3}\d{1,3}/\d{1,2}'
  ApiGatewayStageName:
    Type: String
    Default: "production"
    AllowedValues:
      - production
      - staging
      - development
  ServiceName:
    Type: String
    Default: cld-orchard-core
    Description: "The service name"
  CmsDBName:
    Type: String
    Default: orchardcorecmsdb
    Description: "The name of the database to create"
  CmsDbMasterUsername:
    Type: String
    Default: orchardcoreroot
  HostedZoneId:
    Type: String
    Default: &amp;lt;your Route 53 hosted zone id&amp;gt;
  HostedZoneName:
    Type: String
    Default: &amp;lt;your custom domain&amp;gt;
  CmsHostname:
    Type: String
    Default: orchardcms
  OrchardCoreImage:
    Type: String
    Default: &amp;lt;your ECR link&amp;gt;/orchard-core-cms:latest
  EcsAmi:
    Description: The Amazon Machine Image ID used for the cluster
    Type: AWS::SSM::Parameter::Value&amp;lt;AWS::EC2::Image::Id&amp;gt;
    Default: /aws/service/ecs/optimized-ami/amazon-linux-2023/recommended/image_id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Dockerfile
&lt;/h3&gt;

&lt;p&gt;The Dockerfile is quite straightforward.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Global Arguments
ARG DCR_URL=mcr.microsoft.com
ARG BUILD_IMAGE=${DCR_URL}/dotnet/sdk:8.0-alpine
ARG RUNTIME_IMAGE=${DCR_URL}/dotnet/aspnet:8.0-alpine

# Build Container
FROM ${BUILD_IMAGE} AS builder
WORKDIR /app

COPY . .

RUN dotnet restore
RUN dotnet publish ./OCBC.HeadlessCMS/OCBC.HeadlessCMS.csproj -c Release -o /app/src/out

# Runtime Container
FROM ${RUNTIME_IMAGE}

## Install cultures
RUN apk add --no-cache \
   icu-data-full \
   icu-libs

ENV ASPNETCORE_URLS http://*:5000

WORKDIR /app

COPY --from=builder /app/src/out .

EXPOSE 5000

ENTRYPOINT ["dotnet", "OCBC.HeadlessCMS.dll"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the Dockerfile, we then can build the Orchard Core project locally with the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build --platform=linux/amd64 -t orchard-core-cms:v1 .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--platform&lt;/code&gt; flag specifies the target OS and architecture for the image being built. Even though it is optional, it is particularly useful when building images on a different platform (like macOS or Windows) and deploying them to another platform (like Amazon Linux) that has a different architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4djutpn99k0nqzv2pqfk.png" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;ARM-based Apple Silicon was announced in 2020. (Image Credit: The Verge)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I am using macOS with ARM-based Apple Silicon, whereas the Amazon Linux AMI uses the &lt;code&gt;amd64&lt;/code&gt; (x86_64) architecture. Hence, if I do not specify the platform, the image I build on my MacBook will be incompatible with the EC2 instance.&lt;/p&gt;

&lt;p&gt;Once the image is built, we will push it to the &lt;strong&gt;Elastic Container Registry (ECR)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We choose ECR because it is directly integrated with ECS, which means deploying images from ECR to ECS is smooth. When ECS needs to pull an image from ECR, it automatically uses the IAM role to authenticate and authorise the request to ECR. The execution role of our ECS is associated with the &lt;strong&gt;&lt;code&gt;AmazonECSTaskExecutionRolePolicy&lt;/code&gt;&lt;/strong&gt; IAM policy, which allows ECS to pull images from ECR.&lt;/p&gt;

&lt;p&gt;ECR also comes with built-in support for image scanning, which automatically scans our images for vulnerabilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-17.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpzyndutth4ckqy3yne4c.png" width="800" height="486"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Image scanning in ECR helps ensure our images are secure before we deploy them.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Unit 01: IAM Role
&lt;/h3&gt;

&lt;p&gt;Technically, we are able to run Orchard Core on ECS without any ECS task role. However, that is possible only if our Orchard Core app does not need to interact with AWS services. In practice, not only our app but most modern web apps need to integrate with AWS services such as S3 and CloudWatch. Hence, the first thing we need to work on is setting up an ECS task role.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iamRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: !Sub "${AWS::StackName}-ecs"
    Path: !Sub "/${AWS::StackName}/"
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - ecs-tasks.amazonaws.com
          Action:
            - sts:AssumeRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In AWS IAM, permissions are assigned to roles, not directly to the services that need them. Thus, we cannot directly assign IAM policies to ECS tasks. Instead, we assign those policies to a role, and then the ECS task temporarily assumes that role to gain those permissions, as shown in the configuration above.&lt;/p&gt;

&lt;p&gt;Roles are considered temporary because they are only assumed for the duration that the ECS task needs to interact with AWS resources. Once the ECS task stops, the temporary permissions are no longer valid, and the service loses access to the resources.&lt;/p&gt;

&lt;p&gt;Hence, by using roles and AssumeRole, we follow the principle of least privilege. The ECS task is granted only the permissions it needs and can only use them temporarily.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unit 02: CloudWatch Log Group
&lt;/h3&gt;

&lt;p&gt;ECS tasks, by default, do not have logging enabled.&lt;/p&gt;

&lt;p&gt;Hence, granting our ECS task permission to write to CloudWatch Logs is one of the first things we should do when setting up ECS tasks. Setting logging up early helps to avoid surprises later on when our ECS tasks are running.&lt;/p&gt;

&lt;p&gt;To set up the logging, we first need to specify a &lt;strong&gt;Log Group&lt;/strong&gt;, the place in CloudWatch where logs go. While ECS itself can create the log group automatically when the ECS task starts (if it does not already exist), it is good practice to define the log group in CloudFormation to ensure it exists ahead of time and can be managed within our IaC.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ecsLogGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    LogGroupName: !Sub "/ecs/${ServiceName}-log-group"
    RetentionInDays: 3
    Tags:
      - Key: Stack
        Value: !Ref AWS::StackName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following policy will grant the necessary permissions to write logs to CloudWatch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ecsLoggingPolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: !Sub "${AWS::StackName}-cloudwatch-logs-policy"
    Roles:
      - !Ref iamRole
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action:
            - logs:CreateLogStream
            - logs:PutLogEvents
          Resource:
            - !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/ecs/${ServiceName}-log-group/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By separating the logging policy into its own resource, we make it easier to manage and update policies independently of the ECS task role. After defining the policy, we attach it to the ECS task role by referencing it in the &lt;code&gt;Roles&lt;/code&gt; section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-1.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1slvv7htw2o6xauaudfk.png" width="800" height="486"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The logging setup helps us consolidate log events from the container into a centralised log group in CloudWatch.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Unit 03: S3 Bucket
&lt;/h3&gt;

&lt;p&gt;We will be storing the files uploaded to Orchard Core through its Media module on Amazon S3. So, we need to configure our S3 bucket as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mediaContentBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: !Join
      - '-'
      - - !Ref ServiceName
        - !Ref AWS::Region
        - !Ref AWS::AccountId
    OwnershipControls:
      Rules:
        - ObjectOwnership: BucketOwnerPreferred
    Tags:
      - Key: Stack
        Value: !Ref AWS::StackName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since bucket names must be globally unique, we compose the name dynamically from the service name, AWS Region, and AWS Account ID.&lt;/p&gt;

&lt;p&gt;Since our Orchard Core can be running in multiple ECS tasks that upload media files to a shared S3 bucket, the &lt;code&gt;BucketOwnerPreferred&lt;/code&gt; setting ensures that even if media files are uploaded by different ECS tasks, the owner of the S3 bucket can still access, delete, or modify any of those media files without needing additional permissions for each uploaded object.&lt;/p&gt;

&lt;p&gt;The bucket owner having full control is a &lt;strong&gt;security necessity&lt;/strong&gt; in many cases because it allows the owner to apply policies, access controls, and auditing in a centralised way, maintaining the security posture of the bucket.&lt;/p&gt;

&lt;p&gt;However, even if the bucket owner has control, the principle of least privilege should still apply. For example, only the ECS task responsible for Orchard Core should be allowed to interact with the media objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mediaContentBucketPolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: !Sub "${mediaContentBucket}-s3-policy"
    Roles:
      - !Ref iamRole
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action:
            - s3:ListBucket
          Resource: !GetAtt mediaContentBucket.Arn
        - Effect: Allow
          Action:
            - s3:PutObject
            - s3:GetObject
          Resource: !Join ["/", [!GetAtt mediaContentBucket.Arn, "*"]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;s3:ListBucket&lt;/code&gt; permission is necessary for the Orchard Core Media module to work properly. Meanwhile, both &lt;code&gt;s3:PutObject&lt;/code&gt; and &lt;code&gt;s3:GetObject&lt;/code&gt; are used for uploading and downloading media files.&lt;/p&gt;

&lt;h3&gt;
  
  
  IAM Policy
&lt;/h3&gt;

&lt;p&gt;Now, let’s pause for a while to talk about the policies that we have added above for the log group and S3.&lt;/p&gt;

&lt;p&gt;In AWS, we mostly deal with managed policies and inline policies depending on whether the policy needs to be reused or tightly scoped to one role.&lt;/p&gt;

&lt;p&gt;We use &lt;code&gt;AWS::IAM::ManagedPolicy&lt;/code&gt; when a permission needs to be reused by multiple roles or services, so it is frequently used for company-wide security policies. It is therefore not suitable for our Orchard Core examples above. Instead, we use &lt;code&gt;AWS::IAM::Policy&lt;/code&gt; because it is for a permission that is tightly coupled to a single role and will not be reused elsewhere.&lt;/p&gt;

&lt;p&gt;In addition, since &lt;code&gt;AWS::IAM::Policy&lt;/code&gt; is tightly tied to its entities, it is deleted when the corresponding entities are deleted. This is a key difference from &lt;code&gt;AWS::IAM::ManagedPolicy&lt;/code&gt;, which remains even if the entities that use it are deleted. This explains why managed policies are used for company-wide policies: they provide better long-term management for permissions that may be reused across multiple roles.&lt;/p&gt;

&lt;p&gt;We can summarise the differences between two of them into the following table.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Managed Policy&lt;/th&gt;
&lt;th&gt;Policy&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scope&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Company-wide.&lt;/td&gt;
&lt;td&gt;Tight coupling to a single entity.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Deletion Behaviour&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;em&gt;Persists&lt;/em&gt; even if attached entities are deleted.&lt;/td&gt;
&lt;td&gt;Deleted along with the associated entity.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Versioning Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Supports versioning (can roll back).&lt;/td&gt;
&lt;td&gt;No.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Limit per Entity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;20.&lt;/td&gt;
&lt;td&gt;10.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best Use Case&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Long-term, reusable permissions (e.g., company-wide security policies).&lt;/td&gt;
&lt;td&gt;One-off, tightly scoped permissions (e.g., role-specific needs).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;
  
  
  Unit 04: Aurora Database Cluster
&lt;/h3&gt;

&lt;p&gt;Orchard Core supports relational database management systems (RDBMS). Unlike traditional CMS platforms that rely on a single database engine, Orchard Core offers flexibility by supporting multiple RDBMS options, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Microsoft SQL Server;&lt;/li&gt;
&lt;li&gt;PostgreSQL;&lt;/li&gt;
&lt;li&gt;MySQL;&lt;/li&gt;
&lt;li&gt;SQLite.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While SQLite is lightweight and easy to use, it is not suitable for production deployments on AWS. SQLite is designed for local storage, not multi-user concurrent access. AWS instead provides fully managed relational databases (RDS and Aurora).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/02/image-3.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1sea0fat5c545lgwesw.png" width="800" height="449"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The database engines supported by Amazon RDS and Amazon Aurora.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;While Amazon RDS is a well-known choice for relational databases, we can also consider &lt;a href="https://aws.amazon.com/rds/aurora/" rel="noopener noreferrer"&gt;Amazon Aurora&lt;/a&gt;, which &lt;a href="https://aws.amazon.com/blogs/aws/highly-scalable-mysql-compat-rds-db-engine/" rel="noopener noreferrer"&gt;was launched in 2014&lt;/a&gt;. Unlike traditional RDS, Aurora storage automatically scales up and down with usage, reducing costs by ensuring we only pay for what we use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/02/image-2.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F44c1ehvpzmc0skanwhcg.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;High performance and scalability of Amazon Aurora. (Image Source: Amazon Aurora MySQL PostgreSQL Features)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In addition, Aurora is faster than standard PostgreSQL and MySQL, as shown in the screenshot above. It also offers &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.AuroraHighAvailability.html" rel="noopener noreferrer"&gt;built-in high availability with Multi-AZ replication&lt;/a&gt;. This is critical for a CMS like Orchard Core, which relies on fast queries and efficient data handling.&lt;/p&gt;

&lt;p&gt;It is important to note that, while Aurora is optimised for AWS, it does not lock us in, as we retain full control over our data and schema. Hence, if we ever need to switch, we can export data and move to standard MySQL/PostgreSQL on another cloud or on-premises.&lt;/p&gt;

&lt;p&gt;Instead of manually setting up Aurora, we will be using CloudFormation to ensure that the correct database instance, networking, security settings, and additional configurations are managed consistently.&lt;/p&gt;

&lt;p&gt;Aurora is cluster-based, rather than relying on standalone DB instances like traditional RDS. Thus, instead of a single instance, we deploy a DB cluster, which consists of a primary writer node and multiple reader nodes for scalability and high availability.&lt;/p&gt;

&lt;p&gt;Because of this cluster-based architecture, Aurora does not use the usual &lt;code&gt;DBParameterGroup&lt;/code&gt; like standalone RDS instances. Instead, it requires a &lt;code&gt;DBClusterParameterGroup&lt;/code&gt; to apply settings &lt;strong&gt;at the cluster level&lt;/strong&gt;, ensuring all instances in the cluster inherit the same configuration, as shown in the following CloudFormation template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cmsDBClusterParameterGroup:
  Type: AWS::RDS::DBClusterParameterGroup
  Properties:
    Description: "Aurora Provisioned Postgres DB Cluster Parameter Group"
    Family: aurora-postgresql16
    Parameters:
      timezone: UTC # Ensures consistent timestamps
      rds.force_ssl: 1 # Enforce SSL for security
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first parameter we configure is the &lt;code&gt;timezone&lt;/code&gt;, which we set to UTC to ensure consistency. When we store date-time values in the database, we should use &lt;code&gt;TIMESTAMPTZ&lt;/code&gt; for timestamps and store the original time zone as a &lt;code&gt;TEXT&lt;/code&gt; field. When we later need to display the time in a local format, we can use the &lt;code&gt;AT TIME ZONE&lt;/code&gt; feature in PostgreSQL to convert from UTC to the desired local time zone. This matters because, with the cluster time zone set to UTC, PostgreSQL returns all timestamps in UTC; storing the time zone ensures we can always reconstruct the correct local time when needed, as shown in the query below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT event_time_utc AT TIME ZONE timezone AS event_local_time
FROM events;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, we enable &lt;code&gt;rds.force_ssl&lt;/code&gt; so that all connections to our Aurora cluster are encrypted using SSL. This is necessary to prevent data from being sent in plaintext. Even if our Aurora database sits behind a bastion host, enforcing SSL is still recommended because it encrypts all data in transit, adding an extra layer of security. It is also worth mentioning that enabling SSL has little performance impact while adding a significant security benefit.&lt;/p&gt;

&lt;p&gt;Once the &lt;code&gt;DBClusterParameterGroup&lt;/code&gt; is configured, the next step is to configure the &lt;strong&gt;&lt;code&gt;AWS::RDS::DBCluster&lt;/code&gt;&lt;/strong&gt; resource, where we define the cluster’s main configuration together with the parameter group defined above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cmsDatabaseCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    BackupRetentionPeriod: 7  
    DatabaseName: !Ref CmsDBName
    DBClusterIdentifier: !Ref AWS::StackName
    DBClusterParameterGroupName: !Ref cmsDBClusterParameterGroup
    DeletionProtection: true
    Engine: aurora-postgresql
    EngineMode: provisioned
    EngineVersion: 16.1
    MasterUsername: !Ref CmsDbMasterUsername
    MasterUserPassword: !Sub "{{resolve:ssm-secure:/OrchardCoreCms/DbPassword:1}}"
    DBSubnetGroupName: !Ref cmsDBSubnetGroup
    VpcSecurityGroupIds:
      - !GetAtt cmsDBSecurityGroup.GroupId
    Tags:
      - Key: Stack
        Value: !Ref AWS::StackName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s go through the &lt;code&gt;Properties&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  About BackupRetentionPeriod
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;BackupRetentionPeriod&lt;/code&gt; parameter in the Aurora DB cluster determines how many days automated backups are retained by AWS. It can be from a minimum of 1 day to a maximum of 35 days for Aurora databases. For most business applications, 7 days of backups is often enough to handle common recovery scenarios unless we are required by law or regulation to keep backups for a certain period.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-storage-backup.html#aurora-storage-backup.automated" rel="noopener noreferrer"&gt;Aurora automatically performs &lt;strong&gt;incremental&lt;/strong&gt; backups&lt;/a&gt; for our database every day, which means that it does not back up the entire database each time. Instead, it only stores the changes since the previous backup. This makes the backup process very efficient, especially for databases with little or no changes over time. If our CMS database remains relatively static, then the backup storage cost will remain very low or even free as long as our total backup data for the whole retention period does not exceed the storage capacity of our database.&lt;/p&gt;

&lt;p&gt;The total billed usage for backups therefore depends on how much data changes each day and on whether the total backup size exceeds the volume size. If our database does not experience massive daily changes, the backup storage will likely remain within the database size and be free.&lt;/p&gt;

&lt;h4&gt;
  
  
  About DBClusterIdentifier
&lt;/h4&gt;

&lt;p&gt;For the &lt;code&gt;DBClusterIdentifier&lt;/code&gt;, we set it to the stack name, which makes it unique to the specific CloudFormation stack. This can be useful for differentiating clusters.&lt;/p&gt;

&lt;h4&gt;
  
  
  About DeletionProtection
&lt;/h4&gt;

&lt;p&gt;In production environments, data loss or downtime can be disastrous. &lt;code&gt;DeletionProtection&lt;/code&gt; ensures that our CMS DB cluster will not be deleted unless the protection is explicitly disabled. There is no “shortcut” to bypass it for production resources: if &lt;code&gt;DeletionProtection&lt;/code&gt; is enabled on the DB cluster, even CloudFormation will fail to delete it. The only way to delete the DB cluster is to disable &lt;code&gt;DeletionProtection&lt;/code&gt; first via the AWS Console, CLI, or SDK.&lt;/p&gt;

&lt;h4&gt;
  
  
  About EngineMode
&lt;/h4&gt;

&lt;p&gt;In Aurora, &lt;code&gt;EngineMode&lt;/code&gt; refers to the database operational mode. There are two primary modes, i.e. Provisioned and Serverless. For Orchard Core, Provisioned mode is typically the better choice because it ensures high availability, automatic recovery, and read scaling. If the CMS receives a consistent level of traffic, Provisioned mode will handle that load well. Serverless is useful if our CMS workload has unpredictable traffic patterns or usage spikes.&lt;/p&gt;
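&lt;p&gt;For comparison, if we did opt for Aurora Serverless v2, the cluster would keep &lt;code&gt;EngineMode: provisioned&lt;/code&gt; but add a scaling configuration, and the instance would use the &lt;code&gt;db.serverless&lt;/code&gt; class. A minimal sketch, with capacity values that are assumptions for illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cmsDatabaseCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    Engine: aurora-postgresql
    EngineMode: provisioned # Serverless v2 still uses provisioned mode
    ServerlessV2ScalingConfiguration:
      MinCapacity: 0.5 # Aurora Capacity Units (ACUs)
      MaxCapacity: 4
    # ...remaining properties as in our provisioned template...

cmsDBInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    DBClusterIdentifier: !Ref cmsDatabaseCluster
    DBInstanceClass: db.serverless # scales within the cluster ACU range
    Engine: aurora-postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;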

&lt;h4&gt;
  
  
  About MasterUserPassword
&lt;/h4&gt;

&lt;p&gt;Storing database passwords directly in the CloudFormation template is a security risk.&lt;/p&gt;

&lt;p&gt;There are a few other ways to handle sensitive data like passwords in CloudFormation, for example using &lt;strong&gt;AWS Secrets Manager&lt;/strong&gt; and &lt;strong&gt;AWS Systems Manager (SSM) Parameter Store&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;AWS Secrets Manager is a more advanced solution that offers automatic &lt;strong&gt;password rotation&lt;/strong&gt;, which is useful for situations where we need to regularly rotate credentials. However, &lt;a href="https://aws.amazon.com/secrets-manager/pricing/" rel="noopener noreferrer"&gt;it may incur additional costs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;On the other hand, SSM Parameter Store provides a &lt;strong&gt;simpler and cost-effective solution&lt;/strong&gt; for securely storing and referencing secrets, including database passwords. We can store up to &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-advanced-parameters.html" rel="noopener noreferrer"&gt;10,000 parameters (standard type) without any cost&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Hence, we use SSM Parameter Store to securely store the database password and reference it in CloudFormation without exposing it directly in our template, reducing the security risks and providing an easier management path for our secrets.&lt;/p&gt;
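&lt;p&gt;One caveat: the &lt;code&gt;AWS::SSM::Parameter&lt;/code&gt; resource in CloudFormation only supports the &lt;code&gt;String&lt;/code&gt; and &lt;code&gt;StringList&lt;/code&gt; types, so the &lt;code&gt;SecureString&lt;/code&gt; itself has to be created outside the template, for example in the AWS Console as shown in the screenshots below; the dynamic reference in &lt;code&gt;MasterUserPassword&lt;/code&gt; above then resolves it at deploy time. Non-secret configuration values can still be declared in the template; a small sketch, with a parameter name that is an assumption:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cmsDbNameParameter:
  Type: AWS::SSM::Parameter
  Properties:
    Name: /OrchardCoreCms/DbName # assumption: a non-secret config value
    Type: String
    Value: !Ref CmsDBName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;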

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/02/image-10.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxn46disuqckyq1imv1nw.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Database password is stored as a SecureString in Parameter Store.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/02/image-9.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foj8j4rsi7mvrqfvh0qt6.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Database password is stored as a SecureString in Parameter Store.&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  About DBSubnetGroupName and VpcSecurityGroupIds
&lt;/h4&gt;

&lt;p&gt;These two configurations, covering the subnet group and the VPC security group, involve networking considerations. We will discuss them further when we dive into the networking setup later.&lt;/p&gt;
&lt;h3&gt;
  
  
  Unit 05: Aurora Database Instance
&lt;/h3&gt;

&lt;p&gt;Now that we have covered the Aurora DB cluster, which is the overall container for the database, let’s move on to the DB instance.&lt;/p&gt;

&lt;p&gt;Think of the cluster as the foundation, and the DB instances are where the actual database operations take place. The DB instances are the ones that handle the read and write operations, replication, and scaling for the workload. So, in order for our CMS to work correctly, we need to define the DB instance configuration, which runs on top of the DB cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cmsDBInstance:
  Type: 'AWS::RDS::DBInstance'
  DeletionPolicy: Retain
  Properties:
    DBInstanceIdentifier: !Sub "${AWS::StackName}-db-instance"
    DBInstanceClass: db.t4g.medium
    DBClusterIdentifier: !Ref cmsDatabaseCluster
    DBSubnetGroupName: !Ref cmsDBSubnetGroup
    Engine: aurora-postgresql
    Tags:
      - Key: Stack
        Value: !Ref AWS::StackName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For our Orchard Core CMS, we do not expect very high traffic or intensive database operations. Hence, we choose to use &lt;strong&gt;&lt;code&gt;db.t4g&lt;/code&gt;&lt;/strong&gt;. &lt;a href="https://aws.amazon.com/about-aws/whats-new/2021/09/amazon-aurora-supports-aws-graviton2-based-t4g-instances/" rel="noopener noreferrer"&gt;T4g database instances are &lt;strong&gt;AWS Graviton2-based&lt;/strong&gt;&lt;/a&gt;, thus they are more cost-efficient than traditional instance types, especially for workloads like a CMS that does not require continuous high performance. However, there are &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.BestPractices.Performance.html#AuroraMySQL.BestPractices.T2Medium" rel="noopener noreferrer"&gt;a few things we may need to look into when using T instance classes&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unit 06: Virtual Private Cloud (VPC)
&lt;/h3&gt;

&lt;p&gt;Now that we have covered how the Aurora cluster and instance work, the next important thing is ensuring they are deployed in a secure and well-structured network. This is where the Virtual Private Cloud (VPC) comes in.&lt;/p&gt;

&lt;p&gt;A VPC is a virtual network in AWS where we define our infrastructure networking. It is like a private network inside AWS where we control IP ranges, subnets, routing, and security.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/02/image-4.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uoj4jt799n0nzunbhb9.png" width="800" height="485"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The default VPC in Malaysia region.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;By the way, you might have noticed that AWS automatically provides a &lt;strong&gt;default VPC&lt;/strong&gt; in every region. It is a ready-to-use network setup that allows us to launch resources without configuring networking manually.&lt;/p&gt;

&lt;p&gt;While it is convenient, it is recommended not to use the default VPC. The default VPC is created automatically with predefined settings, which means we do not have full control over its configuration, such as subnet sizes, routing, and security groups. It also has public subnets by default, which can accidentally expose internal resources to the Internet.&lt;/p&gt;

&lt;p&gt;Since we are setting up our own VPC, one key decision we need to make is the CIDR block, i.e. the range of private IPs we allocate to our network. This is important because it determines how many subnets and IP addresses we can have within our VPC.&lt;/p&gt;

&lt;p&gt;To future-proof our infrastructure, we will be using a &lt;code&gt;/16&lt;/code&gt; CIDR block, as shown in the &lt;code&gt;VpcCIDR&lt;/code&gt; parameter in our CloudFormation template. This gives us 65,536 IP addresses, which we can break into 64 subnets of &lt;code&gt;/22&lt;/code&gt; (each having 1,024 IPs). 64 subnets is usually more than enough for a well-structured VPC, because most companies do not need that many subnets in a single VPC unless they run very complex workloads. Just in case one service needs more IPs, we can allocate a larger subnet, for example &lt;code&gt;/21&lt;/code&gt; instead of &lt;code&gt;/22&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In this VPC setup, we also avoid creating too many VPCs unnecessarily. Managing multiple VPCs means handling &lt;a href="https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html" rel="noopener noreferrer"&gt;VPC peering&lt;/a&gt;, which increases operational overhead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vpc:
  Type: AWS::EC2::VPC
  Properties:
    CidrBlock: !Ref VpcCIDR
    InstanceTenancy: default
    EnableDnsSupport: true
    EnableDnsHostnames: true
    Tags:
      - Key: Name
        Value: !Sub "${AWS::AccountId}-${AWS::Region}-vpc"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since our ECS workloads and Orchard Core CMS are public-facing, we need &lt;code&gt;EnableDnsHostnames: true&lt;/code&gt; so that public-facing instances get a &lt;strong&gt;public DNS name&lt;/strong&gt;. We also need &lt;code&gt;EnableDnsSupport: true&lt;/code&gt; to allow ECS tasks, internal services, and AWS resources like S3 and Aurora to resolve domain names internally.&lt;/p&gt;

&lt;p&gt;For &lt;code&gt;InstanceTenancy&lt;/code&gt;, which determines whether instances in our VPC run on shared (default) or dedicated hardware, it is recommended to keep the default: AWS automatically places instances on shared hardware, which is cost-effective and scalable. We only need to change it if we are required to use dedicated instances with full hardware isolation.&lt;/p&gt;

&lt;p&gt;Now that we have defined our VPC, the next step is planning its subnet structure. We need both public and private subnets for our workloads.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unit 07: Subnets and Subnet Groups
&lt;/h3&gt;

&lt;p&gt;For our VPC with a &lt;code&gt;/16&lt;/code&gt; CIDR block, we will be breaking it into &lt;code&gt;/24&lt;/code&gt; subnets for better scalability:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public Subnet 1: &lt;code&gt;10.0.0.0/24&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Public Subnet 2: &lt;code&gt;10.0.1.0/24&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Private Subnet 1: &lt;code&gt;10.0.2.0/24&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Private Subnet 2: &lt;code&gt;10.0.3.0/24&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each subnet is given a fixed CIDR block, while CloudFormation automatically assigns the availability zones using &lt;code&gt;!Select&lt;/code&gt; and &lt;code&gt;!GetAZs&lt;/code&gt;, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Public Subnets
publicSubnet1:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref vpc
    CidrBlock: 10.0.0.0/24
    AvailabilityZone: !Select [0, !GetAZs '']
    Tags:
      - Key: Name
        Value: !Sub "${AWS::AccountId}-${AWS::Region}-public-subnet-1"

publicSubnet2:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref vpc
    CidrBlock: 10.0.1.0/24
    AvailabilityZone: !Select [1, !GetAZs '']
    Tags:
      - Key: Name
        Value: !Sub "${AWS::AccountId}-${AWS::Region}-public-subnet-2"

# Private Subnets
privateSubnet1:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref vpc
    CidrBlock: 10.0.2.0/24
    AvailabilityZone: !Select [0, !GetAZs '']
    Tags:
      - Key: Name
        Value: !Sub "${AWS::AccountId}-${AWS::Region}-private-subnet-1"

privateSubnet2:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref vpc
    CidrBlock: 10.0.3.0/24
    AvailabilityZone: !Select [1, !GetAZs '']
    Tags:
      - Key: Name
        Value: !Sub "${AWS::AccountId}-${AWS::Region}-private-subnet-2"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For availability zones (AZs), &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#available-availability-zones" rel="noopener noreferrer"&gt;all commercial AWS regions have at least two AZs, with most having three or more&lt;/a&gt;. Hence, we do not need to worry that the &lt;code&gt;!Select [1, !GetAZs '']&lt;/code&gt; lookup in the template above will fail.&lt;/p&gt;
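&lt;p&gt;As a side note, if we later prefer to derive the subnet CIDRs from &lt;code&gt;VpcCIDR&lt;/code&gt; instead of hardcoding them, the &lt;code&gt;!Cidr&lt;/code&gt; intrinsic function can generate them. A sketch of the first public subnet under that approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;publicSubnet1:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref vpc
    # !Cidr splits VpcCIDR into 4 blocks of 8 host bits each, i.e. /24 subnets
    CidrBlock: !Select [0, !Cidr [!Ref VpcCIDR, 4, 8]]
    AvailabilityZone: !Select [0, !GetAZs '']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;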

&lt;p&gt;Now with our subnets set up, we can revisit the &lt;code&gt;DBSubnetGroupName&lt;/code&gt; in the Aurora cluster and instance. Aurora clusters are highly available, and AWS recommends placing Aurora DB instances across multiple AZs to ensure redundancy and better fault tolerance. The &lt;strong&gt;Subnet Group&lt;/strong&gt; lets us define the subnets where Aurora will deploy its instances, enabling the multi-AZ deployment for high availability.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cmsDBSubnetGroup:
  Type: AWS::RDS::DBSubnetGroup
  Properties:
    DBSubnetGroupDescription: "Orchard Core CMS Postgres DB Subnet Group"
    SubnetIds:
      - !Ref privateSubnet1
      - !Ref privateSubnet2
    Tags:
      - Key: Stack
        Value: !Ref AWS::StackName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Unit 08: Security Groups
&lt;/h3&gt;

&lt;p&gt;Earlier, we configured the Subnet Group for Aurora, which defines which subnets the Aurora instances will reside in. Now, we need to ensure that only authorised systems or services can access our database. That is where the Security Group &lt;code&gt;cmsDBSecurityGroup&lt;/code&gt; comes into play.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html" rel="noopener noreferrer"&gt;A Security Group acts like a virtual firewall that controls inbound and outbound traffic to our resources&lt;/a&gt;, such as our Aurora instances. It is like setting permissions to determine which IP addresses and which ports can communicate with the database.&lt;/p&gt;

&lt;p&gt;For Aurora, we will configure the security group to only allow traffic from our private subnets, so that only trusted services within our VPC can reach the database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cmsDBSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupName: !Sub "${CmsDBName}-security-group"
    GroupDescription: "Permits Access To CMS Aurora Database"
    VpcId: !Ref vpc
    SecurityGroupIngress:
    - CidrIp: !GetAtt privateSubnet1.CidrBlock
      IpProtocol: tcp
      FromPort: 5432
      ToPort: 5432
    - CidrIp: !GetAtt privateSubnet2.CidrBlock
      IpProtocol: tcp
      FromPort: 5432
      ToPort: 5432
    Tags:
      - Key: Name
        Value: !Sub "${CmsDBName}-security-group"
      - Key: Stack
        Value: !Ref AWS::StackName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we only set up ingress rules, not egress rules, because AWS security groups allow all outbound traffic by default.&lt;/p&gt;
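&lt;p&gt;If we ever want to lock down outbound traffic as well, an explicit egress block can be added to the same security group. The sketch below, which restricts outbound traffic to HTTPS within the VPC, is an assumption for illustration rather than a requirement:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cmsDBSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    # ...ingress rules as above...
    SecurityGroupEgress:
      - CidrIp: !Ref VpcCIDR # keep outbound traffic inside the VPC
        IpProtocol: tcp
        FromPort: 443
        ToPort: 443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;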

&lt;h3&gt;
  
  
  Unit 09: Elastic Load Balancing (ELB)
&lt;/h3&gt;

&lt;p&gt;Before diving into how we host Orchard Core on ECS, let’s first figure out how traffic will reach our ECS service. In modern cloud web app development and hosting, three key factors matter: &lt;strong&gt;reliability&lt;/strong&gt;, &lt;strong&gt;scalability&lt;/strong&gt;, and &lt;strong&gt;performance&lt;/strong&gt;. And that is why a load balancer is essential.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reliability&lt;/strong&gt; – If we only have one container and it crashes, the whole app goes down. A load balancer allows us to run multiple containers so that even if one fails, the others keep running.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; – As traffic increases, a single container will not be enough. A load balancer lets us add more containers dynamically when needed, ensuring smooth performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt; – Handling many requests in parallel prevents slowdowns. A load balancer efficiently distributes traffic to multiple containers, improving response times.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For that, we need Elastic Load Balancing (ELB) to distribute requests properly.&lt;/p&gt;

&lt;p&gt;AWS originally launched ELB with only &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticloadbalancing-loadbalancer.html" rel="noopener noreferrer"&gt;Classic Load Balancers (CLB)&lt;/a&gt;. Later, AWS completely redesigned its load balancing services and introduced the following in &lt;code&gt;ElasticLoadBalancingV2&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network Load Balancer (NLB);&lt;/li&gt;
&lt;li&gt;Application Load Balancer (ALB);&lt;/li&gt;
&lt;li&gt;Gateway Load Balancer (GLB).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/02/image-6.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fki6h3dogka6e1v6h90dl.png" width="800" height="416"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Summary of differences: ALB vs. NLB vs. GLB (Image Source: AWS)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;NLB is designed for high performance, low latency, and TCP/UDP traffic, which makes it perfect for situations like ours, where we are dealing with an Orchard Core CMS web app. NLB is optimised for handling millions of requests per second and is ideal for routing traffic to ECS containers.&lt;/p&gt;

&lt;p&gt;ALB is usually better suited for HTTP/HTTPS traffic, as it offers more advanced routing features for HTTP. However, since we are mostly concerned with handling general traffic to ECS, NLB is simpler and more efficient. It also works with the API Gateway VPC link we will set up in Unit 10, which targets an NLB.&lt;/p&gt;

&lt;p&gt;GLB is designed for deploying and scaling third-party virtual network appliances, such as firewalls and traffic inspection systems, which does not apply to our use case here.&lt;/p&gt;
&lt;h4&gt;
  
  
  Configure NLB
&lt;/h4&gt;

&lt;p&gt;Setting up an NLB in AWS always involves these three key components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;AWS::ElasticLoadBalancingV2::LoadBalancer&lt;/code&gt;;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;AWS::ElasticLoadBalancingV2::TargetGroup&lt;/code&gt;;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;AWS::ElasticLoadBalancingV2::Listener&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Firstly, &lt;code&gt;LoadBalancer&lt;/code&gt; distributes traffic across multiple targets such as ECS tasks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;internalNlb:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Name: !Sub "${ServiceName}-private-nlb"
    Scheme: internal
    Type: network
    Subnets:
      - !Ref privateSubnet1
      - !Ref privateSubnet2
    LoadBalancerAttributes:
      - Key: deletion_protection.enabled
        Value: "true"
    Tags:
      - Key: Stack
        Value: !Ref AWS::StackName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the template above, we create an NLB (&lt;code&gt;Type: network&lt;/code&gt;) that is not exposed to the public internet (&lt;code&gt;Scheme: internal&lt;/code&gt;). It is deployed across two private subnets, ensuring high availability. Finally, to prevent accidental deletion, we enable deletion protection; in the future, we must disable it before we can delete the NLB.&lt;/p&gt;

&lt;p&gt;Please take note that we do not enable &lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#cross-zone-load-balancing" rel="noopener noreferrer"&gt;Cross-Zone Load Balancing&lt;/a&gt; here because AWS charges for inter-AZ traffic on NLBs. Also, since we plan for each AZ to have the &lt;strong&gt;same number of targets&lt;/strong&gt;, disabling cross-zone load balancing helps preserve optimal routing.&lt;/p&gt;

&lt;p&gt;Secondly, we need to set up a &lt;code&gt;TargetGroup&lt;/code&gt; to tell the NLB where to send traffic, i.e. to our ECS tasks running Orchard Core CMS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nlbTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  DependsOn:
    - internalNlb
  Properties:
    Name: !Sub "${ServiceName}-target-group"
    Port: 80
    Protocol: TCP
    TargetType: instance
    VpcId: !Ref vpc
    HealthCheckProtocol: HTTP
    HealthCheckPort: 80
    HealthCheckPath: /health
    TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 10
    Tags:
      - Key: Stack
        Value: !Ref AWS::StackName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we indicate that the &lt;code&gt;TargetGroup&lt;/code&gt; is listening on port 80 and expects TCP traffic. &lt;code&gt;TargetType: instance&lt;/code&gt; means NLB will send traffic directly to EC2 instances that are hosting our ECS tasks. We also link it to our VPC to ensure traffic stays within our network.&lt;/p&gt;

&lt;p&gt;Even though the NLB uses TCP at the transport layer, it performs health checks at the application layer (HTTP). This ensures that the NLB can intelligently route traffic only to instances that are responding correctly to the application-level health check endpoint. Our choice of HTTP for the health check protocol instead of TCP is because the Orchard Core running on ECS is listening on port 80 and exposing an HTTP health check endpoint &lt;code&gt;/health&lt;/code&gt;. By using HTTP for health checks, we can ensure that the NLB can detect not only if the server is up but also if the Orchard Core is functioning correctly.&lt;/p&gt;

&lt;p&gt;We also set the Deregistration Delay to 10 seconds. Thus, when an ECS task is stopped or removed, the NLB waits 10 seconds before fully removing it. This helps prevent dropped connections by allowing in-progress requests to finish. We can keep 10 for now since the CMS does not serve long-running requests. However, if we start to notice dropped connections or errors during deployments, we should increase it to 30 seconds or more.&lt;/p&gt;

&lt;p&gt;In other words, the Target Group verifies that the app is healthy before sending it traffic. Although NLB health checks default to TCP, they also support HTTP, which is why we can point them at the application-level &lt;code&gt;/health&lt;/code&gt; endpoint above.&lt;/p&gt;

&lt;p&gt;Thirdly, we need to configure the &lt;code&gt;Listener&lt;/code&gt;, which is responsible for handling incoming traffic on our NLB. When a request comes in, the Listener &lt;strong&gt;forwards the traffic to the Target Group&lt;/strong&gt;, which then routes it to our ECS instances running Orchard Core CMS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;internalNlbListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref internalNlb
    Port: 80
    Protocol: TCP
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref nlbTargetGroup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;Listener&lt;/code&gt; port is the entry point where the NLB receives traffic. It is different from the &lt;code&gt;TargetGroup&lt;/code&gt; port, which is the port on the ECS instances where the Orchard Core app is actually running. The &lt;code&gt;Listener&lt;/code&gt; forwards traffic from its port to the &lt;code&gt;TargetGroup&lt;/code&gt; port. In most cases, they are kept the &lt;strong&gt;same for simplicity&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;DefaultActions&lt;/code&gt; section ensures that all incoming requests are automatically directed to the correct target &lt;strong&gt;without any additional processing&lt;/strong&gt;. This setup allows our NLB to efficiently distribute traffic to the ECS tasks while keeping the configuration simple and scalable.&lt;/p&gt;

&lt;p&gt;In the NLB setup above, have you noticed that we do not handle port 443 (HTTPS)? Right now, our setup only works with HTTP on port 80.&lt;/p&gt;

&lt;p&gt;If users visit our Orchard Core over HTTPS, the request stays encrypted as it passes through the NLB. And here is the problem: that means our ECS tasks must be able to handle HTTPS themselves. If our ECS tasks only listen on port 80, they will receive encrypted HTTPS traffic that they cannot process.&lt;/p&gt;

&lt;p&gt;So why not configure Orchard Core to accept HTTPS directly by having it listen on port 443 in Program.cs? Sure! However, that would require our ECS tasks to handle SSL termination themselves, meaning we would need to manage SSL certificates ourselves, which adds complexity to our setup.&lt;/p&gt;

&lt;p&gt;Hence, we need a way to properly handle HTTPS before it reaches ECS. Now, let’s see how we can solve this with API Gateway!&lt;/p&gt;

&lt;h3&gt;
  
  
  Unit 10: API Gateway
&lt;/h3&gt;

&lt;p&gt;As we discussed earlier, it is generally (though not always) best practice to offload SSL termination to API Gateway, since our NLB listener is plain TCP and does not decrypt traffic. SSL termination happens automatically with API Gateway for HTTPS traffic. It is a built-in feature, so we do not have to manually manage SSL certificates on our backend.&lt;/p&gt;

&lt;p&gt;In addition, API Gateway brings extra benefits, such as blocking unwanted traffic and ensuring only the right users can access our services. It can also cache frequent requests, reducing load on our backend. Finally, it is able to log all requests, making troubleshooting faster.&lt;/p&gt;

&lt;p&gt;By using API Gateway, we keep our infrastructure secure, efficient, and easy to manage.&lt;/p&gt;

&lt;p&gt;Let’s start with a basic setup of API Gateway with NLB by setting up the following required components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS::ApiGateway::RestApi&lt;/strong&gt;: The &lt;strong&gt;root API&lt;/strong&gt; that ties everything together. It defines the API itself before adding resources and methods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS::ApiGateway::VpcLink&lt;/strong&gt;: Connects API Gateway to the NLB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS::ApiGateway::Resource&lt;/strong&gt;: Defines the API endpoint path.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS::ApiGateway::Method&lt;/strong&gt;: Specifies how the API handles requests (e.g. GET, POST).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS::ApiGateway::Deployment&lt;/strong&gt;: Deploys the API configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS::ApiGateway::Stage&lt;/strong&gt;: Assigns a stage (e.g. dev, prod) to the deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Setup Rest API
&lt;/h4&gt;

&lt;p&gt;API Gateway is like a front door to our backend services. Before we define any resources, methods, or integrations, we need to create this front door first, i.e. the &lt;code&gt;AWS::ApiGateway::RestApi&lt;/code&gt; resource.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiGatewayRestApi:
  Type: AWS::ApiGateway::RestApi
  Properties:
    Name: !Sub "${ServiceName}-api-gateway"
    DisableExecuteApiEndpoint: True
    EndpointConfiguration:
      Types:
        - REGIONAL
    Policy: ''
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we disable the default execute-api endpoint so that AWS does not expose its auto-generated URL; we want to enforce access through our own custom domain, which we will set up later.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;REGIONAL&lt;/code&gt; means the API endpoint is served &lt;strong&gt;from our AWS region&lt;/strong&gt; rather than through CloudFront edge locations. It is generally the recommended option for most apps, and especially for our Orchard Core CMS, because both the ECS instances and the API Gateway live in the same region. This setup allows requests to be handled locally, which minimises latency. In the future, if our CMS user base grows and becomes globally distributed, we may consider switching to &lt;code&gt;EDGE&lt;/code&gt; to serve our CMS to a larger global audience with better performance and lower latency across regions.&lt;/p&gt;

&lt;p&gt;Finally, since this API mainly acts as a reverse proxy to our Orchard Core homepage on ECS, CORS is not needed. We also leave &lt;code&gt;Policy: ''&lt;/code&gt; empty, which means anyone can access the public-facing Orchard Core; security should instead be handled by Orchard Core’s authentication.&lt;/p&gt;
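&lt;p&gt;Should we ever need gateway-level restrictions, the &lt;code&gt;Policy&lt;/code&gt; property accepts a standard resource policy. A sketch that would limit access to a single CIDR (the CIDR below is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiGatewayRestApi:
  Type: AWS::ApiGateway::RestApi
  Properties:
    # ...other properties as above...
    Policy:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal: '*'
          Action: execute-api:Invoke
          Resource: execute-api:/*
          Condition:
            IpAddress:
              aws:SourceIp: 203.0.113.0/24 # placeholder CIDR
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;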

&lt;p&gt;Now that we have our root API, the next step is to &lt;strong&gt;connect it to our VPC using &lt;code&gt;VpcLink&lt;/code&gt;&lt;/strong&gt;!&lt;/p&gt;

&lt;h4&gt;
  
  
  Setup VPC Link
&lt;/h4&gt;

&lt;p&gt;The VPC Link allows API Gateway to access private resources in our VPC, such as our ECS services via the NLB. This connection ensures that requests from the API Gateway can securely reach the Orchard Core CMS hosted in ECS, even though those resources are not publicly exposed.&lt;/p&gt;

&lt;p&gt;In simple terms, VPC Link acts as a bridge between the public-facing API Gateway and the internal resources within our VPC.&lt;/p&gt;

&lt;p&gt;So in our template, we define the VPC Link and specify the NLB as the target, which means that all API requests coming into the Gateway will be forwarded to the NLB, which will then route them to our ECS tasks securely.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiGatewayVpcLink:
  Type: AWS::ApiGateway::VpcLink
  Description: "VPC link for API Gateway of Orchard Core"
  Properties:
    Name: !Sub "${ServiceName}-vpc-link"
    TargetArns:
      - !Ref internalNlb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have set up the &lt;code&gt;VpcLink&lt;/code&gt;, which connects our API Gateway to our ECS, the next step is to define how requests will actually reach our ECS. That is where the API Gateway Resource comes into play.&lt;/p&gt;

&lt;h4&gt;
  
  
  Setup API Gateway Resource
&lt;/h4&gt;

&lt;p&gt;For the API Gateway to know what to do with the incoming requests once they cross that VPC Link bridge, we need to define specific resources, i.e. the URL paths our users will use to access the Orchard Core CMS.&lt;/p&gt;

&lt;p&gt;In our case, we use a &lt;strong&gt;proxy resource&lt;/strong&gt; to catch all requests and send them to the backend ECS service. This lets us handle dynamic requests with minimal configuration, as any path requested will be forwarded to ECS.&lt;/p&gt;

&lt;p&gt;Using a proxy resource is particularly useful for web apps like Orchard Core CMS, where the routes could be dynamic and vary widely, such as &lt;code&gt;/home&lt;/code&gt;, &lt;code&gt;/content-item/{id}&lt;/code&gt;, and &lt;code&gt;/admin/{section}&lt;/code&gt;. With the proxy resource, we do not need to define each individual route or API endpoint in the API Gateway. As the CMS grows and new routes are added, we also will not need to constantly update the API Gateway configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiGatewayRootProxyResource:
  Type: AWS::ApiGateway::Resource
  Properties:
    RestApiId: !Ref apiGatewayRestApi
    ParentId: !GetAtt apiGatewayRestApi.RootResourceId
    PathPart: '{proxy+}'
  DependsOn:
    - apiGatewayRestApi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After setting up the resource and establishing the VPC link to connect API Gateway to our ECS instances, the next step is to define how we handle incoming requests to that resource.&lt;/p&gt;

&lt;h4&gt;
  
  
  Setup Method
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;Resource&lt;/code&gt; component above defines where requests will go. However, the path alone is not enough; we also need to tell API Gateway how to handle the requests that arrive there. This is where the &lt;code&gt;AWS::ApiGateway::Method&lt;/code&gt; component comes into play: it defines the specific HTTP methods that API Gateway should accept for a particular resource.&lt;/p&gt;

&lt;p&gt;For a use case like hosting Orchard Core CMS, the following configuration can be a good starting point.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiGatewayRootMethod:
  Type: AWS::ApiGateway::Method
  Properties:
    HttpMethod: ANY
    AuthorizationType: NONE
    ApiKeyRequired: False
    RestApiId: !Ref apiGatewayRestApi
    ResourceId: !GetAtt apiGatewayRestApi.RootResourceId
    Integration:
      ConnectionId: !Ref apiGatewayVpcLink
      ConnectionType: VPC_LINK
      Type: HTTP_PROXY
      IntegrationHttpMethod: ANY
      Uri: !Sub "http://${internalNlb.DNSName}"
  DependsOn:
    - apiGatewayRootProxyResource

apiGatewayRootProxyMethod:
  Type: AWS::ApiGateway::Method
  Properties:
    ApiKeyRequired: False
    RestApiId: !Ref apiGatewayRestApi
    ResourceId: !Ref apiGatewayRootProxyResource
    HttpMethod: ANY
    AuthorizationType: NONE
    RequestParameters:
      method.request.path.proxy: True
    Integration:
      ConnectionId: !Ref apiGatewayVpcLink
      ConnectionType: VPC_LINK
      Type: HTTP_PROXY
      RequestParameters:
        integration.request.path.proxy: method.request.path.proxy
      CacheKeyParameters:
        - method.request.path.proxy
      IntegrationHttpMethod: ANY
      IntegrationResponses:
        - StatusCode: 200
          SelectionPattern: 200
      Uri: !Sub "http://${internalNlb.DNSName}/{proxy}"
  DependsOn:
    - apiGatewayRootProxyResource
    - apiGatewayVpcLink
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By setting up both the root method and the proxy method, the API Gateway can handle both general traffic via the root method and dynamic path-based traffic via the proxy method in a flexible way. This reduces the need for additional methods and resources to manage various paths.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-2.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06bsga4hnyys36gqc9xc.png" width="800" height="486"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Handling dynamic path-based traffic for Orchard Core via the proxy method.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Since Orchard Core is designed for browsing, updating, and deleting content, we need support for multiple HTTP methods from the start. By using &lt;code&gt;ANY&lt;/code&gt;, we ensure that all these HTTP methods are supported without having to define a separate method for each one.&lt;/p&gt;

&lt;p&gt;Setting &lt;code&gt;AuthorizationType&lt;/code&gt; to &lt;code&gt;NONE&lt;/code&gt; is a good starting point, especially since we are not implementing authentication directly at the API Gateway level. Instead, we rely on Orchard Core’s built-in authentication module, which already provides user login, membership, and access control. Later, if needed, we can enhance security by adding authentication layers at the API Gateway level, such as AWS IAM, Cognito, or Lambda authorisers.&lt;/p&gt;
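&lt;p&gt;For reference, here is a minimal sketch of what a Cognito authoriser could look like if we add one later; the user pool &lt;code&gt;cmsUserPool&lt;/code&gt; is a hypothetical resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiGatewayAuthorizer:
  Type: AWS::ApiGateway::Authorizer
  Properties:
    Name: !Sub "${ServiceName}-cognito-authorizer"
    RestApiId: !Ref apiGatewayRestApi
    Type: COGNITO_USER_POOLS
    IdentitySource: method.request.header.Authorization
    ProviderARNs:
      - !GetAtt cmsUserPool.Arn # hypothetical Cognito user pool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The methods above would then set &lt;code&gt;AuthorizationType&lt;/code&gt; to &lt;code&gt;COGNITO_USER_POOLS&lt;/code&gt; and reference the authoriser via &lt;code&gt;AuthorizerId&lt;/code&gt;.&lt;/p&gt;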

&lt;p&gt;Similar to the authorisation, setting &lt;code&gt;ApiKeyRequired&lt;/code&gt; to &lt;code&gt;False&lt;/code&gt; is also a sensible starting point, especially since we are not yet exposing a public API; the setup above is primarily for routing requests to Orchard Core CMS. We can change this later if we need to secure our CMS API endpoints, for example when third-party integrations or external apps need access to the CMS API.&lt;/p&gt;

&lt;p&gt;Up to this point, API Gateway has a &lt;code&gt;Resource&lt;/code&gt; and a &lt;code&gt;Method&lt;/code&gt;, but it still does not know where to send the request. That is where &lt;code&gt;Integration&lt;/code&gt; comes in. In our setup above, it tells API Gateway to use VPC Link to talk to the ECS. It also makes API Gateway act as a reverse proxy by setting &lt;code&gt;Type&lt;/code&gt; to &lt;code&gt;HTTP_PROXY&lt;/code&gt;. It will simply forward all types of HTTP requests to Orchard Core without modifying them.&lt;/p&gt;

&lt;p&gt;Even though API Gateway enforces HTTPS for external traffic, it decrypts (aka terminates SSL), validates the request, and then forwards it over HTTP to NLB within the AWS private network. Since this internal communication happens securely inside AWS, the &lt;code&gt;Uri&lt;/code&gt; is using HTTP.&lt;/p&gt;

&lt;p&gt;After setting up the resources and methods in API Gateway, we are essentially defining the blueprint for our API. However, these configurations are only in a draft state, so they are not yet live and accessible to our end-users. We need a step called &lt;code&gt;Deployment&lt;/code&gt; to publish the configuration.&lt;/p&gt;
&lt;h4&gt;
  
  
  Setup Deployment
&lt;/h4&gt;

&lt;p&gt;Without deploying, the changes we discussed above are just concepts and plans. We can test them within CloudFormation, but they will not be live in API Gateway until they are deployed.&lt;/p&gt;

&lt;p&gt;One important thing to note is that &lt;strong&gt;API Gateway does not automatically detect changes in our CloudFormation template&lt;/strong&gt;. If we do not create a new deployment, our changes will not take effect in the live environment. So, we must force a new deployment by changing something in &lt;strong&gt;&lt;code&gt;AWS::ApiGateway::Deployment&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Another thing to note is that a new &lt;strong&gt;&lt;code&gt;AWS::ApiGateway::Deployment&lt;/code&gt; will not automatically be triggered&lt;/strong&gt; when we update our API Gateway configuration unless the logical ID of the deployment resource itself changes. This means that every time we change our API Gateway configuration, we need to manually change the logical ID of the &lt;code&gt;AWS::ApiGateway::Deployment&lt;/code&gt;. The reason CloudFormation does not automatically redeploy is to avoid unnecessary changes or disruptions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiGatewayDeployment202501011048:
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref apiGatewayRestApi
  DependsOn:
    - apiGatewayRootMethod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the template above, we append a timestamp &lt;code&gt;202501011048&lt;/code&gt; to the logical ID of the &lt;code&gt;Deployment&lt;/code&gt;. This way, even if we make multiple deployments on the same day, each will have a unique logical ID due to the timestamp.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Deployment&lt;/code&gt; alone does not make our API available to the users. We still need to assign it to a specific &lt;code&gt;Stage&lt;/code&gt; to ensure it has a versioned endpoint with all configurations applied.&lt;/p&gt;

&lt;h4&gt;
  
  
  Setup Stage
&lt;/h4&gt;

&lt;p&gt;A &lt;code&gt;Stage&lt;/code&gt; in API Gateway is a deployment environment that allows us to manage and control different versions of our API. It acts as a live endpoint for clients to interact with our API. Without a &lt;code&gt;Stage&lt;/code&gt;, the API exists but is not publicly available. We can create stages like &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;test&lt;/code&gt;, and &lt;code&gt;prod&lt;/code&gt; to separate development and production traffic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiGatewayStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    StageName: !Ref ApiGatewayStageName
    RestApiId: !Ref apiGatewayRestApi
    DeploymentId: !Ref apiGatewayDeployment202501011048
    MethodSettings:
      - ResourcePath: '/*'
        HttpMethod: '*'
        ThrottlingBurstLimit: 100
        ThrottlingRateLimit: 50
    Tags:
      - Key: Stack
        Value: !Ref AWS::StackName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For now, we will use &lt;code&gt;production&lt;/code&gt; as the default stage name to keep things simple. This will help us get everything set up and running quickly. Once we are ready for more environments, we can easily update the &lt;code&gt;ApiGatewayStageName&lt;/code&gt; in the &lt;code&gt;Parameters&lt;/code&gt; based on our environment setup.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;MethodSettings&lt;/code&gt; are configurations defining how requests are handled in terms of performance, logging, and throttling. Using &lt;code&gt;/*&lt;/code&gt; and &lt;code&gt;*&lt;/code&gt; is perfectly fine at the start, as our goal is to apply global throttling and logging settings to all our Orchard Core routes in one go. However, in the future we might want to adjust the settings as follows (a sketch follows the list):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Content Modification (&lt;code&gt;POST&lt;/code&gt;, &lt;code&gt;PUT&lt;/code&gt;, &lt;code&gt;DELETE&lt;/code&gt;)&lt;/strong&gt;: Stricter throttling and more detailed logging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content Retrieval (&lt;code&gt;GET&lt;/code&gt;)&lt;/strong&gt;: More relaxed throttling for GET requests since they are usually read-only and have lower impact.&lt;/li&gt;
&lt;/ul&gt;
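&lt;p&gt;A sketch of how those overrides could be expressed in &lt;code&gt;MethodSettings&lt;/code&gt;; the limits below are assumptions for illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MethodSettings:
  # Relaxed limits for read-only traffic
  - ResourcePath: '/*'
    HttpMethod: GET
    ThrottlingBurstLimit: 200
    ThrottlingRateLimit: 100
  # Stricter limits and detailed logging for content modification
  - ResourcePath: '/*'
    HttpMethod: POST
    ThrottlingBurstLimit: 20
    ThrottlingRateLimit: 10
    LoggingLevel: INFO # requires a CloudWatch Logs role configured for API Gateway
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;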

&lt;p&gt;Having a burst and rate limit is useful for protecting our Orchard Core backend from excessive traffic. Even if we have a CMS with predictable traffic patterns, having rate limiting helps to prevent abuse and ensure fair usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-3.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9o37fsjzqyfngbt5duu.png" width="800" height="486"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The production stage in our API Gateway.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Unit 11: Route53 for API Gateway
&lt;/h3&gt;

&lt;p&gt;Now that we have successfully set up API Gateway, it is accessible through an AWS-generated URL, i.e. something like &lt;code&gt;https://xxxxxx.execute-api.ap-southeast-5.amazonaws.com/production&lt;/code&gt;, which is functional but not user-friendly. Hence, we need to set up a custom domain for it so that it is easier to remember, more professional, and consistent with our branding.&lt;/p&gt;

&lt;p&gt;AWS provides a straightforward way to implement this using two key configurations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS::ApiGateway::DomainName&lt;/strong&gt; – Links our custom domain to API Gateway.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS::ApiGateway::BasePathMapping&lt;/strong&gt; – Organises API versions and routes under the same domain.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Setup Hosted Zone and DNS
&lt;/h4&gt;

&lt;p&gt;Since I have my domain on &lt;a href="https://www.godaddy.com/en-uk" rel="noopener noreferrer"&gt;GoDaddy&lt;/a&gt;, I will need to migrate DNS management to AWS Route 53 by creating a Hosted Zone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/02/image-7.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4qkr6wigjp83c00wepd.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;My personal hosted zone: chunlinprojects.com.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After creating a Hosted Zone in AWS, we need to manually copy the NS records to GoDaddy. Since this step is manual anyway, we will not be automating this part of the setup in CloudFormation. In addition, hosted zones are sensitive resources and should be managed carefully; we do not want hosted zones to be removed when our CloudFormation stacks are deleted either.&lt;/p&gt;

&lt;p&gt;Once the switch is done, we can go back to our CloudFormation template to setup the custom domain name for our API Gateway.&lt;/p&gt;
&lt;h4&gt;
  
  
  Setup Custom Domain Name for API Gateway
&lt;/h4&gt;

&lt;p&gt;API Gateway requires an SSL/TLS certificate to use a custom domain.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiGatewayCustomDomainCert:
  Type: AWS::CertificateManager::Certificate
  Properties:
    DomainName: !Sub "${CmsHostname}.${HostedZoneName}"
    ValidationMethod: 'DNS'
    DomainValidationOptions:
      - DomainName: !Sub "${CmsHostname}.${HostedZoneName}"
        HostedZoneId: !Ref HostedZoneId
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please take note to update the &lt;code&gt;DomainName&lt;/code&gt;s in the template above to use your own domain name. Also, the &lt;code&gt;HostedZoneId&lt;/code&gt; can be retrieved from the AWS Console under “Hosted zone details”, as in the screenshot above.&lt;/p&gt;

&lt;p&gt;In the resource, &lt;code&gt;DomainValidationOptions&lt;/code&gt; tells CloudFormation to use DNS validation. When we use the &lt;code&gt;AWS::CertificateManager::Certificate&lt;/code&gt; resource in a CloudFormation stack, &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-certificatemanager-certificate.html" rel="noopener noreferrer"&gt;domain validation is handled automatically&lt;/a&gt; if all three of the following are true:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We are using DNS validation;&lt;/li&gt;
&lt;li&gt;The certificate domain is hosted in Amazon Route 53;&lt;/li&gt;
&lt;li&gt;The domain resides in our AWS account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, if the certificate uses email validation, or if the domain is not hosted in Route 53, then the stack will remain in the &lt;code&gt;CREATE_IN_PROGRESS&lt;/code&gt; state. Here, we will show how we can &lt;a href="https://docs.aws.amazon.com/acm/latest/userguide/dns-validation.html#setting-up-dns-validation" rel="noopener noreferrer"&gt;log in to AWS Console to manually set up DNS validation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/02/image-12.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftp1abmdeetckg7mcyhu0.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Remember to log in to AWS Console to check for ACM Certificate Status.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After that, we need to choose the &lt;strong&gt;Create records in Route 53&lt;/strong&gt; button to create records. The Certificate status page should open with a status banner reporting Successfully created DNS records. According to the documentation, &lt;a href="https://docs.aws.amazon.com/acm/latest/userguide/dns-validation.html#setting-up-dns-validation" rel="noopener noreferrer"&gt;our new certificate might continue to display a status of Pending validation for up to 30 minutes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/02/image-13.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ymkovkithzell4o4ekg.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Successfully created DNS records.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now that the SSL certificate is ready and the DNS validation is done, we will need to link the SSL certificate to our API Gateway using a custom domain. We are using &lt;code&gt;RegionalCertificateArn&lt;/code&gt;, which is intended for a &lt;strong&gt;regional&lt;/strong&gt; API Gateway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiGatewayCustomDomainName:
  Type: AWS::ApiGateway::DomainName
  Properties:
    RegionalCertificateArn: !Ref apiGatewayCustomDomainCert
    DomainName: !Sub "${CmsHostname}.{HostedZoneName}"
    EndpointConfiguration:
      Types:
        - REGIONAL
    SecurityPolicy: TLS_1_2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows our API to be securely accessed using our custom domain. We also set the &lt;code&gt;SecurityPolicy&lt;/code&gt; to TLS 1.2, the most recent minimum version API Gateway custom domains support, ensuring that the connection is secure and follows modern standards.&lt;/p&gt;

&lt;p&gt;Even though it is optional, it is a good practice to specify the TLS version for both security and consistency, especially for production environments. Enforcing a TLS version helps avoid any potential vulnerabilities from outdated protocols.&lt;/p&gt;

&lt;h4&gt;
  
  
  Setup Custom Domain Routing
&lt;/h4&gt;

&lt;p&gt;Next, we need to create a base path mapping to map the custom domain to our specific API stage in API Gateway.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;BasePathMapping&lt;/code&gt; is the crucial bridge between our custom domain and our API Gateway: when users visit our custom domain, API Gateway needs to be told which specific API and stage should handle the incoming requests for that domain.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiGatewayCustomDomainBasePathMapping:
  Type: AWS::ApiGateway::BasePathMapping
  Properties:
    DomainName: !Ref apiGatewayCustomDomainName
    RestApiId: !Ref apiGatewayRestApi
    Stage: !Ref apiGatewayStage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While the &lt;code&gt;BasePathMapping&lt;/code&gt; connects our custom domain to a specific stage inside our API Gateway, we still need to set up DNS routing so that the custom domain actually resolves to the API Gateway endpoint.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;RecordSet&lt;/code&gt; creates a DNS record (typically an A or CNAME record) that points to the API Gateway endpoint. Without this record, DNS systems outside AWS will not know where to direct traffic for our custom domain.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiGatewayCustomDomainARecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: !Sub "${HostedZoneName}."
    Name: !Sub "${CmsHostname}.${HostedZoneName}"
    Type: A
    AliasTarget:
      DNSName: !GetAtt apiGatewayCustomDomainName.RegionalDomainName
      HostedZoneId: !GetAtt apiGatewayCustomDomainName.RegionalHostedZoneId
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One interesting thing to take note of here is that when we use an &lt;code&gt;AWS::Route53::RecordSet&lt;/code&gt; that specifies &lt;code&gt;HostedZoneName&lt;/code&gt;, &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-route53-recordset.html#cfn-route53-recordset-hostedzonename" rel="noopener noreferrer"&gt;we must include a trailing dot (for example, &lt;code&gt;chunlinprojects.com.&lt;/code&gt;)&lt;/a&gt; as part of the &lt;code&gt;HostedZoneName&lt;/code&gt;. Alternatively, we &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-route53-recordset.html#cfn-route53-recordset-hostedzoneid" rel="noopener noreferrer"&gt;can specify &lt;code&gt;HostedZoneId&lt;/code&gt; instead, but we must never specify both&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For API Gateway with a custom domain, &lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html" rel="noopener noreferrer"&gt;AWS recommends using an &lt;strong&gt;Alias Record&lt;/strong&gt;&lt;/a&gt; (which is similar to an A record) instead of a CNAME because the endpoint for API Gateway changes based on region and the nature of the service.&lt;/p&gt;

&lt;p&gt;Alias records are a special feature in AWS Route 53 designed for pointing domain names directly to AWS resources like API Gateway, ELB, and so on. While CNAME records are often used in DNS to point to another domain, Alias records are unique to AWS and &lt;a href="https://repost.aws/questions/QUH6vOhyB6RcCWLbmzRAPLbg/route-53-cname-and-alias-differences" rel="noopener noreferrer"&gt;allow us to &lt;strong&gt;avoid extra DNS lookup costs&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the &lt;code&gt;HostedZoneId&lt;/code&gt; of &lt;code&gt;AliasTarget&lt;/code&gt;, &lt;a href="https://docs.aws.amazon.com/general/latest/gr/apigateway.html#apigateway_region" rel="noopener noreferrer"&gt;it is the Route 53 hosted zone ID of the API Gateway endpoint in our region&lt;/a&gt;, not the ID of our own hosted zone in Route 53, so be careful not to mix the two up.&lt;/p&gt;

&lt;p&gt;Finally, please take note that &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-route53-recordset.html#cfn-route53-recordset-ttl" rel="noopener noreferrer"&gt;when we are creating an alias resource record set, we need to omit &lt;code&gt;TTL&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reference 01: ECS Cluster
&lt;/h3&gt;

&lt;p&gt;As we move forward with hosting Orchard Core CMS, let’s go through a few hosting options available within AWS, as listed below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EC2 (Elastic Compute Cloud)&lt;/strong&gt;: A traditional option for running virtual machines. We can fully control the environment but need to manage everything, from scaling to OS patching;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Elastic Beanstalk&lt;/strong&gt;: PaaS optimised for traditional .NET apps on Windows/IIS, not really suitable for Orchard Core, which runs best on Linux containers with Kestrel;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lightsail&lt;/strong&gt;: A traditional VPS (Virtual Private Server), where we manage the server and applications ourselves. It is a good fit for simple, low-traffic websites but not ideal for scalable workloads like Orchard Core CMS;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EKS (Elastic Kubernetes Service)&lt;/strong&gt;: A managed Kubernetes offering from AWS. It allows us to run Kubernetes clusters, which are great for large-scale apps with complex micro-services. However, managing Kubernetes adds complexity;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ECS (Elastic Container Service)&lt;/strong&gt;: A service designed for running containerised apps. We can run containers on serverless &lt;strong&gt;Fargate&lt;/strong&gt; or &lt;strong&gt;EC2-backed clusters&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We choose ECS because it offers a scalable, reliable, and cost-effective way to deploy Orchard Core in a containerised environment. ECS allows us to take advantage of containerisation benefits such as isolated, consistent deployments and easy portability across environments. With built-in support for auto-scaling and seamless integration with AWS services like RDS for databases, S3 for media storage, and CloudWatch for monitoring, ECS ensures high availability and performance.&lt;/p&gt;

&lt;p&gt;In ECS, we can choose either Fargate or EC2-backed ECS for hosting Orchard Core, depending on our specific needs and use case. For a highly customised, predictable, or resource-intensive CMS workload, &lt;strong&gt;EC2-backed ECS might be more appropriate&lt;/strong&gt; due to the need for fine-grained control over resources and configurations.&lt;/p&gt;
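
&lt;p&gt;For comparison, had we chosen Fargate, the task definition would declare Fargate compatibility and the &lt;code&gt;awsvpc&lt;/code&gt; network mode. The following is only a minimal sketch with illustrative values, not part of our actual template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fargateTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    RequiresCompatibilities:
      - FARGATE
    NetworkMode: awsvpc   # Fargate supports only awsvpc
    Cpu: "256"            # task-level CPU is mandatory on Fargate
    Memory: "1024"        # task-level memory is mandatory on Fargate
    ContainerDefinitions:
      - Name: orchard-core
        Image: my-orchard-core-image   # placeholder image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;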

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-6.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqt5uht1wrpf24dvu4o48.png" width="800" height="486"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Official documentation with CloudFormation template on how to setup an ECS cluster.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There is &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-ecs.html" rel="noopener noreferrer"&gt;official documentation on how to set up an ECS cluster&lt;/a&gt;. Hence, we will not discuss the setup in depth. Instead, we will focus on some of the key points that we need to take note of.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-5.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32j6oy8v0d9xhkkmj1vj.png" width="800" height="486"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Official ECS-optimised AMIs from AWS.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;While we can technically use any Linux AMI for running ECS tasks, &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html" rel="noopener noreferrer"&gt;the &lt;strong&gt;Amazon ECS-Optimised AMI&lt;/strong&gt; offers several key benefits and optimisations that make it a better choice, particularly for ECS workloads&lt;/a&gt;. The Amazon ECS-Optimised AMI is designed and optimised by AWS to run ECS tasks efficiently on EC2 instances. By using it, we benefit from a pre-installed ECS agent and Docker, as well as a configuration tuned for ECS. &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bootstrap_container_instance.html#bootstrap_container_agent" rel="noopener noreferrer"&gt;These AMIs look for agent configuration data in the &lt;code&gt;/etc/ecs/ecs.config&lt;/code&gt; file when the container agent starts&lt;/a&gt;. That is why we can specify this configuration data at launch with Amazon EC2 user data, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;containerInstances:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateName: "asg-launch-template"
    LaunchTemplateData:
      ImageId: !Ref EcsAmi
      InstanceType: "t3.large"
      IamInstanceProfile:
        Name: !Ref ec2InstanceProfile
      SecurityGroupIds:
        - !Ref ecsContainerHostSecurityGroup
      # This injected configuration file is how the EC2 instance
      # knows which ECS cluster it should be joining
      UserData:
        Fn::Base64: !Sub |
         #!/bin/bash -xe
         echo "ECS_CLUSTER=core-cluster" &amp;gt;&amp;gt; /etc/ecs/ecs.config
      # Disable IMDSv1, and require IMDSv2
      MetadataOptions:
        HttpEndpoint: enabled
        HttpTokens: required
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As shown in the above CloudFormation template, instead of hardcoding an AMI ID which will become outdated over time, we have a parameter to ensure that the cluster always provisions instances using the most recent Amazon Linux 2023 ECS-optimised AMI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EcsAmi:
  Description: The Amazon Machine Image ID used for the cluster
  Type: AWS::SSM::Parameter::Value&amp;lt;AWS::EC2::Image::Id&amp;gt;
  Default: /aws/service/ecs/optimized-ami/amazon-linux-2023/recommended/image_id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html#linux-liw-network-settings" rel="noopener noreferrer"&gt;the EC2 instances need access to communicate with the ECS service endpoint&lt;/a&gt;. This can be through an interface VPC endpoint or through our EC2 instances having public IP addresses. In our case, we are placing our EC2 instances in private subnets, so we use a Network Address Translation (NAT) gateway to provide this access.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ecsNatGateway:
  Type: AWS::EC2::NatGateway
  Properties:
    AllocationId: !GetAtt ecsEip.AllocationId
    SubnetId: !Ref publicSubnet1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
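
&lt;p&gt;The NAT gateway alone is not enough: the private subnets also need a default route that sends internet-bound traffic through it. Below is a minimal sketch of that routing; the resource names follow our template’s naming style but are assumptions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;privateRouteTable:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref vpc

# Send all non-local traffic from the private subnets to the NAT gateway
privateDefaultRoute:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref privateRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    NatGatewayId: !Ref ecsNatGateway

privateSubnet1RouteTableAssociation:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref privateRouteTable
    SubnetId: !Ref privateSubnet1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;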



&lt;h3&gt;
  
  
  Unit 12: ECS Task Definition and Service
&lt;/h3&gt;

&lt;p&gt;This ECS cluster definition is just the starting point. Next, we will define how the containers &lt;strong&gt;run&lt;/strong&gt; and &lt;strong&gt;interact&lt;/strong&gt; through &lt;code&gt;AWS::ECS::TaskDefinition&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ecsTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: !Ref ServiceName
    TaskRoleArn: !GetAtt iamRole.Arn
    ContainerDefinitions:
      - Name: !Ref ServiceName
        Image: !Ref OrchardCoreImage
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-group: !Sub "/ecs/${ServiceName}-log-group"
            awslogs-region: !Ref AWS::Region
            awslogs-stream-prefix: ecs
        PortMappings:
          - ContainerPort: 5000
            HostPort: 80
            Protocol: tcp
        Cpu: 256
        Memory: 1024
        MemoryReservation: 512
        Environment:
          - Name: DatabaseEndpoint
            Value: !GetAtt cmsDBInstance.Endpoint.Address
        Essential: true
        HealthCheck:
          Command:
            - CMD-SHELL
            - "wget -q --spider http://localhost:5000/health || exit 1"
          Interval: 30
          Timeout: 5
          Retries: 3
          StartPeriod: 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the setup above, we are sending logs to &lt;strong&gt;CloudWatch Logs&lt;/strong&gt; so that we can centralise logs from all ECS tasks, making it easier to monitor and troubleshoot our containers.&lt;/p&gt;

&lt;p&gt;By default, &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html" rel="noopener noreferrer"&gt;ECS tasks on EC2 instances use the bridge network mode&lt;/a&gt;. In bridge mode, containers do not get their own network interfaces. Instead, the container port (5000) must be mapped to a port on the host EC2 instance (80). Without this mapping, Orchard Core on EC2 would not be reachable from outside. We set &lt;code&gt;ContainerPort: 5000&lt;/code&gt; to match the port our Orchard Core app exposes within the Docker container.&lt;/p&gt;
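
&lt;p&gt;One side effect of pinning &lt;code&gt;HostPort: 80&lt;/code&gt; is that each EC2 instance can run only one copy of the container, because a host port can be bound only once. Bridge mode also supports dynamic host ports: setting &lt;code&gt;HostPort&lt;/code&gt; to 0 lets ECS pick an ephemeral port per task, which the load balancer target group tracks automatically. A sketch of that alternative, if we ever want to pack multiple tasks onto one instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PortMappings:
  - ContainerPort: 5000
    HostPort: 0   # 0 = let ECS assign a dynamic host port
    Protocol: tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;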

&lt;p&gt;As CMS platforms like Orchard Core generally require more memory for smooth operation, especially in production environments with heavier traffic, a sensible starting point is a CPU allocation of 256 CPU units (0.25 vCPU) and 1024 MB of memory, adjusted later based on the expected load.&lt;/p&gt;

&lt;p&gt;For &lt;code&gt;MemoryReservation&lt;/code&gt;, which is the guaranteed amount of memory for our container, we set 512 MB. By reserving memory, we ensure that our container has enough memory to run reliably. Orchard Core, being a modular CMS, can consume more memory depending on the number of features/modules we have enabled. If we later realise Orchard Core does not need that much guaranteed memory, we can lower &lt;code&gt;MemoryReservation&lt;/code&gt;. The key idea is to reserve enough memory for stable operation without overcommitting.&lt;/p&gt;

&lt;p&gt;Next, we have &lt;code&gt;Essential&lt;/code&gt;, which we set to &lt;code&gt;true&lt;/code&gt;. This property specifies whether the container is essential to the ECS task. Setting it to true tells ECS to treat the Orchard Core container as vital: if the container stops or fails, ECS stops the entire task. Were it set to false, ECS would not automatically stop the task when the Orchard Core container fails, which could lead to issues, especially in a production environment.&lt;/p&gt;

&lt;p&gt;Finally, we must not forget about &lt;code&gt;HealthCheck&lt;/code&gt;. In most web apps like Orchard Core, a simple HTTP endpoint such as &lt;code&gt;/health&lt;/code&gt; is normally used as the health check. One thing to understand here is that many minimal container base images do not include &lt;code&gt;curl&lt;/code&gt; by default, in order to keep them lightweight. However, &lt;code&gt;wget&lt;/code&gt; is often available, making it a good alternative for checking whether an HTTP endpoint is reachable. Hence, in the template above, ECS uses &lt;code&gt;wget&lt;/code&gt; to check the &lt;code&gt;/health&lt;/code&gt; endpoint on port 5000. If the command returns an error, the container is considered unhealthy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-10.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lmvq0h52lokj84lnqzh.png" width="800" height="246"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;We can test locally to check if curl or wget is available in the image.&lt;/em&gt;&lt;/p&gt;
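
&lt;p&gt;A quick way to run this local check is to ask the image itself which tools it ships with (the image name below is a placeholder for our Orchard Core image):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print the path of wget and curl inside the image, if they exist
docker run --rm --entrypoint sh my-orchard-core-image -c "command -v wget; command -v curl"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;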

&lt;p&gt;Once the &lt;code&gt;TaskDefinition&lt;/code&gt; is set up, it defines the container specs. However, an ECS service is still needed to manage how and where the task runs within the ECS cluster: the service tells ECS how to run the task, manage it, and keep it running smoothly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ecsService:
  Type: AWS::ECS::Service
  DependsOn:
    - iamRole
    - internalNlb
    - nlbTargetGroup
    - internalNlbListener
  Properties:
    Cluster: !Ref ecsCluster
    DesiredCount: 2
    DeploymentConfiguration:
      MaximumPercent: 200
      MinimumHealthyPercent: 50
    LoadBalancers:
      - ContainerName: !Ref ServiceName
        ContainerPort: 5000
        TargetGroupArn: !Ref nlbTargetGroup
    PlacementStrategies:
      - Type: spread
        Field: attribute:ecs.availability-zone
      - Type: spread
        Field: instanceId
    TaskDefinition: !Ref ecsTaskDefinition
    ServiceName: !Ref ServiceName
    Role: !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS"
    HealthCheckGracePeriodSeconds: 60
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;DesiredCount&lt;/code&gt; is the number of tasks (or containers) we want ECS to run at all times for the Orchard Core app. In this case, we set it to 2, which means that ECS will try to keep exactly 2 tasks running for our service. Setting it to 2 gives us redundancy: if one task goes down, the other can continue serving, keeping our CMS available and resilient.&lt;/p&gt;

&lt;p&gt;Both deployment percentages are relative to the &lt;code&gt;DesiredCount&lt;/code&gt;: during a deployment, ECS can temporarily run up to 4 tasks (&lt;code&gt;MaximumPercent: 200&lt;/code&gt;), and at least 1 task (&lt;code&gt;MinimumHealthyPercent: 50&lt;/code&gt;) must remain healthy during updates to ensure a smooth rollout.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;LoadBalancers&lt;/code&gt; section in the ECS service definition is where we link our service to the NLB that we set up earlier, ensuring that the NLB will distribute the traffic to the correct tasks running within the ECS service. Also, since our container is configured to run on port 5000 as per our Dockerfile, this is the port we use.&lt;/p&gt;

&lt;p&gt;Next, we have &lt;code&gt;PlacementStrategies&lt;/code&gt; to help us control how our tasks are distributed across different instances and availability zones, making sure our CMS is resilient and well-distributed. Here, &lt;code&gt;attribute:ecs.availability-zone&lt;/code&gt; ensures the tasks are spread evenly across different availability zones within the same region. At the same time, &lt;code&gt;Field: instanceId&lt;/code&gt; ensures that our tasks are spread across different EC2 instances within the cluster.&lt;/p&gt;

&lt;p&gt;Finally, it is a good practice to set a &lt;code&gt;HealthCheckGracePeriodSeconds&lt;/code&gt; to give our containers some time to start and become healthy before ECS considers them unhealthy during scaling or deployments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unit 13: CloudWatch Alarm
&lt;/h3&gt;

&lt;p&gt;To ensure we effectively monitor the performance of Orchard Core on our ECS service, we also need to set up CloudWatch alarms to track metrics like CPU utilisation, memory utilisation, health check, running task count, etc.&lt;/p&gt;

&lt;p&gt;We set up the following CloudWatch alarm to monitor CPU utilisation for our ECS service. This alarm triggers if the average CPU usage stays at or above 75% for 5 consecutive one-minute periods. By doing this, we can quickly identify when our service is under heavy load and take action before performance degrades.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;highCpuUtilizationAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmName: !Sub "${AWS::StackName}-high-cpu"
    AlarmDescription: !Sub "ECS service ${AWS::StackName}: Cpu utilization above 75%"
    Namespace: AWS/ECS
    MetricName: CPUUtilization
    Dimensions:
      - Name: ClusterName
        Value: !Ref ecsCluster
      - Name: ServiceName
        Value: !Ref ServiceName
    Statistic: Average
    Period: 60
    EvaluationPeriods: 5
    Threshold: 75
    ComparisonOperator: GreaterThanOrEqualToThreshold
    TreatMissingData: notBreaching
    ActionsEnabled: true
    AlarmActions: []
    OKActions: []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even if we leave &lt;code&gt;AlarmActions&lt;/code&gt; and &lt;code&gt;OKActions&lt;/code&gt; as empty arrays, the alarm state will still be visible in the AWS CloudWatch Console. We can monitor the alarm state directly on the CloudWatch dashboard.&lt;/p&gt;
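
&lt;p&gt;If we later want to be notified rather than just watching the dashboard, we can point &lt;code&gt;AlarmActions&lt;/code&gt; at an SNS topic. A minimal sketch, assuming a topic and email address that are not part of our current template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;alarmTopic:
  Type: AWS::SNS::Topic
  Properties:
    Subscription:
      - Protocol: email
        Endpoint: ops@example.com   # hypothetical address

# Then, in each alarm:
#   AlarmActions:
#     - !Ref alarmTopic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;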

&lt;p&gt;Similar to the CPU utilisation alarm above, we have another alarm that triggers when the count of running tasks is less than 1 for 5 consecutive one-minute periods, indicating that there have been no running tasks for a full 5 minutes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;noRunningTasksAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmName: !Sub "${AWS::StackName}-no-task"
    AlarmDescription: !Sub "ECS service ${AWS::StackName}: No running ECS tasks for more than 5 mins"
    Namespace: AWS/ECS
    MetricName: RunningTaskCount
    Dimensions:
      - Name: ClusterName
        Value: !Ref ecsCluster
      - Name: ServiceName
        Value: !Ref ServiceName
    Statistic: Average
    Period: 60
    EvaluationPeriods: 5
    Threshold: 1
    ComparisonOperator: LessThanThreshold
    TreatMissingData: notBreaching
    ActionsEnabled: true
    AlarmActions: []
    OKActions: []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-13.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yy63wlexkite1sk4qx6.png" width="800" height="486"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The two alarms are available on CloudWatch dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;By monitoring these key metrics, we can proactively address any performance or availability issues, ensuring our Orchard Core CMS runs smoothly and efficiently.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrap-Up
&lt;/h3&gt;

&lt;p&gt;Setting up Orchard Core on ECS with CloudFormation does have its complexities, especially with the different moving parts like API Gateway, load balancers, and domain configurations. However, once we have the infrastructure defined in CloudFormation, it becomes much easier to deploy, update, and manage our AWS environment. This is one of the key benefits of using CloudFormation, as it gives us consistency, repeatability, and automation in our deployments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/03/image-11.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyigsuxlzwavajc9o9ea.png" width="800" height="486"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Orchard Core website is up and accessible via our custom domain!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The heavy lifting is done up front, and after that, it is mostly about making updates to our CloudFormation stack and redeploying without having to worry about manually reconfiguring everything.&lt;/p&gt;

</description>
      <category>cloudcomputingamazon</category>
      <category>cloudformation</category>
      <category>experience</category>
      <category>orchardcore</category>
    </item>
    <item>
      <title>When Pinecone Wasn’t Enough: My Journey to pgvector</title>
      <dc:creator>Goh Chun Lin</dc:creator>
      <pubDate>Mon, 20 Jan 2025 12:21:03 +0000</pubDate>
      <link>https://dev.to/gohchunlin/when-pinecone-wasnt-enough-my-journey-to-pgvector-5ol</link>
      <guid>https://dev.to/gohchunlin/when-pinecone-wasnt-enough-my-journey-to-pgvector-5ol</guid>
      <description>&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/01/image-9.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5zrk9k0mdmtq68m96ns.png" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you work with machine learning or natural language processing, you have probably dealt with storing and searching through vector embeddings.&lt;/p&gt;

&lt;p&gt;When I created &lt;a href="https://dev.to/gohchunlin/from-zero-to-gemini-building-an-ai-powered-game-helper-407g-temp-slug-625432"&gt;the Honkai: Star Rail (HSR) relic recommendation system using Gemini&lt;/a&gt;, I started with Pinecone. &lt;a href="https://www.pinecone.io/" rel="noopener noreferrer"&gt;Pinecone is a managed vector database&lt;/a&gt; that made it easy to index relic descriptions and character data as embeddings. It helped me find the best recommendations based on how similar they were.&lt;/p&gt;

&lt;p&gt;Pinecone worked well, but as the project grew, I wanted more control, something open-source, and a cheaper option. That is when I found pgvector, a tool that adds vector search to PostgreSQL and gives the flexibility of an open-source database.&lt;/p&gt;

&lt;h3&gt;
  
  
  About HSR and Relic Recommendation System
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://hsr.hoyoverse.com/en-us/home" rel="noopener noreferrer"&gt;Honkai: Star Rail (HSR)&lt;/a&gt; is a popular RPG that has captured the attention of players worldwide. One of the key features of the game is its relic system, where players equip their characters with relics like hats, gloves, or boots to boost stats and unlock special abilities. Each relic has unique attributes, and selecting the right sets of relics for a character can make a huge difference in gameplay.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/01/image-8.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjdo3ceanwqs5y7cq8z2.png" width="800" height="445"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;An HSR streamer, Unreal Dreamer, learning the new relic feature. (Image Source: Unreal Dreamer YouTube)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As a casual player, I often found myself overwhelmed by the number of options and the subtle synergies between different relic sets. Finding a good relic combination for each character was time-consuming.&lt;/p&gt;

&lt;p&gt;This is where LLMs like Gemini come into play. With the ability to process and analyse complex data, Gemini can help players make smarter decisions.&lt;/p&gt;

&lt;p&gt;In November 2024, I started a project to develop &lt;a href="https://dev.to/gohchunlin/from-zero-to-gemini-building-an-ai-powered-game-helper-407g-temp-slug-625432"&gt;a Gemini-powered HSR relic recommendation system&lt;/a&gt; which can analyse a player’s current characters to suggest the best options for them. In the project, I have been storing embeddings in Pinecone.&lt;/p&gt;
&lt;h3&gt;
  
  
  Embeddings and Vector Database
&lt;/h3&gt;

&lt;p&gt;An embedding is a way to turn data, like text or images, into a list of numbers called a vector. These vectors make it easier for a computer to compare and understand the relationships between different pieces of data.&lt;/p&gt;

&lt;p&gt;For example, in the HSR relic recommendation system, we use embeddings to represent descriptions of relic sets. The numbers in the vector capture the meaning behind the words, so similar relics and characters have embeddings that are closer together in a mathematical sense.&lt;/p&gt;

&lt;p&gt;This is where vector databases like Pinecone or pgvector come in. Vector databases are designed for performing fast similarity searches on large collections of embeddings. This is essential for building systems that need to recommend, match, or classify data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/pgvector/pgvector" rel="noopener noreferrer"&gt;pgvector is an open-source extension for PostgreSQL&lt;/a&gt; that allows us to store and search for vectors directly in our database. It adds specialised functionality for handling vector data, like embeddings in our HSR project, making it easier to perform similarity searches without needing a separate system.&lt;/p&gt;

&lt;p&gt;Unlike managed services such as Pinecone, pgvector is open source. This means we can use it freely and avoid vendor lock-in, which is a huge advantage for developers.&lt;/p&gt;

&lt;p&gt;Finally, since pgvector runs on PostgreSQL, there is no need for additional managed service fees. This makes it a budget-friendly option, especially for projects that need to scale without breaking the bank.&lt;/p&gt;
&lt;h3&gt;
  
  
  Choosing the Right Model
&lt;/h3&gt;

&lt;p&gt;While the choice of vector database is important, it is not the key factor in achieving great results. The quality of our embeddings is actually determined by the model we choose.&lt;/p&gt;

&lt;p&gt;For my HSR relic recommendation system, when our embeddings were stored in Pinecone, I started by using the multilingual-e5-large model from Microsoft Research offered in Pinecone.&lt;/p&gt;

&lt;p&gt;When I migrated to pgvector, I had the freedom to explore other options. For this migration, I chose &lt;a href="https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2" rel="noopener noreferrer"&gt;the &lt;code&gt;all-MiniLM-L6-v2&lt;/code&gt; model hosted on Hugging Face&lt;/a&gt;, which is a lightweight sentence-transformer designed for semantic similarity tasks. Switching to this model allowed me to quickly generate embeddings for relic sets and integrate them into pgvector, giving me a solid starting point while leaving room for future experimentation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/01/image-11.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5al7zfk5ztx48u52knh.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The all-MiniLM-L6-v2 model hosted on Hugging Face.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Using all-MiniLM-L6-v2 Model
&lt;/h3&gt;

&lt;p&gt;Once we have decided to use the &lt;code&gt;all-MiniLM-L6-v2&lt;/code&gt; model, the next step is to generate vector embeddings for the relic descriptions. This model is from the &lt;code&gt;sentence-transformers&lt;/code&gt; library, so we first need to install the library.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install sentence-transformers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The library offers &lt;code&gt;SentenceTransformer&lt;/code&gt; class to load pre-trained models.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sentence_transformers import SentenceTransformer

model_name = 'all-MiniLM-L6-v2'
model = SentenceTransformer(model_name)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, the model is ready to encode text into embeddings.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;SentenceTransformer&lt;/code&gt; model takes care of tokenisation and other preprocessing steps internally, so we can directly pass text to it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Function to generate embedding for a single text
def generate_embedding(text):
    # No need to tokenise separately, it's done internally
    # No need to average the token embeddings
    embeddings = model.encode(text) 

    return embeddings
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this function, when we call &lt;code&gt;model.encode(text)&lt;/code&gt;, the model processes the text through its transformer layers, generating an embedding that captures its semantic meaning. The output is already optimised for tasks like similarity search.&lt;/p&gt;
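
&lt;p&gt;As a quick sanity check before involving the database, we can compare two embeddings directly with the cosine similarity helper from the &lt;code&gt;sentence-transformers&lt;/code&gt; library. The two sample sentences below are made up for illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sentence_transformers import util

# Two relic-like descriptions that should be semantically close
emb_a = generate_embedding("Increases the healing provided by the wearer.")
emb_b = generate_embedding("Boosts the amount of HP restored to allies.")

# cos_sim returns a 1x1 tensor; values closer to 1 mean more similar
print(util.cos_sim(emb_a, emb_b))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;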

&lt;h3&gt;
  
  
  Setting up the Database
&lt;/h3&gt;

&lt;p&gt;After generating embeddings for each relic set using the &lt;code&gt;all-MiniLM-L6-v2&lt;/code&gt; model, the next step is to store them in the PostgreSQL database with the pgvector extension.&lt;/p&gt;

&lt;p&gt;For developers using AWS, there is good news. &lt;a href="https://aws.amazon.com/about-aws/whats-new/2023/05/amazon-rds-postgresql-pgvector-ml-model-integration/" rel="noopener noreferrer"&gt;In May 2023, AWS announced that Amazon Relational Database Service (RDS) for PostgreSQL would be supporting pgvector&lt;/a&gt;. &lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/11/amazon-rds-for-postgresql-pgvector-080/" rel="noopener noreferrer"&gt;In November 2024, Amazon RDS started to support pgvector 0.8.0&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/01/image-12.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbs0hrmepglw30b7p0yp.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;pgvector is now supported on Amazon RDS for PostgreSQL.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To install the extension, we will run the following command in our database. This will introduce a new datatype called &lt;code&gt;VECTOR&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE EXTENSION vector;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, we can define our table as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE IF NOT EXISTS embeddings (
    id TEXT PRIMARY KEY,
    vector VECTOR(384),
    text TEXT
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Besides the &lt;code&gt;id&lt;/code&gt; column which is for the unique identifier, there are two other columns that are important.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;text&lt;/code&gt; column stores the original text for each relic (the two-piece and four-piece bonus descriptions).&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;vector&lt;/code&gt; column stores the embeddings. The &lt;code&gt;VECTOR(384)&lt;/code&gt; type is used to store embeddings, and &lt;code&gt;384&lt;/code&gt; here refers to the number of dimensions in the vector. In our case, &lt;a href="https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2#all-minilm-l6-v2" rel="noopener noreferrer"&gt;the embeddings generated by the &lt;code&gt;all-MiniLM-L6-v2&lt;/code&gt; model are 384-dimensional&lt;/a&gt;, meaning each embedding will have 384 numbers.&lt;/p&gt;

&lt;p&gt;Here, a dimension refers to one of the “features” that helps describe something. When we talk about vectors and embeddings, each dimension is just one of the many characteristics used to represent a piece of text. These features could be things like the type of words used, their relationships, and even the overall meaning of the text.&lt;/p&gt;
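
&lt;p&gt;We can verify this dimensionality directly in Python before creating the table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;embedding = generate_embedding("Some relic description")
print(embedding.shape)   # (384,) for all-MiniLM-L6-v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;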

&lt;h3&gt;
  
  
  Updating the Database
&lt;/h3&gt;

&lt;p&gt;After the table is created, we can proceed to create &lt;code&gt;INSERT INTO&lt;/code&gt; SQL statements to insert the embeddings and their associated text into the database.&lt;/p&gt;

&lt;p&gt;In this step, I load the relic information from a JSON file and process it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json

# Load your relic set data from a JSON file
with open('/content/hsr-relics.json', 'r') as f:
    relic_data = json.load(f)

# Prepare data
relic_info_data = [
    {"id": relic['name'], "text": relic['two_piece'] + " " + relic['four_piece']} # Combine descriptions
    for relic in relic_data
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;relic_info_data&lt;/code&gt; will then be passed to the following function to generate the &lt;code&gt;INSERT INTO&lt;/code&gt; statements.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Function to generate INSERT INTO statements with vectors
def generate_insert_statements(data):
    # Initialise list to store SQL statements
    insert_statements = []

    for record in data:
        # Extracting text and id from the record
        id = record.get('id')
        text = record.get('text')

        # Generate the embedding for the text
        embedding = generate_embedding(text)

        # Convert the embedding to a list
        embedding_list = embedding.tolist()

        # Create the SQL INSERT INTO statement
        sql_statement = f"""
        INSERT INTO embeddings (id, vector, text)
        VALUES (
          '{id.replace("'", "''")}', 
          ARRAY{embedding_list}, 
          '{text.replace("'", "''")}')
        ON CONFLICT (id) DO UPDATE
        SET vector = EXCLUDED.vector, text = EXCLUDED.text;
        """

        # Append the statement to the list
        insert_statements.append(sql_statement)

    return insert_statements
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
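
&lt;p&gt;The generated statements can then be executed against PostgreSQL. Below is a minimal sketch using &lt;code&gt;psycopg2&lt;/code&gt;, which is not part of the original project, with a placeholder connection string; in production code, parameterised queries would also be safer than string interpolation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import psycopg2

# Hypothetical connection details
conn = psycopg2.connect("postgresql://user:password@localhost:5432/hsr")

# Run every INSERT in a single transaction
with conn:
    with conn.cursor() as cur:
        for statement in generate_insert_statements(relic_info_data):
            cur.execute(statement)

conn.close()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;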



&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/01/image-1.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk35o5s4gh7f50bj2ajsg.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The embeddings of the relic sets are successfully inserted to the database.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  How It All Fits Together: Query the Database
&lt;/h3&gt;

&lt;p&gt;Once we have stored the vector embeddings of all the relic sets in our PostgreSQL database, the next step is to find the relic sets that are most similar to a given character’s relic needs.&lt;/p&gt;

&lt;p&gt;Just like what we have done for storing relic set embeddings, we need to generate an embedding for the query describing the character’s relic needs. This is done by passing the query through the model as demonstrated in the following code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def query_similar_embeddings(query_text):
    query_embedding = generate_embedding(query_text)

    return query_embedding.tolist()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The generated embedding is an array of 384 numbers. We simply use this array in our SQL query below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT id, text, vector &amp;lt;=&amp;gt; '[&amp;lt;embedding here&amp;gt;]' AS distance
FROM embeddings
ORDER BY distance
LIMIT 3;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key part of the query is the &lt;code&gt;&amp;lt;=&amp;gt;&lt;/code&gt; operator. This operator calculates the “distance” between two vectors based on cosine similarity. In our case, it measures how similar the query embedding is to each stored embedding. The smaller the distance, the more similar the embeddings are.&lt;/p&gt;

&lt;p&gt;We use &lt;code&gt;LIMIT 3&lt;/code&gt; to get the top 3 most similar relic sets.&lt;/p&gt;
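
&lt;p&gt;Cosine distance is not the only metric pgvector offers. If a different metric suited our embeddings better, we could swap the operator in the same query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- &amp;lt;-&amp;gt;  Euclidean (L2) distance
-- &amp;lt;#&amp;gt;  negative inner product
-- &amp;lt;=&amp;gt;  cosine distance (used above)
SELECT id, text, vector &amp;lt;-&amp;gt; '[&amp;lt;embedding here&amp;gt;]' AS distance
FROM embeddings
ORDER BY distance
LIMIT 3;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;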

&lt;h3&gt;
  
  
  Test Case: Finding Relic Sets for Gallagher
&lt;/h3&gt;

&lt;p&gt;Gallagher is a Fire and Abundance character in HSR. He is a sustain unit that can heal allies by inflicting a debuff on the enemy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/01/image-7.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fag3k9dzawyre0tznva.png" width="800" height="452"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;According to the official announcement, Gallagher is a healer. (Image Source: Honkai: Star Rail YouTube)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The following screenshot shows the top 3 relic sets most closely related to an HSR character called Gallagher, using the query “Suggest the best relic sets for this character: Gallagher is a Fire and Abundance character in Honkai: Star Rail. He can heal allies.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/01/image-4.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feodoxqtw7anhshwxfwmw.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The returned top 3 relic sets are indeed recommended for Gallagher.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One of the returned relic sets is called the “Thief of Shooting Meteor”. It is the official recommended relic set in-game, as shown in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2025/01/image-10.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ny8ddhi1fgnrwyzjrqg.png" width="800" height="368"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Gallagher’s official recommended relic set.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Future Work
&lt;/h3&gt;

&lt;p&gt;In our project, we will not be implementing indexing because currently in HSR, there are only a small number of relic sets. Without an index, PostgreSQL will still perform vector similarity searches efficiently because the dataset is small enough that searching through it directly will not take much time. For small-scale apps like ours, querying the vector data directly is both simple and fast.&lt;/p&gt;

&lt;p&gt;However, when our dataset grows larger in the future, it is a good idea to explore indexing options, such as &lt;a href="https://skyzh.github.io/write-you-a-vector-db/cpp-05-ivfflat.html" rel="noopener noreferrer"&gt;the ivfflat index&lt;/a&gt;, to speed up similarity searches.&lt;/p&gt;
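
&lt;p&gt;For reference, creating such an index in pgvector is a single statement; the &lt;code&gt;lists&lt;/code&gt; value below is only an illustrative starting point that should be tuned to the dataset size:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Index for cosine distance queries (the &amp;lt;=&amp;gt; operator)
CREATE INDEX ON embeddings
USING ivfflat (vector vector_cosine_ops)
WITH (lists = 100);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;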

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/blogs/database/building-ai-powered-search-in-postgresql-using-amazon-sagemaker-and-pgvector/" rel="noopener noreferrer"&gt;Building AI-powered search in PostgreSQL using Amazon SageMaker and pgvector&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://skyzh.github.io/write-you-a-vector-db/cpp-05-ivfflat.html" rel="noopener noreferrer"&gt;IVFFlat (Inverted File Flat) Index&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cloudcomputingamazon</category>
      <category>experience</category>
      <category>pinecone</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Configure Portable Object: Localisation in .NET 8 Web API</title>
      <dc:creator>Goh Chun Lin</dc:creator>
      <pubDate>Tue, 17 Dec 2024 10:11:36 +0000</pubDate>
      <link>https://dev.to/gohchunlin/configure-portable-object-localisation-in-net-8-web-api-h2h</link>
      <guid>https://dev.to/gohchunlin/configure-portable-object-localisation-in-net-8-web-api-h2h</guid>
      <description>&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-49.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjk2a5dk482dzrvha37aa.png" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Localisation is an important feature when building apps that cater to users from different countries, allowing them to interact with our app in their native language. In this article, we will walk you through how to set up and configure Portable Object (PO) Localisation in an ASP.NET Core Web API project.&lt;/p&gt;

&lt;p&gt;Localisation is about adapting the app for a specific culture or language by translating text and customising resources. It involves translating user-facing text and content into the target language.&lt;/p&gt;

&lt;p&gt;While .NET localisation normally uses resource files (&lt;code&gt;.resx&lt;/code&gt;) to store localised texts for different cultures, Portable Object files (&lt;code&gt;.po&lt;/code&gt;) are another popular choice, especially in apps that use open-source tools or frameworks.&lt;/p&gt;

&lt;h3&gt;
  
  
  About Portable Object (PO)
&lt;/h3&gt;

&lt;p&gt;PO files are a standard format used for storing localised text. They are part of &lt;a href="https://www.gnu.org/software/gettext/" rel="noopener noreferrer"&gt;the &lt;code&gt;gettext&lt;/code&gt; localisation framework&lt;/a&gt;, which is widely used across different programming ecosystems.&lt;/p&gt;

&lt;p&gt;A PO file contains translations in the form of key-value pairs, where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key: The original text in the source language.&lt;/li&gt;
&lt;li&gt;Value: The translated text in the target language.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because PO files are simple, human-readable text files, they are easily accessible and editable by translators. This flexibility makes PO files a popular choice for many open-source projects and apps across various platforms.&lt;/p&gt;

&lt;p&gt;You might wonder why we should use PO files instead of the traditional .resx files for localisation. Here are &lt;a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/portable-object-localization?view=aspnetcore-9.0" rel="noopener noreferrer"&gt;some advantages of using PO files instead of .resx files&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unlike &lt;code&gt;.resx&lt;/code&gt; files, PO files have built-in support for plural forms. This makes it much easier to handle situations where the translation changes based on the quantity, like “1 item” vs. “2 items” (see the example after this list);&lt;/li&gt;
&lt;li&gt;While &lt;code&gt;.resx&lt;/code&gt; files require compilation, PO files are plain text files. Hence, we do not need any special tooling or complex build steps to use PO files.&lt;/li&gt;
&lt;li&gt;PO files work great with collaborative translation tools. For those who are working with crowdsourcing translations, they will find that PO files are much easier to manage in these settings.&lt;/li&gt;
&lt;/ul&gt;
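
&lt;p&gt;To make the plural-forms point concrete, here is what a plural entry looks like in the &lt;code&gt;gettext&lt;/code&gt; PO format. The key and translations are made-up examples, not taken from this project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;msgid "There is 1 item"
msgid_plural "There are {0} items"
msgstr[0] "There is 1 item"
msgstr[1] "There are {0} items"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;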

&lt;h3&gt;
  
  
  SHOW ME THE CODE!
&lt;/h3&gt;

&lt;p&gt;The complete source code of this project can be found at &lt;a href="https://github.com/goh-chunlin/Experiment.PO" rel="noopener noreferrer"&gt;https://github.com/goh-chunlin/Experiment.PO&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Setup
&lt;/h3&gt;

&lt;p&gt;Let’s begin by creating a simple ASP.NET Web API project. We can start by generating a basic template with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet new webapi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will set up a minimal API with a weather forecast endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-41.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjflm80helz6gvny6n2ta.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The default /weatherforecast endpoint generated by .NET Web API boilerplate.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The default endpoint in the boilerplate returns a JSON object that includes a &lt;code&gt;summary&lt;/code&gt; field. This field describes the weather using terms like freezing, bracing, warm, or hot. Here’s the array of possible summary values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var summaries = new[]
{
    "Freezing", "Bracing", "Chilly", "Cool", 
    "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, currently, it only supports English. To extend support for multiple languages, we will introduce localisation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prepare PO Files
&lt;/h3&gt;

&lt;p&gt;Let’s start by adding a translation for the weather summary in Chinese. Below is a sample PO file that contains the Chinese translation for the weather summaries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#: Weather summary (Chinese)
msgid "weather_Freezing"
msgstr "寒冷"

msgid "weather_Bracing"
msgstr "冷冽"

msgid "weather_Chilly"
msgstr "凉爽"

msgid "weather_Cool"
msgstr "清爽"

msgid "weather_Mild"
msgstr "温和"

msgid "weather_Warm"
msgstr "暖和"

msgid "weather_Balmy"
msgstr "温暖"

msgid "weather_Hot"
msgstr "炎热"

msgid "weather_Sweltering"
msgstr "闷热"

msgid "weather_Scorching"
msgstr "灼热"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In most cases, PO file names are tied to locales, as they represent translations for specific languages and regions. The naming convention typically includes both the language and the region, so the system can easily identify and use the correct file. For example, the PO file above should be named &lt;code&gt;zh-CN.po&lt;/code&gt;, which represents the Chinese translation for the China region.&lt;/p&gt;

&lt;p&gt;In some cases, if our app supports a language without being region-specific, we could have a PO file named only with the language, such as &lt;code&gt;ms.po&lt;/code&gt; for Malay. This serves as a fallback for all Malay speakers, regardless of their region.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-42.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fip545vt6fkcks9mha3dj.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;We have prepared three Malay PO files: one for Malaysia (ms-MY.po), one for Singapore (ms-SG.po), and one fallback file (ms.po) for all Malay speakers, regardless of region.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After that, since our PO files are placed in the Localisation folder, please do not forget to include them in the .csproj file, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Project Sdk="Microsoft.NET.Sdk.Web"&amp;gt;

  ...

  &amp;lt;ItemGroup&amp;gt;
    &amp;lt;Folder Include="Localisation\" /&amp;gt;
    &amp;lt;Content Include="Localisation\**"&amp;gt;
      &amp;lt;CopyToOutputDirectory&amp;gt;PreserveNewest&amp;lt;/CopyToOutputDirectory&amp;gt;
    &amp;lt;/Content&amp;gt;
  &amp;lt;/ItemGroup&amp;gt;

&amp;lt;/Project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adding this &lt;code&gt;&amp;lt;ItemGroup&amp;gt;&lt;/code&gt; ensures that the localisation files from the Localisation folder are included in our app output. This helps the application find and use the proper localisation resources when running.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Localisation Option in .NET
&lt;/h3&gt;

&lt;p&gt;In an ASP.NET Core Web API project, we have to install a NuGet library from &lt;a href="https://docs.orchardcore.net/en/main/" rel="noopener noreferrer"&gt;Orchard Core&lt;/a&gt; called &lt;a href="https://www.nuget.org/packages/OrchardCore.Localization.Core" rel="noopener noreferrer"&gt;OrchardCore.Localization.Core&lt;/a&gt; (Version 2.1.3).&lt;/p&gt;

&lt;p&gt;Once the package is installed, we need to tell the application where to find the PO files. This is done by configuring the localisation options in the &lt;code&gt;Program.cs&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.Services.AddMemoryCache();
builder.Services.AddPortableObjectLocalization(options =&amp;gt; 
    options.ResourcesPath = "Localisation");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;AddMemoryCache&lt;/code&gt; method is necessary here because &lt;code&gt;LocalizationManager&lt;/code&gt; of Orchard Core uses the &lt;code&gt;IMemoryCache&lt;/code&gt; service. This caching mechanism helps avoid repeatedly parsing and loading the PO files, improving performance by keeping the localised resources in memory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Supported Cultures and Default Culture
&lt;/h3&gt;

&lt;p&gt;Now, we need to configure how the application will select the appropriate culture for incoming requests.&lt;/p&gt;

&lt;p&gt;In .NET, we need to specify which cultures our app supports. While .NET is capable of supporting multiple cultures out of the box, it still needs to know which specific cultures we are willing to support. By defining only the cultures we actually support, we can avoid unnecessary overhead and ensure that our app is optimised.&lt;/p&gt;

&lt;p&gt;We have two separate things to manage when making an app available in different languages and regions in .NET:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SupportedCultures&lt;/strong&gt;: This is about how the app displays numbers, dates, and currencies. For example, how a date is shown (like MM/dd/yyyy in the US);&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SupportedUICultures&lt;/strong&gt;: This is where we specify the languages our app supports for displaying text (the content inside the PO files).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To keep things consistent and handle both text translations and regional formatting properly, it is a good practice to configure both &lt;code&gt;SupportedCultures&lt;/code&gt; and &lt;code&gt;SupportedUICultures&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We also need to set up the &lt;code&gt;DefaultRequestCulture&lt;/code&gt;. It is the fallback culture that our app uses when the request does not carry any explicit culture information.&lt;/p&gt;

&lt;p&gt;The following code shows how we configure all these. To keep our demo simple, we assume the locale that the user wants is passed via a query string.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.Services.Configure&amp;lt;RequestLocalizationOptions&amp;gt;(options =&amp;gt;
{
    var supportedCultures = LocaleConstants.SupportedAppLocale
        .Select(cul =&amp;gt; new CultureInfo(cul))
        .ToArray();

    options.DefaultRequestCulture = new RequestCulture(
        culture: "en", uiCulture: "en");
    options.SupportedCultures = supportedCultures;
    options.SupportedUICultures = supportedCultures;
    options.AddInitialRequestCultureProvider(
        new CustomRequestCultureProvider(async httpContext =&amp;gt;
        {
            var currentCulture = CultureInfo.InvariantCulture.Name;

            if (httpContext.Request.Query.ContainsKey("locale"))
            {
                currentCulture =
                    httpContext.Request.Query["locale"].ToString();
            }

            return await Task.FromResult(
                new ProviderCultureResult(currentCulture));
        })
    );
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need to add the &lt;code&gt;RequestLocalizationMiddleware&lt;/code&gt; in &lt;code&gt;Program.cs&lt;/code&gt; to automatically set culture information for requests based on information provided by the client.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.UseRequestLocalization();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After setting up the &lt;code&gt;RequestLocalizationMiddleware&lt;/code&gt;, we can now move on to localising the API endpoint by using &lt;code&gt;IStringLocalizer&lt;/code&gt; to retrieve translated text based on the culture information set for the current request.&lt;/p&gt;

&lt;h3&gt;
  
  
  About IStringLocalizer
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;IStringLocalizer&lt;/code&gt; is &lt;a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/localization/make-content-localizable?view=aspnetcore-9.0#istringlocalizer" rel="noopener noreferrer"&gt;a service in ASP.NET Core used for retrieving localised resources&lt;/a&gt;, such as strings, based on the current culture of our app. In essence, &lt;code&gt;IStringLocalizer&lt;/code&gt; acts as a bridge between our code and the language resources (like PO files) that contain translations. If the localised value of a key is not found, then the indexer key is returned.&lt;/p&gt;
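
&lt;p&gt;To make that fallback behaviour concrete, the indexer returns a &lt;code&gt;LocalizedString&lt;/code&gt; whose &lt;code&gt;ResourceNotFound&lt;/code&gt; flag tells us whether a translation was actually found. Below is a minimal sketch, assuming an injected localizer named &lt;code&gt;stringLocalizer&lt;/code&gt; and an illustrative key &lt;code&gt;weather_Warm&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Minimal sketch: inspecting the result of an IStringLocalizer lookup.
// "weather_Warm" is a hypothetical key used for illustration.
LocalizedString summary = stringLocalizer["weather_Warm"];

if (summary.ResourceNotFound)
{
    // No translation exists for the current culture, so
    // summary.Value falls back to the key itself ("weather_Warm").
}

Console.WriteLine(summary.Value);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
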

&lt;p&gt;We first need to inject &lt;code&gt;IStringLocalizer&lt;/code&gt; into our API controllers or any services where we want to retrieve localised text.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.MapGet("/weatherforecast", (IStringLocalizer&amp;lt;WeatherForecast&amp;gt; stringLocalizer) =&amp;gt;
{
    var forecast = Enumerable.Range(1, 5).Select(index =&amp;gt;
        new WeatherForecast
        (
            DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
            Random.Shared.Next(-20, 55),
            stringLocalizer["weather_" + summaries[Random.Shared.Next(summaries.Length)]]
        ))
        .ToArray();
    return forecast;
})
.WithName("GetWeatherForecast")
.WithOpenApi();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The reason we use &lt;code&gt;IStringLocalizer&amp;lt;WeatherForecast&amp;gt;&lt;/code&gt; instead of just &lt;code&gt;IStringLocalizer&lt;/code&gt; is that we rely on the Orchard Core package to handle the PO files. According to &lt;a href="https://github.com/sebastienros" rel="noopener noreferrer"&gt;Sébastien Ros&lt;/a&gt;, the Orchard Core maintainer, &lt;a href="https://github.com/OrchardCMS/OrchardCore/issues/1232#issuecomment-345356106" rel="noopener noreferrer"&gt;we cannot resolve &lt;code&gt;IStringLocalizer&lt;/code&gt;; we need &lt;code&gt;IStringLocalizer&amp;lt;T&amp;gt;&lt;/code&gt;&lt;/a&gt;. Using &lt;code&gt;IStringLocalizer&amp;lt;T&amp;gt;&lt;/code&gt; instead of plain &lt;code&gt;IStringLocalizer&lt;/code&gt; is also related to how localisation is typically scoped in .NET applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running on Localhost
&lt;/h3&gt;

&lt;p&gt;Now, if we run the project using &lt;code&gt;dotnet run&lt;/code&gt;, the Web API should compile successfully. Once the API is running on &lt;code&gt;localhost&lt;/code&gt;, visiting the endpoint with &lt;code&gt;zh-CN&lt;/code&gt; as the locale should return the weather summary in Chinese, as shown in the screenshot below.&lt;/p&gt;
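
&lt;p&gt;For example, assuming the API listens on port 5000 (the actual port depends on your launch profile), a request like the one below should return the localised summaries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl "http://localhost:5000/weatherforecast?locale=zh-CN"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
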

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-43.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2nzwtgvdmx0fcqnko1z.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The summary is getting the translated text from zh-CN.po now.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Dockerisation
&lt;/h3&gt;

&lt;p&gt;Now that the Web API is confirmed to be working, we can proceed to dockerise it.&lt;/p&gt;

&lt;p&gt;We will first create a Dockerfile as shown below to define the environment our Web API will run in. Then we will build the Docker image, using the Dockerfile. After building the image, we will run it in a container, making our Web API available for use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Build Container
FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine AS builder
WORKDIR /app

# Copy the project file and restore any dependencies (use .csproj for the project name)
COPY *.csproj ./
RUN dotnet restore

# Copy the rest of the application code
COPY . .

# Publish the application
RUN dotnet publish -c Release -o out

## Runtime Container
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS runtime

ENV ASPNETCORE_URLS=http://*:80

WORKDIR /app
COPY --from=builder /app/out ./

# Expose the port your application will run on
EXPOSE 80

ENTRYPOINT ["dotnet", "Experiment.PO.dll"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As shown in the Dockerfile, we are using .NET Alpine images. Alpine is a lightweight Linux distribution often used in Docker images because it is much smaller than other base images. It is a good choice when we want a minimal image with a smaller attack surface, fewer security vulnerabilities, and faster image pulls.&lt;/p&gt;

&lt;h3&gt;
  
  
  Globalisation Invariant Mode in .NET
&lt;/h3&gt;

&lt;p&gt;When we run our Web API as a Docker container on our local machine, we will soon realise that the container has stopped because the Web API inside it crashed with an exception called System.Globalization.CultureNotFoundException.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-44.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg0ly4xk52bsfmzyit09.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Our Web API crashes due to System.Globalization.CultureNotFoundException, as shown in docker logs.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As pointed out in the error message, only the invariant culture is supported in globalization-invariant mode.&lt;/p&gt;

&lt;p&gt;The globalization-invariant mode &lt;a href="https://devblogs.microsoft.com/dotnet/announcing-net-core-2-0/#globalization-invariant-mode" rel="noopener noreferrer"&gt;was introduced in .NET Core 2.0 in 2017&lt;/a&gt;. It allows our apps to run without the full globalization data, which can significantly reduce the runtime size and improve the performance of our application, especially in environments like Docker or microservices.&lt;/p&gt;

&lt;p&gt;In globalization-invariant mode, only the invariant culture is used. This culture is based on English (United States) but it is not specifically tied to en-US. It is just a neutral culture used to ensure consistent behaviour across environments.&lt;/p&gt;

&lt;p&gt;Before .NET 6, globalization-invariant mode allowed us to create any custom culture, as long as its name conformed to the BCP-47 standard. &lt;a href="https://en.wikipedia.org/wiki/IETF_language_tag" rel="noopener noreferrer"&gt;BCP-47 stands for Best Current Practice 47&lt;/a&gt;, and it defines a way to represent language tags that include the language, region, and other relevant cultural data. A BCP-47 language tag typically follows this pattern: &lt;code&gt;language-region&lt;/code&gt;, for example zh-CN and zh-Hans.&lt;/p&gt;

&lt;p&gt;Thus, &lt;a href="https://learn.microsoft.com/en-us/dotnet/core/compatibility/globalization/6.0/culture-creation-invariant-mode#old-behavior" rel="noopener noreferrer"&gt;before .NET 6, if an app creates a culture that is not the invariant culture, the operation succeeds&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, &lt;a href="https://learn.microsoft.com/en-us/dotnet/core/compatibility/globalization/6.0/culture-creation-invariant-mode" rel="noopener noreferrer"&gt;starting from .NET 6, an exception is thrown if we create any culture other than the invariant culture in globalization-invariant mode&lt;/a&gt;. This explains why our app throws System.Globalization.CultureNotFoundException.&lt;/p&gt;
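
&lt;p&gt;To see the change concretely, here is a minimal sketch of the behaviour described in the linked breaking-change note.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System.Globalization;

// With InvariantGlobalization enabled on .NET 6 or later,
// this throws System.Globalization.CultureNotFoundException:
var culture = new CultureInfo("zh-CN");

// Before .NET 6, the same call succeeded in invariant mode,
// silently returning a culture backed by invariant data.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
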

&lt;p&gt;We thus need to disable the globalization-invariant mode in the &lt;code&gt;.csproj&lt;/code&gt; file, as shown below, so that we can use the full globalization data, which will allow .NET to properly handle localisation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Project Sdk="Microsoft.NET.Sdk.Web"&amp;gt;

  &amp;lt;PropertyGroup&amp;gt;
    ...
    &amp;lt;InvariantGlobalization&amp;gt;false&amp;lt;/InvariantGlobalization&amp;gt;
  &amp;lt;/PropertyGroup&amp;gt;

  ...

&amp;lt;/Project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
Missing ICU in Alpine
&lt;/h3&gt;

&lt;p&gt;Since Alpine is a very minimal Linux distribution, it does not include many libraries, tools, or system components that are present in more standard distributions like Ubuntu.&lt;/p&gt;

&lt;p&gt;In terms of globalisation, Alpine does not come pre-installed with ICU (International Components for Unicode), which .NET uses for localisation in our case.&lt;/p&gt;

&lt;p&gt;Hence, after we turn off the globalization-invariant mode, we encounter another issue: our Web API is unable to locate a valid ICU package.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-45.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyk87gxqahrjzpk7y8la.png" width="800" height="228"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Our Web API crashes due to the missing ICU package, as shown in docker logs.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As suggested in the error message, we need to install the ICU libraries (icu-libs).&lt;/p&gt;

&lt;p&gt;On Alpine, &lt;code&gt;icu-libs&lt;/code&gt; provides the native ICU libraries that allow our Web API to handle globalisation. However, the ICU libraries rely on culture-specific data to function correctly. This data is provided by &lt;code&gt;icu-data-full&lt;/code&gt;, which includes the full set of localisation and globalisation data for different languages and regions. Therefore, we need to install both &lt;code&gt;icu-libs&lt;/code&gt; and &lt;code&gt;icu-data-full&lt;/code&gt;, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...

## Runtime Container
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS runtime

# Install cultures
RUN apk add --no-cache \
   icu-data-full \
   icu-libs

...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installing the ICU libraries, our weather forecast Web API container should run successfully. When we visit the endpoint, it retrieves the correct values from the PO files, as shown in the following screenshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-48.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sx7rahxe4p5adog0i5k.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Yay, we can get the translated texts now!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One last thing I would like to share is that, as shown in the screenshot above, since we do not have a PO file for ms-BN (Malay for Brunei), the fallback mechanism automatically uses the &lt;code&gt;ms.po&lt;/code&gt; file instead.&lt;/p&gt;
&lt;h3&gt;
  
  
  Additional Configuration
&lt;/h3&gt;

&lt;p&gt;If you still cannot get the translation with PO files to work, perhaps you can try some of the suggestions from my teammates below.&lt;/p&gt;

&lt;p&gt;Firstly, you may need to set up &lt;code&gt;AppLocalIcu&lt;/code&gt; in the &lt;code&gt;.csproj&lt;/code&gt; file. This setting specifies whether the app should use a local copy of ICU or rely on the system-installed ICU libraries, which is particularly useful in containerised environments like Docker.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Project Sdk="Microsoft.NET.Sdk.Web"&amp;gt;

  &amp;lt;PropertyGroup&amp;gt;
    ...
    &amp;lt;AppLocalIcu&amp;gt;true&amp;lt;/AppLocalIcu&amp;gt;
  &amp;lt;/PropertyGroup&amp;gt;

&amp;lt;/Project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Secondly, even though we have installed &lt;code&gt;icu-libs&lt;/code&gt; and &lt;code&gt;icu-data-full&lt;/code&gt; in our Alpine container, some .NET apps rely on data beyond just having the libraries available. In such cases, we need to turn on the &lt;code&gt;IncludeNativeLibrariesForSelfExtract&lt;/code&gt; setting as well in &lt;code&gt;.csproj&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Project Sdk="Microsoft.NET.Sdk.Web"&amp;gt;

  &amp;lt;PropertyGroup&amp;gt;
    ...
    &amp;lt;IncludeNativeLibrariesForSelfExtract&amp;gt;true&amp;lt;/IncludeNativeLibrariesForSelfExtract&amp;gt;
  &amp;lt;/PropertyGroup&amp;gt;

&amp;lt;/Project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thirdly, please check if you need to configure &lt;a href="https://learn.microsoft.com/en-us/dotnet/core/runtime-config/globalization#predefined-cultures" rel="noopener noreferrer"&gt;DOTNET_SYSTEM_GLOBALIZATION_PREDEFINED_CULTURES_ONLY&lt;/a&gt; as well. However, please take note that this setting only makes sense when globalization-invariant mode is enabled.&lt;/p&gt;
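
&lt;p&gt;If you do need it, this setting maps to the &lt;code&gt;PredefinedCulturesOnly&lt;/code&gt; MSBuild property. A minimal sketch in the &lt;code&gt;.csproj&lt;/code&gt; could look like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Project Sdk="Microsoft.NET.Sdk.Web"&amp;gt;

  &amp;lt;PropertyGroup&amp;gt;
    ...
    &amp;lt;InvariantGlobalization&amp;gt;true&amp;lt;/InvariantGlobalization&amp;gt;
    &amp;lt;!-- Allow creating cultures beyond the predefined ones in invariant mode --&amp;gt;
    &amp;lt;PredefinedCulturesOnly&amp;gt;false&amp;lt;/PredefinedCulturesOnly&amp;gt;
  &amp;lt;/PropertyGroup&amp;gt;

&amp;lt;/Project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
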

&lt;p&gt;Finally, you may also need to include the runtime ICU libraries with &lt;a href="https://www.nuget.org/packages/Microsoft.ICU.ICU4C.Runtime" rel="noopener noreferrer"&gt;the Microsoft.ICU.ICU4C.Runtime NuGet package&lt;/a&gt; (Version 72.1.0.3), enabling your app to use culture-specific data for globalisation features.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/portable-object-localization?view=aspnetcore-9.0" rel="noopener noreferrer"&gt;Configure portable object localization in ASP.NET Core&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/localization/make-content-localizable?view=aspnetcore-9.0" rel="noopener noreferrer"&gt;Make an ASP.NET Core app’s content localizable&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.builder.applicationbuilderextensions.userequestlocalization?view=aspnetcore-9.0" rel="noopener noreferrer"&gt;ApplicationBuilderExtensions.UseRequestLocalization Method&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/localization/select-language-culture?view=aspnetcore-9.0" rel="noopener noreferrer"&gt;Implement a strategy to select the language/culture for each request in a localized ASP.NET Core app&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/@jaydeepvpatil225/containerization-of-the-net-core-7-web-api-using-docker-3abdd543f78a" rel="noopener noreferrer"&gt;Containerization of the .NET Core 7 Web API using Docker&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://mcr.microsoft.com/en-us/artifact/mar/dotnet/sdk/tags" rel="noopener noreferrer"&gt;Microsoft Artifact Registry – .NET SDK&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://mcr.microsoft.com/en-us/artifact/mar/dotnet/aspnet/tags" rel="noopener noreferrer"&gt;Microsoft Artifact Registry – ASP .NET Runtime&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://devblogs.microsoft.com/dotnet/announcing-net-core-2-0/#globalization-invariant-mode" rel="noopener noreferrer"&gt;Announcing .NET Core 2.0 – Globalization Invariant Mode&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://learn.microsoft.com/en-us/dotnet/core/compatibility/globalization/6.0/culture-creation-invariant-mode" rel="noopener noreferrer"&gt;Culture creation and case mapping in globalization-invariant mode&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://stackoverflow.com/questions/71045784/running-net-6-project-in-docker-throws-globalization-culturenotfoundexception" rel="noopener noreferrer"&gt;[StackOverflow] Running .NET 6 project in Docker throws Globalization.CultureNotFoundException&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aspnet</category>
      <category>c</category>
      <category>experience</category>
      <category>orchardcore</category>
    </item>
    <item>
      <title>From Zero to Gemini: Building an AI-Powered Game Helper</title>
      <dc:creator>Goh Chun Lin</dc:creator>
      <pubDate>Sun, 08 Dec 2024 03:03:32 +0000</pubDate>
      <link>https://dev.to/gohchunlin/from-zero-to-gemini-building-an-ai-powered-game-helper-56ci</link>
      <guid>https://dev.to/gohchunlin/from-zero-to-gemini-building-an-ai-powered-game-helper-56ci</guid>
      <description>&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-40.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F77wyqchm9nivnjsqvg5e.png" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On a chilly November morning, I attended &lt;a href="https://gdg.community.dev/events/details/google-gdg-singapore-presents-devfest-singapore-2024-workshop/cohost-gdg-singapore" rel="noopener noreferrer"&gt;the Google DevFest 2024 in Singapore&lt;/a&gt;. Together with my friends, we attended a workshop titled &lt;strong&gt;“Gemini Masterclass: How to Unlock Its Power with Prompting, Functions, and Agents.”&lt;/strong&gt; The session was led by two incredible speakers, &lt;a href="https://www.linkedin.com/in/martinandrews/" rel="noopener noreferrer"&gt;Martin Andrews&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/samwitteveen/" rel="noopener noreferrer"&gt;Sam Witteveen&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Martin holds a PhD in Machine Learning and has been an Open Source advocate since 1999. Sam is a Google Developer Expert in Machine Learning. Both of them are also &lt;a href="https://www.meetup.com/machine-learning-singapore/members/?op=leaders" rel="noopener noreferrer"&gt;organisers of the Machine Learning Singapore Meetup group&lt;/a&gt;. Together, they delivered an engaging and hands-on workshop about Gemini, the advanced LLM from Google.&lt;/p&gt;

&lt;p&gt;Thanks to their engaging Gemini Masterclass, I have taken my first steps into the world of LLMs. This blog post captures what I learned and my journey into the fascinating world of &lt;a href="https://gemini.google.com/" rel="noopener noreferrer"&gt;Gemini&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-4.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmwal1yywu4brhyot75hg.png" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Martin Andrews presenting in Google DevFest 2024 in Singapore.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  About LLM and Gemini
&lt;/h3&gt;

&lt;p&gt;LLM stands for Large Language Model. To most people, an LLM is like a smart friend who can answer almost all our questions with responses that are often accurate and helpful.&lt;/p&gt;

&lt;p&gt;As an LLM, Gemini is trained on a large amount of text data and can perform a wide range of tasks: answering questions, writing stories, summarising long documents, or even helping to debug code. What makes LLMs special is their ability to “understand” and generate language in a way that feels natural to us.&lt;/p&gt;

&lt;p&gt;Many of my developer friends have started using Gemini as a coding assistant in their IDEs. While it is good at that, Gemini is much more than just a coding tool.&lt;/p&gt;

&lt;p&gt;Gemini is designed to not only respond to prompts but also act as an assistant with an extra set of tools. To make the most of Gemini, it is important to understand how it works and what it can (and cannot) do. With the knowledge gained from the DevFest workshop, I decided to explore how Gemini could assist with optimising relic choices in a game called Honkai: Star Rail.&lt;/p&gt;
&lt;h3&gt;
  
  
  Honkai: Star Rail and Gemini for Its Relic Recommendations
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://hsr.hoyoverse.com/en-us/home" rel="noopener noreferrer"&gt;Honkai: Star Rail (HSR)&lt;/a&gt; is a popular RPG that has captured the attention of players worldwide. One of the key features of the game is its relic system, where players equip their characters with relics like hats, gloves, or boots to boost stats and unlock special abilities. Each relic has unique attributes, and selecting the right sets of relics for a character can make a huge difference in gameplay.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-1.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8cv66o33xz3gkqcxhwb.png" width="800" height="500"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;An HSR streamer, MurderofBirds, browsing through thousands of relics. (Image Source: MurderofBirds Twitch)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As a casual player, I often found myself overwhelmed by the number of options and the subtle synergies between different relic sets. Finding a good relic combination for each character was time-consuming.&lt;/p&gt;

&lt;p&gt;This is where LLMs like Gemini come into play. With the ability to process and analyse complex data, Gemini can help players make smarter decisions.&lt;/p&gt;

&lt;p&gt;In this blog post, I will briefly show how this Gemini-powered relic recommendation system analyses a player’s current characters to suggest the best options for them. The system also explains the logic behind its recommendations, helping us understand why certain relics are ideal.&lt;/p&gt;
&lt;h3&gt;
  
  
Setting Up the Project
&lt;/h3&gt;

&lt;p&gt;To make my project code available to everyone, I used &lt;a href="https://colab.google/" rel="noopener noreferrer"&gt;Google Colab, a hosted Jupyter Notebook service&lt;/a&gt; that requires no setup to use and provides free access to computing resources, including GPUs and TPUs. You can access my code by clicking on the button below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://colab.research.google.com/drive/10KWDTTLapC1i147rLZViaeE1FDyCnZkO?usp=sharing" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcolab.research.google.com%2Fassets%2Fcolab-badge.svg" alt="Open In Colab" width="117" height="20"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my project, I used the &lt;a href="https://pypi.org/project/google-generativeai/" rel="noopener noreferrer"&gt;google-generativeai Python library&lt;/a&gt;, which is pre-installed in Colab. This library serves as a user-friendly API for interacting with Google LLMs, including Gemini. It makes it easy for us to integrate Gemini capabilities directly into our code.&lt;/p&gt;

&lt;p&gt;Next, we will need to import the necessary libraries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-7.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38nvgsq90mlq5vyyp194.png" width="800" height="241"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Importing the libraries and setting up the Gemini client.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The first library to import is &lt;code&gt;google.generativeai&lt;/code&gt;; without it, we cannot interact with Gemini easily. Then we have &lt;code&gt;google.colab.userdata&lt;/code&gt;, which securely retrieves sensitive data, like our API key, directly from the Colab notebook environment.&lt;/p&gt;

&lt;p&gt;We will also use &lt;code&gt;IPython.display&lt;/code&gt; for displaying results in a readable format, such as Markdown.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://medium.com/@parthdasawant/how-to-use-secrets-in-google-colab-450c38e3ec75" rel="noopener noreferrer"&gt;Secrets section&lt;/a&gt;, we will have two records, i.e.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;HONKAI_STAR_RAIL_PLAYER_ID&lt;/code&gt;: Your &lt;a href="https://honkai-star-rail.fandom.com/wiki/UID" rel="noopener noreferrer"&gt;HSR player UID&lt;/a&gt;. It is used later to personalise relic recommendations.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GOOGLE_API_KEY&lt;/code&gt;: The API key that we can get from &lt;a href="https://aistudio.google.com/" rel="noopener noreferrer"&gt;Google AI Studio&lt;/a&gt; to authenticate with Gemini.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-3.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7bxbi07ewxctyre1eri.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Creating and retrieving our API keys in Google AI Studio.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once we have initialised the &lt;code&gt;google.generativeai&lt;/code&gt; library with the &lt;code&gt;GOOGLE_API_KEY&lt;/code&gt;, we can proceed to specify the Gemini model we will be using.&lt;/p&gt;

&lt;p&gt;The choice of model is crucial in LLM projects. Google AI Studio offers several options, each representing a trade-off between accuracy and cost. For my project, I chose &lt;code&gt;models/gemini-1.5-flash-8b-001&lt;/code&gt;, which provided a good balance for this experiment. Larger models might offer slightly better accuracy but at a significant cost increase.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-6.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczqvi6470qxn5yfhmt10.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Google AI Studio offers a range of models, from smaller, faster models suitable for quick tasks to larger, more powerful models capable of more complex processing.&lt;/em&gt;&lt;/p&gt;
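
&lt;p&gt;Putting the setup together, the initialisation shown in the earlier screenshot boils down to a few lines.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import google.generativeai as genai
from google.colab import userdata   # Colab's secrets helper
from IPython.display import Markdown

# Authenticate with the API key stored in the Colab Secrets section
genai.configure(api_key=userdata.get('GOOGLE_API_KEY'))

# The model used throughout this project
model = genai.GenerativeModel('models/gemini-1.5-flash-8b-001')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
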
&lt;h3&gt;
  
  
  Hallucination and Knowledge Limitation
&lt;/h3&gt;

&lt;p&gt;We often think of LLMs like Gemini as our smart friends who can answer any question. But just like even our smartest friend can sometimes make mistakes, LLMs have their limits too.&lt;/p&gt;

&lt;p&gt;Gemini’s knowledge is based on the data it was trained on, which means it doesn’t actually know everything. Sometimes, it might hallucinate, i.e. the model invents information that sounds plausible but is not actually true.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-8.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faag38tt4hjcgl3tlk1vm.png" width="800" height="373"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Kiana is not a character from Honkai: Star Rail but she is from another game called Honkai Impact 3rd.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;While Gemini is trained on a massive dataset, its knowledge is not unlimited. As a responsible AI, it acknowledges its limitations. So, when it cannot find the answer, it will tell us that it lacks the necessary information rather than fabricating a response. This is how Google builds safer AI systems, as part of its &lt;a href="https://safety.google/cybersecurity-advancements/saif/" rel="noopener noreferrer"&gt;Secure AI Framework (SAIF)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-9.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw9mcrad53y7rfo0xtk2.png" width="800" height="111"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Knowledge cutoff in action.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To overcome these constraints, we need to employ strategies to augment the capabilities of LLMs. Techniques such as integrating Retrieval-Augmented Generation (RAG) and leveraging external APIs can help bridge the gap between what the model knows and what it needs to know to perform effectively.&lt;/p&gt;
&lt;h3&gt;
  
  
  System Instructions
&lt;/h3&gt;

&lt;p&gt;Leveraging system instructions is a way to improve the accuracy and reliability of Gemini’s responses.&lt;/p&gt;

&lt;p&gt;System instructions are prompts given before the main query in order to guide Gemini. These instructions provide crucial context and constraints, significantly enhancing the accuracy and reliability of the generated output.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-19.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3quvtf1rui11ogvft5ni.png" width="800" height="580"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;System Instruction with contextual information about HSR characters ensures Gemini has the necessary background knowledge.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The specific design and phrasing of the system instructions provided to Gemini are crucial. Effective system instructions give Gemini the necessary context and constraints to generate accurate and relevant responses. Without carefully crafted system instructions, even the most well-designed prompt can yield poor results.&lt;/p&gt;
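
&lt;p&gt;With the &lt;code&gt;google-generativeai&lt;/code&gt; library, system instructions are supplied when constructing the model. The sketch below is illustrative; the instruction text is a shortened stand-in for the fuller one used in the project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: attaching a system instruction to the model.
model = genai.GenerativeModel(
    'models/gemini-1.5-flash-8b-001',
    system_instruction=(
        "You are a Honkai: Star Rail build assistant. "
        "Only discuss characters and relics found in the provided context."
    )
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
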
&lt;h3&gt;
  
  
  Context Framing
&lt;/h3&gt;

&lt;p&gt;As we can see from the example above, writing clear and effective system instructions requires careful thought and a lot of testing.&lt;/p&gt;

&lt;p&gt;This is just one part of a much bigger picture called Context Framing, which includes preparing data, creating embeddings, and deciding how the system retrieves and uses that data. Each of these steps needs expertise and planning to make sure the solution works well in real-world scenarios.&lt;/p&gt;

&lt;p&gt;You might have heard the term “Prompt Engineering”. It sounds rather technical, but it is really about figuring out how to ask the right questions in the right way to get the best answers from an LLM.&lt;/p&gt;

&lt;p&gt;While context framing and prompt engineering are closely related and often overlap, they emphasise different aspects of the interaction with the LLM.&lt;/p&gt;
&lt;h3&gt;
  
  
  Stochasticity
&lt;/h3&gt;

&lt;p&gt;While experimenting with Gemini, I noticed that even if I use the exact same prompt, the output can vary slightly each time. This happens because LLMs like Gemini have a built-in element of randomness, known as stochasticity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-39.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9qtcrklkposezadzi85.png" width="688" height="387"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Lingsha, an HSR character released in 2024. (Image Credit: Game8)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For example, when querying for DPS characters, Lingsha was inconsistently included in the results. While this might seem like a minor variation, it underscores the probabilistic nature of LLM outputs and suggests that running multiple queries might be needed to obtain a more reliable consensus.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-17.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p7s0ucposi02e33qfgm.png" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Lingsha was inconsistently included in the response to the query about multi-target DPS characters.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-18.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fly76osy6orrq0fzdjygb.png" width="800" height="500"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;According to the official announcement, even though Lingsha is a healer, she can cause significant damage to all enemies too. (Image Source: Honkai: Star Rail YouTube)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Hence, it is important to treat writing efficient system instructions and prompts as an iterative process, so that we can experiment with different phrasings to find what works best and yields the most consistent results.&lt;/p&gt;
&lt;h3&gt;
  
  
  Temperature Tuning
&lt;/h3&gt;

&lt;p&gt;We can also reduce the stochasticity of Gemini’s responses by adjusting parameters like temperature. Lower temperatures typically reduce randomness, leading to more consistent outputs, but they may also reduce creativity and diversity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/adjust-parameter-values#temperature" rel="noopener noreferrer"&gt;Temperature&lt;/a&gt; is an important parameter for balancing predictability and diversity in the output. Temperature, a number in the range of 0.0 to 2.0 with default to be 1.0 in gemini-1.5-flash model, indicates the probability distribution over the vocabulary in the model when generating text. Hence, a lower temperature makes the model more likely to select words with higher probabilities, resulting in more predictable and focused text.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-20.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3odvpsme0bqr42okslz7.png" width="800" height="243"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Having Temperature=0 means that the model will always select the most likely word at each step. The output will be highly deterministic and repetitive.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Function Calls
&lt;/h3&gt;

&lt;p&gt;A major limitation of using system instructions alone is their static nature.&lt;/p&gt;

&lt;p&gt;For example, my initial system instructions included a list of HSR characters, but this list is static. The list does not include newly released characters or characters specific to the player’s account. In order to dynamically access a player’s character database and provide personalised recommendations, I integrated &lt;a href="https://ai.google.dev/gemini-api/docs/function-calling" rel="noopener noreferrer"&gt;Function Calls&lt;/a&gt; to retrieve real-time data.&lt;/p&gt;

&lt;p&gt;For fetching the player’s HSR character data, I leveraged the open-source Python library &lt;a href="https://github.com/KT-Yeh/mihomo" rel="noopener noreferrer"&gt;mihomo&lt;/a&gt;. This library provides an interface for accessing game data, enabling dynamic retrieval of a player’s characters and their attributes. This dynamic data retrieval is crucial for generating truly personalised relic recommendations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-26.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86vzlygw3olj5f2ry94f.png" width="800" height="551"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Using the mihomo library, I retrieve five of my Starfaring Companions.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Defining the functions in my Python code was only the first step. To use function calls, Gemini needed to know which functions were available. We can provide this information to Gemini as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model = genai.GenerativeModel('models/gemini-1.5-flash-8b-001', tools=[get_player_name, get_player_starfaring_companions])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After we pass a query to Gemini, &lt;a href="https://codelabs.developers.google.com/codelabs/gemini-function-calling#0" rel="noopener noreferrer"&gt;the model returns a structured object that includes the names of relevant functions and their arguments based on the prompt&lt;/a&gt;, as shown in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-27.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fludly1otbiuee81j9cn1.png" width="800" height="208"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The correct function call is picked up by Gemini based on the prompt.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Using descriptive function names is essential for successful function calling with LLMs because the accuracy of function calls depends heavily on well-designed function names in our Python code. Inaccurate naming can directly impact the reliability of the entire system.&lt;/p&gt;

&lt;p&gt;If our Python function is named incorrectly, for example, a function called &lt;code&gt;get_age&lt;/code&gt; that actually returns the person’s name, Gemini might wrongly select that function when the prompt asks for an age.&lt;/p&gt;

&lt;p&gt;As shown in the screenshot above, the prompt requested information about all the characters of the player. &lt;a href="https://ai.google.dev/gemini-api/docs/function-calling" rel="noopener noreferrer"&gt;Gemini simply determines which function to call and provides the necessary arguments. Gemini does not directly execute the functions.&lt;/a&gt; The actual execution of the function needs to be handled by us, as demonstrated in the screenshot below.&lt;/p&gt;
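
&lt;p&gt;A minimal sketch of that hand-off, assuming the two tool functions registered earlier are plain synchronous Python functions, could look like the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Send a prompt; Gemini may answer with a structured function_call
# instead of plain text.
response = model.generate_content("Who are my starfaring companions?")

part = response.candidates[0].content.parts[0]
if part.function_call:
    fn = part.function_call
    print(fn.name)        # e.g. "get_player_starfaring_companions"
    print(dict(fn.args))  # arguments suggested by the model

    # Gemini never executes anything itself; we dispatch to our own code.
    if fn.name == "get_player_starfaring_companions":
        result = get_player_starfaring_companions(**fn.args)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
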

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-28.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d6es44nbrai0ytqrl4i.png" width="800" height="365"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;After Gemini tells us which function to call, our code needs to invoke that function to get the result.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Grounding with Google Search
&lt;/h3&gt;

&lt;p&gt;Function calls are a powerful way to access external data, but they require pre-defined functions and APIs.&lt;/p&gt;

&lt;p&gt;To go beyond these limits and gather information from many online sources, we can use Gemini’s &lt;a href="https://ai.google.dev/gemini-api/docs/grounding?lang=python" rel="noopener noreferrer"&gt;grounding feature with Google Search&lt;/a&gt;. This feature allows Gemini to run Google searches and include what it finds in its answers, making it easier to get up-to-date information and handle questions that need real-time data.&lt;/p&gt;
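
&lt;p&gt;At the time of writing, the &lt;code&gt;google-generativeai&lt;/code&gt; library exposes this through a search-retrieval tool passed at model construction. The sketch below follows the linked grounding guide; note that not every model variant supports grounding, and a billing-enabled API key is required.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: enable Grounding with Google Search (per the grounding guide).
grounded_model = genai.GenerativeModel(
    'models/gemini-1.5-flash-8b-001',  # model support for grounding varies
    tools='google_search_retrieval'
)

response = grounded_model.generate_content(
    "When is the next Honkai: Star Rail patch scheduled to be released?"
)
print(response.text)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
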

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-30.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw14f6urkx0djg2ahg2jv.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;If you are getting HTTP 429 errors when using the Google Search feature, please make sure you have set up a billing account with enough quota.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With this feature enabled, we can thus ask Gemini to get some real-time data from the Internet, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-29.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyki7yk3fdaaavlecc27h.png" width="800" height="123"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The upcoming v2.7 patch of HSR is indeed scheduled to be released on 4th December.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Building a Semantic Knowledge Base with Pinecone
&lt;/h3&gt;

&lt;p&gt;System instructions and Google search grounding provide valuable context, but a structured knowledge base is needed to handle the extensive data about HSR relics.&lt;/p&gt;

&lt;p&gt;We need a way to store and quickly retrieve this information, enabling the system to generate timely and accurate relic recommendations. For that, we will use a vector database, which is ideally suited to managing the vast dataset of relic information.&lt;/p&gt;

&lt;p&gt;Unlike traditional databases that rely on keyword matching, vector databases store information as vectors, enabling efficient similarity searches. This allows relevant relic sets to be retrieved based on the semantic meaning of a query, rather than relying solely on keywords.&lt;/p&gt;

&lt;p&gt;There are many options for vector databases, but I chose &lt;a href="https://www.pinecone.io/" rel="noopener noreferrer"&gt;Pinecone&lt;/a&gt;. Pinecone, a managed service, offered the scalability needed to handle the HSR relic dataset and a robust API essential for reliable data access. The availability of a free tier was also a significant factor, as it allowed me to keep costs low during the development of my project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-32.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vqv157ad5ivu0tm7img.png" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;API keys in Pinecone dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Pinecone’s well-documented API and straightforward SDK make integration surprisingly easy. To get started, simply follow &lt;a href="https://docs.pinecone.io/guides/get-started/quickstart" rel="noopener noreferrer"&gt;the Pinecone documentation&lt;/a&gt; to install the SDK in our code and retrieve the API key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Import the Pinecone library
from pinecone.grpc import PineconeGRPC as Pinecone
from pinecone import ServerlessSpec
import time

# Initialize a Pinecone client with your API key
pc = Pinecone(api_key=userdata.get('PINECONE_API_KEY'))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I prepared my Honkai: Star Rail relic data, which I had previously organised into a JSON structure. This data includes information on each relic set’s two-piece and four-piece effects. Here’s a snippet to illustrate the format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  {
    "name": "Sacerdos' Relived Ordeal",
    "two_piece": "Increases SPD by 6%",
    "four_piece": "When using Skill or Ultimate on one ally target, increases the ability-using target's CRIT DMG by 18%, lasting for 2 turn(s). This effect can stack up to 2 time(s)."
  },
  {
    "name": "Scholar Lost in Erudition",
    "two_piece": "Increases CRIT Rate by 8%",
    "four_piece": "Increases DMG dealt by Ultimate and Skill by 20%. After using Ultimate, additionally increases the DMG dealt by the next Skill by 25%."
  },
  ...
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the relic data organised in Pinecone, the next challenge is to enable similarity searches with vector embedding. &lt;a href="https://www.pinecone.io/learn/vector-embeddings/" rel="noopener noreferrer"&gt;Vector embedding&lt;/a&gt; captures the semantic meaning of the text, allowing Pinecone to identify similar relic sets based on their inherent properties and characteristics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-31.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9odmzgs4azga6oca059k.png" width="800" height="294"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Vector embedding representations (Image Credit: Pinecone)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now, we can generate vector embeddings for the HSR relic data using Pinecone. The following code snippet illustrates this process, which converts textual descriptions of relic sets into numerical vector embeddings. These embeddings capture the semantic meaning of the relic set descriptions, enabling efficient similarity searches later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load relic set data from the JSON file
with open('/content/hsr-relics.json', 'r') as f:
    relic_data = json.load(f)

# Prepare data for Pinecone
relic_info_data = [
    {"id": relic['name'], "text": relic['two_piece'] + " " + relic['four_piece']} # Combine relic set descriptions
    for relic in relic_data
]

# Generate embeddings using Pinecone
embeddings = pc.inference.embed(
    model="multilingual-e5-large",
    inputs=[d['text'] for d in relic_info_data],
    parameters={"input_type": "passage", "truncate": "END"}
)

print(embeddings)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As shown in the code above, we use &lt;a href="https://docs.pinecone.io/models/multilingual-e5-large" rel="noopener noreferrer"&gt;the multilingual-e5-large model, a text embedding model from Microsoft Research&lt;/a&gt;, to generate a vector embedding for each relic set. The multilingual-e5-large model works well on messy data and is good for short queries.&lt;/p&gt;

&lt;p&gt;Pinecone’s ability to perform fast similarity searches relies on its indexing mechanism. Without an index, searching for similar relic sets would require comparing each relic set’s embedding vector to every other one, which would be extremely slow, especially with a large dataset. I chose a &lt;a href="https://docs.pinecone.io/guides/indexes/understanding-indexes#serverless-indexes" rel="noopener noreferrer"&gt;Pinecone serverless index&lt;/a&gt; hosted on AWS for its automatic scaling and reduced infrastructure management.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a serverless index
index_name = "hsr-relics-index"

if not pc.has_index(index_name):
    pc.create_index(
        name=index_name,
        dimension=1024,
        metric="cosine",
        spec=ServerlessSpec(
            cloud='aws', 
            region='us-east-1'
        ) 
    ) 

# Wait for the index to be ready
while not pc.describe_index(index_name).status['ready']:
    time.sleep(1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The dimension parameter specifies the dimensionality of the vector embeddings. Higher dimensionality generally allows for capturing more nuanced relationships between data points. For example, two relic sets might both increase ATK, but one might also increase SPD while the other increases Crit DMG. A higher-dimensional embedding allows the system to capture these subtle distinctions, leading to more relevant recommendations.&lt;/p&gt;

&lt;p&gt;For the metric parameter, which measures the similarity between two vectors (each representing a relic set), we use the cosine metric, which is suitable for measuring the similarity between vector embeddings generated from text. This is crucial for understanding how similar two relic descriptions are.&lt;/p&gt;

&lt;p&gt;With the vector embeddings generated, the next step was to upload them into my Pinecone index. Pinecone uses the upsert function to add or update vectors in the index. The following code snippet shows how we can upsert the generated embeddings into the Pinecone index.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Target the index where you'll store the vector embeddings
index = pc.Index("hsr-relics-index")

# Prepare the records for upsert
# Each contains an 'id', the embedding 'values', and the original text as 'metadata'
records = []
for r, e in zip(relic_info_data, embeddings):
    records.append({
        "id": r['id'],
        "values": e['values'],
        "metadata": {'text': r['text']}
    })

# Upsert the records into the index
index.upsert(
    vectors=records,
    namespace="hsr-relics-namespace"
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code uses the zip function to iterate through both the list of prepared relic data and the list of generated embeddings simultaneously. For each pair, it creates a record for Pinecone with the following attributes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;id&lt;/code&gt;: Name of the relic set to ensure uniqueness;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;values&lt;/code&gt;: The vector representing the semantic meaning of the relic set effects;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metadata&lt;/code&gt;: The original description of the relic effects, which will be used later for providing context to the user’s recommendations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Implementing Similarity Search in Pinecone
&lt;/h3&gt;

&lt;p&gt;With the relic data stored in Pinecone now, we can proceed to implement the similarity search functionality.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def query_pinecone(query: str) -&amp;gt; dict:

  # Convert the query into a numerical vector that Pinecone can search with
  query_embedding = pc.inference.embed(
      model="multilingual-e5-large",
      inputs=[query],
      parameters={
          "input_type": "query"
      }
  )

  # Search the index for the three most similar vectors
  results = index.query(
      namespace="hsr-relics-namespace",
      vector=query_embedding[0].values,
      top_k=3,
      include_values=False,
      include_metadata=True
  )

  return results
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The function above takes a user’s query as input, converts it into a vector embedding using Pinecone’s inference endpoint, and then uses that embedding to search the index, returning the top three most similar relic sets along with their metadata.&lt;/p&gt;

&lt;h3&gt;
  
  
  Relic Recommendations with Pinecone and Gemini
&lt;/h3&gt;

&lt;p&gt;With the Pinecone integration in place, we design the initial prompt to pick relevant relic sets from Pinecone. After that, we take the results from Pinecone and combine them with the initial prompt to create a richer, more informative prompt for Gemini, as shown in the following code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from google.generativeai.generative_models import GenerativeModel

async def format_pinecone_results_for_prompt(model: GenerativeModel, player_id: int) -&amp;gt; dict:
  character_relics_mapping = await get_player_character_relic_mapping(player_id)

  result = {}

  for character_name, (character_avatar_image_url, character_description) in character_relics_mapping.items():
    print(f"Processing Character: {character_name}")

    additional_character_data = character_profile.get(character_name, "")

    character_query = f"Suggest some good relic sets for this character: {character_description} {additional_character_data}"

    pinecone_response = query_pinecone(character_query)

    prompt = f"User Query: {character_query}\n\nRelevant Relic Sets:\n"
    for match in pinecone_response['matches']:
        prompt += f"* {match['id']}: {match['metadata']['text']}\n" # Extract relevant data
    prompt += "\nBased on the above information, recommend two best relic sets and explain your reasoning. Each character can only equip with either one 4-piece relic or one 2-piece relic with another 2-piece relic. You cannot recommend a combination of 4-piece and 2-piece together. Consider the user's query and the characteristics of each relic set."

    response = model.generate_content(prompt)

    result[character_avatar_image_url] = response.text

  return result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code shows that we are doing both prompt engineering (designing the initial query to get relevant relics) and context framing (combining the initial query with the retrieved relic information to get a better overall recommendation from Gemini).&lt;/p&gt;

&lt;p&gt;First, the code retrieves data about the player’s characters, including their descriptions, images, and the relics the characters are currently wearing. The code then gathers potentially relevant data about each character from a separate data source, &lt;code&gt;character_profile&lt;/code&gt;, which has more information, such as gameplay mechanics, that we got from &lt;a href="https://game8.co/games/Honkai-Star-Rail/archives/409604" rel="noopener noreferrer"&gt;the Game8 Character List&lt;/a&gt;. With the character data, the query will find similar relic sets in the Pinecone database.&lt;/p&gt;

&lt;p&gt;After Pinecone returns matches, the code constructs a detailed prompt for the Gemini model. This prompt includes the character’s description, relevant relic sets found by Pinecone, and crucial instructions for the model. The instructions emphasise the constraints of choosing relic sets: either a 4-piece set, or two 2-piece sets, not a mix. Importantly, it also tells Gemini to consider the character’s existing profile and to prioritise fitting relic sets.&lt;/p&gt;

&lt;p&gt;Finally, the code sends this detailed prompt to Gemini, receiving back the recommended relic sets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-33.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ghdoge9i5eg8gdi6x7p.png" width="800" height="453"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Knight of Purity Palace, is indeed a great option for Gepard!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-35.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdijbfvg16ojs8hq3goo3.png" width="800" height="508"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Enviosity, a popular YouTuber known for his in-depth Honkai: Star Rail strategy guides, introduced Knight of Purity Palace for Gepard too. (Source: YouTube)&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Langtrace
&lt;/h3&gt;

&lt;p&gt;Using LLMs like Gemini is certainly exciting, but figuring out what is happening “under the hood” can be tricky.&lt;/p&gt;

&lt;p&gt;If you are a web developer, you are probably familiar with &lt;a href="https://grafana.com/grafana/dashboards/" rel="noopener noreferrer"&gt;Grafana dashboards&lt;/a&gt;. They show you how your web app is performing, highlighting areas that need improvement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.langtrace.ai/introduction" rel="noopener noreferrer"&gt;Langtrace&lt;/a&gt; is like Grafana, but specifically for LLMs. It gives you a similar visual overview, tracking our LLM calls, showing us where they are slow or failing, and helping us optimise the performance of our AI app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-36.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1v66u7sc5p5cb23b5nn.png" width="800" height="485"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Traces for the Gemini calls are displayed individually.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Langtrace is not only useful for tracing LLM calls; it also offers metrics on token counts and costs, as shown in the following screenshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/12/image-37.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1x27z34docuivoxzlith.png" width="800" height="485"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Beyond tracing calls, Langtrace collects metrics too.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrap-Up
&lt;/h3&gt;

&lt;p&gt;Building this Honkai: Star Rail (HSR) relic recommendation system has been a rewarding journey into the world of Gemini and LLMs.&lt;/p&gt;

&lt;p&gt;I am incredibly grateful to Martin Andrews and Sam Witteveen for their inspiring Gemini Masterclass at Google DevFest in Singapore. Their guidance helped me navigate the complexities of LLM development, and I learned firsthand the importance of careful prompt engineering, the power of system instructions, and the need for dynamic data access through function calls. These lessons underscore the complexities of developing robust LLM apps and will undoubtedly inform my future AI projects.&lt;/p&gt;

&lt;p&gt;I encountered many challenges along the way, but overcoming them deepened my understanding of Gemini. If you are interested in exploring the code and learning from my experiences, you can access my Colab notebook through the button below. I welcome any feedback you might have!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://colab.research.google.com/drive/10KWDTTLapC1i147rLZViaeE1FDyCnZkO?usp=sharing" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcolab.research.google.com%2Fassets%2Fcolab-badge.svg" alt="Open In Colab" width="117" height="20"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://pureai.com/Articles/2023/12/20/try-gemini.aspx" rel="noopener noreferrer"&gt;You Can Explore the New Gemini Large Language Model Even if You’re Not a Data Scientist&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ai.google.dev/gemini-api/docs/function-calling" rel="noopener noreferrer"&gt;Intro to function calling with the Gemini API&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/KT-Yeh/mihomo" rel="noopener noreferrer"&gt;[GitHub] MetaCubeX/mihomo&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/google-cloud/tracing-with-langtrace-and-gemini-5eee69fe895e" rel="noopener noreferrer"&gt;Tracing with Langtrace and Gemini&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cloudcomputinggoogle</category>
      <category>event</category>
      <category>experience</category>
      <category>gemini</category>
    </item>
    <item>
      <title>[KOSD] Change of FromQuery Model Binding from .NET 6 to .NET 8</title>
      <dc:creator>Goh Chun Lin</dc:creator>
      <pubDate>Thu, 31 Oct 2024 12:44:16 +0000</pubDate>
      <link>https://dev.to/gohchunlin/kosd-change-of-fromquery-model-binding-from-net-6-to-net8-17jf</link>
      <guid>https://dev.to/gohchunlin/kosd-change-of-fromquery-model-binding-from-net-6-to-net8-17jf</guid>
      <description>&lt;p&gt;&lt;a href="https://cuteprogramming.blog/wp-content/uploads/2024/10/image.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qbsxuzrae2aptqgk6s8.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Recently, while migrating our project from .NET 6 to .NET 8, my teammate &lt;a href="https://www.linkedin.com/in/chanjem/" rel="noopener noreferrer"&gt;Jeremy Chan&lt;/a&gt; uncovered an undocumented change in model binding behaviour that appears to have been introduced in .NET 7. This change is not clearly explained in the official .NET documentation, so it is something developers can easily overlook.&lt;/p&gt;

&lt;p&gt;To illustrate the issue, let’s begin with a simple Web API project and explore a straightforward controller method that highlights the change.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ApiController]
public class FooController
{
  [HttpGet()]
  public async void Get([FromQuery] string value = "Hello")
  {
    Console.WriteLine($"Value is {value}");

    return new JsonResult() { StatusCode = StatusCodes.Status200OK };
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also assume that nullable reference types are enabled in both the .NET 6 and .NET 8 projects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Project Sdk="Microsoft.NET.Sdk.Web"&amp;gt;

    &amp;lt;PropertyGroup&amp;gt;
        &amp;lt;Nullable&amp;gt;enable&amp;lt;/Nullable&amp;gt;
        ...
    &amp;lt;/PropertyGroup&amp;gt;

    ...

&amp;lt;/Project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Situation in .NET 6
&lt;/h3&gt;

&lt;p&gt;In .NET 6, when we call the endpoint with &lt;code&gt;/foo?value=&lt;/code&gt;, we shall receive the following error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "traceId": "00-5bc66c755994b2bba7c9d2337c1e5bc4-e116fa61d942199b-00",
  "errors": {
    "value": [
      "The value field is required."
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, if we change the method as follows, the error no longer occurs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public async void Get([FromQuery] string? value)
{
    if (value is null)
        Console.WriteLine($"Value is null!!!");
    else
        Console.WriteLine($"Value is {value}");

    return new JsonResult() { StatusCode = StatusCodes.Status200OK };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The log when calling the endpoint with &lt;code&gt;/foo?value=&lt;/code&gt; will then be “Value is null!!!”.&lt;/p&gt;

&lt;p&gt;Hence, we know that a query string parameter supplied without a value is interpreted as null. That is why there is a validation error when &lt;code&gt;value&lt;/code&gt; is not nullable.&lt;/p&gt;

&lt;p&gt;Thus, to make the endpoint work in .NET 6, we need to change the signature as follows so that &lt;code&gt;value&lt;/code&gt; is optional and no longer treated as a required field.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public async void Get([FromQuery] string? value = "Hello")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if we call the endpoint with &lt;code&gt;/foo?value=&lt;/code&gt;, we shall see the log “Value is Hello” printed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Situation in .NET 8 (and .NET 7)
&lt;/h3&gt;

&lt;p&gt;Then, how about .NET 8 with the same original setup, as shown below?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public async void Get([FromQuery] string value = "Hello")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In .NET 8, when we call the endpoint with &lt;code&gt;/foo?value=&lt;/code&gt;, we shall see the log “Value is Hello” printed.&lt;/p&gt;

&lt;p&gt;So, what is happening here?&lt;/p&gt;

&lt;p&gt;In .NET 7, &lt;a href="https://blog.ndepend.com/the-new-net-7-0-iparsable-interface/" rel="noopener noreferrer"&gt;a new interface, IParsable, was introduced&lt;/a&gt;. Starting from .NET 7, &lt;a href="https://learn.microsoft.com/en-us/aspnet/core/mvc/models/model-binding?view=aspnetcore-7.0#bind-with-iparsablettryparse" rel="noopener noreferrer"&gt;the IParsable.TryParse API is used for binding controller action parameter values&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Initial research suggests that, from .NET 7 onwards, this new TryParse-based model binding implementation is used under the hood, and that is what causes the change in behaviour.&lt;/p&gt;
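
&lt;p&gt;To see the interface itself in action, here is a minimal, hypothetical check that can run as a .NET 8 console app. It only confirms that &lt;code&gt;string&lt;/code&gt; implements &lt;code&gt;IParsable&amp;lt;string&amp;gt;&lt;/code&gt; from .NET 7 onwards; the model binder layers its own handling of empty query values on top of this API.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System;

// Hypothetical helper: call TryParse through the IParsable&amp;lt;T&amp;gt; abstraction,
// which the .NET 7+ model binder relies on for simple types.
static bool TryParseVia&amp;lt;T&amp;gt;(string? input, out T? result) where T : IParsable&amp;lt;T&amp;gt;
{
    var ok = T.TryParse(input, null, out var parsed);
    result = parsed;
    return ok;
}

// string implements IParsable&amp;lt;string&amp;gt; from .NET 7 onwards, so parsing
// succeeds even when the incoming value is an empty string.
var succeeded = TryParseVia&amp;lt;string&amp;gt;(string.Empty, out var parsed);
Console.WriteLine($"Succeeded: {succeeded}, Result: \"{parsed}\"");
// Expected output: Succeeded: True, Result: ""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;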

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/dotnet/runtime/issues/78842" rel="noopener noreferrer"&gt;[API Proposal]: String should implement IParsable&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/dotnet/runtime/pull/82836" rel="noopener noreferrer"&gt;Have bool and string implement ISpanParsable&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aspnet</category>
      <category>c</category>
      <category>core</category>
      <category>experience</category>
    </item>
  </channel>
</rss>
