<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: David Pereira</title>
    <description>The latest articles on DEV Community by David Pereira (@bolt04).</description>
    <link>https://dev.to/bolt04</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F192676%2Fcb99d336-4580-47c0-becb-ee381d71b4e8.jpg</url>
      <title>DEV Community: David Pereira</title>
      <link>https://dev.to/bolt04</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bolt04"/>
    <language>en</language>
    <item>
      <title>Book Review: Co-Intelligence by Ethan Mollick</title>
      <dc:creator>David Pereira</dc:creator>
      <pubDate>Sun, 01 Mar 2026 14:42:53 +0000</pubDate>
      <link>https://dev.to/bolt04/book-review-co-intelligence-by-ethan-mollick-f5k</link>
      <guid>https://dev.to/bolt04/book-review-co-intelligence-by-ethan-mollick-f5k</guid>
      <description>&lt;h3&gt;
  
  
  Table of Contents
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;AI as a Thinking Companion&lt;/li&gt;
&lt;li&gt;
The Human-in-the-Loop Principle

&lt;ul&gt;
&lt;li&gt;Critical Thinking&lt;/li&gt;
&lt;li&gt;Disruption in the job market&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Centaur vs Cyborg approaches&lt;/li&gt;

&lt;li&gt;Resources&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;We recently finished reading &lt;em&gt;Co-Intelligence: Living and Working with AI&lt;/em&gt; by Ethan Mollick in our company's book club. The book shares four core principles for AI collaboration and outlines various practical applications. Some really stuck with me, and I've tried to incorporate them in my work. Reading the author's perspective and learning his way of thinking definitely improved how I look at these tools. But if you know me, you know how skeptical I am. There are some chapters and opinions that I don't agree with.&lt;/p&gt;

&lt;p&gt;So in this post, I'll share the key insights from our book club in the context of software development, plus some personal opinions as always 🙂.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI as a Thinking Companion
&lt;/h3&gt;

&lt;p&gt;One of the most practical takeaways for me was viewing AI as a co-worker and thinking companion. When done right, this can be incredibly useful. Some people use it heavily for deep research rather than delegating tasks to it. &lt;a href="https://www.linkedin.com/in/andredsantos/" rel="noopener noreferrer"&gt;André Santos&lt;/a&gt; gave some examples of tasks where it has been useful, like writing Terraform code or generating bash scripts. For those tasks, we can write a detailed prompt, provide proper documentation (e.g. via the Context7 MCP), and ask it to write the Terraform, since that's simpler and faster. Even just making a POC or demo, turning an idea you have into working software to see how viable it is, is a perfect use case for delegating the front-end and back-end to AI. It's not code that will ship to production; it's a way to build prototypes or quick demo apps that you'd otherwise never spend the time on.&lt;/p&gt;

&lt;p&gt;I've enjoyed using models like Claude to help me with my tasks at work because they often uncover possibilities I hadn't thought of. The conversational style of going back and forth helps me fine-tune my own solution. It's not just "give me code"; it's "let's discuss this architecture". At the end of the conversation, we can generate a good draft of a PRD (Product Requirements Document). Notice I don't delegate my thinking to it; it's a tool that helps me think of solutions or just &lt;a href="https://x.com/trq212/status/2005315275026260309" rel="noopener noreferrer"&gt;interview me sometimes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, it can be annoying. I'd like to minimize the number of times I have to tell it "no, you're wrong. The Microsoft documentation for Azure Container Apps does not state X as you said" 😅.&lt;br&gt;
To fix this, I've tried giving an explicit instruction in my system prompts:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"It's also very important for you to verify if there is official documentation that supports your claims and statements. Please find official documentation supporting your claims before responding to a user. If there isn't documentation confirming your statement, don't include it in the response."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I've had better results with this, though it's still not perfect. In longer conversations, I think it doesn't always verify the docs (context limits, perhaps), but sometimes I get the response: "(...) Based on my search through the official documentation, I need to be honest with you (...)".&lt;/p&gt;

&lt;p&gt;I really find it funny that Claude "needs" to be honest with me 😄. Sycophancy is truly annoying, especially since we are talking about AI as a thinking companion. If your AI partner always agrees with you, how useful is it really as a thinking companion?&lt;/p&gt;

&lt;h3&gt;
  
  
  The Human-in-the-Loop Principle
&lt;/h3&gt;

&lt;p&gt;While Mollick's vision of a collaborative future with AI is profoundly optimistic, he is also a realist. One of the most important principles, and a recurring theme in the book, is the absolute necessity of human oversight - the "human-in-the-loop" principle.&lt;br&gt;
This is a key quote from the book:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For now, AI works best with human help, and you want to be that helpful human. As AI gets more capable and requires less human help — you still want to be that human. So the second principle is to learn to be the human in the loop.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One of Mollick's key warnings is about &lt;a href="https://www.linkedin.com/posts/emollick_a-fundamental-mistake-i-see-people-building-activity-7153484182134923265-IxAg" rel="noopener noreferrer"&gt;falling asleep at the wheel&lt;/a&gt;. When AI performs well, humans stop paying attention. Simon Willison has referenced this as well, in his recent insightful post &lt;a href="https://simonwillison.net/2025/Dec/31/the-year-in-llms/#the-year-of-yolo-and-the-normalization-of-deviance" rel="noopener noreferrer"&gt;2025: The year in LLMs&lt;/a&gt;.&lt;br&gt;
I understand that &lt;code&gt;--dangerously-skip-permissions&lt;/code&gt; is useful as a tool when used in a secure sandbox environment. But we should assess our confidence in the AI's output and in the autonomy and tools we give it. If we don't, we risk using AI on tasks that fall outside the Jagged Frontier, which can lead to security issues, nasty bugs, and a weakened ability to learn.&lt;/p&gt;

&lt;p&gt;I say this knowing full well that I trust Claude Opus 4.5 more with any task I give it. So I have to actively force myself to verify its suggestions just as rigorously, and to check which tools I gave it access to and which are denied. For example, I use Claude Code hooks to prevent any &lt;code&gt;appsettings&lt;/code&gt;, &lt;code&gt;.env&lt;/code&gt;, or similar files from being accessed. I still try to read the LLM's reasoning/thinking text so that I understand it better, and simply out of curiosity as well.&lt;/p&gt;
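&lt;p&gt;To make the hooks idea concrete, here is a minimal sketch of the kind of script I mean. This is an illustration, not my exact setup: the file path and blocked patterns are hypothetical, and it assumes Claude Code's PreToolUse hooks, which pass the pending tool call as JSON on stdin and treat a non-zero "block" exit code as a denial (check the Claude Code hooks documentation for the exact schema):&lt;/p&gt;

```python
# Minimal sketch of a PreToolUse hook script for Claude Code
# (hypothetical path: .claude/hooks/block_secrets.py).
# Claude Code sends the pending tool call as JSON on stdin;
# exiting with code 2 denies the call and surfaces stderr to the model.
import json
import sys

# Filename patterns we never want the agent to touch (illustrative list).
BLOCKED_PATTERNS = ("appsettings", ".env")


def is_blocked(file_path: str) -> bool:
    """Return True when the path looks like a secrets or config file."""
    name = file_path.rsplit("/", 1)[-1].lower()
    return any(pattern in name for pattern in BLOCKED_PATTERNS)


def main() -> None:
    event = json.load(sys.stdin)
    file_path = event.get("tool_input", {}).get("file_path", "")
    if is_blocked(file_path):
        print(f"Blocked access to sensitive file: {file_path}", file=sys.stderr)
        sys.exit(2)  # exit code 2 = deny the tool call
    sys.exit(0)


if __name__ == "__main__":
    main()
```

&lt;p&gt;A script like this would then be registered in the hooks section of the Claude Code settings file, matched against file-access tools such as Read, Edit, or Write.&lt;/p&gt;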

&lt;p&gt;I simply can't forget when I saw the &lt;a href="https://www.anthropic.com/system-cards" rel="noopener noreferrer"&gt;Claude Sonnet 4 and Opus 4 System Card&lt;/a&gt; and the "High-agency behavior" Anthropic examined. Whistleblowing and other misalignment problems are possible. For example, this is a quote from the Opus 4.6 System Card:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In our &lt;strong&gt;whistleblowing and morally-motivated sabotage evaluations&lt;/strong&gt;, we observed a low but persistent rate of the model acting against its operator’s interests in unanticipated ways. Overall, Opus 4.6 was slightly more inclined to this behavior than Opus 4.5.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;All I'm saying is let's be conscious of these behaviors and of the eval results.&lt;/p&gt;

&lt;p&gt;In my opinion, the human-in-the-loop principle is crucial. Don't just copy/paste or try to vibe your way into production. Engineers are the ones &lt;strong&gt;responsible&lt;/strong&gt; for software systems, not tools or alien minds. If there are users who depend on your software, and your AI code causes an incident in production, you are responsible. Claude or Copilot won't wake up at 3 AM if prod is on fire (or maybe &lt;a href="https://learn.microsoft.com/en-us/azure/sre-agent/incident-management?tabs=azmon-alerts" rel="noopener noreferrer"&gt;Azure SRE agent&lt;/a&gt; will if you pay for it 🤔...). Having an engineering mindset and being in the driver's seat is what I expect from myself and anyone I work with.&lt;/p&gt;

&lt;h4&gt;
  
  
  Critical Thinking
&lt;/h4&gt;

&lt;p&gt;Within this principle, we have a topic I have a lot of strong opinions on. This quote says it all:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;LLMs are not generally optimized to say "I don’t know" when they don't have enough information. Instead, they will give you an answer, expressing confidence.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Basically, to be the human in the loop, we really must have good critical thinking skills. This ability, plus our experience, brings something very valuable to this AI collaboration: detecting the "I don't know" moments. It may help to know some ways we can &lt;a href="https://docs.anthropic.com/en/docs/test-and-evaluate/strengthen-guardrails/reduce-hallucinations" rel="noopener noreferrer"&gt;reduce hallucinations&lt;/a&gt; in our prompts.&lt;br&gt;
Still, we can't blindly believe AI output is correct just because it sounds confident that the proposed solution works. Now more than ever, we need to keep developing critical thinking skills and apply them when working with AI, so that in the scenarios where it should have responded "I don't know", we rely more on our own abilities.&lt;/p&gt;

&lt;p&gt;Sure, there are tasks we are more confident delegating to AI, but for the ones we know fall outside the Jagged Frontier, we must proceed with caution and care. We discussed our &lt;strong&gt;confidence level&lt;/strong&gt; in AI output a lot. For example, &lt;a href="https://www.linkedin.com/in/andredsantos/" rel="noopener noreferrer"&gt;André Santos&lt;/a&gt; said it depends on the task we give it, while &lt;a href="https://www.linkedin.com/in/asoliveira/" rel="noopener noreferrer"&gt;André Oliveira&lt;/a&gt; argued that we can only validate the output on topics we know. AI serves as an &lt;strong&gt;amplifier&lt;/strong&gt; because it's only a tool. If the wielder of the tool doesn't fact-check the output, we risk believing hallucinations and false claims.&lt;/p&gt;

&lt;p&gt;&lt;a href="//www.linkedin.com/in/pedrovala/"&gt;Pedro Vala&lt;/a&gt; also talked about a really good quote from the &lt;a href="https://www.amazon.com/Agentic-Design-Patterns-Hands-Intelligent/dp/3032014018" rel="noopener noreferrer"&gt;Agentic Design Patterns book&lt;/a&gt; that is super relevant to this topic:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An AI trained on "garbage" data doesn’t just produce garbage-out; it produces plausible, confident garbage that can poison an entire process - Marco Argenti, CIO, Goldman Sachs&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now imagine reading AI output that looks okay at first glance but is only plausible garbage. That is a real risk, especially with the AI-generated content already &lt;a href="https://graphite.io/five-percent/more-articles-are-now-created-by-ai-than-humans" rel="noopener noreferrer"&gt;available on the internet&lt;/a&gt;. Again, I hope developers continue to develop their critical thinking skills and don't delegate their thinking to tools.&lt;br&gt;
Right now, the only process I have for filtering out garbage on the internet is consuming most content from authors I respect and know for a fact are real people 😅.&lt;/p&gt;

&lt;h4&gt;
  
  
  Disruption in the job market
&lt;/h4&gt;

&lt;p&gt;Mollick also talks about disruption in the job market, which is a hot topic in our industry, especially the impact AI has on junior roles. We have debated this in a few sessions of our book club, and again, critical thinking and adaptability are crucial. We simply have to adapt and learn how to use this tool, nothing less, nothing more. How much value we bring to the table when working with AI matters. If you don't bring any value and just copy/paste, you are not a valuable professional in my view.&lt;/p&gt;

&lt;p&gt;It's a good idea to keep &lt;strong&gt;developing our skills and expertise&lt;/strong&gt;. Andrej Karpathy talks about an &lt;a href="https://www.youtube.com/watch?v=LCEmiRjPEtQ" rel="noopener noreferrer"&gt;intelligence "brownout" when LLMs go down&lt;/a&gt;, which is extremely scary to me, especially if I see this behaviour in juniors or recent college grads. I truly hope we stop delegating so much intelligence to a tool. I don't want engineers to &lt;strong&gt;rely&lt;/strong&gt; on LLMs when production is down and on fire. It would be sad to see engineers not knowing how to troubleshoot or fix these incidents in production... just because AI tools are not available 😐.&lt;/p&gt;

&lt;h3&gt;
  
  
  Centaur vs Cyborg approaches
&lt;/h3&gt;

&lt;p&gt;The book distinguishes between two ways of working with AI:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Centaur&lt;/strong&gt;: You divide tasks between human and machine. You handle the "Just me" tasks (outside the Jagged Frontier), and delegate specific sub-tasks to the AI that you later verify.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cyborg&lt;/strong&gt;: You integrate AI so deeply that the workflow becomes a hybrid, often automating entire processes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For software development, I'm definitely in the &lt;strong&gt;Centaur&lt;/strong&gt; camp right now.&lt;br&gt;
We should be careful about what tasks we delegate. Mollick warns about &lt;strong&gt;"falling asleep at the wheel."&lt;/strong&gt; When the AI is very good, humans have no reason to work hard and pay attention. They let the AI take over instead of using it as a tool, which can hurt our learning process and skill development. Or in some scenarios, it can lead to your production database being deleted...&lt;/p&gt;

&lt;p&gt;This is just a tool. We are still responsible at work. If the AI pushes a bug to production, &lt;em&gt;you&lt;/em&gt; pushed a bug to production!&lt;/p&gt;

&lt;p&gt;The author does give some "Cyborg examples" of working with AI, here is a quote from the book:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I would become a Cyborg and tell the AI: I am stuck on a paragraph in a section of a book about how AI can help get you unstuck. Can you help me rewrite the paragraph and finish it by giving me 10 options for the entire paragraph in various professional styles? Make the styles and approaches different from each other, making them extremely well written.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is that ideation use case that is super useful when you have writer's block, or just want to brainstorm a bit on a given topic. In our industry, a lot of teams are integrating AI into many phases of the SDLC. I haven't found many workflows that work well in some parts of the SDLC, since we are focusing on adopting AI for coding and code review. But in most workflows, the cyborg practice is to steer the AI more and manage the tasks where you collaborate with it as a co-worker. The risk remains when someone uses cyborg practices but fails to spot hallucinations or false claims. The takeaway is really to be conscious of our AI adoption and usage. The number one cyborg practice I try to apply naturally is to push back. If I smell something is off, I will disagree with the output and ask the AI to reconsider. This leads to a far more interesting back-and-forth conversation on a given topic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resources
&lt;/h3&gt;

&lt;p&gt;Here are some resources if you want to dive deeper:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.penguinrandomhouse.com/books/741969/co-intelligence-by-ethan-mollick/" rel="noopener noreferrer"&gt;Co-intelligence by Ethan Mollick&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7282.pdf" rel="noopener noreferrer"&gt;Navigating the Jagged Technological Frontier&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.amazon.com/Agentic-Design-Patterns-Hands-Intelligent/dp/3032014018" rel="noopener noreferrer"&gt;Agentic Design Patterns: A Hands-On Guide to Building Intelligent Systems&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=LCEmiRjPEtQ" rel="noopener noreferrer"&gt;Andrej Karpathy: Software Is Changing (Again)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged" rel="noopener noreferrer"&gt;Centaurs and Cyborgs on the Jagged Frontier&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://zed.dev/blog/why-llms-cant-build-software" rel="noopener noreferrer"&gt;Why LLMs Can't Really Build Software&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openai.com/index/why-language-models-hallucinate/" rel="noopener noreferrer"&gt;Why language models hallucinate | OpenAI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This was a great book, and I truly recommend it to anyone with even the slightest interest in AI. Co-intelligence is something we can strive for: adopting this new tool in a way that helps us develop ourselves, our expertise, and our skills.&lt;br&gt;
When it was written, we had GPT-3.5, and GPT-4 was recent, I believe... now we have GPT-5.3-Codex, Opus 4.6, GLM 4.7, and Kimi K2.5. In 2 years, things just keep on changing 😅. The Jagged Frontier will keep changing, so this calls for experimentation. AI pioneers will do most of this experimentation, running evals and whatnot, to understand where each type of task falls in the Jagged Frontier. Pay attention to what they share, what works, and what doesn't.&lt;/p&gt;

&lt;p&gt;AI has augmented my team and me, mostly on "Centaur" tasks, while we improve our AI fluency and usage. In my personal opinion, I don't see us reaching the AGI scenario Ethan talks about in the last chapter. Most of our industry keeps talking about and hyping AGI... even the exponential growth scenario raises some doubts for me. But I agree with Ethan when he says: "No one wants to go back to working six days a week (...)" 😅.&lt;br&gt;
We should continue to focus on building our own expertise, not delegating critical thinking to AI. There is a new skill in town: we now have LLM whisperers 😅, and having this skill can indeed augment you even further. Just remember the fundamentals don't change. Engineers still need to know those!&lt;/p&gt;

&lt;p&gt;There are hundreds of "Vibe Coding Cleanup Specialists" now 🤣. Let's remember to be the human in the loop. Apply critical thinking to any AI output, do fact-checking, and take &lt;strong&gt;ownership&lt;/strong&gt; of the final result. Please don't create AI slop 😅.&lt;/p&gt;

&lt;p&gt;Hope you enjoyed this post! My next blog post will be about how we are using agentic coding tools, so stay tuned! Feel free to share in the comments your opinion too, or reach out and we can have a chat 🙂.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>books</category>
    </item>
    <item>
      <title>The Reality of GenAI in Software Teams</title>
      <dc:creator>David Pereira</dc:creator>
      <pubDate>Sat, 20 Dec 2025 17:02:23 +0000</pubDate>
      <link>https://dev.to/bolt04/the-reality-of-genai-in-software-teams-59i8</link>
      <guid>https://dev.to/bolt04/the-reality-of-genai-in-software-teams-59i8</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;The productivity myth&lt;/li&gt;
&lt;li&gt;Where AI actually adds value&lt;/li&gt;
&lt;li&gt;Develop critical thinking skills&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;After reading countless studies and observing real-world implementations, I've learned to use AI as an augmentation tool rather than something that replaces my job. It's clear as day that AI adoption is on the rise in our industry. We can use it in various ways, such as a co-teacher or co-worker, but the gap between marketing promises and actual results should be top of mind for all of us.&lt;/p&gt;

&lt;p&gt;I'm a very pragmatic person, so I don't like hearing the positive perspective on using GenAI without talking about the downsides. There is a lot of hype and investment in this field, and only some reap the benefits of GenAI.&lt;/p&gt;

&lt;h2&gt;
  
  
  The productivity myth
&lt;/h2&gt;

&lt;p&gt;In my opinion, the biggest mistake organizations make is chasing the wrong metrics for software development productivity. It's easier to understand (and market) simple numbers like: &lt;a href="https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/" rel="noopener noreferrer"&gt;"55% faster than the developers who didn’t use GitHub Copilot"&lt;/a&gt;, &lt;a href="https://blog.google/inside-google/message-ceo/alphabet-earnings-q3-2024/" rel="noopener noreferrer"&gt;"more than a quarter of all new code at Google is generated by AI"&lt;/a&gt; or &lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;"developers using AI are 19% slower"&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Productivity gains aren't about producing more code, especially when it's easy to create &lt;strong&gt;AI slop&lt;/strong&gt;. They also shouldn't be measured on producing boilerplate or simple tasks. A TODO app is different from a real production system. We can only make sure we have such gains by choosing metrics that make sense for our team and context, then measuring and reflecting on the results. This is how we can become more effective and steer the ship in the right direction.&lt;/p&gt;

&lt;p&gt;I'm much more skeptical of statements made by AI vendors, CEOs, or content creators, and that helps me keep my focus on my goal, which is to continue improving and bringing value to my team. If AI can help with that, great; if not, life goes on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where AI actually adds value
&lt;/h2&gt;

&lt;p&gt;From my perspective, the real value of AI in software teams lies in three specific areas that traditional tools cannot address effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI as Strategic Thinking Partner:&lt;/strong&gt; I believe the most undervalued application is using AI for architectural discussions and trade-off analysis. When an engineer can have a deep conversation about a technical problem, generate 10 possible solutions, and then filter out the bad ideas, that's really helpful. This isn't about getting perfect implementation details - it's about expanding the solution space before making critical decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Having a Co-Teacher:&lt;/strong&gt; It's hard to be a force multiplier that improves everyone around you, which is why this is a key differentiator for senior developers. The challenge of onboarding junior developers, explaining business logic, and sharing design patterns has always been there. We always want our senior devs to share and help junior devs grow, and using AI as a co-teacher helps with that. Anthropic mentioned in &lt;a href="https://www.anthropic.com/engineering/claude-code-best-practices" rel="noopener noreferrer"&gt;their article&lt;/a&gt; how Claude Code helps them:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;At Anthropic, using Claude Code in this way has become our core onboarding workflow, significantly improving ramp-up time and reducing load on other engineers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Practical Augmentation as a Co-Worker:&lt;/strong&gt; I'm convinced AI augments my team on the mundane but time-consuming tasks. Initial code reviews, generating PR summaries, drafting Architecture Decision Records, creating unit tests for specific scenarios, and generating KQL queries for troubleshooting. Our team at &lt;a href="https://cloudcockpit.com/" rel="noopener noreferrer"&gt;CloudCockpit&lt;/a&gt; has also been creating reusable prompts and custom agents that help every dev develop new features and have architecture reviews on proposals.&lt;/p&gt;
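&lt;p&gt;As an illustration of what a reusable prompt can look like, here is a minimal sketch in the style of Copilot prompt files. The file name, frontmatter fields, and wording are hypothetical, and the exact schema depends on your tooling, so treat this as a shape rather than a spec:&lt;/p&gt;

```markdown
---
# Hypothetical reusable prompt, e.g. .github/prompts/architecture-review.prompt.md
description: Review a feature proposal against our architecture guidelines
---
You are reviewing an architecture proposal for our team.
1. Summarize the proposal in your own words.
2. List trade-offs (scalability, cost, operability) with pros and cons.
3. Flag any claim that is not backed by official documentation.
4. End with open questions the author should answer before implementation.
```

&lt;p&gt;The point of checking these files into the repository is that every developer runs the same review steps, instead of each person improvising a prompt from scratch.&lt;/p&gt;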

&lt;p&gt;So far, I've noticed that using AI is helping me &lt;strong&gt;think better&lt;/strong&gt;, and it has the potential to help me work faster with these agents. Still, I mostly use it for "deep research" into possible solutions, learning new technologies through analogies, finding relevant documentation, and troubleshooting problems. The most important piece remains: keeping a high level of technical excellence and quality in our team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Develop critical thinking skills
&lt;/h2&gt;

&lt;p&gt;Here's what concerns me most: the tendency for developers to &lt;strong&gt;become over-reliant&lt;/strong&gt; on AI outputs without developing the judgment to evaluate them. In the recent &lt;a href="https://dora.dev/research/2025/" rel="noopener noreferrer"&gt;DORA 2025 report&lt;/a&gt;, 65% of technology professionals report relying on AI at least a "moderate amount". It's important to understand this behaviour in our teams. All software engineers need to exhibit &lt;strong&gt;critical thinking skills&lt;/strong&gt;, in my opinion, seniors even more than juniors. This skill must be learned and developed; we can't have good professionals in our field without it. But I am seeing more software engineers delegate their thinking to a machine, a tool.&lt;/p&gt;

&lt;p&gt;Teams that accept AI suggestions without the "push-back" that experienced practitioners recommend are usually trading quality for speed. Sure, there is nothing wrong with that in some scenarios, like prototypes and demo apps. For products with millions of users that need to be &lt;strong&gt;robust&lt;/strong&gt;? No, not a good trade-off.&lt;/p&gt;

&lt;p&gt;You should think critically about the AI output and be the human in the loop. Are you confident it will behave well if you give it more tools and autonomy? Are you confident the output is based on facts and truth, instead of lies and hallucinations? Don't delegate your &lt;strong&gt;critical thinking&lt;/strong&gt; to tools, and don't become over-reliant on them either without fact-checking. Always evaluate if what the AI is telling you very confidently is even true, and be mindful of its limitations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;From everything I've observed, learning to use GenAI tools is something I recommend. Learn their strengths and limitations. Organizations that approach AI adoption with healthy skepticism, while investing in experimentation, innovation, and learning, will build sustainable competitive advantages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ask yourself these hard questions&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are you measuring business outcomes, or just code output from AI tools?&lt;/li&gt;
&lt;li&gt;Are your teams getting augmented, or just more dependent on external intelligence?&lt;/li&gt;
&lt;li&gt;Are you blindly believing the AI hype or learning how to leverage this new tool?&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>GitHub Universe 2025 Recap</title>
      <dc:creator>David Pereira</dc:creator>
      <pubDate>Sat, 22 Nov 2025 15:53:14 +0000</pubDate>
      <link>https://dev.to/bolt04/github-universe-2025-recap-9gl</link>
      <guid>https://dev.to/bolt04/github-universe-2025-recap-9gl</guid>
      <description>&lt;p&gt;GitHub Universe 2025 ended a few weeks ago, but there was a ton of cool announcements and some stuff is still yet to come (I'm waiting for the end of the year 👀). I want to share quickly what I learned from watching the sessions and my opinion on some topics.&lt;/p&gt;

&lt;p&gt;Here is a quick recap too:&lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/1JxLIxbEzxQ"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Agent HQ - Mission Control&lt;/li&gt;
&lt;li&gt;GitHub Code Quality&lt;/li&gt;
&lt;li&gt;Copilot Upgrades&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agent HQ is a big focus&lt;/strong&gt;: GitHub becomes mission control for all your coding agents (Anthropic, OpenAI, Google, xAI, and more) with unified task management, granular security controls — all included in your Copilot subscription.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copilot leveled up significantly&lt;/strong&gt;: Code Quality, custom agents, and code review improvements. It's definitely a lot better!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copilot metrics dashboard&lt;/strong&gt;: GitHub launched usage dashboards and APIs showing acceptance rates, engagement, Lines of Code (LoC) related metrics, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Agent HQ - Mission Control
&lt;/h2&gt;

&lt;p&gt;Here is the promotional video on Agent HQ:&lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/KniyIrpTDE8"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;Sure, it looks cool. Many demos during day 1, from Anthropic and OpenAI, looked good, but demos always look good 😅. They are pushing to give us tools to &lt;strong&gt;orchestrate&lt;/strong&gt; a fleet of specialized agents, then monitor and iterate on their work while the agents work in &lt;strong&gt;parallel&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A great new addition is the ability to steer the agent while it's working!! Super awesome, since we can even send a large PRD in that chat session to introduce new requirements or context to the agent. Previously, we only "communicated" with the Copilot coding agent through comments on the issue/PR.&lt;br&gt;
They also want Agent HQ to be the place where you define all security controls for agents; for example, an agent runs with a locked-down GitHub token that limits exactly what it can do. We get audit logs, the firewall Copilot has right now, and other cool stuff.&lt;/p&gt;

&lt;p&gt;Here is another big promise from GitHub: &lt;em&gt;Over the coming months, coding agents from Anthropic, OpenAI, Google, Cognition, xAI, and more will become available directly within GitHub as part of your paid GitHub Copilot subscription&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpky8m2bwsi3zdjste77.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpky8m2bwsi3zdjste77.webp" alt="quotes-from-github"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If this is true for all paid plans (not just Pro+ and Enterprise), it would be fantastic. We could pay $10 for Copilot Pro and get most agents, plus most frontier models, with some being free (e.g. GPT-5 mini). Of course, there would be downsides: for example, we may get Claude Code, but we'd probably lose access to claude.ai and the Claude Code CLI. Will this GitHub integration be 100% the same as the CLI? Will some parts of the integration be open source? Will &lt;code&gt;AGENTS.md&lt;/code&gt; work on them all or not (currently, Claude Code doesn't support it)? Will we have the &lt;strong&gt;deep research&lt;/strong&gt; tool in the Claude Code agent like we do in claude.ai? Will rate limits be the same? Right now, I'm not sure whether the marketing and sales teams are just selling dreams. GitHub wants to be the hub of everything agent-related, and they are partnering with everyone, but I'm skeptical by nature 😅. I'll believe it when I see it.&lt;/p&gt;

&lt;p&gt;Still, very cool announcement!&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub Code Quality
&lt;/h2&gt;

&lt;p&gt;A new feature in preview to ensure code quality in the repository. For now it focuses on the maintainability and reliability of the code; future releases will add test coverage and "AI era challenges". Plus, we can block PRs that don't meet code quality standards.&lt;/p&gt;

&lt;p&gt;There is also a new tab for AI findings, where we can see suggestions for recently modified code. If we already run code review agents at PR time, this feature isn't that helpful, but if it does find problems, we can assign the Copilot coding agent to fix them all.&lt;/p&gt;

&lt;p&gt;I like the idea of having a summary of code quality for the entire repo (mostly for "old" code). At least it might help more teams visualize their technical debt, monitor it, and fix it!&lt;/p&gt;

&lt;p&gt;I think this is interesting but still in its early stages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Copilot Upgrades
&lt;/h2&gt;

&lt;p&gt;Copilot also got some upgrades worth talking about. The most boring one for me is "Plan Mode". Why? It's not new: we already have this in Claude Code, Cursor, and even Copilot via a custom &lt;a href="https://github.com/github/awesome-copilot/blob/main/chatmodes/plan.chatmode.md" rel="noopener noreferrer"&gt;chatmode&lt;/a&gt; that I have used a lot. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copilot Code Review&lt;/strong&gt; has improved, and I can already tell in my recent PRs. You can now see the review session, and it runs CodeQL to find quality issues, plus other linters (e.g. ESLint). If you didn't know, GitHub Advanced Security is a paid feature, and for private repos it was the only way to run CodeQL for code scanning and security analysis. It's really cool that we get that with Copilot now, at least a piece of it. Code review takes longer too, since it's running these tools, but it's better IMO.&lt;/p&gt;

&lt;p&gt;They also introduced &lt;strong&gt;Copilot custom agents&lt;/strong&gt;... again, nothing new 😅. Sure, it's new for Copilot, but we already had sub-agents in Claude Code and agents in Cursor, Codex, and others. Still, it's cool to now have custom agents in &lt;code&gt;.github/agents&lt;/code&gt;. From what I have tested, they are basic for now: not a lot of configuration beyond setting up more MCP servers, and no picking the model for the Copilot coding agent (in &lt;a href="https://code.visualstudio.com/docs/copilot/customization/custom-agents#_custom-agent-file-structure" rel="noopener noreferrer"&gt;VS Code you can&lt;/a&gt;).&lt;/p&gt;
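&lt;p&gt;For illustration, a custom agent file (e.g. something like &lt;code&gt;.github/agents/test-writer.md&lt;/code&gt; — a name I made up) might look roughly like this. This is a hypothetical sketch based on the VS Code custom agent docs linked above; treat the frontmatter fields as assumptions, not a spec:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
description: Writes unit tests for recently changed code
tools: ['search', 'edit']
---
You are a test-writing agent. Given a diff, generate unit tests for the
changed public methods. Follow the repository's existing test naming
conventions and NEVER modify production code.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;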

&lt;p&gt;We also have &lt;a href="https://github.blog/changelog/2025-10-28-copilot-usage-metrics-dashboard-and-api-in-public-preview/" rel="noopener noreferrer"&gt;Copilot metrics&lt;/a&gt;! It seems to be in public preview for all paid plans, but I haven't looked at it yet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focicf7w06tduoibqq5qk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focicf7w06tduoibqq5qk.png" alt=" "&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;announcement from their slides at GH Universe&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I think it's cool to see how our teams and individuals are using AI. Still, it probably lacks many other useful metrics, but oh well. It's a good addition.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So yes, there are tons of cool new features. If you're thinking about adopting AI coding tools at your organization, give Copilot a try and experiment with all these new features. I'll continue to use Copilot/AI on multiple tasks and experiment with custom agents and the rest. It already augments me and my team, so incorporating more agents into our software development lifecycle could be even more beneficial.&lt;/p&gt;

&lt;p&gt;Last but not least, there was a cool talk from the GitHub Next (their R&amp;amp;D) team about &lt;a href="https://www.youtube.com/watch?v=V-sdNfETPYQ" rel="noopener noreferrer"&gt;Continuous AI&lt;/a&gt; 🙂. Cool ideas and prototypes overall; they are working on things I'll keep my eye on, though nothing I'll experiment with soon.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>github</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>AI can be a great augmentation tool, for code-review or AI-assisted coding, but all engineers need to have strong critical thinking skills, in my opinion. In this post, I share how I'm using it along with my own opinions so far.</title>
      <dc:creator>David Pereira</dc:creator>
      <pubDate>Sun, 14 Sep 2025 14:00:02 +0000</pubDate>
      <link>https://dev.to/bolt04/ai-can-be-a-great-augmentation-tool-for-code-review-or-ai-assisted-coding-but-all-engineers-need-22lj</link>
      <guid>https://dev.to/bolt04/ai-can-be-a-great-augmentation-tool-for-code-review-or-ai-assisted-coding-but-all-engineers-need-22lj</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/bolt04/becoming-augmented-by-ai-3f1" class="crayons-story__hidden-navigation-link"&gt;Becoming augmented by AI&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/bolt04" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F192676%2Fcb99d336-4580-47c0-becb-ee381d71b4e8.jpg" alt="bolt04 profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/bolt04" class="crayons-story__secondary fw-medium m:hidden"&gt;
              David Pereira
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                David Pereira
                
              
              &lt;div id="story-author-preview-content-2820588" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/bolt04" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F192676%2Fcb99d336-4580-47c0-becb-ee381d71b4e8.jpg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;David Pereira&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/bolt04/becoming-augmented-by-ai-3f1" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Sep 14 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/bolt04/becoming-augmented-by-ai-3f1" id="article-link-2820588"&gt;
          Becoming augmented by AI
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/learning"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;learning&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
            &lt;a href="https://dev.to/bolt04/becoming-augmented-by-ai-3f1#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            11 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
      <category>ai</category>
      <category>learning</category>
    </item>
    <item>
      <title>Becoming augmented by AI</title>
      <dc:creator>David Pereira</dc:creator>
      <pubDate>Sun, 14 Sep 2025 13:56:58 +0000</pubDate>
      <link>https://dev.to/bolt04/becoming-augmented-by-ai-3f1</link>
      <guid>https://dev.to/bolt04/becoming-augmented-by-ai-3f1</guid>
      <description>&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The "Jagged Frontier" concept&lt;/li&gt;
&lt;li&gt;Becoming augmented by AI&lt;/li&gt;
&lt;li&gt;My augmentation list&lt;/li&gt;
&lt;li&gt;
AI-assisted coding

&lt;ul&gt;
&lt;li&gt;Custom instructions&lt;/li&gt;
&lt;li&gt;Meta-prompting&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Resources&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;We're deep into &lt;a href="https://www.amazon.com/-/pt/dp/059371671X/ref=sr_1_1" rel="noopener noreferrer"&gt;Co-Intelligence&lt;/a&gt; in Create IT's book club — definitely worth your time! Between that and the endless stream of LLM content online, I've been in full research mode. Still, I can't just watch and listen to others talk about these tools; I have to experiment myself and learn how to use them for my own use cases.&lt;/p&gt;

&lt;p&gt;Software development is complex. My job isn't just churning out code. Still, there are many concepts in this book that we've internalized and started adopting.&lt;br&gt;
In this post, I'll share my opinions and some of the practical guidelines our team has been following to become augmented by AI.&lt;/p&gt;
&lt;h2&gt;
  
  
  The "Jagged Frontier" concept
&lt;/h2&gt;

&lt;p&gt;The Jagged Frontier described by the author, Ethan Mollick, is an amazing concept in my opinion. It captures how tasks that appear to be of similar difficulty may be performed either better or worse by humans using AI. Due to the “jagged” nature of the frontier, the same knowledge workflow can contain tasks on both sides of it, according to a &lt;a href="https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7282.pdf" rel="noopener noreferrer"&gt;publication the author took part in&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This leads to the really interesting &lt;strong&gt;Centaur vs. Cyborg&lt;/strong&gt; distinction. Using both approaches (deeply integrated collaboration and separation of tasks) seems to be the way to achieve co-intelligence. One very important Cyborg practice seen in that publication is "push-back" and "demanding logic explanation": we disagree with the AI output, give it feedback, and ask it to reconsider and explain better. Or, as I often do, ask it to double-check against official documentation that what it's telling me is correct.&lt;br&gt;
It's also important to understand that this frontier can change as these models improve. Hence the focus on experimentation to understand where the Jagged Frontier lies for each LLM. It's definitely knowledge that everyone in the industry wants to acquire right now (and maybe share afterwards 😅).&lt;/p&gt;
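&lt;p&gt;To make the "push-back" practice concrete, here is the kind of follow-up message I send when I'm not convinced (illustrative wording, not a prescribed template):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I'm not convinced by point 2. Walk me through the reasoning step by step,
and link the official documentation that supports it. If you can't find
supporting docs, revise your answer instead of defending it.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;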
&lt;h2&gt;
  
  
  Becoming augmented by AI
&lt;/h2&gt;

&lt;p&gt;I'm aware of the marketed productivity gains, where &lt;a href="https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/" rel="noopener noreferrer"&gt;GitHub Copilot usage makes devs 55% faster&lt;/a&gt;, and other studies that have been posted about GenAI increasing productivity. I'm also aware of the studies claiming the opposite 😄 like the &lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;METR study&lt;/a&gt; showing AI makes devs &lt;strong&gt;19% slower&lt;/strong&gt;. However, I don't see 55% productivity gains for myself, and I don't think it makes me slower either.&lt;/p&gt;

&lt;p&gt;In my opinion, productivity gains aren't measured by producing more code. Number of PRs? Nope. Acceptance rate for AI suggestions? Definitely not! I firmly believe the less code, the better. The less slop, the better too 😄. I'm currently focused on assessing &lt;strong&gt;DORA metrics&lt;/strong&gt; and others for my team, because we want to measure whether AI-assisted coding, and the other ways we use AI as an augmentation tool, actually improves those metrics or makes them worse. The rest of the marketing and hype doesn't matter.&lt;/p&gt;
&lt;h3&gt;
  
  
  AI as a co-worker
&lt;/h3&gt;

&lt;p&gt;For a tech lead who works with Azure services, an important skill is knowing how to leverage the right Azure services to build, deploy, and manage a scalable solution. So it becomes very useful to have an AI partner to talk this through with, for example about Azure Durable Functions. This conversation can be shallow and not get all the implementation details 100% correct. That's okay, because the tech lead (and any dev 😅) also needs to exercise &lt;strong&gt;critical thinking&lt;/strong&gt; and evaluate the AI's responses. &lt;strong&gt;This is not a skill we want to delegate&lt;/strong&gt; to these models, at least in my opinion and in the &lt;a href="https://www.oneusefulthing.org/p/against-brain-damage" rel="noopener noreferrer"&gt;author's opinion&lt;/a&gt;. There is also a relevant &lt;a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf" rel="noopener noreferrer"&gt;research paper&lt;/a&gt; about this by Microsoft.&lt;/p&gt;

&lt;p&gt;The goal can simply be to have a conversation with a co-worker to spark new ideas or possible solutions we haven't thought of. Using AI for ideation is a great use case, not just for engineering but for product features too: UI/UX, important metrics to capture, etc. If it generates 20 ideas, there is a higher chance you spot the bad ones, filter them out, and clear your mind or steer it toward better ideas. Here is an example of getting ideas for fixing a recurring exception:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsff9y3mvhiensk7mg34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsff9y3mvhiensk7mg34.png" alt="claude-to-get-ideas" width="800" height="632"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It asks clarifying questions so that I can give it more useful context. Then I can review the response, iterate, ask for more ideas, etc. I almost always set these instructions for any LLM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Ask clarifying questions before giving an answer. Keep explanations not too long. Try to be as insightful as possible, and remember to verify if a solution can be implemented when answering about Azure and architecture in general.
It's also very important for you to verify if there is official documentation that supports your claims and statements. Please find official documentation supporting your claims, before responding to a user. If there isn't documentation confirming your statement, don't include it in the response.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is also why it searches for docs. I've gotten way too many statements in LLM responses that, when I follow up on them, the model admits were errors or assumptions. When I ask it about a sentence it just gave me, I often just get "You're right - I was wrong about that"... Don't become over-reliant on these tools 😅.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI as a co-teacher
&lt;/h3&gt;

&lt;p&gt;With that said, the tech lead and senior devs are also responsible for upskilling their team by sharing knowledge and best practices, challenging juniors with more complex tasks, etc. And this part of the job isn't simple; it's hard to be a force multiplier who improves everyone around you. So, what if the tech lead could use AI in this way, by creating &lt;a href="https://code.visualstudio.com/docs/copilot/customization/prompt-files" rel="noopener noreferrer"&gt;reusable prompts&lt;/a&gt;, documentation, and custom agents? What if the tech lead used AI as a co-teacher, and then shared how to do it with the rest of the team? All of these can help onboard juniors and help them understand our codebase and our domain. The &lt;a href="https://www.anthropic.com/engineering/claude-code-best-practices" rel="noopener noreferrer"&gt;Claude Code best practices post&lt;/a&gt; also references onboarding as a good use case that helps Anthropic engineers.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;At Anthropic, using Claude Code in this way has become our core onboarding workflow, significantly improving ramp-up time and reducing load on other engineers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A lot of onboarding time is spent on understanding the business logic and then how it's implemented. For juniors, it's also about the design patterns or codebase structure. So I really think this is a net-positive for the whole team.&lt;/p&gt;

&lt;h3&gt;
  
  
  My augmentation list
&lt;/h3&gt;

&lt;p&gt;It might not be much, but these are essentially the tasks where I'm augmented by AI:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Initial&lt;/strong&gt; code review (e.g. nitpicks, typos), some stuff I should really just automate 😅&lt;/li&gt;
&lt;li&gt;Generate summaries for the PR description&lt;/li&gt;
&lt;li&gt;Architectural discussions, including trade-off and risk analysis

&lt;ul&gt;
&lt;li&gt;Draft an ADR (Architecture decision record) based on my analysis and arguments&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Co-Teacher and Co-Worker

&lt;ul&gt;
&lt;li&gt;"Deep Research" and discussion about possible solutions&lt;/li&gt;
&lt;li&gt;Learn new tech with analogies or specific Azure features&lt;/li&gt;
&lt;li&gt;Find new sources of information (e.g. blog posts, official docs, conference talks)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Troubleshooting for specific infrastructure problems

&lt;ul&gt;
&lt;li&gt;Generating KQL queries (e.g. rendering charts, analyzing traces &amp;amp; exceptions &amp;amp; dependencies)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Refactoring and documentation suggestions&lt;/li&gt;

&lt;li&gt;Generation of new unit tests given X scenarios&lt;/li&gt;

&lt;/ul&gt;
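&lt;p&gt;For the troubleshooting bullet above, this is the kind of KQL I ask for (a sketch that assumes the standard Application Insights &lt;code&gt;exceptions&lt;/code&gt; schema):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exceptions
| where timestamp &amp;gt; ago(7d)
| summarize count() by type, bin(timestamp, 1h)
| render timechart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;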

&lt;p&gt;&lt;strong&gt;Non-technical&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Summarizing book chapters/blog posts or videos (e.g. NotebookLM)&lt;/li&gt;
&lt;li&gt;Role play in various scenarios (e.g. book discussions)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Of course, we also need to talk about the tasks that fall outside the Jagged Frontier. Again, these can vary from person to person. From my usage and experiments so far, these are the tasks that currently fall outside the frontier:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Being responsible for technical support tickets, where a customer encountered an error or has a question about our product. This involves answering the ticket, asking clarifying questions when necessary, opening up tickets on a 3rd party that are related to this issue, and then resolving the issue.&lt;/li&gt;
&lt;li&gt;Deep valuable code review. This includes good insights, suggestions, and knowledge sharing to improve the PR author's skills. &lt;a href="https://www.coderabbit.ai/" rel="noopener noreferrer"&gt;CodeRabbit&lt;/a&gt; does often give valuable code reviews, way better than any other solution. Still not the same as human review 🙂&lt;/li&gt;
&lt;li&gt;Development of a v0 (or draft) for new complex features&lt;/li&gt;
&lt;li&gt;Fixing bugs that require business domain knowledge&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Delegating some of those tasks would be cool, at least 50% 😄, while our engineering team focuses on other tasks. But oh well, maybe that day will come. &lt;/p&gt;

&lt;h2&gt;
  
  
  AI-assisted coding
&lt;/h2&gt;

&lt;p&gt;AI-assisted coding can be very helpful on some tasks, and lately my goal is to increase the number of tasks AI can assist me with. In our team, we've read &lt;a href="https://www.anthropic.com/engineering/claude-code-best-practices" rel="noopener noreferrer"&gt;Claude Code Best practices&lt;/a&gt; to learn and see what fits our use case best. Then we dove deeper into some topics that post references; for example, &lt;a href="https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/extended-thinking-tips" rel="noopener noreferrer"&gt;these docs&lt;/a&gt; were very useful for learning about Claude's extended thinking feature, complementing the usage of "think" &amp;lt; "think hard" &amp;lt; "think harder" &amp;lt; "ultrathink". We also found &lt;a href="https://simonwillison.net/2025/Apr/19/claude-code-best-practices/" rel="noopener noreferrer"&gt;this post by Simon Willison&lt;/a&gt; about the feature interesting.&lt;br&gt;
For most tasks, an iterative approach, just like normal software development, is indeed way better than trying to one-shot with the perfect prompt. Still, if it takes too many iterations (some bugfixes were too complex because it's hard to pinpoint the bug's location), it loses its value and the experience turns bad (infinite loading spinner of death 🤣).&lt;/p&gt;

&lt;p&gt;Before we can use AI-assisted coding on more complex tasks, we need to improve the output quality. So we've invested a lot of time in fine-tuning custom instructions and meta-prompting. Let's talk about these two.&lt;/p&gt;
&lt;h3&gt;
  
  
  Custom instructions
&lt;/h3&gt;

&lt;p&gt;According to the Copilot docs, instructions should be short, self-contained statements. Most principles in &lt;a href="https://learn.microsoft.com/en-us/training/modules/introduction-prompt-engineering-with-github-copilot/2-prompt-engineering-foundations-best-practices" rel="noopener noreferrer"&gt;prompt engineering&lt;/a&gt; are about being short and specific, and making sure the model pays special attention to our critical instructions.&lt;br&gt;
As everyone keeps saying, the context window is very important, so it's really good if we can keep the instructions file to around 200 lines. The longer our instructions are, the greater the risk that the LLM won't follow them, since it can pay more attention to other tokens or forget relevant instructions. With that said, keeping instructions short is also a challenge when we use the few-shot prompting technique and add more examples.&lt;/p&gt;
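&lt;p&gt;As a made-up illustration of what short, self-contained statements can look like (these are hypothetical rules, not our actual file):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Prefer async/await over blocking calls. NEVER call .Result or .Wait().
All public APIs MUST have XML doc comments.
Keep methods small; extract helpers instead of adding nesting.
ALWAYS add or update unit tests when changing behavior.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;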

&lt;p&gt;To build our custom instructions, we used the C# and Blazor files from &lt;a href="https://github.com/github/awesome-copilot/tree/main" rel="noopener noreferrer"&gt;the awesome-copilot repo&lt;/a&gt; and other sources of inspiration, like &lt;a href="https://parahelp.com/blog/prompt-design" rel="noopener noreferrer"&gt;parahelp prompt design&lt;/a&gt;, to get a first version. We wanted to know what techniques other teams use. Then we made specific edits to follow our own guidelines and removed rules specific to explaining concepts, etc.&lt;br&gt;
We also added some &lt;strong&gt;capitalized words&lt;/strong&gt; that are common in system prompts and commands, like IMPORTANT, NEVER, ALWAYS, and MUST. The IMPORTANT word also goes at the end of the instructions, to try to &lt;strong&gt;refocus&lt;/strong&gt; the model's attention on coding standards:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IMPORTANT: Follow our coding standards when implementing features or fixing bugs. If you are unsure about a specific coding standard, ask for clarification.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'm not 100% sure how this capitalization works, or why it works... and I haven't found docs/evidence/research on it. All I know is that capitalized words tokenize differently than lowercase ones. It's probably something the model pays more attention to, since in the training data these words signal importance. I do wish Microsoft, OpenAI, and Anthropic covered capitalization in their prompt engineering docs/tutorials.&lt;/p&gt;

&lt;p&gt;It's at the end of our file since &lt;a href="https://huggingface.co/papers/2307.03172" rel="noopener noreferrer"&gt;research suggests that the beginning and end of a prompt&lt;/a&gt; are what the LLM pays the most attention to and finds most relevant; some middle parts are "meh" and can be forgotten. The &lt;a href="https://learn.microsoft.com/en-us/azure/ai-foundry/openai/concepts/prompt-engineering?tabs=chat#repeat-instructions-at-the-end" rel="noopener noreferrer"&gt;Microsoft docs&lt;/a&gt; say essentially the same; it's known as "&lt;strong&gt;recency bias&lt;/strong&gt;". In most prompts we see, this kind of section sits at the end to refocus the LLM's attention.&lt;/p&gt;

&lt;h3&gt;
  
  
  Meta-prompting
&lt;/h3&gt;

&lt;p&gt;Our goal isn't to have the perfect custom instructions and prompt, since refining them later with an iterative/conversational approach works well. But we came across the concept of &lt;a href="https://cookbook.openai.com/examples/enhance_your_prompts_with_meta_prompting" rel="noopener noreferrer"&gt;meta-prompting&lt;/a&gt;, a term that is becoming more popular. Basically, we asked Claude how to improve our prompt, and it gave us some cool ideas for our instructions/reusable prompts.&lt;/p&gt;
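&lt;p&gt;A meta-prompt can be as simple as this (a sketch of the kind of request we make; the wording is my own, not from the cookbook):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Here is our current custom instructions file:
[paste instructions]

Act as a prompt engineer. Critique these instructions for ambiguity,
redundancy, and conflicting rules. Then propose a shorter revised version
that a coding model can follow more reliably, explaining each change.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;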

&lt;p&gt;But don't forget to use LLMs with caution... I keep getting "You're absolutely right..." and it's annoying how sycophantic it is oftentimes 😅&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l0h642hdv82vo59m3iu.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l0h642hdv82vo59m3iu.PNG" alt="llm-trust-but-verify" width="580" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The quality of the output is most likely affected by the complexity of the task I'm working on, too. Prompting skills only go so far. From what I've researched and learned, there is a real learning curve to understanding LLMs, so we need to keep experimenting and learning about the layers between our prompt and the output we see.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;This is not an exhaustive list by any means, just some resources I find very useful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=EWvNQjAaOHw&amp;amp;t=7238s" rel="noopener noreferrer"&gt;Andrej Karpathy - How I use LLMs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=LCEmiRjPEtQ" rel="noopener noreferrer"&gt;Andrej Karpathy: Software Is Changing (Again)&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Related to this is &lt;a href="https://natesnewsletter.substack.com/p/software-30-vs-ai-agentic-mesh-why" rel="noopener noreferrer"&gt;this post from Nate Jones&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=tbDDYKRFjhk" rel="noopener noreferrer"&gt;Does AI Actually Boost Developer Productivity? (100k Devs Study) - Yegor Denisov-Blanch, Stanford&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.anthropic.com/engineering/claude-code-best-practices" rel="noopener noreferrer"&gt;Claude Code: Best practices for agentic coding&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://zed.dev/blog/why-llms-cant-build-software" rel="noopener noreferrer"&gt;Why LLMs Can't Really Build Software&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=-1yH_BTKgXs" rel="noopener noreferrer"&gt;Is AI the Future of Software Development, or Just a new Abstraction? Insights from Kelsey Hightower&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://cookbook.openai.com/examples/gpt-5/gpt-5_prompting_guide" rel="noopener noreferrer"&gt;GPT-5 prompting guide&lt;/a&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I've enjoyed learning and improving myself over the years. But with GenAI, I now feel like I can learn a lot more and improve even further, since I'm choosing to use these models as &lt;strong&gt;augmentation tools&lt;/strong&gt;.&lt;br&gt;
Hopefully, this article motivates you to pursue AI augmentation for yourself. It's okay to be skeptical about all the hype you see and hear around these tools; skepticism is a good mechanism for not falling for the sales pitches and fluff that CEOs and others in the industry put out. Just don't let it prevent you from learning, experimenting, building your own opinion, and finding ways to improve your work 🙂.&lt;/p&gt;

&lt;p&gt;Still... I can't deny my curiosity about how these systems work underneath. How exactly is fine-tuning done? How does post-training work? Can these models emit telemetry (logs, traces, metrics) that we can observe? Why does capitalization (e.g. IMPORTANT, MUST) or setting a &lt;a href="https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/system-prompts" rel="noopener noreferrer"&gt;role/persona&lt;/a&gt; improve prompts? Can we really not have access to a high-level view of the weights the LLM uses to correlate tokens, and use it to justify why a given output was produced, or why an instruction given as input was not followed?&lt;br&gt;
It's okay to have just a basic understanding of the new abstractions we get with these LLMs. But knowing how that abstraction works is what leads to knowing how to transition to automation.&lt;/p&gt;

&lt;p&gt;I will keep searching and learning in order to answer these questions, or find engineers in the industry who have answered them, especially around &lt;strong&gt;interpretability research&lt;/strong&gt;, which is amazing!!! I recommend reading this research; for example, &lt;a href="https://www.anthropic.com/research/tracing-thoughts-language-model" rel="noopener noreferrer"&gt;Tracing the thoughts of a large language model&lt;/a&gt;.&lt;br&gt;
Hope you enjoyed reading; feel free to share in the comments below how you use AI to augment yourself 🙂.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>learning</category>
    </item>
    <item>
      <title>Learning: Observability</title>
      <dc:creator>David Pereira</dc:creator>
      <pubDate>Sun, 30 Mar 2025 09:00:00 +0000</pubDate>
      <link>https://dev.to/bolt04/learning-observability-3i37</link>
      <guid>https://dev.to/bolt04/learning-observability-3i37</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Get Started&lt;/li&gt;
&lt;li&gt;
Logs

&lt;ul&gt;
&lt;li&gt;Canonical logs&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Traces&lt;/li&gt;

&lt;li&gt;Metrics&lt;/li&gt;

&lt;li&gt;General Best Practices&lt;/li&gt;

&lt;li&gt;

Resources

&lt;ul&gt;
&lt;li&gt;GitHub demo repo&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this small post, I'll share some resources, notes I've taken while learning, and best practices for making our systems observable. I've always had a knowledge gap regarding observability, and recently I've truly enjoyed learning more about this area in our software industry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick note&lt;/strong&gt;: In this post I'll only share about 3 telemetry &lt;a href="https://opentelemetry.io/docs/concepts/signals/" rel="noopener noreferrer"&gt;signals&lt;/a&gt;. &lt;strong&gt;Profile&lt;/strong&gt; is another signal that I will research in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;

&lt;p&gt;Follow these steps to get started with auto-instrumentation in your application using OpenTelemetry: &lt;a href="https://opentelemetry.io/docs/languages/net/getting-started/#instrumentation" rel="noopener noreferrer"&gt;https://opentelemetry.io/docs/languages/net/getting-started/#instrumentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For OpenTelemetry in a front-end app you can check these useful resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://grafana.com/oss/faro/" rel="noopener noreferrer"&gt;Grafana faro&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nextjs.org/docs/app/building-your-application/optimizing/open-telemetry#using-vercelotel" rel="noopener noreferrer"&gt;Next.js&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.checklyhq.com/blog/in-depth-guide-to-monitoring-next-js-apps-with-opentelemetry/" rel="noopener noreferrer"&gt;Guide for OpenTelemetry in Next.js&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://opentelemetry.io/docs/languages/js/getting-started/browser/" rel="noopener noreferrer"&gt;Browser OpenTelemetry getting started&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Client-side instrumentation in OpenTelemetry is part of &lt;a href="https://opentelemetry.io/community/roadmap/#p2-client-instrumentation-rum" rel="noopener noreferrer"&gt;their roadmap&lt;/a&gt; which is great to see, since I've only seen vendor-specific solutions and products for front-end apps (e.g. New Relic, Datadog). For browser instrumentation otel doesn't seem to be super mature yet, but a lot of effort is being put into this area by the OpenTelemetry team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logs
&lt;/h2&gt;

&lt;p&gt;We all know about logs 😄. They're the data we all need to troubleshoot and understand what is happening in our applications. We shouldn't overdo it by creating tons and tons of logs, though, since that creates noise and makes it harder to troubleshoot problems.&lt;/p&gt;

&lt;p&gt;For logs, we can use &lt;a href="https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/docs/logs/README.md#best-practices" rel="noopener noreferrer"&gt;these best practices&lt;/a&gt;. From this list, these are an absolute must to follow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid string interpolation&lt;/li&gt;
&lt;li&gt;Use structured logging&lt;/li&gt;
&lt;li&gt;Log redaction for sensitive information&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition to the list above, we should also include the &lt;code&gt;TraceId&lt;/code&gt; and &lt;code&gt;SpanId&lt;/code&gt; in our log records to correlate logs with traces. If you are using the Serilog console sink, &lt;a href="https://github.com/serilog/serilog-sinks-console/blob/4c9a7b6946dfd2d7f07a792c40bb3d46af835ee9/src/Serilog.Sinks.Console/ConsoleLoggerConfigurationExtensions.cs#L32" rel="noopener noreferrer"&gt;the default message template&lt;/a&gt; won't include those fields, so if you want them, consider using the &lt;a href="https://github.com/serilog/serilog/wiki/Formatting-Output#formatting-json" rel="noopener noreferrer"&gt;JsonFormatter&lt;/a&gt; or &lt;code&gt;CompactJsonFormatter&lt;/code&gt;. Here is an example Serilog configuration in &lt;code&gt;appsettings.json&lt;/code&gt; (set up to suppress unnecessary/noisy logs):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"Serilog"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Using"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"Serilog.Sinks.Console"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"MinimumLevel"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Default"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Information"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Override"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Microsoft.AspNetCore"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Warning"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Microsoft.Extensions.Diagnostics.HealthChecks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Warning"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"WriteTo"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Console"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"formatter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Serilog.Formatting.Json.JsonFormatter, Serilog"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"renderMessage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Enrich"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"FromLogContext"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"WithMachineName"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"WithThreadId"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"WithProcessId"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"WithProcessName"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"WithExceptionDetails"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"WithExceptionStackTraceHash"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"WithEnvironmentName"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Application"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GrafanaDemoOtelApp"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
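&lt;p&gt;The config above is .NET-specific, but the underlying idea (structured log lines with a message template, separate arguments, and trace/span ids for correlation) is language-agnostic. Here is a minimal stdlib-only Python sketch of it; the field names and ids are made up for illustration, and a real app would pull &lt;code&gt;trace_id&lt;/code&gt;/&lt;code&gt;span_id&lt;/code&gt; from the active OpenTelemetry span context:&lt;/p&gt;

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each record as one structured JSON log line."""

    def format(self, record):
        entry = {
            "level": record.levelname,
            # Template and arguments are kept apart: no string interpolation
            # at the call site, so backends can group by template.
            "template": record.msg,
            "message": record.getMessage(),
            # In a real app these would come from the active span context.
            "trace_id": getattr(record, "trace_id", None),
            "span_id": getattr(record, "span_id", None),
        }
        return json.dumps(entry)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Arguments are passed separately, and correlation ids ride along via `extra`.
logger.info(
    "User %s checked out",
    "u-123",
    extra={"trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
           "span_id": "00f067aa0ba902b7"},
)
```

&lt;p&gt;Because the template and arguments are stored separately, a log backend can aggregate all occurrences of the same template while still filtering on the argument values.&lt;/p&gt;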


&lt;p&gt;Below are some documentation links for logging in .NET. The &lt;code&gt;ILogger&lt;/code&gt; extension methods are not always the best choice (e.g. &lt;code&gt;logger.LogInformation&lt;/code&gt;), especially in high-performance scenarios or if your logs are in a hot path:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/dotnet/core/extensions/high-performance-logging" rel="noopener noreferrer"&gt;High-performance logging in .NET&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/dotnet/core/extensions/logger-message-generator" rel="noopener noreferrer"&gt;Compile-time logging source generation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Canonical logs
&lt;/h3&gt;

&lt;p&gt;There is also a different way of logging, based on having more attributes in one single log line. I've seen this in Stripe where they call it &lt;a href="https://stripe.com/blog/canonical-log-lines" rel="noopener noreferrer"&gt;canonical log lines&lt;/a&gt;. Charity Majors also references this &lt;strong&gt;canonical logs&lt;/strong&gt; term in her blog post about Observability 2.0 (that I reference in the Resources section).&lt;/p&gt;

&lt;p&gt;This idea is very interesting, but awareness of it seems low. At least in .NET land, I didn't find many references to this style of logging, or example code to follow when many &lt;code&gt;ILogger&lt;/code&gt; instances are involved.&lt;/p&gt;
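&lt;p&gt;To make the idea concrete, here is a small language-agnostic sketch in Python (the names are hypothetical, not from Stripe's implementation): instead of scattering many log lines across a request, you accumulate attributes into one object and emit a single wide line at the end.&lt;/p&gt;

```python
import json
import time


class CanonicalLogLine:
    """Accumulate request-scoped attributes; emit one wide line at the end."""

    def __init__(self, **base):
        self.attrs = dict(base)
        self._start = time.monotonic()

    def add(self, **attrs):
        self.attrs.update(attrs)

    def emit(self) -> str:
        self.attrs["duration_ms"] = round(
            (time.monotonic() - self._start) * 1000, 1)
        return json.dumps(self.attrs)


# One line per request, wide enough to answer most questions on its own.
line = CanonicalLogLine(http_method="POST", http_path="/v1/charges")
line.add(auth_type="api_key", team="payments")   # known after auth
line.add(http_status=200, rate_allowed=True)     # known at the end
print(line.emit())
```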
&lt;h2&gt;
  
  
  Traces
&lt;/h2&gt;

&lt;p&gt;For traces in .NET we have &lt;a href="https://learn.microsoft.com/en-us/dotnet/core/diagnostics/distributed-tracing-instrumentation-walkthroughs#best-practices-1" rel="noopener noreferrer"&gt;these best practices&lt;/a&gt;. So far I've seen five common solutions for adding &lt;a href="https://microsoft.github.io/code-with-engineering-playbook/observability/correlation-id/" rel="noopener noreferrer"&gt;correlation ids&lt;/a&gt; in traces (not all are standards):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.w3.org/TR/trace-context/" rel="noopener noreferrer"&gt;W3C trace context&lt;/a&gt; - current standard in the HTTP protocol for tracing&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Common_non-standard_request_fields" rel="noopener noreferrer"&gt;X-Correlation-Id&lt;/a&gt; - a non-standard HTTP header for RESTful APIs (also known as &lt;a href="https://http.dev/x-request-id" rel="noopener noreferrer"&gt;X-Request-Id&lt;/a&gt;). I thought this was a standard since it's widely used, but I didn't find a RFC from IETF or any other organization.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/dotnet/runtime/blob/main/src/libraries/System.Diagnostics.DiagnosticSource/src/HttpCorrelationProtocol.md" rel="noopener noreferrer"&gt;Request-Id&lt;/a&gt; - this is a known header in the .NET ecosystem&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/openzipkin/b3-propagation" rel="noopener noreferrer"&gt;B3 Zipkin propagation&lt;/a&gt; - Zipkin format standard&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html#xray-concepts-tracingheader" rel="noopener noreferrer"&gt;AWS X-Ray Trace Id&lt;/a&gt; - proprietary solution for AWS that adds headers for tracing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not every company/project uses W3C trace context; you have some options above to pick from. I prefer the standard W3C trace context 😄 (maybe the industry will widely adopt it in the future), and using OpenTelemetry to manage these headers (HTTP, AMQP, etc.) and the correlation with logs automatically. The code you don't write can't have bugs 😆.&lt;/p&gt;
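&lt;p&gt;The W3C format itself is simple: a &lt;code&gt;traceparent&lt;/code&gt; header carries &lt;code&gt;version-traceid-parentid-flags&lt;/code&gt;. Here is a minimal Python sketch of parsing it (OpenTelemetry does this for you via its propagators; this is just to show what's inside the header):&lt;/p&gt;

```python
def parse_traceparent(header: str) -> dict:
    """Split a W3C `traceparent` header: version-traceid-parentid-flags."""
    version, trace_id, parent_id, flags = header.split("-")
    if len(trace_id) != 32 or len(parent_id) != 16:
        raise ValueError("malformed traceparent")
    return {
        "version": version,
        "trace_id": trace_id,    # 16-byte id shared by the whole trace
        "parent_id": parent_id,  # 8-byte id of the calling span
        "sampled": bool(int(flags, 16) & 0x01),
    }


# Example header taken from the W3C Trace Context spec.
ctx = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
print(ctx["trace_id"], ctx["sampled"])  # 4bf92f3577b34da6a3ce929d0e0e4736 True
```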

&lt;p&gt;With that said, in some situations you might have integrations with 3rd-party software and need to use their custom headers, or project limitations that force a particular format. At the end of the day, what's important is that you have distributed tracing working E2E.&lt;/p&gt;

&lt;p&gt;There is also a relevant spec for distributed tracing called &lt;a href="https://www.w3.org/TR/baggage/" rel="noopener noreferrer"&gt;Baggage&lt;/a&gt; which OpenTelemetry implements and we can use in our apps. The most important part here is trace propagation to get the full trace from the publisher to the consumer.&lt;/p&gt;
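&lt;p&gt;Baggage is just as simple on the wire: comma-separated &lt;code&gt;key=value&lt;/code&gt; pairs, percent-encoded. A rough Python sketch of decoding it (ignoring the optional &lt;code&gt;;&lt;/code&gt;-separated properties; again, OpenTelemetry handles this for you):&lt;/p&gt;

```python
from urllib.parse import unquote


def parse_baggage(header: str) -> dict:
    """Decode a W3C Baggage header into a dict of propagated values."""
    entries = {}
    for member in header.split(","):
        key, _, value = member.strip().partition("=")
        # Drop optional `;`-separated properties (metadata) in this sketch.
        entries[key] = unquote(value.split(";")[0].strip())
    return entries


# Example values from the W3C Baggage spec.
print(parse_baggage("userId=alice,serverNode=DF%2028,isProduction=false"))
```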
&lt;h2&gt;
  
  
  Metrics
&lt;/h2&gt;

&lt;p&gt;For metrics, it's important to follow naming conventions for custom metrics. Especially if your organization has a platform team, setting conventions helps everyone. I do know some otel semantic conventions aren't stable yet, which also leads to some NuGet packages being pre-release.&lt;/p&gt;

&lt;p&gt;But anyhow, set conventions for your team or read and follow &lt;a href="https://opentelemetry.io/docs/specs/semconv/general/metrics/" rel="noopener noreferrer"&gt;OpenTelemetry semantic conventions&lt;/a&gt;.&lt;br&gt;
An important resource I found is the guidance in the &lt;a href="https://prometheus.io/docs/practices/instrumentation/#do-not-overuse-labels" rel="noopener noreferrer"&gt;Prometheus best practices&lt;/a&gt; about not overusing labels, i.e. high-cardinality metrics.&lt;/p&gt;
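&lt;p&gt;To see why high cardinality matters: a metric's time-series count is roughly the product of its labels' cardinalities. A back-of-the-envelope sketch (the label names and counts here are made up):&lt;/p&gt;

```python
def series_estimate(label_cardinalities: dict) -> int:
    """A metric's series count is the product of its labels' cardinalities."""
    total = 1
    for count in label_cardinalities.values():
        total *= count
    return total


# Bounded labels: ~7 HTTP methods x ~60 status codes = 420 series. Fine.
print(series_estimate({"http_method": 7, "http_status": 60}))
# Add an unbounded label like user_id and the product explodes:
# 420 series become 420 million.
print(series_estimate({"http_method": 7, "http_status": 60,
                       "user_id": 1_000_000}))
```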

&lt;p&gt;When I started trying out custom metrics instrumentation, I discovered that OpenTelemetry (the SDK + OTLP) is not always used. There's the Prometheus SDK, which is mature and widely used, and in Java there are solutions like Micrometer that integrate very well with Spring. Regarding the Java ecosystem, I read &lt;a href="https://opentelemetry.io/blog/2024/java-metric-systems-compared/#benchmark-opentelemetry-java-vs-micrometer-vs-prometheus-java" rel="noopener noreferrer"&gt;these otel Java benchmarks&lt;/a&gt; and &lt;a href="https://spring.io/blog/2024/10/28/lets-use-opentelemetry-with-spring" rel="noopener noreferrer"&gt;this Spring post&lt;/a&gt; just because I was interested in knowing what the industry is adopting and why.&lt;/p&gt;
&lt;h2&gt;
  
  
  General Best Practices
&lt;/h2&gt;

&lt;p&gt;There is a ton to be learned with SRE principles and practices. But one in particular was very useful for me and my team: &lt;strong&gt;always categorize our custom metrics according to the 4 Golden Signals&lt;/strong&gt;. Any metric we can't categorize is probably not useful for us.&lt;/p&gt;
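&lt;p&gt;In practice we turned this into a simple checklist: every custom metric must map to exactly one signal. A sketch of that rule (the metric names are hypothetical):&lt;/p&gt;

```python
# The four Golden Signals from Google's SRE book.
GOLDEN_SIGNALS = frozenset({"latency", "traffic", "errors", "saturation"})

# Hypothetical custom metrics, each mapped to the signal it answers for.
custom_metrics = {
    "checkout_duration_seconds": "latency",
    "orders_received_total": "traffic",
    "payment_failures_total": "errors",
    "worker_queue_fill_ratio": "saturation",
}


def uncategorized(metrics: dict) -> list:
    """Metrics that answer to no golden signal are candidates for removal."""
    return [name for name, signal in metrics.items()
            if signal not in GOLDEN_SIGNALS]


print(uncategorized(custom_metrics))  # [] -- every metric earns its keep
```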


  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdeniseyu.io%2Fart%2Fsketchnotes%2Ftopic-based%2Fmonitoring.png" alt="Image Credit to - Denise Yu"&gt;Image Credit to - Denise Yu
  


&lt;p&gt;&lt;a href="https://deniseyu.io/art/" rel="noopener noreferrer"&gt;Source of Denise Yu's art&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://sre.google/sre-book/monitoring-distributed-systems/" rel="noopener noreferrer"&gt;Google's SRE book&lt;/a&gt; is amazing to learn more about the 4 Golden signals and creating SLO-based alerts. All our alerts should be actionable (or the support team will not be happy), so it helps if they are based on SLOs that are defined as a team.&lt;/p&gt;

&lt;p&gt;They also have &lt;a href="https://sre.google/sre-book/service-best-practices/" rel="noopener noreferrer"&gt;some best practices for production services&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Glossary of many observability terms in case you’re not familiar with them: &lt;a href="https://github.com/prathamesh-sonpatki/o11y-wiki" rel="noopener noreferrer"&gt;https://github.com/prathamesh-sonpatki/o11y-wiki&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/magsther/awesome-opentelemetry" rel="noopener noreferrer"&gt;Awesome Observability GitHub repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;If dashboards make you happy &lt;a href="https://play.grafana.org/d/feg4yc4qw3wn4b/third-annual-observability-survey?pg=survey-2025&amp;amp;plcmt=toc-cta-2&amp;amp;orgId=1&amp;amp;from=2025-03-13T02:49:20.476Z&amp;amp;to=2025-03-14T02:49:20.476Z&amp;amp;timezone=utc&amp;amp;var-region=$__all&amp;amp;var-role=$__all&amp;amp;var-size=$__all&amp;amp;var-industry=$__all&amp;amp;var-filters=%60Region%60%20in%20%28%27Europe%27,%27Asia%27,%27North%20America%27,%27Africa%27,%27South%20America%27,%27Oceania%27,%27Middle%20East%27%29%20AND%20%60Role%60%20IN%20%28%27Platform%20team%27,%27SRE%27,%27CTO%27,%27Engineering%20manager%27,%27Developer%27,%27Director%20of%20engineering%27,%27Other%27%29%20AND%20%60Size_of_organization%60%20IN%20%28%2710%20or%20fewer%20employees%27,%2711%20-%20100%20employees%27,%27101%20-%20500%20employees%27,%27501%20-%201,000%20employees%27,%271,001%20-%202,500%20employees%27,%272,501%20-%205,000%20employees%27,%275,001%2B%20employees%27%29%20AND%20%60Industry%60%20IN%20%28%27Telecommunications%27,%27Healthcare%27,%27IoT%27,%27Financial%20services%27,%27Education%27,%27Government%27,%27Applied%20Sciences%27,%27Software%20%26%20Technology%27,%27Media%20%26%20Entertainment%27,%27Travel%20%26%20Transportation%27,%27Retail%2FE-commerce%27,%27Energy%20%26%20Utilities%27,%27Automotive%20%26%20Manufacturing%27,%27Other%27%29" rel="noopener noreferrer"&gt;check the Grafana observability report dashboard&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws-observability.github.io/observability-best-practices/guides/" rel="noopener noreferrer"&gt;AWS observability best practices guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://grafana.com/blog/2018/08/02/the-red-method-how-to-instrument-your-services/" rel="noopener noreferrer"&gt;About RED and USE method&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/dotnet/core/diagnostics/distributed-tracing-instrumentation-walkthroughs#best-practices-1" rel="noopener noreferrer"&gt;Traces Instrumentation best practices in .NET&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/guides/what-are-the-limitations-of-prometheus-labels/#what-are-the-limitations-of-prometheus-labels" rel="noopener noreferrer"&gt;What are the Limitations of Prometheus Labels?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cncf.io/training/certification/otca/" rel="noopener noreferrer"&gt;CNCF OpenTelemetry certification&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/cncf/tag-observability/blob/main/whitepaper.md" rel="noopener noreferrer"&gt;TAG Observability whitepaper&lt;/a&gt; - this is an amazing resource with tons of information! I also recommend checking out the other resources they have in the tag-observability repo and community&lt;/li&gt;
&lt;li&gt;Resources specifically about &lt;strong&gt;Observability 2.0&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://charity.wtf/tag/observability-2-0/" rel="noopener noreferrer"&gt;Observability 2.0 by Charity Majors&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.aparker.io/post/3leq2g72z7r2t" rel="noopener noreferrer"&gt;Re-Redefining Observability&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=ag2ykPO805M" rel="noopener noreferrer"&gt;Is It Time To Version Observability? (Signs Point To Yes) - Charity Majors&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Talks

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=hhZrOHKIxLw" rel="noopener noreferrer"&gt;How Prometheus Revolutionized Monitoring at SoundCloud - Björn Rabenstein&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=X99X-VDzxnw" rel="noopener noreferrer"&gt;How to Include Latency in SLO-based Alerting - Björn Rabenstein, Grafana Labs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=pLPMAAOSxSE" rel="noopener noreferrer"&gt;Myths and Historical Accidents: OpenTelemetry and the Future of Observability Part 1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://youtu.be/3tBj3ZCPGJY?t=687" rel="noopener noreferrer"&gt;Modern Platform Engineering: 9 Secrets of Generative Teams - Liz Fong-Jones&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=gviWKCXwyvY" rel="noopener noreferrer"&gt;Context Propagation makes OpenTelemetry awesome&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  GitHub demo repo
&lt;/h3&gt;

&lt;p&gt;I've been developing a demo app (it has fewer features than the &lt;a href="https://github.com/open-telemetry/opentelemetry-demo" rel="noopener noreferrer"&gt;otel demo&lt;/a&gt;) to demonstrate how to build an app with OpenTelemetry, Grafana and Prometheus. It's primarily focused on a small app I can showcase in my talks.&lt;/p&gt;

&lt;p&gt;If you're interested take a look:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/BOLT04" rel="noopener noreferrer"&gt;
        BOLT04
      &lt;/a&gt; / &lt;a href="https://github.com/BOLT04/grafana-observability-demo" rel="noopener noreferrer"&gt;
        grafana-observability-demo
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Repo with grafana and observability related demo app
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Observability - Grafana Demo for Talks&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;This is a simple demo showcasing how we can instrument our applications with OpenTelemetry, using Azure Monitoring + Grafana + Prometheus.
It's intended to be used as the demo of a specific talk about observability.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Demo app instructions&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Read the instructions in &lt;code&gt;src/README.md&lt;/code&gt;.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Session Abstracts&lt;/h2&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Observability with Azure Managed Grafana&lt;/h3&gt;

&lt;/div&gt;
&lt;p&gt;Nowadays, OpenTelemetry is used extensively to collect telemetry data from our applications, and serves as an industry standard. But we need a way to visualize this data in a clear way, and that is where Azure Managed Grafana comes in.&lt;/p&gt;
&lt;p&gt;In this session we'll go through the core concepts of observability and demonstrate how we can use Azure Managed Grafana, integrated with Prometheus, Grafana Tempo, and Loki, to gather insights from our telemetry data.
We will cover topics such as the basics about logs, metrics and traces, manual instrumentation, OTLP, and others. We'll…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/BOLT04/grafana-observability-demo" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14piw6dnmu0v0vh6b51t.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14piw6dnmu0v0vh6b51t.gif" alt="happy gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hopefully, some of these resources I've shared are useful to you 😄. I still have a ton to learn and explore, but I'm happy with the knowledge I've acquired so far.&lt;/p&gt;

&lt;p&gt;There are some specific standards + projects that I'll dive into and explore more, like eBPF and OpenMetrics. OpenMetrics is something I'd like to spend some quality time reading about, but I know &lt;a href="https://www.cncf.io/blog/2024/09/18/openmetrics-is-archived-merged-into-prometheus/" rel="noopener noreferrer"&gt;it's archived&lt;/a&gt; and &lt;a href="https://www.reddit.com/r/devops/comments/1f5ttdx/openmetrics_is_archived_merged_into_prometheus/?rdt=47070" rel="noopener noreferrer"&gt;reddit says the same&lt;/a&gt;. I just want to read and watch some talks about it to feed my curiosity 😃.&lt;/p&gt;

&lt;p&gt;Last but not least, I want to follow the work that some industry leaders are doing, like &lt;a href="https://charity.wtf/" rel="noopener noreferrer"&gt;Charity Majors&lt;/a&gt;, specifically about Observability 2.0 😄. I discovered this term in the &lt;a href="https://www.thoughtworks.com/radar/techniques/summary/observability-2-0" rel="noopener noreferrer"&gt;Thoughtworks tech radar&lt;/a&gt;, and the part "high-cardinality event data in a single data store" caught my interest.&lt;br&gt;
I'm still learning, researching, and listening to the opinions of industry leaders about this term to then develop my own opinions. Maybe I'll make a blog post about this in the future 😁.&lt;/p&gt;

</description>
      <category>o11y</category>
      <category>observability</category>
      <category>learning</category>
    </item>
    <item>
      <title>Taking a look at what influenced me as I grow</title>
      <dc:creator>David Pereira</dc:creator>
      <pubDate>Mon, 26 Jun 2023 12:00:00 +0000</pubDate>
      <link>https://dev.to/bolt04/taking-a-look-at-what-influenced-me-as-i-grow-1jhd</link>
      <guid>https://dev.to/bolt04/taking-a-look-at-what-influenced-me-as-i-grow-1jhd</guid>
      <description>&lt;p&gt;I've wanted to write down what/who has helped me grow as a developer for quite some time. Well... here it is 😄.&lt;/p&gt;

&lt;p&gt;Here is a tweet where I shared talks that I enjoyed a ton, and helped me grow:&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1540783019862024194-83" src="https://platform.twitter.com/embed/Tweet.html?id=1540783019862024194"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;In this post, I'll share my retrospective on what has influenced me. I'll mention a few videos from that tweet that really molded who I am 😄&lt;/p&gt;

&lt;h2&gt;
  
  
  Communication and Leadership
&lt;/h2&gt;

&lt;p&gt;The person who taught me the most about communication and leadership was &lt;a href="https://paulonetocoach.pt/Blog.html" rel="noopener noreferrer"&gt;Paulo Neto&lt;/a&gt;. I've had the privilege of participating in many group and 1-1 sessions with Paulo, alongside other colleagues from Create IT.&lt;/p&gt;

&lt;p&gt;You don't need to be the Tech Lead to be a leader on your team. Give feedback to your colleagues on what they did great, or what they can improve. This doesn't mean "always give specific instructions on how to get from point A to point B". The strategy we use to give feedback depends on the other person, but sometimes it's just better to ask questions, not give solutions. This challenges the other person to come up with a solution and think critically about the problem.&lt;/p&gt;

&lt;p&gt;I used to help by giving all the answers to the problems people asked me. By giving my opinion even if nobody asked for it. If I give away the solution right away, let's say to a junior dev, it means I just stole an opportunity for that person to grow.&lt;br&gt;
Now of course, it depends on the scenario (PRD bugs can have some nasty consequences), but I believe stepping aside and asking more questions, to make the other person think for themselves and get to the solution, is way better.&lt;/p&gt;

&lt;p&gt;When you give feedback, the focus is on the other person's growth, not your own.&lt;/p&gt;

&lt;p&gt;I'm still getting better at being a human, but Paulo helped me a lot in being on the right path.&lt;/p&gt;
&lt;h2&gt;
  
  
  Learning to learn
&lt;/h2&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ujxvy5NjeRQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This video really taught me what it means &lt;strong&gt;to learn&lt;/strong&gt;. Re-learning is a part of the process! This completely changed how I look at certain topics.&lt;/p&gt;

&lt;p&gt;One more thing that is crucial is to &lt;strong&gt;reserve time to learn&lt;/strong&gt;! Saying "I don't have time to learn k8s" is the equivalent of saying "Learning k8s is not my priority right now". We have time for everything in this world; it's all a matter of priorities 😂. On several occasions, I've sacrificed my learning/growing time to deliver user stories in the sprint. If you say &lt;strong&gt;yes&lt;/strong&gt; to every request, you usually end up prioritizing those requests instead of your own time to learn k8s, for example.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learning to teach
&lt;/h2&gt;

&lt;p&gt;Sharing knowledge is a key differentiator for a software developer. Have you ever tried teaching something to your colleagues and then gotten follow-up questions? Were you able to answer all of them 😅?&lt;/p&gt;

&lt;p&gt;Teaching helps you discover gaps in your knowledge, and it's something I'm still improving. To feel like an expert in a subject, you first need to explain it to someone who has no knowledge of it. If you try and fail to get the other person to understand the subject, it means you have work to do. I've used many meetings with colleagues as a means to know if I deeply understand a subject like idempotency, microservices, BFF, CQRS, etc.&lt;/p&gt;

&lt;p&gt;Imagine you're the senior dev onboarding a new teammate. Your ability to teach the codebase, the business domain, and other subjects is important. It can be the difference between a new dev who is productive after one week instead of one month.&lt;/p&gt;

&lt;p&gt;In this post I've talked mostly about my own growth... but that doesn't mean all I need to do is develop my own skills.&lt;br&gt;
IMO, &lt;strong&gt;the hardest challenge in this industry is making other people more productive&lt;/strong&gt;! Not just making myself more productive and better at engineering 😅. Some people do this amazingly well, but I'm still growing in this regard 😄&lt;/p&gt;

&lt;p&gt;Like &lt;a href="https://www.goodreads.com/quotes/19421-if-you-can-t-explain-it-to-a-six-year-old" rel="noopener noreferrer"&gt;Einstein said&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you can't explain it to a six year old, you don't understand it yourself.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  People I've followed
&lt;/h2&gt;

&lt;p&gt;In retrospect, I follow just a few key people. As time passed, I focused on other interesting topics instead of "simple tutorials" (though the amount of content I consume didn't exactly decrease). Here is a list of the people I follow the most nowadays:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/@ThePrimeTimeagen" rel="noopener noreferrer"&gt;ThePrimeTimeagen&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://twitter.com/codeopinion" rel="noopener noreferrer"&gt;Derek Comartin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/@shanselman" rel="noopener noreferrer"&gt;Scott Hanselman&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/@ContinuousDelivery" rel="noopener noreferrer"&gt;Dave Farley&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of the videos I loved the most from Scott was this one:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/8HE5LJwAv1k"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;What he shared really resonated with me. I felt powered up inside to keep learning and sharing, and perhaps write more blog posts in Portuguese... but I never did that 😅.&lt;br&gt;
It has some really good gems of advice, so I do recommend you watch it.&lt;/p&gt;

&lt;p&gt;Someone else I have learned a ton from is &lt;a href="https://twitter.com/editingemily" rel="noopener noreferrer"&gt;Emily Freeman&lt;/a&gt;. I absolutely loved this talk she gave at GitHub Universe!&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Z66-us_VDu8"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Recently I shared another list of more front-end people I follow and recommend:&lt;br&gt;
&lt;/p&gt;
&lt;div class="liquid-comment"&gt;
    &lt;div class="details"&gt;
      &lt;a href="/bolt04"&gt;
        &lt;img class="profile-pic" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F192676%2Fcb99d336-4580-47c0-becb-ee381d71b4e8.jpg" alt="bolt04 profile image"&gt;
      &lt;/a&gt;
      &lt;a href="/bolt04"&gt;
        &lt;span class="comment-username"&gt;David Pereira&lt;/span&gt;
      &lt;/a&gt;
      &lt;span class="color-base-30 px-2 m:pl-0"&gt;•&lt;/span&gt;

&lt;a href="https://dev.to/bolt04/comment/26ff1" class="comment-date crayons-link crayons-link--secondary fs-s"&gt;
  &lt;time class="date-short-year"&gt;
    May 7 '23
  &lt;/time&gt;

&lt;/a&gt;

    &lt;/div&gt;
    &lt;div class="body"&gt;
      &lt;p&gt;I'd add some people to that list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kentcdodds.com/" rel="nofollow noopener noreferrer"&gt;Kent C. Dodds&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://twitter.com/dan_abramov" rel="nofollow noopener noreferrer"&gt;Dan Abramov&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/@t3dotgg" rel="nofollow noopener noreferrer"&gt;Theo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://twitter.com/rauchg" rel="nofollow noopener noreferrer"&gt;Guillermo Rauch&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They have either awesome blogs or videos that you can learn a lot from!&lt;/p&gt;

&lt;p&gt;I'd also recommend to follow and learn from Kelsey Hightower, for example &lt;a href="https://www.youtube.com/watch?v=cl0zMen43E4" rel="nofollow noopener noreferrer"&gt;this video&lt;/a&gt;. It's not particular about front end, but you can learn many valuable lessons from him :)&lt;/p&gt;


    &lt;/div&gt;
&lt;/div&gt;





&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I'm way more confident now, and all those people I mentioned made and shared content that helped me grow.&lt;br&gt;
I still have ambitious goals, so I'll just have to keep growing!&lt;/p&gt;

&lt;p&gt;If you're interested in AsyncAPI and CloudEvents, take a look at my post: &lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/bolt04" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F192676%2Fcb99d336-4580-47c0-becb-ee381d71b4e8.jpg" alt="bolt04"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/bolt04/getting-started-with-cloudevents-and-asyncapi-8db" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Getting Started with CloudEvents and AsyncAPI&lt;/h2&gt;
      &lt;h3&gt;David Pereira ・ Sep 16 '21&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#eventdriven&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#cloudevents&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#asyncapi&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#architecture&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>career</category>
    </item>
    <item>
      <title>How to use the SEO meta tag rules module for Page Designer in SFCC</title>
      <dc:creator>David Pereira</dc:creator>
      <pubDate>Sat, 05 Nov 2022 15:21:00 +0000</pubDate>
      <link>https://dev.to/bolt04/how-to-use-the-seo-meta-tag-rules-module-for-page-designer-in-sfcc-20i8</link>
      <guid>https://dev.to/bolt04/how-to-use-the-seo-meta-tag-rules-module-for-page-designer-in-sfcc-20i8</guid>
      <description>&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;The problem&lt;/li&gt;
&lt;li&gt;The solution&lt;/li&gt;
&lt;li&gt;Alternative solutions to SEO meta tag rules&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;li&gt;Additional Links&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;We have a great feature available to us in Salesforce Commerce Cloud (SFCC) called &lt;a href="https://trailhead.salesforce.com/en/content/learn/modules/b2c-seo-meta-tags/b2c-seo-meta-tag-explore-rules" rel="noopener noreferrer"&gt;SEO meta tag rules&lt;/a&gt;. This is a module inside of SEO in Business Manager (BM).&lt;br&gt;
However, currently &lt;strong&gt;only &lt;a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/content/b2c_commerce/topics/page_designer/b2c_use_page_meta_tag_rules_for_pd.html" rel="noopener noreferrer"&gt;PDP/PLP pages made in Page Designer&lt;/a&gt;&lt;/strong&gt; support these meta tag rules.&lt;/p&gt;

&lt;p&gt;In this blog post we'll see how to support SEO meta tag rules for &lt;strong&gt;all pages made in page designer&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;Before going any further, I want to mention some disclaimers. At the time of writing, not all pages in Page Designer support this feature. It could be added to SFCC and supported directly, without custom development; I posted this idea as a &lt;a href="https://ideas.salesforce.com/s/idea/a0B8W00000JJF4rUAH/support-all-page-designer-pages-for-seo-meta-tag-rules" rel="noopener noreferrer"&gt;feature enhancement for Salesforce in IdeaExchange&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With that said, let's take a step back and understand the problem.&lt;/p&gt;

&lt;p&gt;The SEO experts of a business work mostly in the SEO meta tag rules BM module. If they need a more granular level of SEO customization, most System Objects in SFCC (e.g. Products, Content Assets, Categories) support SEO fields like &lt;code&gt;pageUrl&lt;/code&gt;, &lt;code&gt;pageTitle&lt;/code&gt; and &lt;code&gt;pageDescription&lt;/code&gt;. SEO experts can define rules that act as fallbacks: if merchants don't fill in these fields for particular categories or pieces of content, the rules still add some meta tags to the page.&lt;/p&gt;
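&lt;p&gt;To make that fallback order concrete, here's a tiny plain-JavaScript sketch. This is not SFCC API code; &lt;code&gt;resolvePageTitle&lt;/code&gt; and its inputs are illustrative assumptions:&lt;/p&gt;

```javascript
// Illustrative only (not SFCC API code): the merchant-provided SEO
// field wins, and the meta tag rule's output is the fallback.
function resolvePageTitle(contentAsset, ruleFallbackTitle) {
    if (contentAsset.pageTitle) {
        return contentAsset.pageTitle;
    }
    return ruleFallbackTitle;
}
```

The same idea applies to the other SEO fields: the rule only takes effect when the object-level field is empty.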

&lt;p&gt;Now, even though this feature supports many types of pages, like the home page, product pages and product listing pages, it doesn't support all types of Page Designer (PD) pages.&lt;br&gt;
You can also take a look at the &lt;a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/index.jsp?topic=%2Fcom.demandware.dochelp%2FDWAPI%2Fscriptapi%2Fhtml%2Fapi%2Fclass_dw_experience_Page.html" rel="noopener noreferrer"&gt;Page class&lt;/a&gt;, and check that it doesn't have the &lt;code&gt;pageMetaTags&lt;/code&gt; field like the &lt;a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/index.jsp?topic=%2Fcom.demandware.dochelp%2FDWAPI%2Fscriptapi%2Fhtml%2Fapi%2Fclass_dw_content_Content.html" rel="noopener noreferrer"&gt;Content class&lt;/a&gt; does.&lt;/p&gt;

&lt;p&gt;Page Designer pages are very similar to Content Assets, in the sense that underneath, &lt;a href="https://documentation.b2c.commercecloud.salesforce.com/DOC2/topic/com.demandware.dochelp/content/b2c_commerce/topics/page_designer/b2c_pg_comp_types_content_assets.htm" rel="noopener noreferrer"&gt;these pages are persisted as the same content objects on the SFCC's database&lt;/a&gt;. But there are still differences, the support for meta tag rules feature is one of them.&lt;/p&gt;

&lt;p&gt;Ultimately, these differences make it harder for merchants or SEO experts who want to leverage the same functionality available on the rest of the site.&lt;/p&gt;

&lt;p&gt;Now that you have a better understanding of the problem, let's shift our focus to a solution.&lt;/p&gt;
&lt;h2&gt;
  
  
  The solution
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Content Assets
&lt;/h3&gt;

&lt;p&gt;The whole premise of this solution is that it's supported by the SEO meta tag rules module of BM. In this module we have support for the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Homepage&lt;/li&gt;
&lt;li&gt;Product pages&lt;/li&gt;
&lt;li&gt;Content Detail pages&lt;/li&gt;
&lt;li&gt;Content Listing pages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The only one that fits well with a Page Designer page is &lt;strong&gt;Content Detail page&lt;/strong&gt;, meaning we'll create content assets as a way to support these SEO rules for Page Designer.&lt;br&gt;
Each page can have its corresponding content asset, where the content ID follows a naming convention like appending "-seo" to the page ID. This way, in &lt;code&gt;Page.js&lt;/code&gt;, you can get the meta tags for that page through its content asset. Here is an example of the &lt;code&gt;Page-Show&lt;/code&gt; extension:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;prepend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Show&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;function &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;PageMgr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dw/experience/PageMgr&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;ContentModel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;*/cartridge/models/content&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;pageMetaHelper&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;*/cartridge/scripts/helpers/pageMetaHelper&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;PageMgr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getPage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;querystring&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cid&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isVisible&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;pageContent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ContentMgr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getContent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;-seo&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pageContent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ContentModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pageContent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;content/content&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="nx"&gt;pageMetaHelper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setPageMetaData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pageMetaData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nx"&gt;pageMetaHelper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setPageMetaTags&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pageMetaData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you don't need to create a content asset for every single page from PD. For example, if your business doesn't use the syntax &lt;code&gt;${Content.pageTitle}&lt;/code&gt; in your meta tag rules, or any expression involving the &lt;code&gt;Content&lt;/code&gt; object, then you could simply create one content asset for each folder the business wants to have.&lt;br&gt;
You might be wondering why I'm mentioning folders and how they fit in the solution, so let's take a deeper look.&lt;/p&gt;
&lt;h3&gt;
  
  
  Folders
&lt;/h3&gt;

&lt;p&gt;Folders are the way you can group multiple pages and make all of them inherit the same meta tag rules. Once a folder is created, the business can use it in the SEO meta tag rules module inside BM.&lt;br&gt;
Imagine you have a group of pages that are part of the same marketing campaign. As an SEO expert, of course you'd like to have page titles, descriptions, open graph and other meta tags for all of them.&lt;br&gt;
But maybe the content team hasn't reached the maturity level to configure this for every single page. So you create a meta tag rule that acts as a fallback in case the content team doesn't configure some pages.&lt;/p&gt;

&lt;p&gt;So far the solution revolves around the business manually creating folders, assigning them to pages, and creating content assets for every page (in case they want to leverage page properties) or every folder.&lt;br&gt;
I believe we can improve on this solution, so let's jump into some automation.&lt;/p&gt;
&lt;h3&gt;
  
  
  Automation
&lt;/h3&gt;

&lt;p&gt;This is a critical step, since all this manual work doesn't make sense for business people. As engineers we can do better and automate the creation of these content assets. We can develop two pieces that play together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a &lt;strong&gt;job&lt;/strong&gt; to ensure every Page Designer page has its associated content asset and folder assignments&lt;/li&gt;
&lt;li&gt;a &lt;strong&gt;new BM module&lt;/strong&gt; to automate folder creation, specifically creating multiple folders at once&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's go through the job first; it's responsible for updating the content assets associated with each page. This means creating each content asset if it doesn't already exist, and then assigning it to the same folders as its page. To implement this job, you need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get the list of all Page Designer pages&lt;/li&gt;
&lt;li&gt;Iterate through all pages&lt;/li&gt;
&lt;li&gt;For each page, create the content asset and assign the appropriate folders&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is a code snippet (in ES6) representing a possible implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;libraryGateway&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;*/cartridge/scripts/gateways/libraryGateway&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pagesList&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getAllPageDesignerPages&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nx"&gt;pagesList&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;contentId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-seo`&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;libraryGateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createContentAsset&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;contentId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;pageTitle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pageTitle&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Error creating content asset&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;folders&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;folder&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;libraryGateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;assignContentAssetToFolder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;contentId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;folder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getAllPageDesignerPages&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ContentSearchModel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dw/content/ContentSearchModel&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;apiContentSearchModel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ContentSearchModel&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;libraryID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;someId&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

    &lt;span class="nx"&gt;apiContentSearchModel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setRecursiveFolderSearch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;apiContentSearchModel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setFilteredByFolder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;apiContentSearchModel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setFolderID&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;libraryID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nx"&gt;apiContentSearchModel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;contentSearchResultIterator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;apiContentSearchModel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getContent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;apiContentSearchModel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getCount&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;contentSearchResultIterator&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;contentSearchResultIterator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hasNext&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;contentResult&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;contentSearchResultIterator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;contentResult&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="c1"&gt;// transform some contentResult fields to other types...&lt;/span&gt;
                &lt;span class="nx"&gt;pages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;contentResult&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;pages&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the implementation above, getting all pages from Page Designer is done through the &lt;code&gt;ContentSearchModel&lt;/code&gt; API. Although this works, it's not ideal in my opinion, since the pages are required to be &lt;strong&gt;searchable&lt;/strong&gt; (a setting on each page) and &lt;strong&gt;online&lt;/strong&gt;. If they aren't, they won't be in the &lt;a href="https://documentation.b2c.commercecloud.salesforce.com/DOC2/topic/com.demandware.dochelp/content/b2c_commerce/topics/search_and_navigation/b2c_index_creation.html" rel="noopener noreferrer"&gt;Content index&lt;/a&gt;, which seems to be the only way to get all Page Designer pages on a given site.&lt;br&gt;
To create content assets and assign them to folders, we delegate that responsibility to the &lt;code&gt;libraryGateway&lt;/code&gt; module. To implement this module, we need to use OCAPI.&lt;/p&gt;
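&lt;p&gt;As a rough idea of the &lt;code&gt;libraryGateway&lt;/code&gt; module's shape, here's a sketch that only builds OCAPI request descriptors (the library ID and payload fields are illustrative assumptions, and the actual HTTP call is left out):&lt;/p&gt;

```javascript
// Hypothetical sketch of libraryGateway: pure functions that describe
// the OCAPI Data API calls. "RefArchSharedLibrary" and the payload
// fields are assumptions, not verified values.
var LIBRARY_ID = "RefArchSharedLibrary";

function createContentAssetRequest(asset) {
    // PUT, because creating a content asset behaves like an upsert.
    return {
        method: "PUT",
        path: "/libraries/" + LIBRARY_ID + "/content/" + asset.id,
        body: { page_title: asset.pageTitle }
    };
}

function assignContentAssetToFolderRequest(contentId, folderId) {
    return {
        method: "PUT",
        path: "/libraries/" + LIBRARY_ID + "/folder_assignments/" +
            contentId + "/" + folderId,
        body: {}
    };
}
```

The job from the previous section would feed these descriptors into whatever HTTP service wrapper your cartridge uses for OCAPI calls.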

&lt;h3&gt;
  
  
  OCAPI
&lt;/h3&gt;

&lt;p&gt;We can use the OCAPI Data API to create content assets/folders and assign content assets to folders. While researching I tried to find another way, but I didn't find anything easier. The &lt;a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/DWAPI/scriptapi/html/api/packageList.html?cp=0_20_2" rel="noopener noreferrer"&gt;B2C Commerce Script API&lt;/a&gt; (the &lt;em&gt;dw&lt;/em&gt; library accessible server-side) doesn't seem to have APIs for these operations. I didn't find any pipelets or job steps that do this either... perhaps by &lt;a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/DWAPI/jobstepapi/html/api/jobstep.ExportContent.html" rel="noopener noreferrer"&gt;exporting the content library&lt;/a&gt;, then creating a custom job that reads that file, edits it with the new content objects, and finally runs a job step to import the edited file.&lt;/p&gt;

&lt;p&gt;Interacting with OCAPI is not an expensive development effort, so you can develop a custom cartridge for this. We won't go through the details of that cartridge here; perhaps in a separate blog post 🙂&lt;/p&gt;

&lt;h3&gt;
  
  
  OCAPI endpoints
&lt;/h3&gt;

&lt;p&gt;All the operations we want to perform can be found on the Libraries resource of the Data API. To create a content asset we can use &lt;a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/index.jsp?topic=%2Fcom.demandware.dochelp%2FOCAPI%2Fcurrent%2Fdata%2FResources%2FLibraries.html&amp;amp;anchor=id893141162__id696504113" rel="noopener noreferrer"&gt;this endpoint&lt;/a&gt;, and to assign it to a folder we can use &lt;a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/index.jsp?topic=%2Fcom.demandware.dochelp%2FOCAPI%2Fcurrent%2Fdata%2FResources%2FLibraries.html&amp;amp;anchor=id893141162__id-2058222274" rel="noopener noreferrer"&gt;this endpoint&lt;/a&gt;. One thing to keep in mind: creating a content asset is an &lt;strong&gt;idempotent&lt;/strong&gt; PUT operation. It creates the object if it doesn't already exist, but if it does exist, it completely overwrites the existing object.&lt;/p&gt;

&lt;p&gt;In practice, this means that if someone edits these content assets (e.g. through BM, locking the resource and updating the description field), that modification will be lost.&lt;br&gt;
If you don't want to lose those edits, consider using the endpoint that gets the content asset, and merge that data into your PUT request payload. In my case, these content assets are supposed to be "hidden", so no one should need to edit them.&lt;/p&gt;
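&lt;p&gt;A minimal sketch of that read-then-write approach, assuming the GET response and the PUT payload share the same field names (the helper and fields here are hypothetical):&lt;/p&gt;

```javascript
// Hypothetical: overlay only the generated fields on top of the
// existing asset, so manual BM edits (e.g. the description) survive
// the overwriting PUT.
function buildUpdatePayload(existingAsset, generatedFields) {
    return Object.assign({}, existingAsset, generatedFields);
}
```

The job would GET the asset, pass the response through this merge, and send the result as the PUT body.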

&lt;h3&gt;
  
  
  BM extension to create folders
&lt;/h3&gt;

&lt;p&gt;Now let's discuss the custom BM module to create folders. If business people want to create multiple folders in one go, instead of doing it through the BM UI and then going to Page Designer to assign pages to folders, they can input the folder names in an input box and click a button. We can simply develop a web page with this input box and a button (and some instructions on how to use it). This is possible by &lt;a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/content/b2c_commerce/topics/site_development/b2c_customize_business_manager.html" rel="noopener noreferrer"&gt;extending BM&lt;/a&gt; with a custom cartridge containing this UI and the controller that creates the folders.&lt;/p&gt;
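&lt;p&gt;The controller behind that button mostly needs to turn the raw input box value into a clean list of folder names before creating the folders. A small sketch (the separator convention is an assumption):&lt;/p&gt;

```javascript
// Illustrative helper for the custom BM module: normalize the raw
// input box value into a list of folder names, assuming comma- or
// newline-separated input.
function parseFolderNames(rawInput) {
    const names = rawInput
        .split(/[,\n]/)
        .map(function (name) { return name.trim(); })
        .filter(function (name) { return name.length > 0; });
    // Remove duplicates while preserving input order.
    return Array.from(new Set(names));
}
```

The controller would then loop over this list and call the folder-creation logic once per name.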

&lt;p&gt;We won't go into too much detail about this custom cartridge; I've added additional links at the bottom to help you build it.&lt;/p&gt;
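&lt;p&gt;As a taste of what the controller behind that button might do first, here's a sketch of the input parsing. The delimiters and the ID normalization rule are assumptions for the example, not part of any SFCC API:&lt;/p&gt;

```javascript
// Turn the raw input-box value from the custom BM page into a clean,
// de-duplicated list of folder IDs before calling the Data API endpoints.
function parseFolderIds(rawInput) {
  var seen = {};
  var ids = [];
  String(rawInput || '')
    .split(/[,\n]/) // assumed delimiters: commas or new lines
    .forEach(function (name) {
      // Normalize to a URL-friendly ID (hypothetical rule).
      var id = name.trim().toLowerCase().replace(/\s+/g, '-');
      if (id !== '') {
        if (!seen[id]) {
          seen[id] = true;
          ids.push(id);
        }
      }
    });
  return ids;
}
```

&lt;p&gt;The controller would then loop over these IDs and call the folder-creation endpoint for each one.&lt;/p&gt;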

&lt;h2&gt;
  
  
  Alternative solutions to SEO meta tag rules
&lt;/h2&gt;

&lt;p&gt;If the content object quotas are hit, we can consider building something entirely custom. By that I mean we would no longer use the SEO meta tag rules module from BM. We would build our own piece of software to handle this scenario for &lt;strong&gt;all page designer page types&lt;/strong&gt;. Of course, that means building software that does the same as Salesforce's built-in BM module, considering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storage to store these rules, meta tag definitions, and others&lt;/li&gt;
&lt;li&gt;An API that at least exposes a way to get meta tags for a given page, taking into account API design, SLA (important if you'll call this in a middleware of the Page.js controller), etc&lt;/li&gt;
&lt;/ul&gt;
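&lt;p&gt;To make that trade-off discussion concrete, the core of such a custom service could be a rule lookup along these lines. The rule-store shape and key format are entirely hypothetical:&lt;/p&gt;

```javascript
// Hypothetical core of a custom meta tag service: resolve the tags for a
// page by falling back through its folder hierarchy (page, then folders
// up to root), letting more specific rules override general ones.
function resolveMetaTags(ruleStore, pageId, folderPath) {
  var candidates = ['page:' + pageId];
  // folderPath is ordered most-specific first, e.g. ['summer', 'root'].
  folderPath.forEach(function (folderId) {
    candidates.push('folder:' + folderId);
  });
  var tags = {};
  // Apply least-specific rules first so specific ones win.
  candidates.reverse().forEach(function (key) {
    var rules = ruleStore[key];
    if (rules) {
      Object.keys(rules).forEach(function (tagName) {
        tags[tagName] = rules[tagName];
      });
    }
  });
  return tags;
}
```

&lt;p&gt;The real work would of course be in the storage, rule editing and SLA of the API around this lookup.&lt;/p&gt;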

&lt;p&gt;This is a discussion you must have with your business/client, explaining the trade-offs of each scenario.&lt;/p&gt;

&lt;p&gt;In my opinion, it's often better to reuse existing functionality or an off-the-shelf solution like a plugin cartridge.&lt;br&gt;
Something that is standardized and known in the community, rather than your own solution... but as always, this depends on the context we're in.&lt;br&gt;
For SFCC's SEO meta tag rules in particular, we &lt;a href="https://trailhead.salesforce.com/trailblazer-community/feed/0D54S00000Hk216SAB" rel="noopener noreferrer"&gt;can't extend this module&lt;/a&gt;. So if we really needed to support this feature and we hit the API quotas of the SFCC platform, we would consider building a cost-effective solution outside SFCC, considering the need for a custom parser for &lt;code&gt;if&lt;/code&gt; statements, context variables... again, a discussion to have with the business and architects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Improvements to this solution
&lt;/h2&gt;

&lt;p&gt;One important note about this solution is that pages would &lt;strong&gt;only inherit meta tags assigned to the default folder&lt;/strong&gt; (primary folder). From my research, a content asset can only have one default folder, and that is the folder the meta tags come from.&lt;br&gt;
In my use case, the ideal would have been to set up a hierarchy where a page could have 3 folders, and you could get all meta tags assigned to those folders. You do get a form of &lt;a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/content/b2c_commerce/topics/search_engine_optimization/b2c_meta_tag_rules.html" rel="noopener noreferrer"&gt;hierarchy&lt;/a&gt; (primary folder, then parent folders, up to root), but it's not the exact behavior I needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, you can develop a custom cartridge with this functionality and support this feature for your business or clients. In the future, this might be supported out of the box by SFCC. The greatest challenges were analyzing ways to extend the SEO meta tag rules module, finding an API to get all Page Designer pages, and understanding the limitations of our solution.&lt;/p&gt;

&lt;p&gt;Let us know in the comments if this feature is something you'd like to have, or vote and comment on the &lt;a href="https://ideas.salesforce.com/s/idea/a0B8W00000JJF4rUAH/support-all-page-designer-pages-for-seo-meta-tag-rules" rel="noopener noreferrer"&gt;IdeaExchange's post&lt;/a&gt;. I hope this has been helpful 👍.&lt;/p&gt;

&lt;p&gt;Check out my other blog post on &lt;a href="https://blogit.create.pt/davidpereira/2022/06/06/session-management-in-salesforce-b2c-commerce-cloud/" rel="noopener noreferrer"&gt;session management for SFCC&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Links
&lt;/h2&gt;

&lt;p&gt;Here are some links to documentation and other resources that you may find useful if you're interested in building this feature for your business:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ideas.salesforce.com/s/idea/a0B8W00000JJF4rUAH/support-all-page-designer-pages-for-seo-meta-tag-rules" rel="noopener noreferrer"&gt;IdeaExchange's post&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://trailhead.salesforce.com/trailblazer-community/feed/0D54S00000IRNXRSA5" rel="noopener noreferrer"&gt;How to get a list of pages of Page Designer using code?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/OCAPI/current/data/Resources/Libraries.html" rel="noopener noreferrer"&gt;OCAPI Data Libraries resource&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/content/b2c_commerce/topics/admin/b2c_configuring_a_business_manager_site.html" rel="noopener noreferrer"&gt;Configuring the Business Manager Site&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>commercecloud</category>
      <category>salesforce</category>
      <category>javascript</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Reactathon 2022 - A Short Summary</title>
      <dc:creator>David Pereira</dc:creator>
      <pubDate>Sun, 03 Jul 2022 19:07:00 +0000</pubDate>
      <link>https://dev.to/bolt04/reactathon-2022-a-short-summary-140b</link>
      <guid>https://dev.to/bolt04/reactathon-2022-a-short-summary-140b</guid>
      <description>&lt;p&gt;Lately we've been filled with cool stuff in the React community. In case you missed it, &lt;a href="https://www.reactathon.com/" rel="noopener noreferrer"&gt;Reactathon&lt;/a&gt; happened at the beginning of May, and with it came a lot of interesting talks and discussions between people in the community.&lt;br&gt;
In this post I'll reference what I consider to be the most interesting and hot topics on the React community… and try to make it short 😅.&lt;/p&gt;

&lt;p&gt;If you haven't seen the conference, you can take a look at this &lt;a href="https://www.youtube.com/playlist?list=PLRvKvw42Rc7O0eWo2m_guXdZsGTEQM_jj" rel="noopener noreferrer"&gt;YT playlist for all sessions&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The state of React 2022
&lt;/h2&gt;

&lt;p&gt;So first of all, what is the state of React currently? With &lt;a href="https://reactjs.org/blog/2022/03/29/react-v18.html" rel="noopener noreferrer"&gt;React 18&lt;/a&gt;, what was once called concurrent mode is now concurrent features. This shifts the approach to &lt;strong&gt;incremental adoption&lt;/strong&gt;, so that you can use &lt;strong&gt;concurrent features&lt;/strong&gt; in specific places of your React app.&lt;/p&gt;

&lt;p&gt;In this talk, Lee also announces Next.js's new routing system, which resembles Remix a lot. They want to take advantage of &lt;strong&gt;nested routes&lt;/strong&gt;, which is great! This allows us to provide a better user experience on pages where a single component blocks rendering.&lt;/p&gt;

&lt;p&gt;Last but not least, there are new developments when it comes to &lt;strong&gt;server-side rendering&lt;/strong&gt;, and new &lt;strong&gt;client-side rendering APIs&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Edge computing
&lt;/h2&gt;

&lt;p&gt;If you are unaware of this type of compute, edge computing allows for a better user experience because it reduces the time it takes to get a response with the content you want to visualize. The time is reduced because your request doesn't need to "talk" to a distant origin server to be processed.&lt;/p&gt;

&lt;p&gt;CDN providers like Cloudflare are building new runtimes to allow you to run code closer to your customers – on the edge. Cloudflare Workers, for example, doesn't use Node.js or Deno under the hood; it's Cloudflare's own JS runtime. Of course, an effort is being made to &lt;a href="https://blog.cloudflare.com/introducing-the-wintercg/" rel="noopener noreferrer"&gt;standardize these runtimes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this talk, Kent talks about how Remix improves the developer experience for edge computing. First, they use the &lt;a href="https://developer.mozilla.org/pt-BR/docs/Web/API/Fetch_API" rel="noopener noreferrer"&gt;Web Fetch API&lt;/a&gt;; then, depending on where you want to deploy your function, Remix translates the request/response objects to the respective platform's API. They also support &lt;strong&gt;streaming on the edge&lt;/strong&gt;, in order to send some content to the user quickly and then send the rest.&lt;/p&gt;

&lt;p&gt;With that said, this is all in JS/TS or WASM land. At least I haven't seen a lot of support for other languages and runtimes (e.g. C#) on services that provide edge computing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Streaming server components
&lt;/h2&gt;

&lt;p&gt;With React 18 it's now possible to stream changes to the browser, with new APIs like Suspense that allow for asynchronous processing.&lt;/p&gt;

&lt;p&gt;Why is this cool? Because we &lt;strong&gt;don't want to block rendering with data fetching&lt;/strong&gt;. When our component needs to fetch data before rendering what the user wants to see, we need to render a loading spinner... which ain't cool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How about we initiate fetches before we render? This way the requests run in parallel and don't block rendering.&lt;/strong&gt; Using streaming server rendering fixes this, which is why it's so awesome! Ryan Florence goes into more detail on how this is done in this talk: &lt;a href="https://www.youtube.com/watch?v=95B8mnhzoCM" rel="noopener noreferrer"&gt;When to fetch: Remixing React Router&lt;/a&gt;.&lt;/p&gt;
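&lt;p&gt;Framework aside, the idea can be sketched in plain JavaScript: kick off every request up front so they run in parallel, and let rendering consume the in-flight promises instead of starting fetches itself (the function names are made up for illustration):&lt;/p&gt;

```javascript
// Start all data requests immediately, before any rendering happens.
function startFetches(loaders) {
  var inFlight = {};
  Object.keys(loaders).forEach(function (key) {
    inFlight[key] = loaders[key](); // requests now run in parallel
  });
  return inFlight;
}

// Rendering then awaits the already-running requests instead of
// triggering a waterfall of fetch-on-render.
function renderWhenReady(inFlight, render) {
  var keys = Object.keys(inFlight);
  var promises = keys.map(function (key) {
    return inFlight[key];
  });
  return Promise.all(promises).then(function (values) {
    var data = {};
    keys.forEach(function (key, i) {
      data[key] = values[i];
    });
    return render(data);
  });
}
```

&lt;p&gt;Suspense and streaming SSR give you this flow with much better ergonomics, but the core trick is the same: fetch early, render when ready.&lt;/p&gt;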

&lt;p&gt;Bear in mind, this is just one way of rendering. Last year at React Conf 2021 there was an intro session about this topic as well: &lt;a href="https://www.youtube.com/watch?v=pj5N-Khihgc&amp;amp;list=PLNG_1j3cPCaZZ7etkzWA7JfdmKWT0pMsa" rel="noopener noreferrer"&gt;Streaming Server Rendering with Suspense&lt;/a&gt;. Another great session covers the different rendering patterns: &lt;a href="https://www.youtube.com/watch?v=PN1HgvAOmi8" rel="noopener noreferrer"&gt;Advanced Rendering Patterns: Lydia Hallie&lt;/a&gt;. It's an amazing session to help you visualize the impact on performance and the trade-offs of each pattern.&lt;/p&gt;

</description>
      <category>react</category>
    </item>
    <item>
      <title>Getting Started with CloudEvents and AsyncAPI</title>
      <dc:creator>David Pereira</dc:creator>
      <pubDate>Thu, 16 Sep 2021 18:30:48 +0000</pubDate>
      <link>https://dev.to/bolt04/getting-started-with-cloudevents-and-asyncapi-8db</link>
      <guid>https://dev.to/bolt04/getting-started-with-cloudevents-and-asyncapi-8db</guid>
      <description>&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;CloudEvents&lt;/li&gt;
&lt;li&gt;AsyncAPI&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;li&gt;Additional Links&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the previous blog post we went over a &lt;a href="https://dev.to/bolt04/case-study-azure-service-bus-and-event-driven-architectures-4lh0"&gt;case study for Azure Service Bus&lt;/a&gt;. In this article we’ll look at two specs, CloudEvents and AsyncAPI, that you can use to solve some problems of your event-driven architectures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;At the moment, there are quite a few tools and products that have adopted CloudEvents or AsyncAPI. &lt;a href="https://knative.dev/docs/eventing/accessing-traces/#" rel="noopener noreferrer"&gt;Knative Eventing&lt;/a&gt; is a tool that helps developers in a serverless context, and Azure Event Grid &lt;a href="https://docs.microsoft.com/en-us/azure/event-grid/cloud-event-schema" rel="noopener noreferrer"&gt;natively supports CloudEvents&lt;/a&gt;. More recently, Jenkins added &lt;a href="https://cd.foundation/blog/2021/09/02/jenkins-interoperability-with-cloudevents" rel="noopener noreferrer"&gt;integration with CloudEvents&lt;/a&gt; through a new plugin. It allows users to configure Jenkins as both a source and a sink for CloudEvents.&lt;/p&gt;

&lt;p&gt;There are also interesting integrations between Kubernetes and Azure Event Grid that are compliant with the CloudEvents v1.0 spec. Check out the &lt;a href="https://github.com/tomkerkhove/k8s-event-grid-bridge" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; or this &lt;a href="https://blog.tomkerkhove.be/2021/01/19/introducing-kubernetes-event-grid-bridge/" rel="noopener noreferrer"&gt;blog post&lt;/a&gt; to learn more about it. &lt;/p&gt;

&lt;p&gt;Postman has &lt;a href="https://blog.postman.com/asyncapi-joins-forces-with-postman-future-of-apis/" rel="noopener noreferrer"&gt;joined forces with AsyncAPI&lt;/a&gt; along with organizations such as Salesforce, Slack and Solace. Postman in particular is publishing &lt;a href="https://www.postman.com/postman/workspace/postman-open-technologies-asyncapi/example/12959542-7624bf1e-2e63-4e1f-a573-45598722af74" rel="noopener noreferrer"&gt;public collections&lt;/a&gt; related to AsyncAPI. For example a list of companies adopting AsyncAPI, with links to those resources (GitHub repositories, websites, etc).&lt;/p&gt;

&lt;p&gt;I hope these projects have got you excited to learn more about these specs. Let’s dive into some of their details!&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudEvents
&lt;/h2&gt;

&lt;p&gt;The CloudEvents specification has been under the &lt;a href="https://github.com/cncf/wg-serverless" rel="noopener noreferrer"&gt;CNCF Serverless working group&lt;/a&gt; since 2018. The spec's purpose is &lt;em&gt;describing event data in a common way&lt;/em&gt;. This is useful in many scenarios, for example, routing events to the appropriate subscribers depending on the &lt;em&gt;type&lt;/em&gt; of the event. Since applications can use a lot of different transports to send and receive events, the CloudEvents spec is protocol-agnostic: it defines &lt;strong&gt;protocol bindings&lt;/strong&gt; so that the metadata is correctly mapped for HTTP, AMQP, Kafka, etc.&lt;/p&gt;
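&lt;p&gt;To get a feel for the format, here's a tiny helper that builds an event with the four required context attributes of the v1.0 JSON event format (the helper itself and the example values are made up; only the attribute names come from the spec):&lt;/p&gt;

```javascript
// Build a minimal CloudEvent in the JSON event format (spec v1.0).
function makeCloudEvent(type, source, id, data) {
  return {
    specversion: '1.0', // required: version of the spec the event uses
    id: id,             // required: unique per source
    source: source,     // required: URI-reference identifying the producer
    type: type,         // required: used for routing/filtering subscribers
    datacontenttype: 'application/json', // optional
    data: data          // optional event payload
  };
}
```

&lt;p&gt;A binding then decides how these attributes travel, e.g. as &lt;code&gt;ce-&lt;/code&gt; prefixed headers in HTTP binary mode.&lt;/p&gt;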

&lt;p&gt;There are many use cases for the CloudEvents spec, but perhaps the main one is &lt;strong&gt;interoperability&lt;/strong&gt;. Imagine applications across clouds being able to communicate in an event-driven architecture, with producers of all kinds and consumers using all sorts of protocols (e.g. HTTP, AMQP, WebSockets). We can have middleware that connects these applications, adds E2E tracing, and more, through the use of CloudEvents. Of course, we could connect the same applications without a common format, but that requires mapping between event formats (cloud providers use different schemas), and the middleware would also need to parse the event data to get specific information.&lt;/p&gt;

&lt;p&gt;Another use case is SaaS (Software-as-a-Service) products that publish events clients are interested in, so they can integrate with their own systems. For example, hooking into the checkout flow of a Shopify storefront to add extra checks. By leveraging CloudEvents these events can be &lt;strong&gt;consistent&lt;/strong&gt;, opening the door for numerous integrations between 3rd party software.&lt;/p&gt;

&lt;h3&gt;
  
  
  Extensions
&lt;/h3&gt;

&lt;p&gt;There are a few extensions worth mentioning, one of them is for &lt;a href="https://github.com/cloudevents/spec/blob/v1.0/extensions/distributed-tracing.md" rel="noopener noreferrer"&gt;distributed tracing&lt;/a&gt;. However, it seems there is some discussion around removing this extension from the spec (check this &lt;a href="https://github.com/cloudevents/spec/pull/751" rel="noopener noreferrer"&gt;PR on GitHub&lt;/a&gt;). There are open issues on some SDKs to support it, and others have already made changes to remove it. The future isn't clear, but I'd argue it's interesting to follow this closely for any updates, since tracing events is very important in an event-driven architecture.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/cloudevents/spec/blob/v1.0.1/extensions/partitioning.md" rel="noopener noreferrer"&gt;Partioning extension&lt;/a&gt; is another interesting extension, it defines a field to be handled by message brokers that can separate load via a partition key. This is used for example in the Kafka protocol binding that requires implementations to map the &lt;code&gt;partitionKey&lt;/code&gt; attribute to the &lt;code&gt;key&lt;/code&gt; of the Kafka message. In Kafka the concept of a partition is well known, so this maps out really well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Relation with Serverless computing
&lt;/h3&gt;

&lt;p&gt;Serverless computing has increased in popularity and use in the industry, especially for its cost model. But many FaaS (Functions-as-a-Service) providers have their own function interface, which means developers can't write a function in JavaScript and deploy it to two cloud providers without making changes. This specification improves portability between FaaS platforms, so that developers receive an event in the same format and can reuse libraries for handling it.&lt;/p&gt;

&lt;h2&gt;
  
  
  AsyncAPI
&lt;/h2&gt;

&lt;p&gt;I’ll start with a description of what AsyncAPI is: a specification that describes and documents event-driven APIs in a machine-readable format. It's protocol-agnostic like CloudEvents, so it can be used for APIs that work over many protocols, including MQTT, WebSockets, and Kafka. &lt;/p&gt;

&lt;p&gt;The following is AsyncAPI's vision stated on their &lt;a href="https://www.asyncapi.com/roadmap" rel="noopener noreferrer"&gt;website&lt;/a&gt;:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyduk55ed03efp6ysv8pp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyduk55ed03efp6ysv8pp.png" alt="asyncapi description - AsyncAPI becomes the #1 API specification for defining and developing APIs. Any kind of APIs." width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I find this vision to be very interesting, mainly because of the part: &lt;strong&gt;Any kind of APIs&lt;/strong&gt;. At first, you might wonder if this means the AsyncAPI spec will define rules and more concepts for other types of APIs like GraphQL or OpenAPI. But this is not at all the case; the goal is to &lt;strong&gt;integrate with existing tools and specs&lt;/strong&gt;! &lt;/p&gt;

&lt;p&gt;This is valuable for developers because enterprise architectures usually consist of a mix of technologies, each for its appropriate use case. Developers nowadays don't just interact with RESTful APIs in a request/response model. There are different demands and considerations for the ever-increasing range of devices users can use, and the software we build needs to meet these demands while remaining manageable, so that we can evolve and create new applications that leverage the numerous APIs that exist internally or from 3rd parties.&lt;/p&gt;
&lt;h3&gt;
  
  
  Concepts
&lt;/h3&gt;

&lt;p&gt;The spec version 2.1.0 defines a few concepts apart from the common Producer, Consumer and Message. A &lt;strong&gt;Channel&lt;/strong&gt; can be seen as a topic/exchange or queue, an application can send messages to a channel and consumers can subscribe to it to receive them. The &lt;strong&gt;Operation&lt;/strong&gt; object indicates if it's a publish or a subscribe operation and how an application can send or receive messages. A &lt;strong&gt;Binding&lt;/strong&gt; (or "protocol binding") is a mechanism to define protocol-specific information or query parameters for the channel bindings. For example, for the AMQP protocol we can specify the channel is an exclusive queue like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"bindings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"amqp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"is"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"queue"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"queue"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"exclusive"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each protocol has its own JSON schema, and we can have bindings for Messages, Servers, Channels, Operations and others.&lt;/p&gt;

&lt;h3&gt;
  
  
  Components
&lt;/h3&gt;

&lt;p&gt;You can define &lt;strong&gt;components&lt;/strong&gt; to reuse in multiple AsyncAPI documents, and we can &lt;strong&gt;reference&lt;/strong&gt; other AsyncAPI documents. Let's say you have two publishers who publish the same message, but with different values in one of the message properties. We'd have two AsyncAPI documents specifying the publishers, referencing a 3rd document specifying the common message with its properties. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;$ref&lt;/code&gt; field is a string that can be the path to another file, or a URL for an external file where the schema we want is defined. This reference object uses the same rules and format as JSON Reference, which opens the door for many possibilities (&lt;a href="https://www.asyncapi.com/docs/specifications/v2.1.0#referenceObject" rel="noopener noreferrer"&gt;check the docs&lt;/a&gt; to know more).&lt;/p&gt;
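&lt;p&gt;In practice a reference looks like this. The channel name, file path and schema anchor below are invented for the example:&lt;/p&gt;

```javascript
// A channel fragment whose message payload points at a shared schema file,
// resolved using JSON Reference rules.
var channelFragment = {
  'user/signedup': {
    subscribe: {
      message: { $ref: './common-messages.yaml#/messages/UserSignedUp' }
    }
  }
};
```

&lt;p&gt;Both publishers' documents would carry the same &lt;code&gt;$ref&lt;/code&gt;, so the message definition lives in exactly one place.&lt;/p&gt;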

&lt;p&gt;When we start to have a lot of apps that depend on each other’s schemas, we can look at some solutions to scale our AsyncAPI documents. Perhaps we use &lt;a href="https://docs.confluent.io/platform/current/schema-registry/index.html" rel="noopener noreferrer"&gt;Confluent's Schema Registry&lt;/a&gt; for our JSON schemas and set up a &lt;strong&gt;catalog of events&lt;/strong&gt; in our organization. This empowers new developers seeking ways to integrate with existing systems and event producers. We can also simply store these components in a GitHub repository and reference them in our AsyncAPI documents.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tooling
&lt;/h3&gt;

&lt;p&gt;There are already quite a few tools, and the &lt;a href="https://www.asyncapi.com/docs/community/tooling" rel="noopener noreferrer"&gt;tooling ecosystem&lt;/a&gt; keeps growing! I've recently seen a &lt;a href="https://github.com/fmvilas/asyncapi-to-postman" rel="noopener noreferrer"&gt;repository&lt;/a&gt; that enables the creation of Postman collections from an AsyncAPI spec. I've also seen &lt;a href="https://github.com/asyncapi/cupid" rel="noopener noreferrer"&gt;architecture documents being generated&lt;/a&gt; from multiple AsyncAPI specs; having a tool that can understand relations between applications and then output a diagram is pretty cool.&lt;/p&gt;

&lt;h3&gt;
  
  
  Generators for AsyncAPI
&lt;/h3&gt;

&lt;p&gt;One piece of tooling that is often used is generators that produce documentation and code. For example, gRPC tools have this capability using the protocol buffer compiler. AsyncAPI generators can take an AsyncAPI document and generate client/server code or documentation in HTML and markdown. Currently, what gets generated depends on the template we use; for example, the &lt;a href="https://github.com/asyncapi/nodejs-ws-template" rel="noopener noreferrer"&gt;Node.js WebSocket template&lt;/a&gt; generates both server and client code. &lt;/p&gt;

&lt;p&gt;This can be improved and extended over time, especially because of the way the &lt;a href="https://github.com/asyncapi/generator" rel="noopener noreferrer"&gt;generator&lt;/a&gt; is designed, enabling &lt;strong&gt;extensibility&lt;/strong&gt; so we can have templates for many other languages, support for more protocols, etc. For example, there is currently only a &lt;a href="https://github.com/asyncapi/dotnet-nats-template" rel="noopener noreferrer"&gt;NATS generator for .NET Core&lt;/a&gt;... but perhaps in the future there could be more protocols supported for .NET Core and examples built for Azure 😃.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There is a lot of exciting stuff happening in the event-driven architectures world 😄; we have only touched the surface in this post. In the CloudEvents space there are new specs being designed and worked on: the Discovery, Subscription and Schema Registry APIs. Since AsyncAPI defines a document you can use to describe your API, it'd be interesting to see how these relate to each other, and how they can be used together. &lt;/p&gt;

&lt;p&gt;I encourage you to join these communities and contribute to their open source projects 😄, &lt;a href="https://github.com/cloudevents" rel="noopener noreferrer"&gt;CloudEvents&lt;/a&gt; and &lt;a href="https://github.com/asyncapi" rel="noopener noreferrer"&gt;AsyncAPI&lt;/a&gt;, both specs are very community-driven. Collaboration between everyone is the way forward, with &lt;a href="https://hacktoberfest.digitalocean.com/" rel="noopener noreferrer"&gt;Hacktoberfest&lt;/a&gt; and &lt;a href="https://www.asyncapi.com/blog/events2021" rel="noopener noreferrer"&gt;AsyncAPI's Hackathon&lt;/a&gt; coming up, searching good first issues is a great way to start and to contribute 👍!&lt;/p&gt;

&lt;p&gt;Let me know in the comments if you're using these specs and what are your thoughts on them. The next blog will be about a practical example for .NET Core, Azure and AMQP messaging using CloudEvents and AsyncAPI, so stay tuned!&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Links
&lt;/h2&gt;

&lt;p&gt;Here are some links to talks, docs and blog posts that you may find useful if you're interested in learning more about CloudEvents and AsyncAPI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=TZPPjAv12KU" rel="noopener noreferrer"&gt;The Serverless and Event-Driven Future - Austen Collins, Serverless (Intermediate Skill Level)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.openfaas.com/reference/triggers/#cloudevents" rel="noopener noreferrer"&gt;OpenFaaS supports CloudEvents&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.postman.com/postman-galaxy/the-future-of-api-specifications/" rel="noopener noreferrer"&gt;The Future of API Specifications talk by Fran Méndez&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://tech.ebayinc.com/engineering/asyncapi-2-0-enabling-the-event-driven-world/" rel="noopener noreferrer"&gt;AsyncAPI 2.0: Enabling the Event-Driven World&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>eventdriven</category>
      <category>cloudevents</category>
      <category>asyncapi</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Case Study: Azure Service Bus and Event-Driven Architectures</title>
      <dc:creator>David Pereira</dc:creator>
      <pubDate>Fri, 04 Jun 2021 13:46:56 +0000</pubDate>
      <link>https://dev.to/bolt04/case-study-azure-service-bus-and-event-driven-architectures-4lh0</link>
      <guid>https://dev.to/bolt04/case-study-azure-service-bus-and-event-driven-architectures-4lh0</guid>
      <description>&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Event-driven architectures&lt;/li&gt;
&lt;li&gt;Implementation&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;li&gt;Additional Links&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this article we will talk about event-driven architectures. We chose to build on Azure cloud infrastructure.&lt;br&gt;
Service Bus provides reliable, secure asynchronous messaging at scale. This article was written by the engineering team at CreateIT and is intended to show you a case study from one of our projects for a client.&lt;/p&gt;

&lt;p&gt;We’ll take a deeper dive into the Service Bus technology, architecture, and design choices. The post covers both conceptual material and implementation details. Most importantly, we will discuss the design and implementation of some of the features that provide secure and reliable messaging at scale, while minimizing operational cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Service Bus Entities
&lt;/h3&gt;

&lt;p&gt;When working with Azure Service Bus, we can choose between two entities: &lt;strong&gt;Topics&lt;/strong&gt; or &lt;strong&gt;Queues&lt;/strong&gt;. You can have multiple Topics or Queues per Service Bus namespace, but first you need to distinguish one from the other. If you want a FIFO queue and only have one consumer, then Queues are the way to go. If you need multiple consumers, then a Topic is the better option; in that case we will create a Subscription per consumer (Topics are only available from the Standard pricing tier).&lt;/p&gt;

&lt;h2&gt;
  
  
  Event-driven architectures
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Benefits with event-driven architectures
&lt;/h3&gt;

&lt;p&gt;What are the benefits of using a queue in the middle of these systems?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can decide to &lt;strong&gt;load balance the input&lt;/strong&gt; from Customer Services. Let’s say there are a lot of updates being made to a customer, meaning a lot of events being published. We scale the number of consumers and use the &lt;strong&gt;competing consumers pattern&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;We can &lt;strong&gt;throttle the input&lt;/strong&gt;. If on Black Friday there are a ton of events and our Audit Log system is down, we simply store these events on the queue and consume them when the service is back online. Of course we’d need to implement some logic for this behavior, but adding this “middleware” buys us options.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In our use case, we wanted to move to an implementation where the Web API isn’t affected by any changes in these external systems. But in order to change the implementation, we must first figure out the challenges associated with this change.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges with event-driven architectures
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Message/Event order
&lt;/h4&gt;

&lt;p&gt;Azure Service Bus has a feature called &lt;strong&gt;sessions&lt;/strong&gt;. A session provides a context to send and retrieve messages that will preserve ordered delivery. However, in our use case we chose not to use it.&lt;/p&gt;

&lt;h4&gt;
  
  
  Message Lock Duration
&lt;/h4&gt;

&lt;p&gt;When we are using Queues, every message has a lock duration. During this time the consumer needs to process it. But if the consumer needs to contact multiple external systems, this time may grow and our messages could end up in the dead-letter queue. So the best practice is to change it according to your needs. We recommend setting this time fairly high in the beginning, and then running some tests to calculate the average.&lt;/p&gt;

&lt;p&gt;After that, add roughly 30% on top of that value to accommodate lengthy requests (outliers might still end up dead-lettered). If you are using a Topic, you will have a lock duration per Subscription, so make sure to adjust each one according to its function.&lt;/p&gt;
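&lt;p&gt;The rule of thumb above, as arithmetic. The 30% buffer is the heuristic from our project, not an official recommendation:&lt;/p&gt;

```javascript
// Suggested lock duration: measured average processing time plus a
// roughly 30% buffer for slow outliers, rounded up to whole seconds.
function suggestedLockDurationSeconds(averageSeconds) {
  return Math.ceil(averageSeconds * 1.3);
}
```

&lt;p&gt;For example, an average of 60 seconds of processing would suggest a lock duration of 78 seconds.&lt;/p&gt;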

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;p&gt;In Figure 1 you can see the initial architecture of the Customer Management system. It was responsible for making the requests to the other systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsualf3a98syb1eyfrcqz.png" class="article-body-image-wrapper"&gt;&lt;img alt="Initial architecture diagram" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsualf3a98syb1eyfrcqz.png" width="800" height="477"&gt;&lt;/a&gt;&lt;br&gt;Figure 1 – Initial architecture diagram
  &lt;/p&gt;

&lt;p&gt;With the new implementation, a message broker was introduced and we used the &lt;a href="https://martinfowler.com/articles/201701-event-driven.html#Event-carriedStateTransfer" rel="noopener noreferrer"&gt;event-carried state transfer pattern&lt;/a&gt;, meaning our events carry all the information the consumer needs in order to do its job. We also took into consideration the &lt;strong&gt;event notification pattern&lt;/strong&gt;, where the consumer has to make a request to the API that originated the event in order to get more information. But that brings new problems to the table. What if, by the time the consumer code runs, the information for that customer ID has changed? What if the event was &lt;code&gt;CUSTOMER_CREATED&lt;/code&gt; but in the meantime the customer was deleted?&lt;/p&gt;
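&lt;p&gt;To illustrate the difference (a hypothetical Python sketch, not our C# code): with event-carried state transfer the event embeds a full snapshot, so the consumer never calls back to the producing API and never races with later updates:&lt;/p&gt;

```python
def customer_updated_event(customer):
    """Event-carried state transfer: the event carries a full snapshot
    of the customer, not just an id the consumer would have to look up."""
    return {"type": "CUSTOMER_UPDATED", "data": dict(customer)}

def audit_consumer(event, audit_log):
    # The consumer works purely from the event payload: no call back
    # to the Customer API, so no race with later updates or deletes.
    audit_log.append(f"{event['type']}: customer {event['data']['name']}")

audit_log = []
event = customer_updated_event({"id": 42, "name": "Ada"})
audit_consumer(event, audit_log)
```

&lt;p&gt;The trade-off is bigger messages and some duplication of state, in exchange for consumers that are independent of the producer's availability.&lt;/p&gt;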

&lt;h3&gt;
  
  
  Retries with Polly
&lt;/h3&gt;

&lt;p&gt;In a distributed system, many things can go wrong. The network can fail or add latency, systems may be temporarily down, and so on. We use the &lt;code&gt;Azure.Messaging.ServiceBus&lt;/code&gt; NuGet package, so we are able to check whether an exception is a transient fault (more information in &lt;a href="https://github.com/Azure/azure-sdk-for-net/blob/Azure.Messaging.ServiceBus_7.1.1/sdk/servicebus/Azure.Messaging.ServiceBus/README.md#exception-handling" rel="noopener noreferrer"&gt;these docs&lt;/a&gt;), and then use &lt;a href="https://github.com/App-vNext/Polly" rel="noopener noreferrer"&gt;Polly&lt;/a&gt; to set up retry logic and fallbacks. There are other options for implementing retry policies; for example, we took into consideration the &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/best-practices/retry-service-specific#service-bus" rel="noopener noreferrer"&gt;Retry guidance for Azure Services&lt;/a&gt; documentation from Microsoft. Since we use the latest Azure SDK, the appropriate class would be &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/azure.messaging.servicebus.servicebusretrypolicy?view=azure-dotnet" rel="noopener noreferrer"&gt;ServiceBusRetryPolicy&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We configured Polly to retry publishing a message three times (this configuration lives in &lt;code&gt;appsettings.json&lt;/code&gt;), with exponentially increasing delays between attempts.&lt;br&gt;
If after the third retry we still can’t publish the message, we need to save it, because it holds crucial information. To solve this, we created a Fallback Gateway that writes these messages to a container inside an Azure Storage Account.&lt;/p&gt;
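&lt;p&gt;In production this is Polly plus our Fallback Gateway; the control flow itself can be sketched in a few lines of Python (the names, exception type and delays are illustrative, not the Polly API):&lt;/p&gt;

```python
import time

def publish_with_retry(publish, save_fallback, retries=3, base_delay=0.01):
    """Retry publishing with exponential backoff; after the final
    failed attempt, hand the message to a fallback store (for us,
    a container in an Azure Storage Account)."""
    def send(message):
        for attempt in range(retries):
            try:
                publish(message)
                return True
            except ConnectionError:
                # exponential backoff: base, 2x base, 4x base, ...
                time.sleep(base_delay * (2 ** attempt))
        save_fallback(message)
        return False
    return send

attempts, saved = [], []

def flaky_publish(message):
    attempts.append(message)
    raise ConnectionError("broker unreachable")

send = publish_with_retry(flaky_publish, saved.append)
delivered = send("customer-event")
```

&lt;p&gt;Here every attempt fails, so after three tries the message lands in the fallback store instead of being lost.&lt;/p&gt;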

&lt;h3&gt;
  
  
  Filters for message routing
&lt;/h3&gt;

&lt;p&gt;This section only applies to Topic entities on Azure Service Bus.&lt;br&gt;
We can add &lt;a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/topic-filters" rel="noopener noreferrer"&gt;Filters&lt;/a&gt; to our subscriptions to help route each message to its specific consumer. We considered two filter types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SQL Filter&lt;/li&gt;
&lt;li&gt;Correlation Filter&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using a Correlation Filter, you can configure custom properties and create filters for your needs. Just make sure the producer includes, on each message, the property you are filtering on.&lt;/p&gt;

&lt;p&gt;With SQL Filters you can create conditional expressions that are evaluated against the current message. Just make sure all system properties are prefixed with &lt;code&gt;sys.&lt;/code&gt; in the expression. Both filter types work; choose the one that suits you best!&lt;/p&gt;
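&lt;p&gt;The semantics of a correlation filter are simple to sketch (illustrative Python, not the actual Service Bus matching engine): a subscription receives a message when every property in its filter is present on the message with an equal value.&lt;/p&gt;

```python
def correlation_match(filter_props, message_props):
    """A correlation filter matches when every property it names is
    present on the message with an equal value."""
    return all(message_props.get(key) == value
               for key, value in filter_props.items())

# One filter per subscription (illustrative names).
subscriptions = {
    "audit-log": {"EventType": "CUSTOMER_UPDATED"},
    "billing": {"EventType": "INVOICE_CREATED"},
}

def route(message_props):
    """Return the subscriptions that would receive this message."""
    return [name for name, props in subscriptions.items()
            if correlation_match(props, message_props)]

targets = route({"EventType": "CUSTOMER_UPDATED", "Source": "customer-api"})
```

&lt;p&gt;Extra properties on the message (like &lt;code&gt;Source&lt;/code&gt; above) don't prevent a match; only the properties named in the filter are checked.&lt;/p&gt;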

&lt;h3&gt;
  
  
  Dead-Letter Queue
&lt;/h3&gt;

&lt;p&gt;If the consumer application can’t process a message within the &lt;strong&gt;Max Delivery Count&lt;/strong&gt; number of attempts, instead of being returned to the queue it is automatically sent to the dead-letter queue. If you are using a Topic, each subscription has its own dead-letter queue, and you can also configure a different Max Delivery Count value for each subscription.&lt;/p&gt;

&lt;p&gt;Every message published to the Service Bus has a TTL (Time-To-Live). When this time expires, the message is either dropped or, if dead-lettering on message expiration is enabled for the entity, moved automatically to the dead-letter queue. So make sure you adjust the TTL according to your needs.&lt;/p&gt;

&lt;p&gt;With this we are able to save messages that weren’t processed by the consumer application, but we should always strive to have an empty dead-letter queue.&lt;/p&gt;
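&lt;p&gt;The delivery-count mechanics can be sketched as follows (illustrative Python; the real broker tracks the delivery count on the message itself):&lt;/p&gt;

```python
def consume(messages, handler, max_delivery_count=3):
    """Re-deliver failing messages until max_delivery_count is
    reached, then move them to the dead-letter list."""
    pending = [(message, 0) for message in messages]
    done, dead_letter = [], []
    while pending:
        message, deliveries = pending.pop(0)
        deliveries += 1
        try:
            handler(message)
            done.append(message)
        except ValueError:
            if deliveries >= max_delivery_count:
                dead_letter.append(message)            # give up: dead-letter it
            else:
                pending.append((message, deliveries))  # return it to the queue
    return done, dead_letter

def handler(message):
    if message == "poison":
        raise ValueError("cannot process")

done, dead = consume(["ok-1", "poison", "ok-2"], handler)
```

&lt;p&gt;The "poison" message fails three times and ends up dead-lettered, while the healthy messages flow through normally.&lt;/p&gt;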

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Our first steps into an event-driven architecture were a true success!&lt;br&gt;
We were able to expand our previous solution to be compatible with multiple external systems: instead of having the API send an HTTPS request to each one, we have this application sending a single message to a Topic in the Service Bus.&lt;br&gt;
One of our goals was to load balance the publisher application. We went from a 1-&amp;gt;3 dependency to a 1-&amp;gt;1, as you can see in Figure 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlg0ttindxf7rairkuj5.png" class="article-body-image-wrapper"&gt;&lt;img alt="Architecture Diagram with Azure Service Bus" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlg0ttindxf7rairkuj5.png" width="800" height="209"&gt;&lt;/a&gt;&lt;br&gt;Figure 2 – Architecture Diagram with Azure Service Bus
  &lt;/p&gt;

&lt;p&gt;This keeps the system scalable and future-proof. Our solution became more decoupled, keeping the application agnostic to changes in these external systems.&lt;br&gt;
If you are facing a similar situation, we definitely recommend reading more about this technology and its features.&lt;/p&gt;

&lt;p&gt;We would like to share a link to &lt;a href="https://github.com/Azure/azure-service-bus" rel="noopener noreferrer"&gt;Microsoft Azure Service Bus GitHub&lt;/a&gt;. Most of the implementations of either the publisher or the subscriber were inspired by this documentation, so make sure you check it out!&lt;br&gt;
If you have any questions, please write them down below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Links
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-exceptions" rel="noopener noreferrer"&gt;Service Bus Exceptions&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview" rel="noopener noreferrer"&gt;Service Bus Basic Steps&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.microsoft.com/pt-pt/azure/service-bus-messaging/service-bus-dotnet-get-started-with-queues" rel="noopener noreferrer"&gt;Tutorial with DotNet&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>eventdriven</category>
      <category>architecture</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Notes on feedback and self-reflection</title>
      <dc:creator>David Pereira</dc:creator>
      <pubDate>Mon, 05 Apr 2021 10:28:10 +0000</pubDate>
      <link>https://dev.to/bolt04/notes-on-feedback-and-self-reflection-33i4</link>
      <guid>https://dev.to/bolt04/notes-on-feedback-and-self-reflection-33i4</guid>
      <description>&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Giving Feedback&lt;/li&gt;
&lt;li&gt;Burnout&lt;/li&gt;
&lt;li&gt;Reflection happens when you stop&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this blog post, I'll share what I've learned recently regarding feedback in the context of conversations and self-reflection. Communication is a topic I set as one of &lt;a href="https://dev.to/bolt04/2020-was-a-blast-for-me-time-to-look-forward-22h2#goals"&gt;my goals for 2021&lt;/a&gt;, so I'd like to share some of the topics I've enjoyed thinking and learning about.&lt;br&gt;
As we grow in our careers, these become more relevant and important to understand. The way I see it, communication and leadership are essential if I want to help as many people as I possibly can, but of course, I won't forget the joy of technical skills and I'll always keep honing those 😁.&lt;/p&gt;

&lt;h2&gt;
  
  
  Giving Feedback
&lt;/h2&gt;

&lt;p&gt;This is something I spend a significant portion of my time improving, because I want to be careful with the message I send to someone else. My intention is always to contribute an opinion or perspective so that the other person can improve. Of course, the other person first validates what I said, and then &lt;strong&gt;decides what to do&lt;/strong&gt;. This is very important because it means we can express our opinion to someone, but they aren't obliged to accept it and implement it the next time.&lt;/p&gt;

&lt;p&gt;To me being open to new ideas that challenge the way I currently think is great! It's the perfect opportunity to come up with arguments and stand my ground about what I believe in. On the other hand, it's a phenomenal opportunity to hear arguments that are better than mine, and see something from a different perspective.&lt;/p&gt;

&lt;p&gt;However, I have noticed that I refrain from giving feedback on certain topics, because I don't know the &lt;em&gt;best way to do it&lt;/em&gt;. The way I used to think is that there is positive and negative feedback:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Positive feedback is the same as giving recognition to the other person, like saying "great job" or "you inspired others in that meeting". &lt;/li&gt;
&lt;li&gt;Negative feedback means we give feedback on points that could be better, like "you could have shown X in the presentation". &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But if we think about it, what is the difference between positive and negative feedback? Does it even exist? &lt;/p&gt;

&lt;h3&gt;
  
  
  Continuous improvement (a never-ending story)
&lt;/h3&gt;

&lt;p&gt;Think about a time your favorite football team won the game with a clean sheet (zero goals conceded). As a coach, you might give feedback to your players like "great game, you all did great". But this implies there are no improvements to make, doesn't it? Even though we kept a clean sheet, were there no defensive errors? Are there aspects the other team did better than we did?&lt;/p&gt;

&lt;p&gt;Striving for &lt;strong&gt;continuous improvement&lt;/strong&gt; helps to be in the right mindset to give feedback to someone. The way I'm going to focus on giving feedback is by &lt;strong&gt;asking questions&lt;/strong&gt;, so that the other person can think critically about their logic to a problem. This is an approach I'm trying to implement for myself, and I've seen it work so it motivates me to be better at it. &lt;/p&gt;

&lt;p&gt;I heard giving feedback is a delicate art, now I see why that is. If we are mindful of this, we'll take into consideration the reaction the other person may have and always focus on the most important thing in giving feedback: &lt;strong&gt;helping the other person improve&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Burnout
&lt;/h2&gt;

&lt;p&gt;Burnout is a tough topic to discuss, but I've recently been thinking about it. Mostly because I'm trying to do a lot of different things, and it seems I can't focus on one of them at a time. But it's a real struggle to stop doing things you find interesting when you're so curious to learn something new or improve something that already exists 😅. &lt;/p&gt;

&lt;p&gt;At my job I'm currently invested in: doing some open source work; recruitment with technical interviews and other phases; being responsible for one of our book clubs; being responsible for some internal initiatives (e.g. event-driven architectures) and some other stuff like digging into zero-downtime deployments, health checks, and cloud architecture. Outside of work I try to keep up with some open source projects I'm part of in the &lt;a href="https://github.com/EddieHubCommunity" rel="noopener noreferrer"&gt;EddieHub&lt;/a&gt; organization and write blog posts.&lt;/p&gt;

&lt;p&gt;Sometimes it does seem like quite a bit of work, but what really matters is not the situation I'm in, but the &lt;strong&gt;state I'm in&lt;/strong&gt;. How am I feeling during these times I want to do a lot? Tired or calm? In all honesty, it depends on the day. If I'm not tired, I can decide to just bang out some code and study some new web development stuff on Friday night + most of the weekend. When I'm tired, I decide to postpone some stuff to the next day, because most of it is about keeping up with the industry and learning as much as I can (as well as feeding my curiosity). &lt;/p&gt;

&lt;p&gt;But there is no deadline for this. I can set one but if I don't complete it, it's not the end of the world. I'd feel much more stressed if I decided "I have to reach my next level at this date", because my mind would focus on where I want to be, and it would project itself to that place. If I disregard where I'm currently at and project myself into the "desired place", I'm &lt;strong&gt;removing that space where growth happens&lt;/strong&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  Reflection happens when you stop
&lt;/h2&gt;

&lt;p&gt;We need time to think. We can't be in a constant state of flow, because then there is no time to think and reflect. I've never thought much about this before, but I realize now I should make it a habit of mine to take time to simply &lt;strong&gt;do nothing&lt;/strong&gt;! The sprint is finished? Great, let's stop and think about what happened and what can be improved. It's also at these times when we stop that we are more creative. At least I usually have more ideas when I go for a walk or step outside my room 😅.&lt;/p&gt;

&lt;p&gt;Taking time to think how the meeting went is good, but it's also important to ask someone else for feedback because there are things we don't see in ourselves sometimes. This has become very clear for me recently since I hadn't noticed what my behavior was like in a conversation with another person. &lt;/p&gt;

&lt;p&gt;My default behavior tends to keep me in "thinking mode", stopping me from interacting in a group. It makes me say certain words repeatedly in a presentation or simply while talking. It makes me ask a question, following right up with another question... &lt;strong&gt;Self-reflection&lt;/strong&gt; is very important because it gives you &lt;strong&gt;awareness&lt;/strong&gt; of yourself and your behavior. Once you know how you behave in a given situation, you just have to decide how you want to behave, and start working towards that.&lt;/p&gt;

&lt;h3&gt;
  
  
  Define a plan of action
&lt;/h3&gt;

&lt;p&gt;The way I started working towards improving my behavior and other aspects is to ask myself questions and then define a &lt;strong&gt;plan of action&lt;/strong&gt;. For example, I want to improve the way I give feedback to my coworkers. First I ask myself "How am I giving feedback today?", to which I might answer "well, I say they did a great presentation at the end and suggest different solutions to problems". Okay, then I'll ask "What can I do to help the other person improve their skills the next time?". My answer would be "I think I could... hum... I could ask them how they think they can improve that specific block of code or that part of the presentation". Now I know something I can ask and act on the next time I give feedback to someone.&lt;/p&gt;

&lt;p&gt;Of course, planning can be hard, and much of the time we may feel "I have no idea how to improve or what to do". But forcing ourselves to think and answer questions often results in small steps we know we can take. Then, once we have a vision of the plan, "checking off" boxes along the way helps us see and feel progress.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/7WuIJlmdvBanml4P6t/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/7WuIJlmdvBanml4P6t/giphy.gif" width="480" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I believe in this learning process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Learn&lt;/li&gt;
&lt;li&gt;Reflect&lt;/li&gt;
&lt;li&gt;Implement&lt;/li&gt;
&lt;li&gt;Share&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I learned it from this &lt;a href="https://youtu.be/ujxvy5NjeRQ?t=1224" rel="noopener noreferrer"&gt;video&lt;/a&gt;, and ever since then I've been trying to share everything I learn after reflecting on it. I'm not perfect at it, but I try my best to follow it and challenge myself by reflecting and sharing. I want to continue to grow in this aspect (human skills), so I'll want to write more posts on this topic in the future 🙂. The truth is I'm still letting everything I've learned sink in, and trying to implement bits and pieces so that all this knowledge takes root in my mind 😄.&lt;/p&gt;

&lt;p&gt;Hopefully, you enjoyed reading this post and took something out of it. Currently, I'm working on a blog post about Azure Service Bus and event-driven architectures, so stay tuned for that!&lt;/p&gt;

</description>
      <category>communication</category>
    </item>
  </channel>
</rss>
