<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ByteMinds</title>
    <description>The latest articles on DEV Community by ByteMinds (@byteminds_agency).</description>
    <link>https://dev.to/byteminds_agency</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F7255%2F81164960-0f8e-4a8b-b658-647c56377706.jpg</url>
      <title>DEV Community: ByteMinds</title>
      <link>https://dev.to/byteminds_agency</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/byteminds_agency"/>
    <language>en</language>
    <item>
      <title>Building Teams for Digital Products: Essential Roles, Methods, and Real-World Advice</title>
      <dc:creator>Byteminds Agency</dc:creator>
      <pubDate>Wed, 11 Jun 2025 10:46:06 +0000</pubDate>
      <link>https://dev.to/byteminds_agency/building-teams-for-digital-products-essential-roles-methods-and-real-world-advice-2m0o</link>
      <guid>https://dev.to/byteminds_agency/building-teams-for-digital-products-essential-roles-methods-and-real-world-advice-2m0o</guid>
      <description>&lt;p&gt;A digital product is more than just a set of features or an interface. Creating it is a process that demands not only technical expertise but also effective team organization.&lt;/p&gt;

&lt;p&gt;Product development involves high uncertainty at every stage, requiring each team member to actively mitigate risks and adapt to change.&lt;br&gt;
In this article, we’ll explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The essential roles in digital product development&lt;/li&gt;
&lt;li&gt;The pros and cons of working with freelancers, outsourcing, and in-house teams&lt;/li&gt;
&lt;li&gt;How to choose the right collaboration model for your project&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Product Development Isn’t About Websites
&lt;/h2&gt;

&lt;p&gt;Website or landing page development follows a predictable path: predefined layouts, structured pages, and minimal uncertainty. Products, however, are dynamic and continuously evolving. At every stage, new questions arise, such as “What’s more important: refining the interface or quickly releasing a new feature?”, forcing teams to pivot tasks and strategies.&lt;/p&gt;

&lt;p&gt;This is why product development teams differ significantly: they require more than just a group of specialists. They need an ecosystem designed to handle constant change.&lt;/p&gt;

&lt;p&gt;Yet in practice, many fail to grasp this difference or even see its necessity. Clients often arrive expecting a website-style engagement: finalize the designs up front, then build them as quickly as possible.&lt;/p&gt;

&lt;p&gt;This “design-first, develop-fast” mindset is common but fundamentally flawed for product development, because it assumes the requirements will hold still.&lt;/p&gt;

&lt;p&gt;Now that we’ve set the stage, let’s delve into the nuances of team collaboration models.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Roles in a Product Team
&lt;/h2&gt;

&lt;p&gt;For a digital product to progress rather than stall indefinitely, each role must own its responsibilities and actively reduce uncertainty. &lt;/p&gt;

&lt;p&gt;Below, we break down the core roles that drive progress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Product Owner: The Strategic Navigator&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z3up7gi4ncvpmagvznb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z3up7gi4ncvpmagvznb.jpg" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Product Owner (PO) ensures business objectives aren’t lost in endless refinements. This role demands deep market understanding, the ability to prioritize team ideas, and the judgment to decide what’s essential now and what can wait.&lt;/p&gt;

&lt;p&gt;For instance, if developers propose a complex feature, the PO assesses: Will this actually deliver business value? &lt;/p&gt;

&lt;p&gt;Their expertise should span:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The product’s core purpose and its competitive landscape&lt;/li&gt;
&lt;li&gt;Market-specific constraints and opportunities&lt;/li&gt;
&lt;li&gt;Audience pain points and unmet needs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These insights help the Product Owner develop meaningful hypotheses and make informed decisions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Important: Often, the PO is the client or a stakeholder with market expertise. Their role is to minimize strategic uncertainty by setting priorities and focusing on valuable solutions. Decisiveness here is crucial—without it, the risk of developing irrelevant solutions is high.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Product Producer: Bridging Strategy and Execution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F801uix1zs12k5x36ggkd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F801uix1zs12k5x36ggkd.jpg" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Far more than just a project manager, the Product Producer acts as the critical link between business goals and technical execution. They reduce chaos, manage client expectations, and shield the team from unnecessary scope changes. &lt;/p&gt;

&lt;p&gt;If a business demands a complex feature that developers estimate will take months, the Producer finds a compromise, like delivering an MVP sooner or suggesting an alternative solution.&lt;/p&gt;

&lt;p&gt;Beyond task decomposition and timeline management, the Producer understands business priorities deeply. This additional insight helps them make better decisions throughout the product development process.&lt;/p&gt;

&lt;p&gt;The Product Producer is an enhanced version of a project manager and the Product Owner’s right hand. They organize tasks, establish transparent processes, and build predictable workflows.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Critical Note: If the Product Owner hasn't adequately prioritized tasks, the Producer can partially mitigate this, but can’t fully replace the strategic vision of the Product Owner.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Business Analyst: Architect of Cohesive Systems&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvkq0baq8xr7y53o1cck.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvkq0baq8xr7y53o1cck.jpg" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Business Analyst’s main role is to serve as the vital bridge between conceptual requirements and practical implementation. Their core mission is to ensure that all product requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are technically feasible and internally consistent&lt;/li&gt;
&lt;li&gt;Contribute to a unified system architecture&lt;/li&gt;
&lt;li&gt;Account for critical edge cases and real-world usage scenarios&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By thoroughly understanding how different system components interact (or should interact), the Analyst transforms abstract ideas into clear, actionable tasks for the development team. Their technical discernment helps identify whether a proposed feature provides genuine value or simply becomes unnecessary complexity — what we might call "attaching a fifth wheel to a car."&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Key Impact: The Analyst helps the team navigate ambiguous requirements, highlight key aspects, and focus on what matters most, reducing tactical uncertainty.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;UX Designer: Crafting Intuitive Experiences&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmta8zchhhzkgtts9u0ds.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmta8zchhhzkgtts9u0ds.jpg" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Far more than just creating visually appealing interfaces, the UX Designer engineers user journeys that are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Intuitively navigable&lt;/li&gt;
&lt;li&gt;Purposefully structured&lt;/li&gt;
&lt;li&gt;Optimized for the target audience&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For mass-market products, this means prioritizing simplicity and accessibility above all. The designer's prototypes and wireframes serve as the blueprint for how real users will interact with and derive value from the product.&lt;/p&gt;

&lt;p&gt;Superior UX design minimizes two critical risks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User frustration caused by confusing interfaces&lt;/li&gt;
&lt;li&gt;Lost conversions due to poor experience design&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By establishing predictable, user-friendly interaction patterns, the UX Designer ensures customers can effortlessly achieve their goals within the system, without getting lost or discouraged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Team Lead: Guardian of Technical Excellence&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08st7yub93w73a32rkoo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08st7yub93w73a32rkoo.jpg" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A skilled development team alone isn't enough — without technical leadership, projects can quickly derail.&lt;/p&gt;

&lt;p&gt;The Team Lead serves as the architectural cornerstone, ensuring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Robust system design that withstands growth&lt;/li&gt;
&lt;li&gt;Consistent engineering practices across the team&lt;/li&gt;
&lt;li&gt;Production-ready code quality&lt;/li&gt;
&lt;li&gt;Scalability planning from day one&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without this role, teams risk creating disposable solutions that crumble under real-world loads. The Team Lead reduces technical uncertainty by making informed architectural choices, ensuring code quality, selecting appropriate tools, assigning tasks based on developer skills, and implementing rigorous testing.&lt;/p&gt;

&lt;p&gt;Have you heard of the Three Amigos principle? It’s a collaborative communication method where requirements, potential scenarios, and risks are discussed jointly before work begins. This significantly reduces misunderstandings and increases product quality.&lt;/p&gt;

&lt;p&gt;One of the three amigos is often the Team Lead, who dives into the product before development starts in order to reduce technical uncertainty in the work that follows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development Team: Execution Powerhouse&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl36ynjfd5qdzt6x52v3k.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl36ynjfd5qdzt6x52v3k.jpg" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Comprising developers and QA specialists, this group executes tasks, writes code, tests, and implements features.&lt;/p&gt;

&lt;p&gt;Crucially, they must collaborate effectively with other roles rather than merely follow instructions. Their clear task execution, adaptability to change, and attention to detail significantly reduce project uncertainty.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building Your Product Team: Choosing the Right Approach
&lt;/h2&gt;

&lt;p&gt;Creating a successful digital product requires more than just talent—it demands the right team structure. You essentially have three options: hiring freelancers, partnering with an outsourcing firm, or building an in-house team. Each approach has its trade-offs, and the best choice depends on your project’s needs, budget, and long-term goals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Freelancers: Cheap, Flexible, and Fragmented&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At first glance, freelancers seem appealing: they’re cost-effective, available on demand, and can handle specific tasks. But assembling a full product team from freelancers is often a recipe for frustration. While they excel at executing well-defined assignments, they rarely engage deeply with the product vision or collaborate effectively with others.&lt;/p&gt;

&lt;p&gt;The core issue isn’t skill—it’s alignment. Freelancers work transactionally, not strategically. Without strong product management on your side, coordination can quickly become chaotic. Deadlines slip, communication falters, and what should be a cohesive product often ends up as a patchwork of disjointed solutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When it works:&lt;/strong&gt; For one-off, short-term tasks, but not for building an entire product.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outsourcing: Speed, Expertise, and Lower Risk&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Outsourcing strikes a balance between flexibility and structure. A competent agency handles everything from scoping to execution — ideal if you lack the time, resources, or expertise to manage development internally. Even if you already have an in-house team, outsourcing can accelerate your MVP launch by offering fresh perspectives and avoiding internal bottlenecks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key advantages:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Faster execution&lt;/strong&gt; – experienced teams avoid common pitfalls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Established processes&lt;/strong&gt; – they deliver structured, scalable solutions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lower risk&lt;/strong&gt; – if the project fails, costs are contained; if it succeeds, you can transition it in-house&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The catch?&lt;/strong&gt; Choosing the right partner. Vet their portfolio, clarify expectations upfront, and ensure contractual alignment on goals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In-House Teams: Commitment at a Cost&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A dedicated internal team is the gold standard for long-term product development. Your employees live and breathe the product, adapt quickly to change, and drive sustained innovation. The catch? It’s expensive and demanding.&lt;/p&gt;

&lt;p&gt;Building an in-house team means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High recruitment stakes&lt;/strong&gt; – finding skilled people who align with your vision takes time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ongoing investment&lt;/strong&gt; – salaries, training, and culture-building aren’t one-time efforts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Process overhead&lt;/strong&gt; – you’ll need strong leadership to maintain cohesion and productivity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mistakes are costly&lt;/strong&gt; – mis-hires or poor management can derail progress entirely. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But when done right, an in-house team becomes your greatest competitive edge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Bottom Line:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Freelancers&lt;/strong&gt; - only for discrete tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Outsourcing&lt;/strong&gt; - ideal for MVPs and resource-strapped teams&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;In-house&lt;/strong&gt; - the best choice—if you can sustain it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your team structure isn’t just about talent—it’s about how well that talent works together. Choose wisely.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Art of Effective Teamwork: Principles That Drive Success
&lt;/h2&gt;

&lt;p&gt;Building a high-performing product team isn’t about rigid processes or perfect plans—it’s about fostering the right kind of collaboration. Here’s what truly makes teams work.&lt;/p&gt;

&lt;p&gt;Communication is one of the few points that consistently drives results. But effective communication isn’t about weekly reports or ritual demos—it’s about active dialogue. Teams need space to debate ideas, challenge assumptions, and co-create solutions. Still, there’s a fine line between productive discussion and wasted time.&lt;/p&gt;

&lt;p&gt;We’ve found that structured collaboration prevents misalignment. The magic happens when both sides engage authentically — not as "client and contractor," but as partners willing to listen, adapt, and occasionally compromise.&lt;/p&gt;

&lt;p&gt;It’s also critically important to understand who is responsible for what. Simply assigning roles is not enough — each team member needs to perform effectively within their area and avoid overstepping into others’ domains.&lt;/p&gt;

&lt;p&gt;Clear roles matter, but boundaries matter more. Consider this cautionary tale:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A &lt;strong&gt;Product Producer&lt;/strong&gt; skips stakeholder conversations and relies solely on metrics → surfaces flawed insights&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Analyst&lt;/strong&gt; wastes time untangling those assumptions instead of focusing on discovery&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;UX Designer&lt;/strong&gt;, lacking direction, makes arbitrary interface changes&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Team Lead&lt;/strong&gt; gets dragged into priority debates instead of safeguarding the architecture&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developers&lt;/strong&gt; juggle conflicting tasks, creating technical debt&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The result?&lt;/strong&gt; Escalating uncertainty and operational chaos. True ownership means excelling in your role while trusting others to do the same.&lt;/p&gt;

&lt;p&gt;The next important thing is the project plan. A good project plan is a compass, not shackles. Products change, and that’s normal. The ability to adapt quickly is what makes teams resilient. Flexibility, however, doesn’t mean disorder: a shared direction helps teams pivot while staying aligned.&lt;/p&gt;

&lt;p&gt;And finally — people. They are the most valuable asset in any project. Invest in them. Your team’s growth directly impacts your product’s quality. When you provide learning opportunities, encourage professional development, and create psychological safety, you don’t just get employees — you get invested partners who bring their best thinking to tough challenges.&lt;/p&gt;

&lt;p&gt;The difference between good and exceptional work rarely comes down to process. It comes down to how your team engages — with the product and with each other.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Author: Andrey Stepanov, CTO at ByteMinds&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Learnings from using the Sitecore ADM module</title>
      <dc:creator>Anna Bastron</dc:creator>
      <pubDate>Tue, 26 Nov 2024 11:00:51 +0000</pubDate>
      <link>https://dev.to/byteminds_agency/learnings-from-using-the-sitecore-adm-module-9gc</link>
      <guid>https://dev.to/byteminds_agency/learnings-from-using-the-sitecore-adm-module-9gc</guid>
      <description>&lt;p&gt;If you worked with a Sitecore XP website that has been live for a long time, I am sure you are familiar with the xDB cleanup requirements. I usually recommend to agree xDB data retention and set up the clean up process from day 1, but it is not always possible. Some websites were developed on an older version of Sitecore that did not have the built-in functionality for data removal, or the volume of xDB data was underestimated, or maybe tracking was enabled later down the line without considering data retention. &lt;/p&gt;

&lt;p&gt;This is a common issue and the exact reason why &lt;a href="https://support.sitecore.com/kb?id=kb_article_view&amp;amp;sysparm_article=KB0232559" rel="noopener noreferrer"&gt;Sitecore Analytics Database Manager (ADM) module&lt;/a&gt; exists. It allows viewing xDB records statistics and removing collection data (raw analytical data collected by Sitecore XP) via Sitecore UI while keeping the data integrity.&lt;/p&gt;

&lt;p&gt;However, when I tried to use ADM for the first time on Sitecore 9 and Sitecore 10 websites, I faced a few challenges, particularly when dealing with large volumes of old data. In this article, we’ll explore these challenges and look at optimisation strategies for efficient data cleanup.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Sitecore ADM works
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnd9oadmavtz9scw0i1q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnd9oadmavtz9scw0i1q.png" alt="High level overview of ADM process" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One thing that is critical to understand is that any ADM cleanup process consists of two key phases: task generation and task processing.&lt;/p&gt;

&lt;h4&gt;
  
  
  Task generation
&lt;/h4&gt;

&lt;p&gt;During this phase, the module retrieves all contacts from the xDB regardless of the date range specified by the user. It begins processing the most recent contacts first and systematically works backward through the database. The module reads additional data associated with each contact, such as interactions and facets, to determine whether a record meets the deletion criteria. If a contact qualifies for deletion, its ID is stored in the &lt;code&gt;[Tasks]&lt;/code&gt; table of the &lt;code&gt;[ADM.Tasks]&lt;/code&gt; database for later task processing. &lt;/p&gt;

&lt;h4&gt;
  
  
  Task processing
&lt;/h4&gt;

&lt;p&gt;In the second phase, the module processes the tasks generated in the previous step, deleting all records marked for removal.&lt;/p&gt;

&lt;p&gt;While this approach seems robust at first, it comes with a few significant limitations, particularly around the performance and scalability of the task generation phase.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges
&lt;/h2&gt;

&lt;h4&gt;
  
  
  1. Inefficient SQL queries
&lt;/h4&gt;

&lt;p&gt;One of the main challenges is the way the ADM module works with xDB Shard databases. Instead of relying on SQL indexes to efficiently filter contacts by date, the module retrieves all contacts and iterates through them, starting with the most recent. This results in slower data retrieval, especially when working with large datasets spanning several years.&lt;/p&gt;

&lt;p&gt;For instance, even if you specify a short date range in 2022, the module will still need to process and check all contacts, including those from 2023-2024. This significantly increases the load on the shard databases, leading to higher data input/output operations and longer processing times.&lt;/p&gt;
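
&lt;p&gt;To make the difference concrete, here is a toy model in Python with SQLite. It is purely illustrative (the real shards run on SQL Server and ADM is a .NET module), but it shows why an index-backed, date-bounded query scales so much better than the scan-everything approach:&lt;/p&gt;

```python
import sqlite3

# Toy model of one shard: 9,000 contacts spread evenly over 2022-2024.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, last_modified TEXT)")
db.execute("CREATE INDEX ix_lm ON contacts (last_modified)")
db.executemany("INSERT INTO contacts VALUES (?, ?)",
               [(i, f"{2022 + i % 3}-06-01") for i in range(9000)])

# ADM-style pass: walk every contact from newest to oldest and filter
# in application code, regardless of the requested date range.
scanned, matched = 0, []
for cid, lm in db.execute(
        "SELECT id, last_modified FROM contacts ORDER BY last_modified DESC"):
    scanned += 1
    if lm.startswith("2022"):
        matched.append(cid)
print(scanned, len(matched))   # 9000 rows read to find 3000 matches

# Index-backed alternative: the database narrows the range before reading.
hit = db.execute(
    "SELECT COUNT(*) FROM contacts "
    "WHERE last_modified BETWEEN '2022-01-01' AND '2022-12-31'").fetchone()[0]
print(hit)                     # 3000
```

&lt;p&gt;On a real multi-million-row shard, the gap between “read everything” and “read one year” is what turns minutes into hours.&lt;/p&gt;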

&lt;h4&gt;
  
  
  2. Threading and timeout issues
&lt;/h4&gt;

&lt;p&gt;By default, the ADM module operates with 4 threads and processes 1,000 contacts per batch. While these defaults may be sufficient for smaller databases, you may want to tweak them for a larger one.&lt;/p&gt;

&lt;p&gt;The module’s batching system divides work between threads at the start of the process, so an SQL timeout in one thread aborts that entire thread and leaves its batch incomplete. The same process then has to be restarted from scratch to pick up the remaining DB records. This can be frustrating, particularly with large datasets that take several hours to process.&lt;/p&gt;
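
&lt;p&gt;As a conceptual sketch only (the ADM module does not work this way), a resumable batching scheme would record completed batches so that a timeout costs one batch instead of the whole run. Hypothetical Python, with function and variable names of my own invention:&lt;/p&gt;

```python
BATCH_SIZE = 1000  # mirrors the module's default batch size

def run_cleanup(contact_ids, delete_batch, done_batches):
    """Process IDs in batches; delete_batch may raise (e.g. an SQL timeout).

    done_batches is a persisted set of completed batch numbers, so a rerun
    skips work that already finished instead of starting from scratch.
    """
    failed = []
    for start in range(0, len(contact_ids), BATCH_SIZE):
        batch_no = start // BATCH_SIZE
        if batch_no in done_batches:
            continue                      # already cleaned in a previous run
        try:
            delete_batch(contact_ids[start:start + BATCH_SIZE])
            done_batches.add(batch_no)
        except TimeoutError:
            failed.append(batch_no)       # retry later; keep the rest going
    return failed
```

&lt;p&gt;With the real module, the practical equivalent is simply rerunning the whole process after a timeout and accepting the repeated full scan.&lt;/p&gt;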

&lt;h4&gt;
  
  
  3. Single task limit
&lt;/h4&gt;

&lt;p&gt;Another challenge is that only one cleanup process can run at a time. Attempting to create a new task while another is running will terminate the existing process, which makes it critical to monitor and manage the process carefully.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Limited pause functionality
&lt;/h4&gt;

&lt;p&gt;The task generation phase cannot be paused; pausing is supported only for task processing. For one website with a lot of xDB data to remove, we wanted to run the cleanup in batches outside of business hours, but that was impossible: task generation cannot be paused, so it had to be done in one go.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimisation tactics
&lt;/h2&gt;

&lt;p&gt;Here are some tricks that helped me to reduce processing times and achieve the best performance for the cleanup in the past:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Read documentation
&lt;/h4&gt;

&lt;p&gt;I know this sounds obvious, but if you plan to use ADM and have not looked at the documentation that comes with it, do it now! It is quite technical and can answer many of the questions you already have.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Upscale to Premium tier
&lt;/h4&gt;

&lt;p&gt;If your server or database resources are under strain, especially the &lt;code&gt;Shards&lt;/code&gt; and &lt;code&gt;ProcessingPools&lt;/code&gt; databases, you can try temporarily upscaling them for the cleanup. Premium Azure SKUs are optimised for high data I/O, which is critical for the ADM module’s intensive data processing.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Adjust performance settings
&lt;/h4&gt;

&lt;p&gt;There is some flexibility in tuning the performance of the ADM module. You can increase the &lt;code&gt;NumberOfThreads&lt;/code&gt;, &lt;code&gt;RetrieveDataBatchSize&lt;/code&gt;, and &lt;code&gt;NumberOfConnectionRetries&lt;/code&gt; settings in the ADM configuration files (see section "6. Performance Tuning" in the ADM module documentation). This helps speed up task generation and processing, but should be balanced against the available server and database resources.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Measure and plan accordingly
&lt;/h4&gt;

&lt;p&gt;Although tempting, splitting the cleanup process into smaller date ranges may not necessarily improve performance. The ADM module will still read all contacts every time, so this strategy may not be the most efficient in some cases. Instead, allow the process to finish at least once, monitor its completion, and note the time taken so you can better estimate and plan additional runs.&lt;/p&gt;

&lt;p&gt;Also, if you plan to perform the cleanup in multiple runs, start with the recent date ranges because they will be accessed first during the task generation phase.&lt;/p&gt;
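
&lt;p&gt;Because the full contact scan happens on every run, the first completed run gives you the two numbers you need for planning: scan throughput and deletion throughput. A back-of-the-envelope estimator in Python (all figures hypothetical):&lt;/p&gt;

```python
def estimate_run_hours(total_contacts, scan_rate_per_hour,
                       records_to_delete, delete_rate_per_hour):
    # Every run pays the full-scan cost, plus deletion for its own share.
    return (total_contacts / scan_rate_per_hour
            + records_to_delete / delete_rate_per_hour)

# Hypothetical figures observed from a first completed run:
hours = estimate_run_hours(
    total_contacts=12_000_000,     # contacts in the shard
    scan_rate_per_hour=2_000_000,  # contacts examined per hour
    records_to_delete=1_500_000,   # records the next run should remove
    delete_rate_per_hour=500_000,  # deletions per hour
)
print(hours)  # 9.0
```

&lt;p&gt;The fixed scan cost is also why a few large runs usually beat many small date-sliced ones.&lt;/p&gt;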

&lt;h4&gt;
  
  
  5. Consider direct SQL queries
&lt;/h4&gt;

&lt;p&gt;For older versions of Sitecore, or if you struggle to run the ADM process on your xDB Shards even after tweaking the performance settings, an alternative approach is to run direct SQL queries to clean up the data (you can find a few useful scripts &lt;a href="https://github.com/geann/Sitecore-xDB-cleanup-scripts/" rel="noopener noreferrer"&gt;here&lt;/a&gt;). However, this method requires an understanding of SQL and the xDB table structure, so do this only if you have backed up your Shard DBs and are confident in your skills.&lt;/p&gt;
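
&lt;p&gt;As an illustration only (these are not the scripts from the linked repository), a batched T-SQL cleanup of a collection shard might look like the sketch below. The table and column names follow the Sitecore 9+ shard schema, but verify them against your version, and take shard backups before running anything like this:&lt;/p&gt;

```sql
-- Illustrative sketch only; back up the shard databases first.
DECLARE @cutoff DATETIME2 = '2022-01-01';

-- 1. Remove old interactions in batches to keep the transaction log small.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM [xdb_collection].[Interactions]
    WHERE @cutoff >= [EndDateTime];
    IF @@ROWCOUNT = 0 BREAK;
END;

-- 2. Remove contacts left with no interactions at all.
DELETE c FROM [xdb_collection].[Contacts] AS c
WHERE NOT EXISTS (SELECT 1 FROM [xdb_collection].[Interactions] AS i
                  WHERE i.[ContactId] = c.[ContactId]);
```

&lt;p&gt;A complete cleanup also needs to cover the related facet and identifier tables, which is why tested, ready-made scripts are preferable to ad-hoc deletes.&lt;/p&gt;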

&lt;p&gt;Sitecore xDB cleanup is not always simple, and the ADM module can be a useful tool for managing and cleaning up contact and interaction data. I hope this article has helped you understand how the module works, its limitations, and the tactics for optimising its performance.&lt;/p&gt;

</description>
      <category>sitecore</category>
      <category>xdb</category>
      <category>sitecoreadm</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Your last migration to Xperience by Kentico</title>
      <dc:creator>Byteminds Agency</dc:creator>
      <pubDate>Fri, 15 Nov 2024 12:37:03 +0000</pubDate>
      <link>https://dev.to/byteminds_agency/your-last-migration-to-xperience-by-kentico-ad4</link>
      <guid>https://dev.to/byteminds_agency/your-last-migration-to-xperience-by-kentico-ad4</guid>
      <description>&lt;p&gt;The more mature Xperience by Kentico product becomes, the more often I hear "How can we migrate there?”. Why do I think I know the answer?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I have moved flats or houses 8 times in my life&lt;/li&gt;
&lt;li&gt;And 3 times I’ve even changed countries to live&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Which makes me... A migration expert! Or maybe because I’m one of the authors of &lt;a href="https://github.com/Kentico/xperience-by-kentico-sitecore-migration-tool" rel="noopener noreferrer"&gt;Sitecore Migration Tool for Xperience by Kentico&lt;/a&gt;? It's up to you to decide!&lt;/p&gt;

&lt;p&gt;Anyway, let’s return to our topic. Why do we call this a migration when everyone knows it’s going to be a rebuild?&lt;/p&gt;

&lt;p&gt;Look at this picture: is this a rebuild? Or is this a migration?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31ghk7nqrw6fwvuk6nt7.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31ghk7nqrw6fwvuk6nt7.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is no right or wrong answer here. It all depends on how we define “the thing on the left”, and “the thing on the right”.&lt;/p&gt;

&lt;p&gt;A long time ago, “the thing on the left” was nothing but a bunch of printed materials, Excel spreadsheets, and documents. We were creating new sites on Kentico from zero digital history - it was just a build with nothing to migrate.&lt;/p&gt;

&lt;p&gt;In recent times, most clients approached us with websites that were already ugly, asking us to rebuild them nice and shiny on the Kentico platform. This was the good “rebuild everything” era, where in the worst-case scenario we had to migrate only some pieces of content from the previous site.&lt;/p&gt;

&lt;p&gt;But now the situation has changed and businesses already have good enough websites that they simply want to improve. And since “that thing on the left” could be a more sophisticated DXP, there are more things to consider while migrating.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf1s1g31n9sbuy63ja48.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf1s1g31n9sbuy63ja48.gif" alt="Image description" width="853" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we accept that a DXP is not as simple as a CMS, it follows that a DXP migration is more than just a content migration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fad9m5kh2w1llh8knvlus.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fad9m5kh2w1llh8knvlus.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of course, content and website code are still the biggest parts to migrate. But there are also things like users, tracking, personalization, email marketing, automation, A/B tests, and e-commerce that should not be missed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62cf7jcuuw8f10r5qbwy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62cf7jcuuw8f10r5qbwy.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Content migration
&lt;/h2&gt;

&lt;p&gt;Every content migration guide (and &lt;a href="https://docs.kentico.com/developers-and-admins/development/content-modeling/begin-with-business-needs#content-modeling-considerations" rel="noopener noreferrer"&gt;Kentico documentation&lt;/a&gt; is no exception here) tells you to run a content audit first. But what does that mean exactly? It could be as simple as putting your content into one of these four buckets, with the idea of migrating only the important and easy bits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2lw8kor9vw3cmoe2fsw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2lw8kor9vw3cmoe2fsw.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Not so important or relevant, but easy to migrate. It could be outdated news or events stored in a structured format.&lt;/li&gt;
&lt;li&gt;Not important and hard to migrate. Usually, these would be landing pages built heavily using widgets and components and not utilizing structured content at all.&lt;/li&gt;
&lt;li&gt;Important and relevant, but hard to migrate. These are your most popular and most important landing pages that were built using components. This is going to be your headache for migration.&lt;/li&gt;
&lt;li&gt;Important, relevant, and structured. Easy to migrate. The content we like the most and should be focusing on. Most commonly these are product, category, service, or article detail pages.&lt;/li&gt;
&lt;/ul&gt;
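&lt;p&gt;If your content inventory lives in a spreadsheet export, the four-bucket triage above can even be scripted. A minimal sketch in Python - the field names and the view threshold are invented for illustration and are not part of any Kentico tooling:&lt;/p&gt;

```python
# Triage audited content into the four migration buckets along two
# axes: importance (relevance/traffic) and migration effort.
def bucket(item):
    """item: one row of a hypothetical content audit spreadsheet."""
    important = item["monthly_views"] >= 100 or item["is_key_landing_page"]
    easy = item["is_structured"] and not item["uses_widgets"]
    if important and easy:
        return "migrate first"      # structured, relevant content
    if important:
        return "migrate manually"   # key pages built from components
    if easy:
        return "migrate if cheap"   # e.g. outdated structured news
    return "leave behind"           # unimportant widget-built pages

audit = [
    {"monthly_views": 5000, "is_key_landing_page": True,
     "is_structured": False, "uses_widgets": True},
    {"monthly_views": 12, "is_key_landing_page": False,
     "is_structured": True, "uses_widgets": False},
]
for row in audit:
    print(bucket(row))
```

&lt;p&gt;The point is not the code itself but that the triage criteria become explicit and repeatable instead of living in someone’s head.&lt;/p&gt;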

&lt;p&gt;And actually, this was exactly what I did before migrating to another country! There is always something easy to pack – documents, money, pets, and… your partner, of course. Cars and furniture are important but hard to take with you. Clothes, books and board games are still kind of important and if you have some spare room in your bag, you can take them. And finally something like heavy equipment you just ignore and forget. It’s easier to buy new.&lt;/p&gt;

&lt;p&gt;One other thing I learned along the way - it doesn’t have to be all in one go, and my board games met me a few months later via the post. The same can be done with content migration - you can prioritize and phase it in batches.&lt;/p&gt;

&lt;p&gt;There is another exercise that helps you identify the content worth migrating automatically. Let’s build a funnel! Marketing people love funnels, don’t they?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lce11nxni620cqn0fd4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lce11nxni620cqn0fd4.gif" alt="Image description" width="853" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;list all content in all languages&lt;/li&gt;
&lt;li&gt;then quickly check the amount of it – as a rule of thumb, automating the migration is worth it if there are at least a hundred pages of a particular type&lt;/li&gt;
&lt;li&gt;next, check whether this content is still relevant and important. Maybe now is the time to forget it and let it rest in peace with the old site&lt;/li&gt;
&lt;li&gt;finally, even if it is important, the previous implementation could contain some major mistakes in the content model that you wouldn’t want to bring to Xperience by Kentico&lt;/li&gt;
&lt;/ul&gt;
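&lt;p&gt;The funnel above is easy to express as a filter over the same inventory. A sketch in Python - the hundred-page rule of thumb comes from the text, while the data shape is hypothetical:&lt;/p&gt;

```python
# Walk one content type through the migration funnel and return a decision.
def funnel_decision(content_type):
    if content_type["page_count"] < 100:     # too few pages - cheaper by hand
        return "migrate manually"
    if not content_type["still_relevant"]:   # let it rest in peace
        return "leave behind"
    if content_type["model_is_broken"]:      # don't carry old mistakes over
        return "remodel before migrating"
    return "automate migration"

news = {"page_count": 1200, "still_relevant": True, "model_is_broken": False}
print(funnel_decision(news))  # automate migration
```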

&lt;p&gt;When you finally have a list of content to migrate, Kentico gives you a few &lt;a href="https://github.com/Kentico/xperience-by-kentico-migration-toolkit" rel="noopener noreferrer"&gt;options for tools&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/Kentico/xperience-by-kentico-kentico-migration-tool" rel="noopener noreferrer"&gt;Kentico Migration Tool&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Kentico/xperience-by-kentico-sitecore-migration-tool" rel="noopener noreferrer"&gt;Sitecore Migration Tool&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Kentico/xperience-by-kentico-sitefinity-migration-tool" rel="noopener noreferrer"&gt;Sitefinity Migration Tool&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;For everything else that doesn’t fit the three options above - &lt;a href="https://github.com/Kentico/xperience-by-kentico-universal-migration-tool" rel="noopener noreferrer"&gt;Universal Migration Tool&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmcaraficllnnw3w3u0a.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmcaraficllnnw3w3u0a.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Website code migration
&lt;/h2&gt;

&lt;p&gt;Code is the next important part of the migration journey. Make sure you have enough space in your migration “bag” for it.&lt;/p&gt;

&lt;p&gt;In terms of backend code, domain logic, integrations, and scheduled tasks are not so hard to migrate. Usually, this code does not change wildly, even if you migrate it from an old .NET Framework version to .NET Core. Migrating from languages other than C#, though, can be much more challenging and often wouldn’t make much sense.&lt;/p&gt;

&lt;p&gt;Harder bits are the backend code supporting rendering of layouts, templates, and components – mainly because different platforms have wildly different APIs for accessing this data from the database storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo06or196opr55yiq9f7k.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo06or196opr55yiq9f7k.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the frontend code migration, we can choose between these three paths:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Migrate ‘as is’ when we are dealing with an extremely tight budget&lt;/li&gt;
&lt;li&gt;Allow some extra time for addressing the existing technical debt, refactoring, and solving some issues&lt;/li&gt;
&lt;li&gt;Or the option that we all like the most - a full re-design and frontend rebuild.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88q27x8udaz539c11eu2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88q27x8udaz539c11eu2.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Members migration
&lt;/h2&gt;

&lt;p&gt;Moving on to the next topic – frontend users, or members, migration. Very often I see this part missing in the migration plan. However, if you don’t have members on your website – doing nothing IS your plan!&lt;/p&gt;

&lt;p&gt;But if you do have them, the worst thing you can do is implement a new registration process without migrating the existing users. Trust me, the day you release the new site and ask all your customers to re-register will be the unhappiest day of your life! So, at the very least, make sure to migrate user profiles. If the budget is limited, you can simplify this and ask users to reset their passwords via a mass email. Customers still won’t be thrilled, but this option provides at least acceptable UX. Unless you ask for a password reset every other week ;)&lt;/p&gt;

&lt;p&gt;The best-in-class solution here is to implement a seamless migration without asking for a password reset. This is doable but has some technical complications and is therefore more expensive. Make sure to consult your technical team on this topic!&lt;/p&gt;
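&lt;p&gt;For the curious, the usual pattern behind a seamless migration is to import the old password hashes and transparently re-hash each password on the member’s first successful login. A language-agnostic sketch in Python - the legacy unsalted SHA-1 scheme and the field names are assumptions for illustration; in a real XbyK project you would plug similar logic into the platform’s ASP.NET Identity password handling:&lt;/p&gt;

```python
import binascii
import hashlib
import os

# Hypothetical legacy scheme: unsalted SHA-1 (common on older platforms).
def legacy_verify(password, legacy_hash):
    return hashlib.sha1(password.encode()).hexdigest() == legacy_hash

# New scheme: salted PBKDF2, standing in for your platform's hasher.
def new_hash(password):
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return binascii.hexlify(salt).decode() + "$" + binascii.hexlify(dk).decode()

def login(member, password):
    """Verify against whichever hash the member currently has,
    upgrading legacy hashes transparently on first success."""
    if member.get("legacy_hash"):
        if not legacy_verify(password, member["legacy_hash"]):
            return False
        member["password_hash"] = new_hash(password)  # upgrade in place
        del member["legacy_hash"]
        return True
    salt_hex, dk_hex = member["password_hash"].split("$")
    dk = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), binascii.unhexlify(salt_hex), 100_000
    )
    return binascii.hexlify(dk).decode() == dk_hex
```

&lt;p&gt;The design choice here is that members never notice the migration: the legacy hash is only kept until their first login, after which the stronger scheme takes over.&lt;/p&gt;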

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw43qqpdkunyahdf1sref.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw43qqpdkunyahdf1sref.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Usually, member profiles have a variety of information stored:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Basic information stored as simple data fields, like:&lt;br&gt;
Name&lt;br&gt;
Email&lt;br&gt;
Birthdate, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Complex profile data like:&lt;br&gt;
Favorites&lt;br&gt;
Collections&lt;br&gt;
Order history&lt;br&gt;
Preferences, notifications, etc.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Make sure to figure out how these things are stored in the existing system to allow for smooth migration.&lt;/p&gt;

&lt;p&gt;I also recommend considering SSO integration at this stage. This is a perfect time to introduce it during the migration project because you are going to migrate member profiles anyway. In this case, you can move basic profile info into an SSO system like &lt;a href="https://azure.microsoft.com/en-us/products/active-directory-b2c" rel="noopener noreferrer"&gt;Azure Active Directory B2C&lt;/a&gt;, &lt;a href="https://auth0.com/" rel="noopener noreferrer"&gt;Auth0&lt;/a&gt;, or &lt;a href="https://www.okta.com/" rel="noopener noreferrer"&gt;Okta&lt;/a&gt;. And the extended profile information can go into Xperience by Kentico storage. Then you can simply connect your XbyK to SSO &lt;a href="https://docs.kentico.com/developers-and-admins/development/registration-and-authentication/external-authentication" rel="noopener noreferrer"&gt;following the documentation&lt;/a&gt; - and the job is done!&lt;/p&gt;

&lt;h2&gt;
  
  
  Tracking and personalization migration
&lt;/h2&gt;

&lt;p&gt;The next step is tracking and personalization. In what cases should you consider migrating it? Basically, when you have a large base of known contacts and interaction history plays an important role in your website UX.&lt;/p&gt;

&lt;p&gt;For example, submitting a form should change the rest of the user journey based on that activity. This is also important for returning visitors, to provide a consistent UX for them.&lt;/p&gt;

&lt;p&gt;How do you migrate this data? Most systems allow some sort of export. At this stage I would recommend focusing on important domain activities, like form submissions, searches, specific downloads, and so on. You can skip migrating page visits, though: mapping page visits from the previous platform onto the new one, especially when your URLs might not match, will be a nightmare to manage.&lt;/p&gt;

&lt;p&gt;Hopefully, migrating tracking and contact data will also be supported by the migration tools in the near future.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94ilozlvhty9nlgz89ib.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94ilozlvhty9nlgz89ib.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, this is how migrating tracking data from Sitecore can be approached. In the Sitecore world, this data store is called xDB, and it contains contacts, sessions, and interactions. The information is exposed by the xConnect service, which talks to xDB and provides a public API for querying the data in JSON format; from there, it can be imported into XbyK with a SQL script.&lt;/p&gt;
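&lt;p&gt;To make the transform step concrete, here is a Python sketch that turns exported contact JSON into a SQL import script. The JSON field names and the target table are illustrative placeholders - check the actual xConnect response shape and your XbyK contact schema, and prefer parameterized queries over string building in real code:&lt;/p&gt;

```python
import json

# Illustrative only: real xConnect JSON shapes and the XbyK contact
# schema differ per project - treat all field and table names here
# as placeholders.
def contact_to_sql(contact):
    """Map one exported contact onto an INSERT for a staging table."""
    def q(value):  # naive quoting, good enough for a sketch
        return "'" + str(value).replace("'", "''") + "'"
    return (
        "INSERT INTO OM_Contact "
        "(ContactEmail, ContactFirstName, ContactLastName) VALUES "
        f"({q(contact['email'])}, {q(contact['first_name'])}, {q(contact['last_name'])});"
    )

exported = json.loads(
    '[{"email": "jane@example.com", "first_name": "Jane", "last_name": "Doe"}]'
)
print("\n".join(contact_to_sql(c) for c in exported))
```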

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fletbn79v06xryqwyt3sf.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fletbn79v06xryqwyt3sf.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Marketing automation migration
&lt;/h2&gt;

&lt;p&gt;Now let’s talk about marketing automation. If you haven’t heard of this yet – XbyK already comes with simple automation out of the box. If your requirements are as simple as sending emails upon form submission or user registration – go ahead and use it. But bear in mind that automation cannot be migrated automatically, and I don’t think it ever will be. Most of the time, “migrating” marketing automation simply means re-implementing it in XbyK from scratch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7g1gnu1tsdbbpacsfc43.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7g1gnu1tsdbbpacsfc43.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you require more sophisticated scenarios, integrate with Zapier. XbyK has &lt;a href="https://github.com/Kentico/xperience-by-kentico-zapier" rel="noopener noreferrer"&gt;a native integration package&lt;/a&gt; that gives you custom triggers, but the automation process itself needs to be configured in Zapier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foln0xm7c88sp7gtuom2o.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foln0xm7c88sp7gtuom2o.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A/B and MV tests migration
&lt;/h2&gt;

&lt;p&gt;Next: A/B tests. Oops, this is where our plan may fail a little, as XbyK doesn’t offer native A/B testing yet. The easiest recommendation is to integrate a third-party tool like VWO. Check &lt;a href="https://github.com/Kentico/xperience-by-kentico-tag-manager/blob/main/docs/VWO.md" rel="noopener noreferrer"&gt;the documentation&lt;/a&gt; to learn more about how to integrate it.&lt;/p&gt;

&lt;p&gt;However, in general, you should not migrate any running tests - that would be a strange idea. Finish and conclude the existing tests, migrate the code and content only for the winning variants, and start creating new tests directly in VWO.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1395w78nxer3xoth0uv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1395w78nxer3xoth0uv.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  E-commerce migration
&lt;/h2&gt;

&lt;p&gt;And last but not least – e-commerce. When you are migrating an existing Kentico Xperience 13 solution to Xperience by Kentico, &lt;a href="https://github.com/kentico/xperience-by-kentico-k13ecommerce" rel="noopener noreferrer"&gt;you can integrate it as a headless shop&lt;/a&gt;. For all other cases, it is better to integrate with Shopify until XbyK has native e-commerce.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h1cc1fq9piqc94s4zvc.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h1cc1fq9piqc94s4zvc.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s look at the Sitecore example again. The product in their ecosystem is called Sitecore Commerce, and it contains product pages and SKUs. It also covers many other aspects of e-shops, like discounts, taxes, and delivery, but those will be hard to migrate.&lt;/p&gt;

&lt;p&gt;What is possible to migrate is product information. Pages can be migrated via the migration tool like any other content, and SKUs can be exported in a format compatible with Shopify. If everything was done right, SKUs and product pages should match by identifier once the integration is installed.&lt;/p&gt;
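&lt;p&gt;The SKU export step can be sketched in Python like this. The column names follow the Shopify product CSV template, while the input dictionaries stand in for whatever your old commerce database exports - treat those field names as placeholders:&lt;/p&gt;

```python
import csv
import io

# A minimal Shopify-style product CSV (a subset of the template's columns).
def skus_to_shopify_csv(skus):
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["Handle", "Title", "Variant SKU", "Variant Price"]
    )
    writer.writeheader()
    for sku in skus:
        writer.writerow({
            "Handle": sku["url_slug"],       # shared identifier: lets the
            "Title": sku["name"],            # migrated XbyK page and the
            "Variant SKU": sku["sku_code"],  # Shopify product match up
            "Variant Price": sku["price"],
        })
    return buf.getvalue()

print(skus_to_shopify_csv(
    [{"url_slug": "red-mug", "name": "Red Mug",
      "sku_code": "MUG-01", "price": 9.90}]
))
```

&lt;p&gt;The key design choice is the shared identifier (here the URL slug used as the Shopify handle), which is what allows pages and SKUs to pair up after both imports complete.&lt;/p&gt;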

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpws50z5wt72e0s0zz0eg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpws50z5wt72e0s0zz0eg.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Hopefully, this top-level guide gives you some useful ideas for your future migrations to Xperience by Kentico. To conclude, let me say it one more time: a DXP migration is not going to be as simple as a content migration. But don’t panic! Remember the key building blocks, and happy migrations, everyone!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Author: Dmitry Bastron&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>kentico</category>
      <category>xbyk</category>
    </item>
    <item>
      <title>5 Key Software Architecture Principles for Starting Your Next Project</title>
      <dc:creator>Byteminds Agency</dc:creator>
      <pubDate>Thu, 17 Oct 2024 09:57:41 +0000</pubDate>
      <link>https://dev.to/byteminds_agency/5-key-software-architecture-principles-for-starting-your-next-project-57og</link>
      <guid>https://dev.to/byteminds_agency/5-key-software-architecture-principles-for-starting-your-next-project-57og</guid>
      <description>&lt;p&gt;How deeply should you determine the architecture of a project at the start? What criteria should you use? What should you focus on from the beginning?&lt;/p&gt;

&lt;p&gt;In this article, we will touch on where to start designing the architecture and how to make sure that you don’t have to redo it during the process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But first, let's answer the question - why?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For a business customer, it might seem that deeply considering the architecture in the early stages isn’t very important. You can often hear something like, "Let's quickly build something functional, and then we can make it pretty with diagrams."&lt;/p&gt;

&lt;p&gt;But this is actually a misconception. Of course, even without a formal architecture, developers will come up with something. For example, they might take a ready-made architecture from a textbook or rely on the one they have experience with. And, in the end, something will get "built". But, based on our experience, most often you’ll get something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda4mhavdi0ugyt40huyd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda4mhavdi0ugyt40huyd.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It seems like you’ve built a structure. And it has all the features - windows, doors, balconies, a roof. And yes, you can live in it. But how long will it stand? It’s hard to say. It’s difficult to assess its quality and whether it meets the needs of those who will live in it. Additionally, due to architectural inaccuracies, the construction may be stopped at any moment because continuing would be too dangerous or costly. You might need to demolish it and start over.&lt;/p&gt;

&lt;p&gt;That’s why planning software architecture from the start is critical. Today, we will discuss a set of rules that may not seem advanced but will help you lay a solid foundation for your projects. Inspired by &lt;a href="https://www.principles.com/" rel="noopener noreferrer"&gt;Ray Dalio’s book “Principles”&lt;/a&gt;, I’ve developed basic principles for software architecture that I actively use in my work when starting a new project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Principle 1: Architecture is subordinate to product and business goals
&lt;/h2&gt;

&lt;p&gt;Not the other way around.&lt;/p&gt;

&lt;p&gt;What happens if this principle is violated? When we start prioritizing the internal architecture over the needs of the people who will use the product, we forget why we’re creating it. We become "&lt;a href="https://en.wikipedia.org/wiki/Architecture_astronaut" rel="noopener noreferrer"&gt;architectural astronauts&lt;/a&gt;", creating overly complex solutions for non-existent problems, wasting the team's time and the business’ money.&lt;/p&gt;

&lt;p&gt;It’s less of a problem in large companies where they can afford costly experiments. But for small companies, this kind of resource misallocation can be disastrous. For example, instead of building a simple blog, a developer might get carried away and implement microservices, Kubernetes, and all the patterns they know - building a "mini-spaceship" that never takes off.&lt;/p&gt;

&lt;p&gt;According to management textbooks, an ideal workflow starts with a company strategy, followed by a direction strategy, and then a product strategy. The architecture should be derived from the product strategy, and the code is written based on the architecture. I agree with this, because lower-level decisions should stem from higher-level strategies. But in reality, many projects start with only a vague vision and no clear product strategy. It may exist in theory, but no one has documented it.&lt;/p&gt;

&lt;p&gt;Software architecture is primarily about understanding the goals - who needs it and why. The architect must understand the product’s economics: how much revenue it should generate, how much the owners are willing to invest, and the time frame for launching it. Understanding the project's future is equally important.&lt;/p&gt;

&lt;p&gt;There are tools available that help you see and understand how your product fits into the business and operates on different levels. We use some of these tools during the analytical stages. All of them provide a vision of the product and help clarify why we are writing code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fon4iuiu46fcy5clm095m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fon4iuiu46fcy5clm095m.jpg" alt="Image description" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Principle 2: Architecture defines systems, their boundaries, and connections
&lt;/h2&gt;

&lt;p&gt;Seems obvious, right? But let's dig a bit deeper.&lt;/p&gt;

&lt;p&gt;Imagine looking at a plant section under a microscope. You’d see large and small cells.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2c9kdce4asukzpcvol8o.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2c9kdce4asukzpcvol8o.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The largest concentration of tissue is often at the boundaries between the cells. The same applies to systems in software architecture—what happens at the boundaries and connections is often the most critical part. What’s inside is shaped by these external connections.&lt;/p&gt;

&lt;p&gt;A high-quality definition of subsystem boundaries, together with an understanding of the types of connections between them, is the basis of architecture.&lt;/p&gt;

&lt;p&gt;If this principle is ignored and boundaries and connections are not properly defined, disappointment follows. A client comes with an approximate vision, and the team jumps into iterative work without carefully defining the key system boundaries. For example, you might find out halfway through development that a crucial product element requires integration with third-party systems in an unsupported way, leading to costly redesigns.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Example: while developing a personal account for an online store, it turns out that the ability to communicate with personal managers is critical to the business, but the CRM doesn’t support real-time messaging. Imagine discovering this after spending 50% of the budget.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s why boundaries are essential. To address this, we often use a variation of the System Context diagram from the C4 model during the early stages of work. Here’s an example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6s9et5e67nl7t07dzg1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6s9et5e67nl7t07dzg1.jpg" alt="Image description" width="800" height="609"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's say we have a system - let’s call it a "delivery service". There are people (clients, administrators) who use it and other systems with which it interacts (delivery services). We must register and consider all these connections, capturing the context of the entire enterprise. &lt;/p&gt;

&lt;h2&gt;
  
  
  Principle 3: Architecture works with two constraints: hardware and people
&lt;/h2&gt;

&lt;p&gt;Architecture is always about constraints. Two fundamental constraints prevent us from waving a magic wand and suddenly getting a working result.&lt;/p&gt;

&lt;p&gt;In our case, the constraints are hardware and people. Architecture must provide solutions &lt;strong&gt;for both&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hardware constraints&lt;/strong&gt; are about understanding the available equipment. We’ve all encountered applications that don’t run smoothly on powerful devices due to inadequate optimization (hello, Notion!). &lt;/p&gt;

&lt;p&gt;As architects, we need to consider where our software will run, what devices and operating systems it will use, how much data storage is required, and network performance. And since we live in the physical world, everything breaks eventually.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa12cx3zk5nvdpmhr2gi7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa12cx3zk5nvdpmhr2gi7.jpg" alt="Image description" width="780" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When starting a new project, we try to work with the most basic numbers.&lt;/p&gt;

&lt;p&gt;Examples of fundamental constraints:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zhrbuj0lxh40ifcsrk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zhrbuj0lxh40ifcsrk5.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/hellerbarde/2843375" rel="noopener noreferrer"&gt;https://gist.github.com/hellerbarde/2843375&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;See also: &lt;a href="https://vercel.com/blog/latency-numbers-every-web-developer-should-know" rel="noopener noreferrer"&gt;https://vercel.com/blog/latency-numbers-every-web-developer-should-know&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Understanding product requirements and matching them to these constraints often begins with a simple estimate of key parameters based on the product strategy.&lt;/p&gt;

&lt;p&gt;For example, for one sports project, we estimated the number of coaches, teams, and players, and how often they would use the platform. From this, we determined the necessary hosting setup and its cost. It’s also important to consider data volume (e.g., video, images), which can lead to cost and storage limitations or hit pricing tiers with hosting providers.&lt;/p&gt;
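&lt;p&gt;A back-of-envelope estimate of this kind can be as simple as the arithmetic below; all of the numbers here are invented for illustration, not taken from the actual project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Rough capacity estimate for a hypothetical sports platform
const int teams = 500;
const int playersPerTeam = 20;
const int videosPerPlayerPerMonth = 4;
const double avgVideoMb = 150.0;

double users = teams * (playersPerTeam + 1);                    // players + one coach per team
double videosPerMonth = teams * playersPerTeam * videosPerPlayerPerMonth;
double storageGbPerMonth = videosPerMonth * avgVideoMb / 1024.0; // MB -&amp;gt; GB

Console.WriteLine($"{users} users, ~{storageGbPerMonth:F0} GB of video per month");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Even this crude math is usually enough to pick a hosting tier and spot whether video storage, not compute, will dominate the bill.&lt;/p&gt;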

&lt;blockquote&gt;
&lt;p&gt;Sidenote: I am not suggesting &lt;a href="https://wiki.c2.com/?PrematureOptimization" rel="noopener noreferrer"&gt;premature optimization&lt;/a&gt; - trying to squeeze out maximum resource efficiency early on. Instead, I’m advising that you understand the actual business requirements and ensure your system performs adequately for them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;The second constraint is people.&lt;/strong&gt; They have limited attention spans, limited knowledge,  and different ideas about how things should work. As architects, we must always keep in mind who the architecture is for.&lt;/p&gt;

&lt;p&gt;A developer can’t keep the entire project in mind at once. Most of their time is spent reading code, not writing it.  In this sense, the architecture serves as a map, helping them navigate through the product and understand what changes need to be made and how those changes impact the system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmws7he0p1zntxmgyixi.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmws7he0p1zntxmgyixi.gif" alt="Image description" width="512" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Architecture should work for the people who will interact with it. Understanding their skills and needs is crucial. If they aren’t available now, you’ll need a plan to onboard them—whether through hiring, contractors, or another approach.&lt;/p&gt;

&lt;p&gt;Consider who will be working with your architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Development team:&lt;/strong&gt; sometimes we know who will develop the project’s architecture, but other times, especially in large companies with tenders, it’s less clear. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal customers and controllers:&lt;/strong&gt; these may include the CTO or an architectural committee ensuring adherence to the company's broader IT strategy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;External regulatory bodies:&lt;/strong&gt; in industries like finance or healthcare, these bodies will review products before launch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Support and development teams:&lt;/strong&gt; often, different teams handle product development and ongoing support.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Depending on their knowledge and skills, you’ll need to tailor your architectural approach - simplifying where necessary or investing more time in training for complex solutions.&lt;/p&gt;

&lt;p&gt;When thinking about “architecture for people,” it is important to consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Domain separation: Splitting a large system into domain areas (sales, order processing, product catalog, etc.) can improve team efficiency. This avoids overlap and conflict. In domain-driven design, this is known as "bounded context."&lt;/li&gt;
&lt;li&gt;Key architectural approaches: Decide whether to use a modular monolith or microservices. If it’s the latter, describe the interaction patterns.&lt;/li&gt;
&lt;/ul&gt;
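&lt;p&gt;As a rough sketch of the first point (ours, not from the article), bounded contexts can be reflected directly in the code by giving each domain its own namespace with its own models, so teams don’t share or fight over one “god” type. The names below are hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sales context: its own minimal view of a product
namespace Sales
{
    public record Product(string Sku, decimal Price);
}

// Catalog context: a different "Product" shaped for different needs
namespace Catalog
{
    public record Product(string Sku, string Title, string Description);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Each context can evolve its own &lt;code&gt;Product&lt;/code&gt; independently; anything crossing the boundary goes through an explicit contract instead of a shared class.&lt;/p&gt;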

&lt;p&gt;If you neglect either of these architectural aspects—hardware or people—you’ll likely end up with a slow, fragile, unreliable system, or an over-complicated product where people struggle to understand and work with the code.&lt;/p&gt;

&lt;p&gt;There are many approaches and engineering practices in software architecture textbooks, but if we look closely, we will notice that each one aligns more closely with one of these aspects:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnvbllwtoi2n90w6sr3v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnvbllwtoi2n90w6sr3v.jpg" alt="Image description" width="800" height="771"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your task is to select those approaches that best fit your team and product, and then clearly communicate them to your team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Principle 4: Architecture must allow the project to grow and evolve
&lt;/h2&gt;

&lt;p&gt;Software architecture doesn’t just support the current product; it should enable growth and change in the future. The question isn’t "How do we design it so we never have to redo it?" but rather, "How do we design it so we don’t have to redo it in the next few months?"&lt;/p&gt;

&lt;p&gt;Think of software architecture like a landscape design. Code will grow and expand, and some parts will fade or become obsolete. Like gardeners, we must organize and maintain it. We can, for example, transplant a "tree" (a block of code) or rearrange "flowerbeds" (modules) in our project. Unlike construction architecture, we have more flexibility in software design.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbr88bazkh75g247swkg2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbr88bazkh75g247swkg2.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture should focus primarily on how the project is likely to change in the future.&lt;/p&gt;

&lt;p&gt;We must plan for growth—load increases, new features, additional users, new interfaces, or integrations. After reviewing the product roadmap, it’s useful to highlight the areas likely to change and focus on those.&lt;/p&gt;

&lt;p&gt;As software architects, we must prepare the "beds" for future growth and always keep in mind what might grow within the product.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5cvtzumorvhb7j2j5ap.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5cvtzumorvhb7j2j5ap.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Principle 5: Code and architecture should be self-describing
&lt;/h2&gt;

&lt;p&gt;In short: sometimes it is easier to write code than to draw a diagram.&lt;/p&gt;

&lt;p&gt;Diagrams become outdated as the code evolves, and unless they’re updated regularly, they become useless.&lt;/p&gt;

&lt;p&gt;Situations when code is better:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A code example of implementing a typical feature can better demonstrate to the team how different modules interact than a diagram. Walk the team through the code and use it for onboarding new developers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Code examples of common patterns—such as microservice calls, logging, or authentication—can provide clarity. If there’s no time for a full feature, collecting typical patterns into one guide is helpful.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Auto-generation of code documentation: API documentation (requests, parameters, data formats) can be generated automatically. We often write server code with request and response objects, and generate an OpenAPI/Swagger specification (e.g., via Swashbuckle in .NET). There are rarer but effective examples like generating architecture diagrams based on code for infrastructure (e.g., K8S).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
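&lt;p&gt;As a sketch of the auto-generation point: in an ASP.NET Core project, wiring up Swashbuckle takes only a few lines. These are the standard defaults of the Swashbuckle.AspNetCore package, shown as an assumed setup rather than code from a specific project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Program.cs - minimal OpenAPI/Swagger generation with Swashbuckle
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen(); // scans endpoints and builds the spec

var app = builder.Build();
app.UseSwagger();   // serves /swagger/v1/swagger.json
app.UseSwaggerUI(); // serves the interactive docs page

app.MapGet("/ping", () =&amp;gt; "pong"); // shows up in the generated spec
app.Run();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The generated specification stays in sync with the code automatically, which is exactly the property hand-drawn diagrams lack.&lt;/p&gt;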

&lt;p&gt;Additionally, tools can help developers navigate the system in the absence of detailed architecture docs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tools like NDepend can show how modules are related. The advantage is that the diagram is not just a static picture but an interactive map that lets you zoom into individual modules.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j4yyrbpa01wv5y3zcjc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j4yyrbpa01wv5y3zcjc.jpg" alt="Image description" width="672" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generated ER diagrams for databases can also be visual, especially with some manual grouping afterward.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh48v7t7ssp5oktywir4t.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh48v7t7ssp5oktywir4t.jpg" alt="Image description" width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For large projects, these tools will only help with parts of the application. But even with "spaghetti architecture," at least the map will be accurate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If you follow these principles, your architecture won’t be arbitrary, overcomplicated, or underdeveloped. You’ll be able to create a solid architecture that’s appropriate for your project and its current stage, assuming, of course, that you have the time and skills!&lt;/p&gt;

&lt;p&gt;Good luck with your architecture!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Author - Andrey Stepanov, CTO ByteMinds&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Assessing Algorithm Complexity in C#: Memory and Time Examples</title>
      <dc:creator>Byteminds Agency</dc:creator>
      <pubDate>Thu, 26 Sep 2024 12:35:09 +0000</pubDate>
      <link>https://dev.to/byteminds_agency/assessing-algorithm-complexity-in-c-memory-and-time-examples-15k9</link>
      <guid>https://dev.to/byteminds_agency/assessing-algorithm-complexity-in-c-memory-and-time-examples-15k9</guid>
      <description>&lt;p&gt;Today, we will talk about assessing algorithm complexity and clearly demonstrate how this complexity affects the performance of the code.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/byteminds_agency/how-to-properly-measure-code-speed-in-net-158o"&gt;last article&lt;/a&gt;, we discussed code benchmarking and how to use it to evaluate the performance of code in .NET. In this article, we will focus on assessing the complexity of algorithms. To make everything clear, we will assess the algorithms used to solve a common interview task at Google.&lt;/p&gt;

&lt;p&gt;There are many such tasks on platforms like LeetCode, CodeWars, and others. Their value lies not in learning various sorting algorithms that you may never write in practice, but in understanding typical problems you may encounter during software development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Algorithm Complexity Assessment
&lt;/h2&gt;

&lt;p&gt;Why assess algorithm complexity, and what methods exist?&lt;/p&gt;

&lt;p&gt;Understanding algorithm complexity is important because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Without this knowledge, you can’t tell sub-optimal code from optimal code.&lt;/li&gt;
&lt;li&gt;Every medium or large project will eventually operate with a large amount of data. It is important that your algorithms take this into account and do not become a time bomb.&lt;/li&gt;
&lt;li&gt;Lack of understanding increases the risk of writing low-performance code.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;More often, the focus is on assessing the algorithm by time (time complexity) - how much time it will take to execute. The execution time depends on the number of elementary operations that the computer performs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For each algorithm, several complexity assessments can be made:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Big O (O(f))&lt;/strong&gt; - allows you to assess the upper bound on the complexity of algorithms. It describes how the algorithm’s running time grows as the size of its input grows. In simple terms, this is the maximum execution time of the algorithm when working with large amounts of data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Omega (Ω(f))&lt;/strong&gt; - allows you to estimate the lower bound of complexity - how long the algorithm will take to run in the best case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theta (Θ(f))&lt;/strong&gt; - allows you to get a “dense” complexity estimate, that is, where the speed of operation in the worst and best cases will be proportional to one function. This is not applicable to all algorithms.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;IT companies focus on Big O because it shows how performance scales with more input data. The other types aren’t used as often.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The function that limits the complexity is indicated in brackets after O. In this case, n is the size of the input data. &lt;/p&gt;

&lt;p&gt;For example, O(n) means complexity grows linearly. In this case, the execution time of the algorithm increases in direct proportion to the size of the input data.&lt;/p&gt;

&lt;p&gt;If you imagine a graph of common algorithm complexities, it will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4y1a78oxf6catpig1re.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4y1a78oxf6catpig1re.jpg" alt="Graph of prevalence of algorithm complexity"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we divide the complexity into zones, then the complexities of the O(log n), O(1) or O(C) type can be classified as the "Excellent" zone. Such algorithms, regardless of the volume of data, will be executed very quickly - almost instantly.&lt;/p&gt;

&lt;p&gt;Algorithms with O(n) complexity can be classified as the "Average" zone - their complexity grows predictably and linearly. For example, if your algorithm processes 100 elements in 10 seconds, it will process 1000 in about 100 seconds. Not the best result, but predictable.&lt;/p&gt;

&lt;p&gt;Algorithms from the red zone with complexities of O(n^2) and higher are difficult to classify as high-performance. But! Here, everything strongly depends on the volume of input data. If you are sure that you will always have a small amount of data (e.g., 100 elements), and it will be processed in an acceptable time for you, then such algorithms can also be used. But if you are not sure about the constancy of the data volumes (10,000 elements may come instead of 100), it is better to think about optimising the algorithms.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It’s important to note that the time complexity assessment is a theoretical assessment. It does not take into account internal optimizations and the processor cache; in reality, the picture may be different.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Complexity Assessment by Memory
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;It's important to assess algorithm complexity in terms of memory too, not just time.&lt;/strong&gt; This is often forgotten when studying the topic.&lt;/p&gt;

&lt;p&gt;For example, to speed up calculations, you can create some intermediate data structure such as an array or stack to cache the results. This will lead to additional memory costs but can significantly speed up calculations.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Memory complexity is also called space complexity and is estimated using the same notation as for time — big O. For example, memory complexity O(n^2) means that in the worst case, the algorithm will not need more memory than proportional to n^2.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When assessing the complexity of algorithms by memory, a simplified model known as the RAM machine is used. In this model, reading or writing to any memory cell is treated as a single operation. This makes the time for both computational and memory operations equal, which simplifies analysis. It closely mirrors working with RAM but doesn’t account for processor registers, disk operations, or garbage collection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Little Practice: Rules of Thumb for Calculating Complexity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;We’ll provide  examples in C#, although pseudocode would suffice. We trust these examples will still be clear and easy to follow.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 1:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's start with a simple algorithm for assigning a variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private void Alg1(int[] data, int target)
{
    var a = data[target];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What is its complexity in time and memory?&lt;/p&gt;

&lt;p&gt;The unknown size of the &lt;strong&gt;data&lt;/strong&gt; array might be misleading, but it’s incorrect to take it into account when assessing the complexity of the internal algorithm.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Rule 1: External data is not taken into account in the complexity of the algorithm.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It turns out that our algorithm consists of only one line:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;var a = data[target];&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Access to an array element by index is a known operation with complexity O(1) or O(C). Accordingly, the entire algorithm will take us O(1) in time.&lt;/p&gt;

&lt;p&gt;Additional memory is allocated only for one variable. This means that the amount of data that we will transfer (doesn't matter 1,000 or 10,000) will not affect the final result. Accordingly, our memory complexity remains O(1) or O(C). Such in-place algorithms may use extra memory, but its size isn’t tied to the input data volume.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To simplify, we’ll write O(C) instead of O(1), as C in this case is a constant. Whether it’s 1, 2 or even 100 - for modern computers this number is not important, since both 1 and 100 operations are performed at almost the same time.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Example 2:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's consider the second algorithm, which is very similar to the first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private void Alg2(int[] data, int target)
{
  var a = data[target];
  var b = data[target + 1];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Does the input array size affect the number of operations in it? No.&lt;/p&gt;

&lt;p&gt;And how about the allocated memory? Also no.&lt;/p&gt;

&lt;p&gt;The time complexity of this algorithm could be estimated as O(2*C) — since we perform twice as many operations as in the previous example, 2 assignments instead of 1. But we have a rule for this too:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Rule 2: Omit constant factors if they do not affect the result dramatically.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If we take this rule into account, the complexity of this algorithm will be the same as in the first example — O(C) in time and O(C) in memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 3:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We will add to our algorithm a loop for processing data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private int Alg3(int[] data)
{
  var sum = 0;
  for (int i = 0; i &amp;lt; data.Length; i++)
  {
    sum += data[i];
   }

  return sum;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we can see, the number of operations in the loop directly depends on the amount of input data: more elements in data - more processing cycles to reach the final result.&lt;/p&gt;

&lt;p&gt;At first glance, if we account for each line of code, we’d get something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private int Alg3(int[] data)
{
  var sum = 0; // O(C)
  for (int i = 0; i &amp;lt; data.Length; i++) // O(n)
  {
     sum += data[i]; // O(C)
  }
  return sum;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then the final complexity of the algorithm will be O(C)+O(n). But here again, a new rule intervenes:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Rule 3: Omit evaluation elements that are less than the maximum complexity of the algorithm.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let us explain: if you have O(C)+O(n), the resulting complexity will be O(n) since O(n) will always grow faster than O(C).&lt;/p&gt;

&lt;p&gt;Another example is O(n)+O(n^2). With such complexity, n^2 always grows faster than n, which means we discard O(n) and only O(n^2) remains.&lt;/p&gt;

&lt;p&gt;So, the complexity of our third example is O(n). In memory, it remains unchanged, it is O(C).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 4:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We calculate the sum of all possible pairs of values from the array:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private int Alg4(int[] data)
{
  var sum = 0;
  for (int i = 0; i &amp;lt; data.Length; i++)
  {
    for (int j = 0; j &amp;lt; data.Length; j++)
    {
      sum += data[i]*data[j];
    }
  }
  return sum;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And to process it, we need two loops. Both of these loops will depend on the dimensionality of the input data.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Rule 4: Nested complexities are multiplied.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The complexity of the outer loop is O(n), and the inner loop is also O(n). According to the rule, these two complexities must be multiplied. As a result, the total complexity of the entire algorithm becomes O(n^2). In terms of memory, without changes - it is O(C).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A tricky example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's feed a two-dimensional array to the input and calculate the sum of the values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private int Alg4_tricky_case(int[][] data)
{
   var sum = 0;
   for (int i = 0; i &amp;lt; data.Length; i++)
   {
       for (int j = 0; j &amp;lt; data[i].Length; j++)
       {
         sum += data[i][j];
       }
   }

  return sum;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again we see nested loops - and if the input array has N x M elements, then the complexity is O(N * M), not O(n^2). The running time is proportional to N * M, which is linear in the total number of elements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 5:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And what do we see here? A loop - the complexity is already known to us - O(n). But inside, the Alg4() function from the previous example is called:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private int Alg5(int[] data)
{
  var sum = 0;
  for (int i = 0; i &amp;lt; data.Length; i++)
  {
     sum += Alg4(data);
  }
  return sum;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we recall its complexity of O(n^2), as well as Rule 4, we get that the complexity of this algorithm is O(n^3), for all its visual minimalism. The memory complexity remains unchanged.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Rule 5: Include in the assessment of the algorithm's overall complexity the assessment of all nested function calls.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Understanding &lt;a href="https://c-sharp-snippets.blogspot.com/2010/03/runtime-complexity-of-net-generic.html" rel="noopener noreferrer"&gt;the complexity of syntactic sugar methods like LINQ, basic collections and data types&lt;/a&gt; is crucial for predicting behaviour with larger data sets. Without this, you risk high algorithm complexity, which can lead to performance issues as data grows.&lt;/p&gt;

&lt;p&gt;Here’s an example of a minimalistic algorithm that looks good and compact (this is by no means intended as reference code), but can become a time bomb when working with large volumes of data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private List&amp;lt;int&amp;gt; Alg6(int[] data)
{
    List&amp;lt;int&amp;gt; dups = new List&amp;lt;int&amp;gt;();
    for (var i = 0; i &amp;lt; data.Length; i++)
    {
      var currentItem = data[i];
      var newArr = data.Skip(i + 1).ToArray();
      var duplicates = newArr.Where(x =&amp;gt; x == currentItem &amp;amp;&amp;amp; newArr.Contains(x));
      dups.AddRange(duplicates);
    }

  return dups;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What do we see here? Loop = O(n), Where = O(n), Contains = O(n), since newArr is an array.&lt;/p&gt;

&lt;p&gt;So, the time complexity of this algorithm is O(n^3).&lt;/p&gt;

&lt;p&gt;Additionally, ToArray() allocates extra memory to create a copy of the array at each iteration,  meaning the memory complexity is O(n).&lt;/p&gt;
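&lt;p&gt;For comparison, one way to defuse this particular time bomb is to track already-seen values in a &lt;code&gt;HashSet&lt;/code&gt;, where adding and lookup are O(C) on average. This variant is our sketch, not code from the article; it reduces the time complexity to roughly O(n) at the cost of O(n) extra memory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private List&amp;lt;int&amp;gt; Alg6Fast(int[] data)
{
    var seen = new HashSet&amp;lt;int&amp;gt;(); // values already passed, O(n) memory
    var dups = new List&amp;lt;int&amp;gt;();
    foreach (var item in data)
    {
        // HashSet.Add returns false if the value is already present,
        // so every second and later occurrence is recorded as a duplicate
        if (!seen.Add(item))
            dups.Add(item);
    }
    return dups;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is the same memory-for-time trade-off discussed above: one extra O(n) structure removes two nested O(n) scans.&lt;/p&gt;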

&lt;h2&gt;
  
  
  Google's Task
&lt;/h2&gt;

&lt;p&gt;For our final assessment, let's consider a task commonly given in interviews at Google.&lt;/p&gt;

&lt;p&gt;In short, the goal of the algorithm is to find any two numbers in a sorted array that sum up to the target number.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution 1: full pass through the array&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static int[] FindPairWithFullWalkthrough(int[] data, int target)
{
  for (int i = 0; i &amp;lt; data.Length; i++)
  {
    for (int j=i+1; j &amp;lt; data.Length; j++)
    {
       if (data[i] + data[j] == target)
         return new[] { data[i], data[j] };
     }
  }

  return new int[0];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Time complexity:&lt;/strong&gt; O(n^2)&lt;br&gt;
&lt;strong&gt;Memory complexity:&lt;/strong&gt; O(C)&lt;/p&gt;

&lt;p&gt;This is a straightforward solution. It’s not the most optimal, as the time complexity increases quickly with the number of elements, but we don’t consume much additional memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution 2: use HashSet&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static int[] FindPairUsingHashSet(int[] data, int target)
{
  HashSet&amp;lt;int&amp;gt; set = new HashSet&amp;lt;int&amp;gt;();
  for (int i = 0; i &amp;lt; data.Length; i++)
  {
    int numberToFind = target - data[i];
    if (set.Contains(numberToFind))
      return new [] { data[i], numberToFind };
    set.Add(data[i]);
  }
  return new int[0];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We go through the array and add the elements we’ve already checked to the HashSet. If the HashSet contains the missing element needed for the sum, then we’re all set and can return the result. Adding and searching in the HashSet is done in O(C) time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time complexity:&lt;/strong&gt; O(n)&lt;br&gt;
&lt;strong&gt;Memory complexity:&lt;/strong&gt; O(n)&lt;/p&gt;

&lt;p&gt;This is just an example of how you can improve performance by allocating additional memory for intermediate structures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution 3: use binary search&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static int[] FindPairUsingBinarySearch(int[] data, int target)
{
  for (int i = 0; i &amp;lt; data.Length; i++)
  {
      int numberToFind = target - data[i];
      int left = i + 1;
      int right = data.Length - 1;
      while (left &amp;lt;= right)
      {
        int mid = left + (right - left) / 2;
        if (data[mid] == numberToFind)
        {
          return new[] { data[i], data[mid] };
        }

        if (numberToFind &amp;lt; data[mid])
        {
          right = mid - 1;
        }
        else
        {
          left = mid + 1;
        }
      }
  }
  return new int[0];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The binary search algorithm has a well-known complexity of O(log(n)). The O(n) factor comes from the outer for loop, and everything inside the while loop is the binary search itself. According to Rule 4, the complexities are multiplied.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time complexity:&lt;/strong&gt; O(n*log(n))&lt;br&gt;
&lt;strong&gt;Memory complexity:&lt;/strong&gt; O(C)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution 4: use the two-pointer method&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static int[] FindPairUsingTwoPointersMethod(int[] data, int target)
{
  int left = 0;
  int right = data.Length - 1;
  while (left &amp;lt; right)
  {
    int sum = data[left] + data[right];
    if (sum == target) return new[] { data[left], data[right] };
    if (sum &amp;lt; target)
    {
      left++;
    }
    else
    {
      right--;
    }
  }

  return new int[0];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We move the left and right pointers toward the centre until they converge or until a pair of values that suits us is found.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time complexity:&lt;/strong&gt; O(n)&lt;br&gt;
&lt;strong&gt;Memory complexity:&lt;/strong&gt; O(C)&lt;/p&gt;

&lt;p&gt;This is the most optimal solution, as it doesn't use additional memory and performs the fewest operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benchmarking solutions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, knowing the complexities of all four solution options, let's benchmark this code and see how the algorithms will behave on different data sets. The information from our previous article will guide us in this process. The results are as follows: &lt;/p&gt;
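&lt;p&gt;For reference, a minimal BenchmarkDotNet harness for such a comparison might look like the sketch below. The data sizes and setup are assumed for illustration, the remaining solutions are added the same way, and the FindPair* methods from above are assumed to be in scope:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // also reports allocations and GC, useful for space complexity
public class FindPairBenchmarks
{
    [Params(10, 1_000, 10_000)] // input sizes to compare
    public int N;

    private int[] _data = Array.Empty&amp;lt;int&amp;gt;();
    private int _target;

    [GlobalSetup]
    public void Setup()
    {
        _data = Enumerable.Range(1, N).ToArray(); // sorted input
        _target = 2 * N - 1; // the matching pair sits near the end
    }

    [Benchmark(Baseline = true)]
    public int[] FullWalkthrough() =&amp;gt; FindPairWithFullWalkthrough(_data, _target);

    // add one [Benchmark] method per remaining solution in the same way
}

// Entry point: BenchmarkRunner.Run&amp;lt;FindPairBenchmarks&amp;gt;();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;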

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehejyi9vvangpnl6bqqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehejyi9vvangpnl6bqqe.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What do we see here?&lt;/p&gt;

&lt;p&gt;As the baseline, we use the direct pass through the array, &lt;strong&gt;FindPairWithFullWalkthrough&lt;/strong&gt;. On 10 elements, it runs in about 20 nanoseconds on average, ranking second in performance.&lt;/p&gt;

&lt;p&gt;Only the two-pointer solution, &lt;strong&gt;FindPairUsingTwoPointersMethod&lt;/strong&gt;, runs faster on small data sets.&lt;/p&gt;

&lt;p&gt;The option with &lt;strong&gt;HashSet&lt;/strong&gt; took 8 times longer to process small data sets and required additional memory allocation, which would eventually need to be managed by the Garbage Collector.&lt;/p&gt;

&lt;p&gt;On 1,000 elements, the full pass-through solution (&lt;strong&gt;FindPairWithFullWalkthrough&lt;/strong&gt;) started to lag noticeably behind the other algorithms. The reason is its O(n^2) complexity, which grows much faster than the others.&lt;/p&gt;

&lt;p&gt;On 10,000 elements, the full-pass algorithm took 9.7 seconds to complete, while the others finished in 0.1 seconds or less. Our most optimal solution found a result in just 3 milliseconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why did binary search outperform HashSet?&lt;/strong&gt; In theory, O(n * log(n)) should be slower than O(n). The reason is that on real machines, unlike theoretical ones, memory allocation and deallocation don’t happen instantly: Garbage Collection is triggered periodically. This is confirmed by the high standard deviation (StdDev) values in the HashSet benchmark.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We’ve learned how to assess the complexity of algorithms and how to use &lt;strong&gt;BenchmarkDotNet&lt;/strong&gt; to trace the relationship between algorithm complexity and the execution time of the code. This will allow you to roughly estimate whether your code is efficient or not, even before running benchmarks.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Author - Anton Vorotyncev&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>8 Non-Obvious Vulnerabilities in E-Commerce Projects Built with NextJS</title>
      <dc:creator>Byteminds Agency</dc:creator>
      <pubDate>Thu, 08 Aug 2024 11:52:43 +0000</pubDate>
      <link>https://dev.to/byteminds_agency/8-non-obvious-vulnerabilities-in-e-commerce-projects-built-with-nextjs-2c2l</link>
      <guid>https://dev.to/byteminds_agency/8-non-obvious-vulnerabilities-in-e-commerce-projects-built-with-nextjs-2c2l</guid>
      <description>&lt;p&gt;Ensuring security during development is crucial, especially as online and e-commerce services become more complex. To mitigate risks, we train developers in web security basics and regularly perform third-party penetration testing before launch.&lt;/p&gt;

&lt;p&gt;In this article, we will talk about security using the example of a multilingual e-commerce service – an online store with a buyer account. The project is built on NextJS, meaning part of the backend is written in JavaScript by front-end developers. This architecture requires extra vigilance regarding security, as we'll explore in the following cases.&lt;/p&gt;

&lt;p&gt;Penetration tests are usually performed before launching a service. While there are &lt;a href="https://owasp.org/www-community/vulnerabilities/" rel="noopener noreferrer"&gt;well-known vulnerabilities&lt;/a&gt; that are commonly checked, effective pentests go beyond standard methodologies to uncover unique vulnerabilities.&lt;/p&gt;

&lt;p&gt;As mentioned above, the service we are talking about is built on NextJS with the following architecture: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Page rendering occurs on NextJS&lt;/li&gt;
&lt;li&gt;Content and user data are retrieved via Umbraco’s API (which recently released its &lt;a href="https://docs.umbraco.com/umbraco-cms/reference/content-delivery-api" rel="noopener noreferrer"&gt;Content Delivery API&lt;/a&gt; for headless solutions) &lt;/li&gt;
&lt;li&gt;User sessions are created in Umbraco and then used in NextJS&lt;/li&gt;
&lt;li&gt;E-commerce functionality is handled by Shopify, with which the NextJS site interacts via API &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This architecture distributes security responsibilities between backend and frontend developers, potentially leading to vulnerabilities on both sides.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq5uldb68srgn3ndyt0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq5uldb68srgn3ndyt0d.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem #1 – Data Leakage When Passing Filter Values to the API
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Feature:&lt;/strong&gt; In the personal account, the user can view their current subscriptions. The data is retrieved via a POST request from the “my account” API to Umbraco, which then pulls active subscriptions from Shopify. The tester intercepted this request, examined its contents, removed the user session cookie from it, and passed an empty request body. As a result, they received all subscriptions of all users.&lt;/p&gt;

&lt;p&gt;When developing such features, it’s crucial to consider the overall architecture of the application and how the backend and frontend communicate. Looking at the diagram above, consider the sequence of receiving subscriptions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The client JS application requests subscriptions of the current user from the NextJS website, passing the cookie of the current user session and the email of the current user—the data that the tester removed during testing.&lt;/li&gt;
&lt;li&gt;The NextJS website, knowing Umbraco's secret application token, requests subscriptions from Umbraco, forwarding the same user email as a filter.&lt;/li&gt;
&lt;li&gt;Umbraco, knowing the secret Application token of Shopify, requests subscriptions from Shopify, again forwarding the user email from the parameters.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Separately, both the frontend and backend functioned correctly, but a security hole appeared when they were integrated: no one checked the user's session. The frontend assumed the session would be checked by the backend on Umbraco, and the backend assumed the frontend would check the session on NextJS.&lt;/p&gt;

&lt;p&gt;Additionally, the Shopify API, when passed an empty email as a parameter, returns all subscriptions of all users, meaning the email parameter acts as a filter, and if it is empty, everything is returned. This combination led to the discovered problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to do it right&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You shouldn't pass email and other user data as request parameters from a client JS application. Instead, retrieve the user's email from the current user's session context, and validate this session.&lt;/p&gt;
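
&lt;p&gt;A minimal sketch of this idea (the &lt;code&gt;Session&lt;/code&gt; shape and function names here are illustrative, not the project's actual code): the filter sent downstream is derived only from a verified session, and an anonymous request fails instead of degrading into an empty, match-everything filter.&lt;/p&gt;

```typescript
// Hypothetical session shape; in a real app this comes from a verified
// session store or a signed cookie, never from the request body.
interface Session {
  userId: string;
  email: string;
}

// Build the subscription filter from the verified session only.
// Throwing on a missing session means an anonymous request can never
// reach the downstream API with an empty email filter.
function buildSubscriptionFilter(session: Session | null): { email: string } {
  if (!session || !session.email) {
    throw new Error("Unauthorized: no active session");
  }
  return { email: session.email };
}
```

&lt;p&gt;With this shape, even if the downstream Shopify API treats an empty email as "return everything", the request simply never gets that far.&lt;/p&gt;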

&lt;h2&gt;
  
  
  Problem #2 – Access Rights Verification Using Request Parameters Instead of Current Session
&lt;/h2&gt;

&lt;p&gt;The website offers the ability to change the password. Testers intercepted the change-password request, removed the identification cookie, and sent the request with only a body containing the old and new passwords to the backend. One security layer allowed such a request through, assuming it would be checked elsewhere. When handling a password change request, retrieve information about the logged-in user from the identification cookies, not from the parameters.&lt;/p&gt;

&lt;p&gt;Another similar case: the website has a project builder section, similar to favourites, where the user can arrange the products they like into project folders. Testers intercepted a request, removed the identification cookie, but left the request body, which specified which user to create a collection for and from which products. As a result, they could update another user's favourites without authentication. None of the security layers prevented this.&lt;/p&gt;

&lt;p&gt;In both cases, the same vulnerability: &lt;strong&gt;data is transmitted as parameters in the API instead of being taken from the current user's session&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem #3 – Possibility of Recording a Malicious Script When Creating Entities
&lt;/h2&gt;

&lt;p&gt;There are situations, especially on user portals, when a user can create an entity that the website will later display, such as the project folders mentioned above. In some cases, the website accepts as a project name not only letters but also a script snippet that could, for example, steal user cookies. When an intruder does this, user cookies may appear in an alert or be sent to an external server.&lt;/p&gt;

&lt;p&gt;One might argue: I create a project for myself, so no one except me can see this data. However, this functionality can evolve: initially, the project name display is available only to a unique user, but later, an admin role might appear. If an admin user encounters this vulnerability, their data might be compromised. Another development possibility is adding the function of sharing projects between users. The user gives the project to another, the script executes, and the cookies are stolen.&lt;/p&gt;
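
&lt;p&gt;The standard defence is to escape user-supplied text before it reaches any HTML context. A minimal sketch (frameworks such as React escape text nodes by default, but any path that builds HTML strings manually, such as emails or &lt;code&gt;dangerouslySetInnerHTML&lt;/code&gt;, needs something like this):&lt;/p&gt;

```typescript
// Minimal HTML escaping for user-supplied text such as project names.
// Ampersand must be replaced first, or the later entities get double-escaped.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

&lt;p&gt;Escaping on output (rather than trying to strip "dangerous" input) keeps the stored data intact while making it inert wherever it is rendered.&lt;/p&gt;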

&lt;h2&gt;
  
  
  Problem #4 – Possibility of Creating a Redirect to a Phishing Website
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Redirects&lt;/strong&gt;. A user navigating the site can add a product to the cart and then proceed to checkout. A peculiarity of this implementation is that e-commerce runs on Shopify – the cart is rendered on our website, but for checkout – delivery and payment – we redirect the user to the Shopify portal.&lt;/p&gt;

&lt;p&gt;If the user is logged in on our website, the same logged-in state must be preserved when redirecting to Shopify. In Shopify terminology, this Single Sign-On implementation is called Multipass, and you can read about it in detail &lt;a href="https://shopify.dev/docs/api/multipass" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In short, to generate a redirect, NextJS sends a request with the redirect URL to the Umbraco API, which encrypts this URL with a secret key from Shopify and adds the user ID needed to log the user in during the redirect. Our Umbraco API code allowed any URL to be inserted into the redirect, and Shopify doesn't validate the URL when redirecting. &lt;/p&gt;

&lt;p&gt;This is dangerous because it enables phishing. An intruder can prepare a phishing website that looks like yours: the domain in the initial link is valid, but on click the user is redirected to the phishing site and, trusting the familiar appearance, may give away their data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to do it right&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure that the redirect logic checks the redirect URL for a known domain before executing the redirect.&lt;/p&gt;
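
&lt;p&gt;A sketch of such a check, using an allowlist of exact hostnames (the hostnames below are made up for illustration):&lt;/p&gt;

```typescript
// Allowlist of hosts we are willing to redirect to (hypothetical names).
const ALLOWED_REDIRECT_HOSTS = new Set([
  "www.example-shop.com",
  "checkout.example-shop.com",
]);

// Validate a redirect target before generating the redirect. Parsing with
// the URL constructor and matching the full hostname avoids the classic
// bypass of substring checks like url.includes("example-shop.com").
function isAllowedRedirect(url: string): boolean {
  try {
    const parsed = new URL(url);
    return parsed.protocol === "https:" && ALLOWED_REDIRECT_HOSTS.has(parsed.hostname);
  } catch {
    return false; // not a valid absolute URL
  }
}
```

&lt;p&gt;Note that &lt;code&gt;www.example-shop.com.evil.net&lt;/code&gt; fails this check, whereas a naive substring match would let it through.&lt;/p&gt;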

&lt;h2&gt;
  
  
  Problem #5 – Unhindered Password Guessing
&lt;/h2&gt;

&lt;p&gt;When logging into a website, the user enters a login and password, after which a request is sent to the server. Some websites do not limit the number of login attempts. In such cases, an attacker can prepare a password-guessing script and run it for a long time.&lt;/p&gt;

&lt;p&gt;Usually, a couple of levels of protection are implemented. Login forms can be hidden behind a captcha, so an attacker must automate captcha solving for repeated attempts. &lt;/p&gt;

&lt;p&gt;Alternatively, an invisible Google captcha can be used, checked on the backend before further code execution. Another method is rate limiting: limit attempts from one IP, browser, etc. The simplest approach is to return an error after several login attempts from one IP address. Algorithms can also increase waiting times: the first incorrect login attempt gets a quick response, the second takes longer, and so on. If response time grows exponentially, several failed attempts will result in a very long wait.&lt;/p&gt;
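
&lt;p&gt;The exponential back-off idea can be expressed as a small pure function (the base delay and cap are illustrative; tracking failed attempts per IP or per account is left to a separate store):&lt;/p&gt;

```typescript
// Delay before responding to a login attempt, based on how many failed
// attempts preceded it: the first failure responds quickly, each further
// failure doubles the wait, capped at a maximum.
function loginDelayMs(failedAttempts: number, baseMs = 250, maxMs = 30_000): number {
  if (failedAttempts <= 0) return 0; // no failures yet: respond immediately
  return Math.min(baseMs * 2 ** (failedAttempts - 1), maxMs);
}
```

&lt;p&gt;After a dozen failures the attacker is waiting the full cap on every attempt, which makes bulk guessing impractically slow while barely affecting a legitimate user who mistypes once or twice.&lt;/p&gt;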

&lt;h2&gt;
  
  
  Problem #6 – Active Sessions After Password Change
&lt;/h2&gt;

&lt;p&gt;Imagine someone has hacked a user's account and started doing something on the website on their behalf. The user immediately resets the password, but the attacker remains logged in, since their session is still active.&lt;/p&gt;

&lt;p&gt;Resetting the password should also invalidate all active sessions created for the user. Thus, if the NextJS application is responsible for creating and checking sessions, all active sessions should be reset when the user's password is reset to prevent a security hole.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to do it right&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To implement this logic, store the user's active sessions somewhere, since NextJS does not do this natively. In our case, we stored active sessions in the Umbraco database, and the NextJS application updated session data via the API.&lt;/p&gt;
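
&lt;p&gt;A minimal in-memory sketch of the idea (in our project the store was the Umbraco database behind an API; this toy class only illustrates the invalidation step):&lt;/p&gt;

```typescript
// Toy session registry: sessionId -> userId. A real store would be a
// database or cache shared across server instances.
class SessionStore {
  private sessions = new Map<string, string>();

  create(sessionId: string, userId: string): void {
    this.sessions.set(sessionId, userId);
  }

  isValid(sessionId: string): boolean {
    return this.sessions.has(sessionId);
  }

  // Called from the password-reset handler: every session belonging to the
  // user is dropped, so an attacker's stolen session dies immediately.
  invalidateAllForUser(userId: string): void {
    for (const [id, uid] of this.sessions) {
      if (uid === userId) this.sessions.delete(id);
    }
  }
}
```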

&lt;h2&gt;
  
  
  Problem #7 – Script and HTML Injection in User Registration
&lt;/h2&gt;

&lt;p&gt;In the registration form on the website, you can enter a script and HTML markup in the user's first and last name fields. Usually, the backend provides protection against SQL injections, but protection against frontend scripts is not always obvious.&lt;/p&gt;

&lt;p&gt;Why is this dangerous? Registration data collected by the website is often inserted into the registration confirmation email. For example, “Good afternoon, UserName! Thank you for registering on our portal.”&lt;/p&gt;

&lt;p&gt;An attacker can register an account using a victim's email address; the victim then receives an email from a legitimate site containing links and scripts that execute if the victim clicks them. This problem is solved by validating input parameters during user registration.&lt;/p&gt;
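
&lt;p&gt;A sketch of such validation for name fields (the character policy below is an assumption; real sites may need to allow apostrophes, hyphens and accented letters depending on their audience):&lt;/p&gt;

```typescript
// Reject names containing markup characters at registration time, before
// they can reach a confirmation email or any rendered page.
function isValidName(name: string): boolean {
  if (name.length === 0 || name.length > 100) return false;
  // Disallow anything that could open a tag or break out of an attribute.
  return !/[<>"'&]/.test(name);
}
```

&lt;p&gt;Input validation here complements, rather than replaces, output escaping: the email template should still escape the name when inserting it.&lt;/p&gt;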

&lt;h2&gt;
  
  
  Problem #8 – Risk of Increasing Third-Party Service Bills
&lt;/h2&gt;

&lt;p&gt;This problem stems from an incorrect Google Maps API key setup (the same applies to any other paid service or API that supports domain usage limits). If the domain settings are incorrect, the keys can be used on other sites, and the client will pay the bill. There have been cases where stolen keys cost website owners thousands of dollars.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to do it right&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In addition to setting up restrictions for these keys, it’s crucial to optimise the number of requests to paid services. Inefficient code can lead to excessive API calls, potentially allowing competitors to exploit your resources and increase your costs.&lt;/p&gt;

&lt;p&gt;To limit the number of requests, consider reducing the search area and minimising requests to Google Places.  A common inefficiency occurs when the map component is initialised on every page load, even when the map is located far down the page or hidden in an inactive tab. Implement lazy loading to initialise map components only when necessary.&lt;/p&gt;
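
&lt;p&gt;One simple way to cut repeat calls to a paid API is a small TTL cache in front of it, so identical lookups within a time window are served from memory. This is an illustrative sketch, not tied to any specific Google API client:&lt;/p&gt;

```typescript
// Tiny TTL cache keyed by string (e.g. a geocoding query). The "now"
// parameters default to the real clock but are injectable for testing.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();
  constructor(private ttlMs: number) {}

  get(key: string, now = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry || entry.expires < now) return undefined; // missing or stale
    return entry.value;
  }

  set(key: string, value: V, now = Date.now()): void {
    this.store.set(key, { value, expires: now + this.ttlMs });
  }
}
```

&lt;p&gt;Combined with lazy initialisation of the map component, this keeps paid API traffic proportional to genuine user activity rather than to page loads.&lt;/p&gt;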

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;These security considerations apply to software development in general, not just e-commerce platforms or similar projects. However, NextJS applications are particularly prone to security vulnerabilities due to the unique redistribution of responsibilities between frontend and backend tasks. By keeping these scenarios in mind, development teams can better protect themselves and their users from potential security risks.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Author: Dmitry Bastron&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How personalisation works in Sitecore XM Cloud</title>
      <dc:creator>Anna Bastron</dc:creator>
      <pubDate>Thu, 11 Jul 2024 11:07:43 +0000</pubDate>
      <link>https://dev.to/byteminds_agency/how-personalisation-works-in-sitecore-xm-cloud-52o4</link>
      <guid>https://dev.to/byteminds_agency/how-personalisation-works-in-sitecore-xm-cloud-52o4</guid>
      <description>&lt;p&gt;In my previous article, I shared a comprehensive &lt;a href="https://dev.to/byteminds_agency/troubleshooting-tracking-and-personalisation-in-sitecore-xm-cloud-2n6"&gt;troubleshooting guide for Sitecore XM Cloud tracking and personalisation&lt;/a&gt;. The guide addresses common issues, explains investigation steps and provides solutions for these issues. &lt;/p&gt;

&lt;p&gt;However, understanding how the personalisation engine works in depth can further help in diagnosing persistent issues and developing personalised websites. This article visualises what happens behind the scenes when you enable personalisation and tracking in your Sitecore XM Cloud applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of the personalisation workflow
&lt;/h2&gt;

&lt;p&gt;Before we start, let's familiarise ourselves with key elements of the diagram we are going to look at.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqidbrg3lvgkvmbbwumt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqidbrg3lvgkvmbbwumt.png" alt="Key elements of personalisation and tracking data flows" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;On the left-hand side we can see the &lt;strong&gt;Browser&lt;/strong&gt;; it is responsible for sending requests to our application and displaying the result to end users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;JSS app&lt;/strong&gt; sits at the top centre of the diagram and it represents the Rendering Host role in Sitecore Headless topology. It is the application that processes incoming requests and handles the presentation layer. In this case it is a Next.js application based on the &lt;a href="https://doc.sitecore.com/xmc/en/developers/xm-cloud/getting-started-with-xm-cloud.html" rel="noopener noreferrer"&gt;XM Cloud foundation template&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Edge / XM API&lt;/strong&gt; is shown on the right hand side, it is a GraphQL endpoint that returns layout definition and content for requested pages, including &lt;a href="https://doc.sitecore.com/xmc/en/users/xm-cloud/create-a-page-variant.html" rel="noopener noreferrer"&gt;personalised variants&lt;/a&gt;. It can be Experience Edge API for cloud-based setups or a local CM container API endpoint for development purposes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, at the bottom we can see &lt;strong&gt;Tracking &amp;amp; Interactive API&lt;/strong&gt; powered by the embedded instance of Sitecore CDP &amp;amp; Personalize. This is where &lt;a href="https://doc.sitecore.com/xmc/en/users/xm-cloud/creating-an-audience.html" rel="noopener noreferrer"&gt;audiences&lt;/a&gt; are stored, &lt;a href="https://doc.sitecore.com/xmc/en/users/xm-cloud/specifying-variables-for-conditions.html" rel="noopener noreferrer"&gt;conditions&lt;/a&gt; are executed and &lt;a href="https://doc.sitecore.com/xmc/en/users/xm-cloud/analyze.html" rel="noopener noreferrer"&gt;analytics&lt;/a&gt; is collected.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When we talk about Next.js applications, there are two main rendering methods: &lt;strong&gt;Server-Side Rendering (SSR)&lt;/strong&gt; and &lt;strong&gt;Static Site Generation (SSG)&lt;/strong&gt;. Sitecore XM Cloud personalisation engine works slightly differently with these two approaches so we will cover both of them to understand their specifics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Server-Side Rendering
&lt;/h2&gt;

&lt;p&gt;This is what happens when a website visitor opens a page that has personalised variants.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1.&lt;/strong&gt; When a user loads the website or navigates to a new page, the browser sends an HTTP request to the Next.js application, including cookies and HTTP headers that can be used in personalisation conditions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw5fqmuxes3iuj20gbjg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw5fqmuxes3iuj20gbjg.gif" alt="Step 1. Browser request" width="1200" height="675"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2.&lt;/strong&gt; The Next.js application runs all registered middleware modules, with the &lt;a href="https://doc.sitecore.com/xmc/en/developers/jss/220/jss-xmc/personalization-in-jss-next-js-applications.html" rel="noopener noreferrer"&gt;Personalize middleware&lt;/a&gt; being of particular interest. This middleware sends an API request to the GraphQL endpoint to fetch all personalised variants for the current page configured in the CMS.&lt;/p&gt;

&lt;p&gt;If there are no personalised variants configured for this page or the page is not found, this middleware will exit and page generation will continue as usual.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgaxkpr6kk7eqotdqfoo.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgaxkpr6kk7eqotdqfoo.gif" alt="Step 2. Personalize middleware kicks in and sends a request to Edge / XM API" width="1000" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3.&lt;/strong&gt; If a page has more than one variant, the middleware sends another API request to the Personalize API to detect if the current visitor matches any of the audiences configured for this page. This is where cookies and HTTP headers received from the browser will help as they will be passed to the Personalize API to identify the visitor.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nqvutp4r94y2ks8wrj1.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nqvutp4r94y2ks8wrj1.gif" alt="Step 3. Identifying the audience for the current visitor" width="1000" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4.&lt;/strong&gt; By combining responses from these two API requests, the middleware determines which personalised page variant is suitable for the current visitor (cat-themed page in the diagram below 😺).&lt;/p&gt;

&lt;p&gt;If the visitor matches an audience configured for the page, the middleware will rewrite the page path to a special personalised variant path (for example, from &lt;code&gt;/_site_Test/Pets&lt;/code&gt; to &lt;code&gt;/_variantId_0dd7b00680be49c6815ca4d0793a36da/_site_Test/Pets&lt;/code&gt;) and this will instruct the Next.js application to use the specific page variant when rendering the page. So the personalised version of the page will be rendered on the server and returned to the browser. &lt;/p&gt;

&lt;p&gt;If the visitor does not match any audiences, then the default page variant will be rendered and returned to the user. &lt;/p&gt;
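
&lt;p&gt;The rewrite described above can be sketched as a tiny helper, using the variant path format from the example (illustrative only; the real logic lives inside Sitecore's Personalize middleware):&lt;/p&gt;

```typescript
// Prefix the page path with the matched variant id, or leave it untouched
// when no audience matched (the default variant).
function personalizedPath(path: string, variantId: string | null): string {
  if (!variantId) return path;
  return `/_variantId_${variantId}${path}`;
}
```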

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjvnouyg4pcc1z1ym17p.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjvnouyg4pcc1z1ym17p.gif" alt="Step 4. Returning the personalised page" width="1000" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5.&lt;/strong&gt; Once the page is rendered in the browser, a special React component responsible for tracking will send an API request to the CDP Stream API to register the page view, including which personalised variant was shown. This data will later be aggregated and shown in analytics reports.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3aq7ibcnf6qnd00vbk9h.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3aq7ibcnf6qnd00vbk9h.gif" alt="Step 5. Sending a page view event" width="1000" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Static Site Generation (SSG)
&lt;/h2&gt;

&lt;p&gt;The SSG process flow is similar to SSR but has some specifics related to this rendering method. Let's see what these differences are.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1.&lt;/strong&gt; This step is exactly the same as for SSR - the browser sends an HTTP request to the Next.js application with cookies and HTTP headers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4w4sym8f4mn9qhluzv5l.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4w4sym8f4mn9qhluzv5l.gif" alt="Step 1. Browser request" width="1000" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2.&lt;/strong&gt; This is where things get different from the SSR process. When the Personalize middleware kicks in, it checks if there are any pre-rendered page variants for this page (the default, cat-themed 😺 and dog-themed 🐶 variants in the diagram). If yes, it skips the API request to the Edge / XM API, otherwise it will fall back to the standard SSR process and fetch personalised variants for the current page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4d6njbrszjvn9lglug8.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4d6njbrszjvn9lglug8.gif" alt="Step 2. Leveraging pre-rendered page variants" width="1000" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3.&lt;/strong&gt; This step is the same as in the SSR flow - if a page has personalised variants, the middleware sends an API request to the Personalize API to identify the visitor's audience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlhmjrprv2tk8uvcd28a.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlhmjrprv2tk8uvcd28a.gif" alt="Step 3. Identifying the audience for the current visitor" width="1000" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4.&lt;/strong&gt; If there is a match and personalised page variants are pre-generated, the middleware will rewrite the page path and then the appropriate personalised page variant will be chosen and returned to the browser (looks like it's the cat-themed variant again! 😺). &lt;/p&gt;

&lt;p&gt;If there are no pre-generated personalised variants, but they exist in the CMS and the visitor matches one of the audiences, then the middleware will rewrite the page path, the Next.js app will generate the page variant and save the static output for future requests. This is the default process, see notes at the end of the article to learn more about static generation of personalised page variants.&lt;/p&gt;

&lt;p&gt;If the visitor does not match any audiences, then the default page variant will be returned to the user using the statically generated HTML if it exists.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2a1epfhsffuam4ekangy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2a1epfhsffuam4ekangy.gif" alt="Step 4. Returning the personalised page" width="1000" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5.&lt;/strong&gt; As with SSR, once the page is returned to the browser the &lt;code&gt;CdpPageView&lt;/code&gt; React component will send an API request to track the page view event for reporting. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rs3mcnlgi2b9031fl1r.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rs3mcnlgi2b9031fl1r.gif" alt="Step 5. Sending a page view event" width="1000" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the flow is very similar for SSR and SSG. The SSG method with static HTML generation and skipping some API requests can give us a performance boost, especially for websites with high traffic and personalisation enabled on frequently visited pages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Notes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Personalize middleware
&lt;/h3&gt;

&lt;p&gt;The middleware is provided by Sitecore as a part of the &lt;a href="https://doc.sitecore.com/xmc/en/developers/jss/220/jss-xmc/the-jss-xm-cloud-add-on-for-next-js.html" rel="noopener noreferrer"&gt;JSS XM Cloud add-on for Next.js&lt;/a&gt;. Please note that this add-on is compatible with JSS version 21.6 and later. For earlier versions the &lt;a href="https://doc.sitecore.com/xmc/en/developers/jss/215/jss-xmc/the-next-js-personalize-add-on.html" rel="noopener noreferrer"&gt;Next.js Personalize add-on&lt;/a&gt; is used that is now obsolete. &lt;/p&gt;

&lt;p&gt;This add-on is only compatible with Sitecore XM Cloud due to specific naming conventions and pre-configured settings required for the embedded CDP and Personalize instance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build-time static generation of personalised page variants
&lt;/h3&gt;

&lt;p&gt;To expand on step 4 of the SSG process, let's see when exactly personalised page variants are generated. As you may know, hosting providers often limit the time available for SSG builds. By default, pre-generation of personalised page variants during build is disabled to avoid long build times.&lt;/p&gt;

&lt;p&gt;However, if sufficient build time is available (for example, your website does not have too many pages) or you have critical personalisation rules on key pages (for instance, you only have a small number of personalised variants on the homepage or an important campaign page), then SSG for personalised variants &lt;a href="https://doc.sitecore.com/xmc/en/developers/jss/220/jss-xmc/walkthrough--configuring-personalization-in-a-next-js-jss-app.html#enable-static-generation-for-personalized-variants" rel="noopener noreferrer"&gt;can be explicitly enabled&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;This can be done by modifying the file &lt;code&gt;src/lib/sitemap-fetcher/plugins/graphql-sitemap-service.ts&lt;/code&gt; and setting the &lt;code&gt;includePersonalizedRoutes&lt;/code&gt; parameter to &lt;code&gt;true&lt;/code&gt; in the sitemap service constructor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;this._graphqlSitemapService = new MultisiteGraphQLSitemapService({
    clientFactory,
    sites: [...new Set(siteResolver.sites.map((site: SiteInfo) =&amp;gt; site.name))],
    includePersonalizedRoutes: true,
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just make sure to watch your build time after enabling this setting, to avoid build failures or unnecessary hosting costs.&lt;/p&gt;
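&lt;p&gt;Conceptually, enabling this flag multiplies the number of statically generated paths: every personalised variant becomes one more page to prerender. A rough TypeScript sketch of the effect (the &lt;code&gt;_variantId_&lt;/code&gt; path prefix here is illustrative, not necessarily Sitecore's internal naming):&lt;/p&gt;

```typescript
// Illustrative sketch only: how enabling personalised variants grows the set
// of paths prerendered at build time. The `_variantId_` prefix is used here
// for illustration and is not guaranteed to match Sitecore's internals.
interface PageRoute {
  path: string;
  variantIds: string[]; // personalised variants defined for this page
}

function expandStaticPaths(
  routes: PageRoute[],
  includePersonalizedRoutes: boolean
): string[] {
  return routes.flatMap((route) => {
    if (!includePersonalizedRoutes) return [route.path];
    // Each variant adds one more page to generate, so build time grows
    // roughly linearly with the number of variants.
    return [
      route.path,
      ...route.variantIds.map((id) => `/_variantId_${id}${route.path}`),
    ];
  });
}
```

&lt;p&gt;With two variants on the homepage alone, a two-page site already yields four prerendered paths, which is why build time deserves watching after the flag is enabled.&lt;/p&gt;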




&lt;p&gt;Sitecore XM Cloud provides robust out-of-the-box support for personalisation with both SSR and SSG rendering methods. The add-on with the Personalize middleware and CDP tracking component streamlines the process of fetching, matching, and delivering personalised content to website visitors while tracking interactions for reporting. &lt;/p&gt;

&lt;p&gt;We hope this article helps you understand the entire process of personalisation and tracking in XM Cloud and build well-performing, personalised applications. Feel free to share your thoughts and questions in the comments! &lt;/p&gt;

</description>
      <category>sitecore</category>
      <category>xmcloud</category>
      <category>personalization</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Server and client components in Next.js: when, how and why?</title>
      <dc:creator>Byteminds Agency</dc:creator>
      <pubDate>Thu, 27 Jun 2024 10:33:59 +0000</pubDate>
      <link>https://dev.to/byteminds_agency/server-and-client-components-in-nextjs-when-how-and-why-1bi3</link>
      <guid>https://dev.to/byteminds_agency/server-and-client-components-in-nextjs-when-how-and-why-1bi3</guid>
      <description>&lt;p&gt;Next.js offers powerful capabilities for creating high-performance web applications. An important part of its functionality, with the advent of the Next App Router, is the server and client components, which allow developers to control server-side and client-side rendering, depending on their project’s requirements. Let's look at these components in more detail.&lt;/p&gt;

&lt;p&gt;All the text and examples in this article refer to Next.js 13.4 and newer versions, in which React Server Components have gained stable status and become the recommended approach for developing applications using Next.js.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Server Component (RSC) and how is it rendered?
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;&amp;gt; React Server Components are rendered exclusively on the server. Their code is not included in the JavaScript bundle file, so they are never hydrated or re-rendered on the client.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;By default, all components are server-side components. This allows you to automatically implement server-side rendering without additional configuration, and you can later convert a server component into a client-side component if necessary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RSC renders in two stages on the server:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;React renders server-side components into a special data format called RSC Payload.&lt;/li&gt;
&lt;li&gt;Next.js uses the RSC payload and JavaScript instructions for client components to render HTML on the server.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Then, on the client:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;HTML is used to immediately show a fast, non-interactive preview of the page - this applies only to the initial page load.&lt;/li&gt;
&lt;li&gt;The RSC payload is used to reconcile the client and server component trees and update the DOM accordingly.&lt;/li&gt;
&lt;li&gt;JavaScript instructions are used to hydrate client components and provide interactivity to the application.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What is the RSC payload?
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;&amp;gt; The RSC payload is a compact binary representation of a rendered tree of React server components. The RSC payload is used on the client to update the browser DOM and contains:&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The rendered result of server components.&lt;/li&gt;
&lt;li&gt;Placeholders for where the rendered client components should appear, and links to their JavaScript chunk files.&lt;/li&gt;
&lt;li&gt;Any props passed from the server component to the client component.&lt;/li&gt;
&lt;/ol&gt;
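&lt;p&gt;As a mental model only (the real payload is an internal React wire format, and the field names below are invented), the RSC payload can be pictured as a serialised tree in which client components appear as placeholders:&lt;/p&gt;

```typescript
// Simplified mental model of an RSC payload tree. The actual format is an
// internal React implementation detail; these field names are invented.
type RscNode =
  | { kind: 'html'; tag: string; children: RscNode[] } // rendered server output
  | { kind: 'client-ref'; chunk: string; props: Record<string, unknown> }; // placeholder

// Count the client chunks the browser will need to download for this tree.
function clientChunks(node: RscNode): string[] {
  if (node.kind === 'client-ref') return [node.chunk];
  return node.children.flatMap(clientChunks);
}
```

&lt;p&gt;The fewer client-reference placeholders the tree contains, the less JavaScript the browser has to download, which is the core performance argument for server components.&lt;/p&gt;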

&lt;p&gt;&lt;strong&gt;Advantages of RSC&lt;/strong&gt; &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Improves application performance because heavy dependencies that could be used to render the component on the server (Markdown, code highlighter, etc.) are not sent to the client.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improves the application's Web Vitals metrics (TTI, etc.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://nextjs.org/docs/app/building-your-application/routing/loading-ui-and-streaming#what-is-streaming"&gt;HTML streaming&lt;/a&gt; when using RSC allows you to break the rendering work into fragments and transfer them to the client when ready. This allows the user to see parts of the page earlier, without waiting for the entire page to be fully rendered on the server.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages of RSC&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The RSC payload increases HTML file size&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Secrets intended only for the server (tokens, keys, etc.) can leak to the client. Potential security issues for Next.js applications are described in detail in this &lt;a href="https://nextjs.org/blog/security-nextjs-server-components-actions"&gt;article&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Increases the mental load when choosing the appropriate component type during application development, likely requiring time to train the team.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What is a client component and how is it rendered?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Client-side components allow you to create an interactive user interface that is pre-rendered on the server and can use client-side JavaScript to execute in the browser.&lt;/p&gt;

&lt;p&gt;To optimize the initial page load, Next.js uses React's &lt;a href="https://react.dev/reference/react-dom/server"&gt;server rendering API&lt;/a&gt; to render static HTML previews on the server for both client and server components. This ensures that when a user first visits your application, they immediately see the content of the page without waiting for the JavaScript client component bundle to load, parse, and execute.&lt;/p&gt;

&lt;p&gt;Despite their name, "client components" are initially rendered on the server, but are then executed on both the server and the client.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0fc28cf9plqao5xa2vx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0fc28cf9plqao5xa2vx.png" alt="Image description" width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can easily convert a server component into a client component by adding the “use client” directive at the beginning of the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'use client';

export default function Counter() {
  return &amp;lt;div&amp;gt;Counter - client component&amp;lt;/div&amp;gt;;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  When to use a server component and when to use a client component?
&lt;/h2&gt;

&lt;p&gt;The choice between server and client components depends on the specific requirements of your task. Server-side components are ideal for scenarios that require accessing data on the server during rendering or retrieving data that should not be exposed to the client. &lt;br&gt;
Client components, on the other hand, are effective for creating interactive elements that use React hooks and browser APIs.&lt;/p&gt;

&lt;p&gt;To understand which type of component is suitable in a particular case, you can use the helpful &lt;a href="https://nextjs.org/docs/app/building-your-application/rendering/composition-patterns#when-to-use-server-and-client-components"&gt;table&lt;/a&gt; on the Next.js documentation website.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9w5h2arbat3z46l4c9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9w5h2arbat3z46l4c9q.png" alt="Image description" width="800" height="725"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In RSC, we cannot use React hooks, Context or browser APIs. We can only use server-side component APIs such as headers, cookies, etc.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&amp;gt; Important: Server components can import client components.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When we use client components, we can use React hooks, Context, and APIs that are only available in the browser. However, we cannot use APIs that are only available in server components, such as headers, cookies, etc.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&amp;gt; Important: Client components cannot import server components, but you can pass a server component as a child element or property of a client component.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With the advent of React Server Components, it has become a recommended best practice to move client components to the end nodes of your component tree whenever possible. However, sometimes you need to conditionally render server-side components using client-side interactivity.&lt;/p&gt;

&lt;p&gt;Let's say we have a client component like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'use client'

import { useState } from 'react'

export default function ClientComponent({
  children,
}: {
  children: React.ReactNode
}) {
  const [show, setShow] = useState(false)

  return (
    &amp;lt;&amp;gt;
      &amp;lt;button onClick={() =&amp;gt; setShow(!show)}&amp;gt;Show&amp;lt;/button&amp;gt;
      {show &amp;amp;&amp;amp; children}
    &amp;lt;/&amp;gt;
  )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ClientComponent doesn't know that its children will eventually be filled with the server component's render result. The ClientComponent's only responsibility is to decide where the child elements will ultimately be placed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// This pattern works:
// You can pass a Server Component as a child or prop of a
// Client Component.
import ClientComponent from './client-component'
import ServerComponent from './server-component'

// Pages in Next.js are Server Components by default
export default function Page() {
  return (
    &amp;lt;ClientComponent&amp;gt;
      &amp;lt;ServerComponent /&amp;gt;
    &amp;lt;/ClientComponent&amp;gt;
  )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this approach, &lt;code&gt;ClientComponent&lt;/code&gt; and &lt;code&gt;ServerComponent&lt;/code&gt; are decoupled and can be rendered independently. In this case, the &lt;code&gt;ServerComponent&lt;/code&gt; child can be rendered on the server before &lt;code&gt;ClientComponent&lt;/code&gt; is rendered on the client.&lt;/p&gt;

&lt;p&gt;All possible patterns of sharing server and client components are described in detail in the &lt;a href="https://nextjs.org/docs/app/building-your-application/rendering/composition-patterns"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why use Next.js React Server Components (RSC)?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;React Server Components (RSC) provide a new way to build applications that allows developers to split code between the client and server. This becomes especially useful for large-scale projects with significant amounts of data or dynamic content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How are RSC and Next.js related? Can I use RSC without Next.js?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;RSC is tightly integrated with Next.js and provides additional features to optimize page load. While you can theoretically use RSC without Next.js, it would be much more difficult and less efficient. Next.js provides an intuitive RSC framework, automatic preloading, and many other features that make the development process much easier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does this relate to Suspense?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Server Components data retrieval APIs are integrated with Suspense. RSC uses Suspense to provide loading states and to unblock parts of a stream so that the client can show content before the entire response has completed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are the performance advantages of using RSC?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Server Components allow you to move most of the data retrieval to the server so that the client doesn't have to make as many requests. This also eliminates the typical useEffect network waterfalls on the client for retrieving data.&lt;/p&gt;
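&lt;p&gt;The waterfall can be sketched without any framework: sequential awaits (the typical nested-effect pattern on the client) cost the sum of the latencies, while a server component can start all requests at once and pay only the slowest one. A minimal sketch with simulated latencies (the data names are made up):&lt;/p&gt;

```typescript
// Framework-free sketch of the request waterfall. `delay` stands in for a
// real API call; latencies are simulated with timers.
const delay = (ms: number, value: string): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

// Client-side waterfall: each request starts only after the previous one,
// so total time is roughly the sum of the latencies.
async function waterfall(): Promise<string[]> {
  const user = await delay(10, 'user');
  const posts = await delay(10, 'posts'); // waits for user first
  const likes = await delay(10, 'likes'); // waits for posts first
  return [user, posts, likes];
}

// Server component style: kick everything off together, so total time is
// roughly the slowest single request.
async function parallel(): Promise<string[]> {
  return Promise.all([delay(10, 'user'), delay(10, 'posts'), delay(10, 'likes')]);
}
```

&lt;p&gt;Both produce the same data; the difference is purely in how the latencies add up.&lt;/p&gt;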

&lt;p&gt;Server Components also allow you to add non-interactive functionality to your application without increasing the JS bundle size. Moving functions from the client to the server reduces the initial code size and parsing time of client JS. Also, reducing the number of client components improves client processor time. The client can skip server-generated parts of the tree during reconciliation because it knows that they could not be affected by state updates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do I have to use RSC?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you already have a client React application, you can think of it as a tree of client components. If that suits you, great! Server-side components extend React to support other scenarios and are not a replacement for client-side components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is this a replacement for SSR?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No, they complement each other. SSR is primarily a technique for quickly rendering a non-interactive version of client components. You will still have to pay the cost of downloading, parsing, and executing these client components once the HTML is loaded.&lt;/p&gt;

&lt;p&gt;You can combine server-side components and SSR, where server-side components are rendered first, and client-side components are rendered in HTML for a fast, non-interactive rendering during hydration. When they are combined this way, you still get a fast launch time, but you also significantly reduce the amount of JS loaded on the client.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I gradually migrate to RSC by rewriting the project's codebase?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes, with the release of the &lt;a href="https://nextjs.org/blog/next-13-4"&gt;new App Router and RSC&lt;/a&gt;, the previous approach still works, and you can gradually switch to the RSC approach. Note that Server Components only work in the App Router. There is a detailed &lt;a href="https://nextjs.org/docs/app/building-your-application/upgrading/app-router-migration"&gt;guide on how to transition to the new App Router&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Author: Sergei Pestov&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to properly measure code speed in .NET</title>
      <dc:creator>Byteminds Agency</dc:creator>
      <pubDate>Thu, 20 Jun 2024 11:00:30 +0000</pubDate>
      <link>https://dev.to/byteminds_agency/how-to-properly-measure-code-speed-in-net-158o</link>
      <guid>https://dev.to/byteminds_agency/how-to-properly-measure-code-speed-in-net-158o</guid>
      <description>&lt;p&gt;Imagine you have a solution to a problem or a task, and now you need to evaluate the optimality of this solution from a performance perspective. The most obvious way is to use &lt;a href="https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.stopwatch?view=net-6.0"&gt;StopWatch&lt;/a&gt; like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ecf2byf8y1s8s6bas5h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ecf2byf8y1s8s6bas5h.jpg" alt="Image description" width="800" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, there are several issues with this method:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is quite an inaccurate method, since the code being evaluated is executed only once, and the execution time can be affected by various side effects such as hard disk performance, a cold cache, processor context switching, and other running applications.&lt;/li&gt;
&lt;li&gt;It does not allow you to test the application in Production mode. During compilation, &lt;a href="https://learn.microsoft.com/en-us/archive/msdn-magazine/2015/february/compilers-what-every-programmer-should-know-about-compiler-optimizations"&gt;a significant part of the code is optimized automatically&lt;/a&gt;, without our participation, which can seriously affect the final result.&lt;/li&gt;
&lt;li&gt;Your algorithm may perform well on a small dataset but underperform on a large one (or vice versa). Therefore, to test performance in different situations with different data sets, you will have to write new code for each scenario.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So what other options do we have? How can we evaluate the performance of our code properly? &lt;a href="https://benchmarkdotnet.org/articles/overview.html"&gt;BenchmarkDotNet&lt;/a&gt; is the solution for this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmark setup
&lt;/h2&gt;

&lt;p&gt;BenchmarkDotNet is a NuGet package that can be installed in any type of application to measure the speed of code execution. To do this, we only need two things: a class containing the benchmarking code and a runner to execute it.&lt;/p&gt;

&lt;p&gt;Here's what a basic benchmarking class looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbx6xirx4shhaixrpknsy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbx6xirx4shhaixrpknsy.png" alt="Image description" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s break down this class, starting with the attributes.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;MemoryDiagnoser&lt;/strong&gt; attribute collects information about the Garbage Collector’s operation and the memory allocated during code execution.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Orderer&lt;/strong&gt; attribute determines the order in which the final results are displayed in the table. In our case, it is set to FastestToSlowest, meaning the fastest code appears first and the slowest last.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;RankColumn&lt;/strong&gt; attribute adds a column to the final report, numbering the results from 1 to X.&lt;/p&gt;

&lt;p&gt;We have added the &lt;strong&gt;Benchmark&lt;/strong&gt; attribute to the method itself, marking it as one of the test cases. The &lt;strong&gt;Baseline=true&lt;/strong&gt; parameter designates this method's performance as the 100% reference point against which the other algorithm variants are evaluated.&lt;/p&gt;

&lt;p&gt;To run the benchmark, we need the second piece of the puzzle: the runner. It is simple: go to Program.cs (in a console application) and add one line with &lt;strong&gt;BenchmarkRunner&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsiyj8yfxcr5g9sk22cwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsiyj8yfxcr5g9sk22cwq.png" alt="Image description" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, we build our application in Production mode and run the code for execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Analysis of results
&lt;/h2&gt;

&lt;p&gt;If everything is set up correctly, then after running the application we will see &lt;strong&gt;BenchmarkRunner&lt;/strong&gt; execute our code multiple times and eventually produce the following report:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fweok32354alp43y4wgtt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fweok32354alp43y4wgtt.png" alt="Image description" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Important: any anomalous code executions (those much faster or slower than the average) will be excluded from the final report. We can see the clipped anomalies listed below the resulting table.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The report contains quite a lot of data about the performance of the code, including the version of the OS on which the test was run, the processor used, and the version of .NET. But the main information that interests us is in the final table, where we see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mean - the average time it takes to execute our code;&lt;/li&gt;
&lt;li&gt;Error - the estimation error (half of the 99.9% confidence interval);&lt;/li&gt;
&lt;li&gt;StdDev - the standard deviation of the measurements;&lt;/li&gt;
&lt;li&gt;Ratio - performance relative to the Baseline method, the starting point we chose (remember Baseline=true above?);&lt;/li&gt;
&lt;li&gt;Rank - the ranking;&lt;/li&gt;
&lt;li&gt;Allocated - the memory allocated during execution of our method.&lt;/li&gt;
&lt;/ul&gt;
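&lt;p&gt;To make the Mean and StdDev columns concrete, this is essentially what the summary computes over the collected timing samples (a plain TypeScript illustration of the statistics, not BenchmarkDotNet's own code):&lt;/p&gt;

```typescript
// What the Mean and StdDev columns of a benchmark report boil down to,
// illustrated in TypeScript (BenchmarkDotNet itself computes these in C#).
function mean(samples: number[]): number {
  return samples.reduce((sum, x) => sum + x, 0) / samples.length;
}

function stdDev(samples: number[]): number {
  const m = mean(samples);
  // Sample standard deviation (n - 1 denominator), appropriate when the
  // timings are a sample of all possible runs.
  const variance =
    samples.reduce((sum, x) => sum + (x - m) ** 2, 0) / (samples.length - 1);
  return Math.sqrt(variance);
}
```

&lt;p&gt;A low StdDev relative to the Mean is what makes two benchmark rows genuinely comparable; with a high StdDev, a difference in means may just be noise.&lt;/p&gt;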

&lt;h2&gt;
  
  
  Real test
&lt;/h2&gt;

&lt;p&gt;To make the final results more interesting, let's add a few more variants of our algorithm and see how the results change.&lt;/p&gt;

&lt;p&gt;Now, the benchmark class will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij7tm89bq39t8m930fdb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij7tm89bq39t8m930fdb.png" alt="Image description" width="800" height="1027"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our focus now is on benchmarking. We will leave the evaluation of the algorithms themselves for the next article.&lt;/p&gt;

&lt;p&gt;And here is the result of performing such benchmarking:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6y5t0hrxyxu3k2755qde.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6y5t0hrxyxu3k2755qde.png" alt="Image description" width="800" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We see that &lt;strong&gt;GetYearFromDateTime&lt;/strong&gt;, our starting point, is the slowest and takes about 218 nanoseconds, while the fastest option, &lt;strong&gt;GetYearFromSpanWithManualConversion&lt;/strong&gt;, takes only 6.2 nanoseconds, about 35 times faster than the original method.&lt;/p&gt;

&lt;p&gt;We can also see how much memory was allocated for the two methods &lt;strong&gt;GetYearFromSplit&lt;/strong&gt; and &lt;strong&gt;GetYearFromSubstring&lt;/strong&gt;, and how long it took the Garbage Collector to clean up this memory (which also reduces overall system performance).&lt;/p&gt;

&lt;h2&gt;
  
  
  Working with Various Inputs
&lt;/h2&gt;

&lt;p&gt;Finally, let’s discuss how to evaluate the performance of our algorithm on both large and small data sets. &lt;strong&gt;BenchmarkDotNet&lt;/strong&gt; provides two attributes for this: &lt;strong&gt;Params&lt;/strong&gt; and &lt;strong&gt;GlobalSetup&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here is the benchmark class using these two attributes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxoq0jdlj7u2f0fbpe2i6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxoq0jdlj7u2f0fbpe2i6.png" alt="Image description" width="800" height="841"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our case, the &lt;strong&gt;Size&lt;/strong&gt; field is parameterized and affects the code that runs in &lt;strong&gt;GlobalSetup&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;As a result of executing &lt;strong&gt;GlobalSetup&lt;/strong&gt;, we generate an initial array of 10, 1000 and 10000 elements to run all test scenarios. As mentioned earlier, some algorithms perform effectively only with a large or small number of elements.&lt;/p&gt;

&lt;p&gt;Let's run this benchmark and look at the results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtzdfjgbegyud65usrbq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtzdfjgbegyud65usrbq.png" alt="Image description" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, we can clearly see the performance of each method with 10, 1000 and 10000 elements: the &lt;strong&gt;Span&lt;/strong&gt; method consistently leads regardless of the input data size, while the &lt;strong&gt;NewArray&lt;/strong&gt; method performs progressively worse as the data size increases. &lt;/p&gt;

&lt;h2&gt;
  
  
  Graphs
&lt;/h2&gt;

&lt;p&gt;The BenchmarkDotNet library allows you to analyze the received data not only in text and tabular form but also graphically, in the form of graphs.&lt;/p&gt;

&lt;p&gt;To demonstrate, we will create a benchmark class to measure the runtime of different sorting algorithms on the .NET 8 platform, configured to run three times for different numbers of sorted elements: 1000, 5000, and 10000. The sorting algorithms are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DefaultSort - the default sorting algorithm used in .NET 8&lt;/li&gt;
&lt;li&gt;InsertionSort - insertion sort&lt;/li&gt;
&lt;li&gt;MergeSort - merge sort&lt;/li&gt;
&lt;li&gt;QuickSort - quick sort&lt;/li&gt;
&lt;li&gt;SelectSort - selection sort&lt;/li&gt;
&lt;/ul&gt;
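&lt;p&gt;For reference, this is what one of the benchmarked algorithms, insertion sort, looks like; it is shown here in TypeScript for brevity, while the benchmarked implementations are C#:&lt;/p&gt;

```typescript
// Insertion sort: O(n^2) comparisons, but very cheap constant factors, which
// is why it can win on small arrays in benchmarks like the one above.
function insertionSort(input: number[]): number[] {
  const a = [...input]; // copy so the caller's array is untouched
  for (let i = 1; i < a.length; i++) {
    const key = a[i];
    let j = i - 1;
    while (j >= 0 && a[j] > key) {
      a[j + 1] = a[j]; // shift larger elements one slot to the right
      j--;
    }
    a[j + 1] = key; // drop the key into its sorted position
  }
  return a;
}
```

&lt;p&gt;Its cheap constant factors explain why insertion sort can beat asymptotically faster algorithms on very small inputs, which is exactly why benchmarking across several input sizes matters.&lt;/p&gt;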

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0tab393n37ho10am8a9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0tab393n37ho10am8a9.png" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The benchmark results include a summary in the form of a table and a graph:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5x8q6i9y5zzhh0welu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5x8q6i9y5zzhh0welu2.png" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;BenchmarkDotNet also generated separate graphs for each benchmark (in our case, for each sorting algorithm) based on the number of sorted elements:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym4w3h6ogkr5tc6wg6fy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym4w3h6ogkr5tc6wg6fy.png" alt="Image description" width="800" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have covered the basics of working with BenchmarkDotNet and how it helps us evaluate the results of our work, making informed decisions about which code to keep, rewrite or delete. &lt;/p&gt;

&lt;p&gt;This approach allows us to build the most productive systems, ultimately improving user experiences.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Author - Anton Vorotyncev&lt;/em&gt;&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>dotnet</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Formalizing API Workflow in .NET Microservices</title>
      <dc:creator>Byteminds Agency</dc:creator>
      <pubDate>Fri, 24 May 2024 10:27:05 +0000</pubDate>
      <link>https://dev.to/byteminds_agency/formalizing-api-workflow-in-net-microservices-22ii</link>
      <guid>https://dev.to/byteminds_agency/formalizing-api-workflow-in-net-microservices-22ii</guid>
      <description>&lt;p&gt;We work with IT products in the fields of logistics and e-commerce. Most of these projects are architecturally large, including many services essential for the proper operation of entire systems.&lt;/p&gt;

&lt;p&gt;Let's talk about how to organize the interaction of microservices in a large, long-lived product, both synchronously and asynchronously.&lt;/p&gt;

&lt;p&gt;The microservice approach involves creating a microservice for each feature within a large product. For example, a microservice that handles a specific function in logistics processes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;warehousing (collection/placement/movement of commodity units)&lt;/li&gt;
&lt;li&gt;sorting of cargo spaces&lt;/li&gt;
&lt;li&gt;labeling of cargo spaces&lt;/li&gt;
&lt;li&gt;consolidation of cargo spaces&lt;/li&gt;
&lt;li&gt;other&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each microservice has its own codebase, database, and API for interacting with other services. This allows us to write them in different programming languages and use different technologies. &lt;br&gt;
All new microservices are written on the latest framework versions, while outdated ones are gradually migrated. The goal is to provide the most efficient and standardized approach to interoperability between microservices. Creating a new microservice and integrating it into the overall system should be as quick and painless as possible, both for its developer and for the developers who consume it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Synchronous interaction between microservices
&lt;/h2&gt;

&lt;p&gt;Synchronous interaction occurs when one system sends a message to another and waits for an acknowledgment or response before continuing. This type of interaction is common when the requesting system needs information to proceed with its actions. To organize such interaction, protocols such as gRPC and SOAP, as well as the REST architectural style, are widely used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Farsti40n0pzc4hh8n7jd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Farsti40n0pzc4hh8n7jd.jpg" alt="Image description" width="712" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram shows the approach to developing an API for a new microservice. First, we create an API specification in OpenAPI format and get it approved by the architecture department. Based on this specification, we create a contract library containing API interfaces and data structures. Then we use this library to build an API client that other services will use to call the API; usually we rely on the Refit library for this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq6qduh48o9srjm5cjpp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq6qduh48o9srjm5cjpp.jpg" alt="Image description" width="656" height="839"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is what the OpenAPI specification looks like. The Rider IDE has a plugin for editing it, and Swagger generates the specification description. All methods of this API, along with their request and response structures, are described here. Once the specification is approved, we begin developing the contract library.&lt;/p&gt;
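&lt;p&gt;For illustration, here is a minimal OpenAPI fragment of the kind shown above. All paths, names, and schemas are invented for this sketch and are not taken from the real system.&lt;/p&gt;

```yaml
openapi: 3.0.3
info:
  title: Consolidation Service API   # illustrative name
  version: 1.0.0
paths:
  /api/v1/tasks/{id}:
    get:
      operationId: getTask
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        '200':
          description: The requested task
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: integer
                  status:
                    type: string
```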

&lt;h2&gt;
  
  
  Sync contract library
&lt;/h2&gt;

&lt;p&gt;The Sync contracts library is a NuGet package based on the specification. It contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Interfaces of each API controller&lt;/li&gt;
&lt;li&gt;The client interface, which combines all controller interfaces&lt;/li&gt;
&lt;li&gt;Request/Response models used in controllers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft40ie2p2dv2ptaodpe8d.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft40ie2p2dv2ptaodpe8d.jpg" alt="Image description" width="800" height="562"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, in one of the libraries we have two interfaces: ITaskController and IConsolidationCargoUnitsController. They define all the necessary methods, which will later be implemented by both the corresponding controllers and clients. &lt;/p&gt;

&lt;p&gt;Since we use the Refit library to generate clients, we also define the request types and routes using attributes like [Get(...)], [Post(...)], etc. It's important to note that these are Refit attributes, not ASP.NET attributes; accordingly, our contract library does not have to depend on ASP.NET at all.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsotb6ci9z9j64hrh41ql.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsotb6ci9z9j64hrh41ql.jpg" alt="Image description" width="800" height="770"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2aefc22ric7l35odbwg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2aefc22ric7l35odbwg.jpg" alt="Image description" width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we describe the client interface, which contains no methods of its own but simply inherits all controller interfaces and is marked with a special attribute that implements versioning. The implementation of this interface (generated by the Refit library) will be the client that external systems use for interaction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfbp7d9oedn6qlgqvx5n.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfbp7d9oedn6qlgqvx5n.jpg" alt="Image description" width="643" height="923"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is what the library of contracts ultimately looks like: one client interface, several controller interfaces, and data models. This is all completely based on the specification. &lt;/p&gt;

&lt;p&gt;When development of the contract library is complete, we publish it as a NuGet package. After that, we can start implementing the API itself as well as its client. Since the specification has already been approved and the contract library published, API and client development can proceed in parallel.&lt;/p&gt;

&lt;h2&gt;
  
  
  API development
&lt;/h2&gt;

&lt;p&gt;This is what the API controller class looks like: we implement the controller interface with the necessary business logic. Note that ASP.NET attributes are used here, which is one of the disadvantages of this approach - you have to duplicate routes both in the contract library and in the controllers themselves.&lt;/p&gt;

&lt;p&gt;For simple cases, where the routes have no restrictions (for example, {id}/{sortingCenterId}), they can be made into constants and reused. But when restrictions are included in the routes (for example, {id:int}/{sortingCenterId}), such routes have to be duplicated, as the semantics inherent in ASP.NET are not supported by Refit (and vice versa).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvzm06ikugefhybjtikoh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvzm06ikugefhybjtikoh.jpg" alt="Image description" width="800" height="792"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Client development
&lt;/h2&gt;

&lt;p&gt;The API client is implemented using the Refit library. We wrote the following extension method to register it. We pass the client interface from the contract library, along with configuration arguments, to the method below. As a result, a dynamically generated Refit class containing HTTP client calls is registered in the DI container for this interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdd4e770rcweqib2xd8l5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdd4e770rcweqib2xd8l5.jpg" alt="Image description" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Accordingly, all we need to do is register the client interface from the contract library in one line. We then inject this interface via DI into the classes that need it and call the external services. This approach also makes it easy to test code that depends on the API client.&lt;/p&gt;

&lt;h2&gt;
  
  
  Asynchronous interaction between microservices
&lt;/h2&gt;

&lt;p&gt;Asynchronous communication occurs when one system sends a message to another and continues its work without waiting for an acknowledgment or response. The response can be received later through messages or callback functions. This type of interaction is common when the requesting system does not require information to continue its actions.&lt;/p&gt;

&lt;p&gt;In our case, asynchronous communication between microservices is implemented through Kafka (a message broker). We write an AsyncAPI specification - a standard similar to OpenAPI, but used for describing asynchronous interaction protocols.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11k2pijjqb9cfmcbogws.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11k2pijjqb9cfmcbogws.jpg" alt="Image description" width="712" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The principle is similar to the synchronous one. We describe the specification, approve it, create a library of asynchronous contracts, and then write producers that publish messages to the queue and consumers that read and process them.&lt;/p&gt;

&lt;p&gt;The specification is slightly different: instead of methods, we describe message types and channels, listing for each channel the types of messages sent to it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F462rn3wqsaiwcmcalw1x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F462rn3wqsaiwcmcalw1x.jpg" alt="Image description" width="786" height="905"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Async contract library
&lt;/h2&gt;

&lt;p&gt;The Async contracts library is a NuGet package based on the specification. It contains message models that are sent to or read from the queue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgewb774vpzvmcplfch5q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgewb774vpzvmcplfch5q.jpg" alt="Image description" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our case, all messages are events. An event is a wrapper for a data object with the necessary parameters defined within the event. &lt;/p&gt;

&lt;p&gt;For example, the “Delivery created” event, in addition to information about the event itself (identifier, type, date, and time), will contain information about the delivery. The process looks like this: we created a library of contracts, added all the events provided for in the specification, and created a producer that will generate events.&lt;/p&gt;
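&lt;p&gt;As a sketch, a “Delivery created” event could be described in an AsyncAPI document roughly like this (channel and field names are hypothetical, not taken from the real system):&lt;/p&gt;

```yaml
asyncapi: 2.6.0
info:
  title: Delivery Events   # illustrative name
  version: 1.0.0
channels:
  delivery-events:
    publish:
      message:
        name: DeliveryCreated
        payload:
          type: object
          properties:
            eventId:
              type: string
            eventType:
              type: string
            occurredAt:
              type: string
              format: date-time
            delivery:
              type: object   # the wrapped data object with delivery details
```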

&lt;p&gt;The screenshot shows a piece of business logic code that completes the execution of the task. Starting from line 5, a transaction is opened, within which data is written to the microservice database. The next three lines are responsible for sending the message to the queue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyu13ck2b0brfcrez1ns5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyu13ck2b0brfcrez1ns5.jpg" alt="Image description" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9zkg95oyxiuo2ev8uqg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9zkg95oyxiuo2ev8uqg.jpg" alt="Image description" width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The screenshot above shows a consumer. This is the handler to which all messages arrive. It includes filtering and usually processes the business logic that should be triggered when a message is received.&lt;/p&gt;

&lt;p&gt;When a new microservice appears, the approach is already proven, and all the processes for creating a contract library and working with Kafka are well debugged. As a result, development is efficient and consistent across teams.&lt;/p&gt;

&lt;p&gt;Of course, the methods of organizing the interaction of microservices described in this article are not the only possible ones. For instance, gRPC is another powerful alternative worth exploring. We might talk about that in a future article, using an example from one of our other projects. Until then, happy coding!&lt;/p&gt;

&lt;p&gt;Author: Artyom Chernenko&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Hidden Aspects of TypeScript and How to Resolve Them</title>
      <dc:creator>Byteminds Agency</dc:creator>
      <pubDate>Tue, 21 May 2024 12:19:41 +0000</pubDate>
      <link>https://dev.to/byteminds_agency/hidden-aspects-of-typescript-and-how-to-resolve-them-1k5c</link>
      <guid>https://dev.to/byteminds_agency/hidden-aspects-of-typescript-and-how-to-resolve-them-1k5c</guid>
      <description>&lt;p&gt;We suggest using a special &lt;a href="https://www.typescriptlang.org/play/"&gt;editor&lt;/a&gt; to immediately check each example while reading the article. This editor is convenient because you can switch the TypeScript version in it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting “any” instead of “unknown”
&lt;/h2&gt;

&lt;p&gt;When we use the “&lt;strong&gt;any&lt;/strong&gt;” type, we lose typing - we can access any method or property of such an object, and the compiler will not warn us about possible errors. If we use “&lt;strong&gt;unknown&lt;/strong&gt;”, the compiler will notify us of potential issues.&lt;/p&gt;

&lt;p&gt;Some functions and operations return “&lt;strong&gt;any&lt;/strong&gt;” by default - this is not entirely obvious, here are some examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
// JSON.parse
const a = JSON.parse('{ "a": 1 }'); // any

// Array.isArray
function parse(a: unknown) {
  if (Array.isArray(a)) {
    console.log(a); // any[]
  }
}

// fetch
fetch("/")
  .then((res) =&amp;gt; res.json())
  .then((json) =&amp;gt; {
    console.log(json); // any
  });

// localStorage, sessionStorage
const b = localStorage.a; // any
const c = sessionStorage.b; // any
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/total-typescript/ts-reset"&gt;ts-reset&lt;/a&gt; can solve this problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/total-typescript/ts-reset"&gt;ts-reset&lt;/a&gt; is a library that helps solve some non-obvious issues where we wish TypeScript worked differently by default.&lt;/p&gt;

&lt;h2&gt;
  
  
  Array methods are too strict for the “as const” construct
&lt;/h2&gt;

&lt;p&gt;This issue is also found in the “&lt;strong&gt;has&lt;/strong&gt;” methods of “&lt;strong&gt;Set&lt;/strong&gt;” and “&lt;strong&gt;Map&lt;/strong&gt;”.&lt;/p&gt;

&lt;p&gt;Example: we create an array of user IDs and apply the “&lt;strong&gt;as const&lt;/strong&gt;” construct, then call the “&lt;strong&gt;includes&lt;/strong&gt;” method and get an error because the argument 4 is not part of the “&lt;strong&gt;userIds&lt;/strong&gt;” element type.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
const userIds = [1, 2, 3] as const;

userIds.includes(4);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/total-typescript/ts-reset"&gt;ts-reset&lt;/a&gt; will also help get rid of this error.&lt;/p&gt;

&lt;h2&gt;
  
  
  Filtering an array from “undefined”
&lt;/h2&gt;

&lt;p&gt;Let's say we have a numeric array that may contain “&lt;strong&gt;undefined&lt;/strong&gt;”. To get rid of these “&lt;strong&gt;undefined&lt;/strong&gt;” values, we filter the array. But the “&lt;strong&gt;newArr&lt;/strong&gt;” array will still have the type “&lt;strong&gt;(number | undefined)[]&lt;/strong&gt;”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
const arr = [1, 2, undefined];
const newArr = arr.filter((item) =&amp;gt; item !== undefined);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can solve the problem with an explicit type guard, and then “&lt;strong&gt;newArr2&lt;/strong&gt;” will have the type “&lt;strong&gt;number[]&lt;/strong&gt;”:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
const newArr2 = arr.filter((item): item is number =&amp;gt; item !== undefined);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ts-reset can also help here, but only when the argument passed to “&lt;strong&gt;filter&lt;/strong&gt;” is of the “&lt;strong&gt;BooleanConstructor&lt;/strong&gt;” type.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
const filteredArray = [1, 2, undefined].filter(Boolean)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Narrowing a type using bracket notation
&lt;/h2&gt;

&lt;p&gt;We create an object typed with string keys whose values are either a string or an array of strings.&lt;/p&gt;

&lt;p&gt;We then access the object's property using bracket notation and check that the object's return type is a string. In TypeScript versions below 4.7, the “&lt;strong&gt;queryCountry&lt;/strong&gt;” type will be a string or an array of strings, i.e. automatic type narrowing does not work, even though we have already checked the condition.&lt;/p&gt;

&lt;p&gt;However, if you use TypeScript version 4.7 and above, type narrowing will work as expected.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
const query: Record&amp;lt;string, string | string[]&amp;gt; = {};

const COUNTRY_KEY = 'country';

if (typeof query[COUNTRY_KEY] === 'string') {
    const queryCountry: string = query[COUNTRY_KEY];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://www.typescriptlang.org/docs/handbook/release-notes/typescript-4-7.html#control-flow-analysis-for-bracketed-element-access"&gt;Link to documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enum problems
&lt;/h2&gt;

&lt;p&gt;We create an “&lt;strong&gt;enum&lt;/strong&gt;” without specifying values explicitly, so its keys are assigned numeric values in order, starting from 0.&lt;/p&gt;

&lt;p&gt;Using this “&lt;strong&gt;enum&lt;/strong&gt;”, we type the first argument of the “&lt;strong&gt;showMessage&lt;/strong&gt;” function, expecting that we will be able to pass only those codes that are described in the “&lt;strong&gt;enum&lt;/strong&gt;”:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
enum LogLevel {
    Debug, // 0
    Log, // 1
    Warning, // 2
    Error // 3
}

const showMessage = (logLevel: LogLevel, message: string) =&amp;gt; {
    // code...
}

showMessage(0, 'debug message');
showMessage(2, 'warning message');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we pass a value not contained in the “&lt;strong&gt;enum&lt;/strong&gt;” as an argument, we should see the error "&lt;strong&gt;Argument of type '-100' is not assignable to parameter of type 'LogLevel'.&lt;/strong&gt;" But in TypeScript versions below 5.0, this error doesn’t occur, although logically it should:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
showMessage(-100, 'any message')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
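&lt;p&gt;Regardless of the compiler version, out-of-range codes can still arrive at runtime (for example, from JSON). A hypothetical runtime guard based on the reverse mapping that numeric enums generate:&lt;/p&gt;

```typescript
enum LogLevel {
  Debug, // 0
  Log, // 1
  Warning, // 2
  Error, // 3
}

// Numeric enums emit a reverse mapping (value to name), so membership
// can be checked by looking the value up in the enum object.
function isLogLevel(value: number): boolean {
  return typeof LogLevel[value] === 'string';
}

console.log(isLogLevel(2)); // true
console.log(isLogLevel(-100)); // false
```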



&lt;p&gt;We can also create an “&lt;strong&gt;enum&lt;/strong&gt;” with explicitly specified numeric values. We give the constant “a” the “&lt;strong&gt;enum&lt;/strong&gt;” type and assign a number that does not exist in the “&lt;strong&gt;enum&lt;/strong&gt;”, for example, 1. In TypeScript versions below 5.0, there will be no error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
enum SomeEvenDigit {
    Zero = 0,
    Two = 2,
    Four = 4
}

const a: SomeEvenDigit = 1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And one more thing: in TypeScript versions below 5.0, computed values cannot be used in an “&lt;strong&gt;enum&lt;/strong&gt;”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
enum User {
  name = 'name',
  userName = `user${User.name}`
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://www.typescriptlang.org/docs/handbook/release-notes/typescript-5-0.html#enum-overhaul"&gt;Link to documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Functions that have an explicit return type of “undefined” must have an explicit return
&lt;/h2&gt;

&lt;p&gt;In TypeScript versions below 5.1, an error will appear when a function has an explicit return type of “&lt;strong&gt;undefined&lt;/strong&gt;” but no “&lt;strong&gt;return&lt;/strong&gt;” statement.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
function f4(): undefined {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There will be no error in the following cases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
function f1() {}

function f2(): void {}

function f3(): any {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To summarize, if we explicitly assign the type “&lt;strong&gt;void&lt;/strong&gt;” or “&lt;strong&gt;any&lt;/strong&gt;” to a function, there will be no error. It will appear if we assign a function type “&lt;strong&gt;undefined&lt;/strong&gt;”, and only when using TypeScript version below 5.1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://devblogs.microsoft.com/typescript/announcing-typescript-5-1-rc/#easier-implicit-returns-for-undefined-returning-functions"&gt;Link to documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The behavior of “enums” follows nominative typing, not structural typing
&lt;/h2&gt;

&lt;p&gt;This is the case even though TypeScript otherwise uses structural typing.&lt;/p&gt;

&lt;p&gt;Let's create an “&lt;strong&gt;enum&lt;/strong&gt;” and a function whose argument we type with this “&lt;strong&gt;enum&lt;/strong&gt;”. Then we try to call the function, passing as the argument a string identical to one of the enum values. We get an error in “&lt;strong&gt;showMessage&lt;/strong&gt;”: the argument of type “&lt;strong&gt;'Debug'&lt;/strong&gt;” cannot be assigned because the “&lt;strong&gt;enum&lt;/strong&gt;” type “&lt;strong&gt;LogLevel&lt;/strong&gt;” is expected.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
enum LogLevel {
    Debug = 'Debug',
    Error = 'Error'
}

const showMessage = (logLevel: LogLevel, message: string) =&amp;gt; {
    // code...
}

showMessage('Debug', 'some text')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even if we create a new “&lt;strong&gt;enum&lt;/strong&gt;” with the same values, it won't work.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
enum LogLevel2 {
    Debug = 'Debug',
    Error = 'Error'
}
showMessage(LogLevel2.Debug, 'some text')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The solution is to use a plain object with “&lt;strong&gt;as const&lt;/strong&gt;”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
const LOG_LEVEL = {
    DEBUG: 'debug',
    ERROR: 'error'
} as const

type ObjectValues&amp;lt;T&amp;gt; = T[keyof T]

type LogLevel = ObjectValues&amp;lt;typeof LOG_LEVEL&amp;gt;;

const logMessage = (logLevel: LogLevel, message: string) =&amp;gt; {
    // code...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, we can pass either the string literal or the constant, and there will be no error, because we are working with a plain value and it does not matter where it comes from.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
logMessage('debug', 'some text')
logMessage(LOG_LEVEL.DEBUG, 'some text')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Possibility of returning the wrong data type in function with overloading
&lt;/h2&gt;

&lt;p&gt;Suppose we want a function to return a string when both of its arguments are strings. We declare the corresponding overloads and then check the argument types in the implementation. The catch: inside the implementation we can return any data type, even though the first overload promises a string.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
function add(x: string, y: string): string
function add(x: number, y: number): number
function add(x: unknown, y: unknown): unknown {

    if (typeof x === 'string' &amp;amp;&amp;amp; typeof y === 'string') {
                return 100;
    }

    if (typeof x === 'number' &amp;amp;&amp;amp; typeof y === 'number') {
        return x + y
    }

    throw new Error('invalid arguments passed');
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we expect “&lt;strong&gt;str&lt;/strong&gt;” to contain a “&lt;strong&gt;string&lt;/strong&gt;”, but at runtime we get the number 100.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
const str = add("Hello", "World!");
const num = add(10, 20);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
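&lt;p&gt;TypeScript checks the implementation only against the (deliberately loose) implementation signature, so keeping it honest is up to the author. A corrected sketch of the same function:&lt;/p&gt;

```typescript
function add(x: string, y: string): string;
function add(x: number, y: number): number;
function add(x: unknown, y: unknown): unknown {
  if (typeof x === 'string') {
    if (typeof y === 'string') {
      // Return a string here, matching what the first overload promises.
      return x + y;
    }
  }
  if (typeof x === 'number') {
    if (typeof y === 'number') {
      return x + y;
    }
  }
  throw new Error('invalid arguments passed');
}

const str = add('Hello, ', 'World!'); // 'Hello, World!'
const num = add(10, 20); // 30
```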



&lt;h2&gt;
  
  
  Passing an object as an argument to a function with an extra property
&lt;/h2&gt;

&lt;p&gt;When typing the arguments of functions and classes, we cannot add extra properties that were not originally specified in the type or interface. After all, in this case, we are simply passing a different structure as an argument.&lt;/p&gt;

&lt;p&gt;However, in TypeScript, it is possible to break this rule:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Func = () =&amp;gt; {
  id: string;
};

const func: Func = () =&amp;gt; {
  return {
    id: "123",
    name: "Hello!",
  };
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For greater clarity, let's create an object with the “&lt;strong&gt;formatAmountParams&lt;/strong&gt;” settings, which we will pass to the “&lt;strong&gt;formatAmount&lt;/strong&gt;” function. As you can see, an object with settings can contain extra properties and there will be no error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type FormatAmount = {
  currencySymbol?: string,
  value: number
}

const formatAmount = ({ currencySymbol = '$', value }: FormatAmount) =&amp;gt; {
  return `${currencySymbol} ${value}`;
}

const formatAmountParams = {
  currencySymbol: 'USD',
  value: 10,
  anotherValue: 20
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, there is no error if we pass an object that contains extra properties:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
formatAmount(formatAmountParams);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But we will get an error if we pass an object literal with an extra property directly as the function argument.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
formatAmount({ currencySymbol: '', value: 10, anotherValue: 12 });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In addition, we may face unexpected behavior if we want to rename “&lt;strong&gt;currencySymbol&lt;/strong&gt;” to “&lt;strong&gt;currencySign&lt;/strong&gt;”.&lt;/p&gt;

&lt;p&gt;First, let's change the type, then TypeScript will prompt that we need to change the key in the object from “&lt;strong&gt;currencySymbol&lt;/strong&gt;” to “&lt;strong&gt;currencySign&lt;/strong&gt;”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type FormatAmount = {
  currencySign?: string,
  value: number
}

const formatAmount = ({ currencySign = '$', value }: FormatAmount) =&amp;gt; {
  return `${currencySign} ${value}`;
}

const formatAmountParams = {
  currencySymbol: 'USD',
  value: 10
}

formatAmount(formatAmountParams);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are no errors, so we might think that the refactoring went smoothly. But in “&lt;strong&gt;formatAmountParams&lt;/strong&gt;” the old name “&lt;strong&gt;currencySymbol&lt;/strong&gt;” remains: since “&lt;strong&gt;currencySign&lt;/strong&gt;” is optional, TypeScript accepts the object, and instead of the expected result “USD 10” we silently get “$ 10”.&lt;/p&gt;
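&lt;p&gt;One way to catch both pitfalls, assuming TypeScript 4.9 or newer, is the “&lt;strong&gt;satisfies&lt;/strong&gt;” operator: it checks an object literal against a type without changing the literal's inferred type, so extra or stale keys become compile-time errors. A minimal sketch:&lt;/p&gt;

```typescript
type FormatAmount = {
  currencySign?: string;
  value: number;
};

const formatAmount = ({ currencySign = '$', value }: FormatAmount) =>
  `${currencySign} ${value}`;

// `satisfies` validates the literal against FormatAmount at compile time:
// an extra key such as `anotherValue`, or a stale `currencySymbol` left over
// after renaming, would now be rejected by the compiler.
const formatAmountParams = {
  currencySign: 'USD',
  value: 10,
} satisfies FormatAmount;

console.log(formatAmount(formatAmountParams)); // "USD 10"
```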

&lt;h2&gt;
  
  
  Loss of typing when using “Object.keys”
&lt;/h2&gt;

&lt;p&gt;Let's create an “&lt;strong&gt;obj&lt;/strong&gt;” object. Using “&lt;strong&gt;Object.keys&lt;/strong&gt;”, let's create an array with the object's keys and iterate over it. If we access the object by key inside the loop, TypeScript will complain that an expression of type “string” cannot be used to index the “&lt;strong&gt;obj&lt;/strong&gt;” object, because “&lt;strong&gt;Object.keys&lt;/strong&gt;” is typed to return “string[]”, not “(keyof typeof obj)[]”.&lt;/p&gt;

&lt;p&gt;A possible workaround is to cast the type using the “&lt;strong&gt;as&lt;/strong&gt;” construct. But this can be unsafe because we are manually asserting the type: we tell the compiler that the key is not just any string but specifically a key of “&lt;strong&gt;obj&lt;/strong&gt;”, and it takes our word for it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
const obj = {a: 1, b: 2}

Object.keys(obj).forEach((key) =&amp;gt; {
  console.log(obj[key]) // Error: type 'string' can't be used to index type '{ a: number; b: number; }'
  console.log(obj[key as keyof typeof obj]) // OK, but the cast is unchecked
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
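&lt;p&gt;To avoid repeating the cast at every call site, it can be wrapped once in a small helper. The “&lt;strong&gt;typedKeys&lt;/strong&gt;” name below is illustrative, not a standard API; the cast inside is still an unchecked assertion, just centralized in one place:&lt;/p&gt;

```typescript
// Illustrative helper: centralizes the `as` cast so call sites stay clean.
function typedKeys<T extends object>(obj: T): Array<keyof T> {
  return Object.keys(obj) as Array<keyof T>;
}

const obj = { a: 1, b: 2 };

// `key` is now typed as 'a' | 'b', so indexing is allowed without a cast.
const values = typedKeys(obj).map((key) => obj[key]);

console.log(values); // [1, 2]
```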



&lt;h2&gt;
  
  
  TypeScript may not recognize data type changes
&lt;/h2&gt;

&lt;p&gt;Let's create a “&lt;strong&gt;UserMetadata&lt;/strong&gt;” type as a key-value “&lt;strong&gt;Map&lt;/strong&gt;”. Based on this type, we create a “&lt;strong&gt;cache&lt;/strong&gt;” and try to get the value for the key “&lt;strong&gt;foo&lt;/strong&gt;” using the “&lt;strong&gt;get&lt;/strong&gt;” method. Everything works as expected.&lt;/p&gt;

&lt;p&gt;Next, we'll create a “&lt;strong&gt;cacheCopy&lt;/strong&gt;” object by spreading “&lt;strong&gt;cache&lt;/strong&gt;” and also call the “&lt;strong&gt;get&lt;/strong&gt;” method. TypeScript won't indicate that anything is wrong, but at runtime there will be an error: the spread only copies own enumerable properties, while a “&lt;strong&gt;Map&lt;/strong&gt;” keeps its entries in internal slots and its methods on the prototype, so the resulting plain object has no “&lt;strong&gt;get&lt;/strong&gt;” method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Metadata = {};

type UserMetadata = Map&amp;lt;string, Metadata&amp;gt;;

const cache: UserMetadata = new Map();

console.log(cache.get('foo'));

const cacheCopy: UserMetadata = { ...cache };

console.log(cacheCopy.get('foo'));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
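&lt;p&gt;The runtime-safe way to copy a “&lt;strong&gt;Map&lt;/strong&gt;” is to pass it to the “&lt;strong&gt;Map&lt;/strong&gt;” constructor, which consumes its entries, rather than spreading it. A short sketch:&lt;/p&gt;

```typescript
const cache = new Map<string, { role: string }>([['foo', { role: 'admin' }]]);

// Spread copies only own enumerable properties; a Map keeps its entries and
// methods elsewhere, so this produces an empty plain object.
const broken = { ...cache };

// The Map constructor accepts any iterable of entries, including another Map.
const copy = new Map(cache);

console.log(copy.get('foo')); // { role: 'admin' }
console.log(Object.keys(broken).length); // 0
```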



&lt;h2&gt;
  
  
  Merge interfaces
&lt;/h2&gt;

&lt;p&gt;Interfaces, unlike types, can merge. If several interfaces with the same name are declared in the same scope, then wherever we use that interface it will contain the properties from all of those declarations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
interface User {
    id: number;
}

interface User {
    name: string;
}

// Error: Property 'id' is missing in type '{ name: string; }' but required in type 'User', because User interfaces merged
const user: User = {
    name: 'bar',
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Moreover, if we have global interfaces, for example, predefined in TypeScript itself, they will also be merged. For example, if we create an interface named “&lt;strong&gt;Comment&lt;/strong&gt;”, we will get a merge of interfaces because “&lt;strong&gt;Comment&lt;/strong&gt;” already exists in “&lt;strong&gt;lib.dom.d.ts&lt;/strong&gt;”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
interface Comment {
  id: number;
  text: string;
}

// Error: Type '{ id: number; text: string; }' is missing the following properties from type 'Comment': data, length, ownerDocument, appendData, and 59 more.
const comment: Comment = {
  id: 5,
  text: "good video!",
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
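&lt;p&gt;By contrast, type aliases never merge: declaring two types with the same name in one scope is a “Duplicate identifier” compile error, which makes accidental collisions easier to catch. A minimal sketch:&lt;/p&gt;

```typescript
type User = {
  id: number;
};

// Unlike a second `interface User`, a second `type User` declaration in the
// same scope would fail to compile:
// type User = { name: string }; // Error: Duplicate identifier 'User'.

const user: User = { id: 1 };

console.log(user.id); // 1
```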



&lt;p&gt;&lt;a href="https://www.typescriptlang.org/docs/handbook/declaration-merging.html#merging-interfaces"&gt;Link to documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want to review the topic but don’t want to read the article again, you can watch a few videos on YouTube:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=I6V2FkW1ozQ"&gt;Be Careful With Return Types In TypeScript&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=jjMbPt_H3RQ"&gt;Enums considered harmful&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Author: Andrey Stepanov&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Troubleshooting tracking and personalisation in Sitecore XM Cloud</title>
      <dc:creator>Byteminds Agency</dc:creator>
      <pubDate>Thu, 25 Apr 2024 11:16:30 +0000</pubDate>
      <link>https://dev.to/byteminds_agency/troubleshooting-tracking-and-personalisation-in-sitecore-xm-cloud-2n6</link>
      <guid>https://dev.to/byteminds_agency/troubleshooting-tracking-and-personalisation-in-sitecore-xm-cloud-2n6</guid>
      <description>&lt;p&gt;One of the first things I tested in Sitecore XM Cloud was embedded tracking and personalisation capabilities. It has been really interesting to see what is available out-of-the-box, how much flexibility XM Cloud offers to marketing teams and what is required from developers to set it up. &lt;/p&gt;

&lt;p&gt;However, in this article I want to take a step back and talk about troubleshooting steps for situations when Sitecore XM Cloud tracking or personalisation is not working as expected. I have been working with Sitecore XP for many years so over time it became easy to find what is wrong in my setup. As XM Cloud applications are built on a new technology stack, investigating issues initially was a challenge. So I created this troubleshooting guide and hope that you will find it useful too. &lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting tracking
&lt;/h2&gt;

&lt;p&gt;The first step is to ensure that your website is tracking visitors correctly as without it you will not be able to use personalisation reports. Navigate to the &lt;strong&gt;Analyze&lt;/strong&gt; section on the XM Cloud portal. If you see the "No data available" message and empty charts across both &lt;strong&gt;Site insights&lt;/strong&gt; and &lt;strong&gt;Page insights&lt;/strong&gt;, it is a sign that tracking is not functioning as expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbv3v7rl7bdiqt63par4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbv3v7rl7bdiqt63par4.png" alt="Missing tracking data" width="800" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To troubleshoot, load your website in a browser, open the &lt;strong&gt;Development Tools&lt;/strong&gt; panel and switch to the &lt;strong&gt;Network&lt;/strong&gt; tab. Refresh the page and look for a POST request to the Personalize tracking API; it should look similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://api-engage-eu.sitecorecloud.io/v1.2/events
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you do not see this request, ensure that the component &lt;strong&gt;\src\components\CdpPageView.tsx&lt;/strong&gt; and the file &lt;strong&gt;\src\Scripts.tsx&lt;/strong&gt; are included in your layout (by default it is &lt;strong&gt;\src\Layout.tsx&lt;/strong&gt;), otherwise tracking will not work.&lt;/p&gt;

&lt;p&gt;Additionally, check the browser console for any error messages. Here are potential reasons that could cause them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Incorrect Site Identifier (or Point of Sale in CDP/Personalize terminology)&lt;/li&gt;
&lt;li&gt;Incorrect Target URL&lt;/li&gt;
&lt;li&gt;Content Security Policy headers block tracking requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Make sure that your Site Identifiers and Target URL match those from the &lt;strong&gt;Developer Settings&lt;/strong&gt; in XM Cloud and that Target URL is whitelisted in your security headers.&lt;/p&gt;

&lt;p&gt;Rectifying these errors should restore tracking functionality and you will start seeing statistics and charts similar to these in the XM Cloud portal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fismajqf00bdypf4jh2ea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fismajqf00bdypf4jh2ea.png" alt="Analytics reports in Sitecore XM Cloud" width="579" height="1500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting personalisation
&lt;/h2&gt;

&lt;p&gt;Troubleshooting personalisation can't be done directly in the browser, simply because it is performed on the server side within your rendering host application. This is where debug-level logging can help us see what happens behind the scenes.&lt;/p&gt;

&lt;p&gt;To enable debug-level logs for the rendering host add the following line to the &lt;strong&gt;.env&lt;/strong&gt; file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DEBUG=sitecore-jss:*&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;or &lt;strong&gt;DEBUG=sitecore-jss:personalize*&lt;/strong&gt; to limit detailed logging to the Personalize middleware only&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After this you will start seeing API request and response details for both the GraphQL API and Sitecore Personalize API, as well as additional debugging information:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuvw08spxivo5m4g5cv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuvw08spxivo5m4g5cv2.png" alt="Debug logs of the rendering host container" width="696" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling common scenarios
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Middleware is disabled&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the common reasons why personalisation does not seem to be working is the "&lt;strong&gt;disabled&lt;/strong&gt;" function of the Personalize middleware. This function defines the logic and conditions for skipping personalisation. &lt;/p&gt;

&lt;p&gt;If the function returns "&lt;strong&gt;true&lt;/strong&gt;", you will see the following message in the log:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sitecore-jss:personalize skipped (personalize middleware is disabled)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, it returns "&lt;strong&gt;true&lt;/strong&gt;" for the development environment so personalisation will not be triggered on your local machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open the file &lt;strong&gt;\src\lib\middleware\plugins\personalize.ts&lt;/strong&gt; and update the function &lt;strong&gt;disabled&lt;/strong&gt;. Remember to implement your own logic, for example cookie consent validation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzrizr002likw071v4qx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzrizr002likw071v4qx.png" alt="The function " width="800" height="86"&gt;&lt;/a&gt;&lt;/p&gt;
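&lt;p&gt;A hedged sketch of what such a predicate could look like; the cookie name and consent logic below are illustrative assumptions, not part of the default starter kit:&lt;/p&gt;

```typescript
// Illustrative shape of a `disabled` check for the Personalize middleware
// plugin. The default starter skips personalisation in development; this
// sketch adds a hypothetical cookie-consent rule ('cookie_consent' is made up).
const disabled = (
  isDevelopment: boolean,
  cookies?: Record<string, string>
): boolean => {
  // Skip personalisation during local development, as the default does.
  if (isDevelopment) return true;
  // Custom rule: skip until the visitor has accepted cookies.
  return cookies?.['cookie_consent'] !== 'accepted';
};

console.log(disabled(true)); // true: always skipped in development
console.log(disabled(false, { cookie_consent: 'accepted' })); // false
```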

&lt;p&gt;&lt;strong&gt;2. Timeouts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Timeouts for both GraphQL and Personalize API requests can occur in development environments with frequent codebase updates and application restarts. If there is a timeout error, you will easily see which API responds slower than the allowed threshold.&lt;br&gt;
Log message for GraphQL API timeout:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sitecore-jss:personalize request: {url: 'http://cm/sitecore/api/graph/edge',…} sitecore-jss:personalize response error: 'Request timed out, timeout of 400ms is exceeded'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Log message for Personalize API timeout:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sitecore-jss:personalize request: {url: 'https://api-engage-eu.sitecorecloud.io/v2/callFlows',…} sitecore-jss:personalize request error: [AbortError: Request timed out, timeout of 400ms is exceeded]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Adjusting timeout thresholds in the &lt;strong&gt;.env&lt;/strong&gt; file can resolve these issues. However, be cautious not to set them too high as it can slow down your application. To increase API request timeouts, change the following variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PERSONALIZE_MIDDLEWARE_CDP_TIMEOUT&lt;/strong&gt; for Personalize API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PERSONALIZE_MIDDLEWARE_EDGE_TIMEOUT&lt;/strong&gt; for GraphQL API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Default values are 400ms, and depending on the development environment configuration, these requests may require more time to respond.&lt;/p&gt;
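&lt;p&gt;For example, a local &lt;strong&gt;.env&lt;/strong&gt; could raise both thresholds; the 1000ms value is just an illustration, so tune it to your environment:&lt;/p&gt;

```
# Timeouts in milliseconds; the defaults are 400
PERSONALIZE_MIDDLEWARE_CDP_TIMEOUT=1000
PERSONALIZE_MIDDLEWARE_EDGE_TIMEOUT=1000
```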

&lt;p&gt;&lt;strong&gt;3. Personalize info not found&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This error occurs when an incorrect page route or language is requested. Basically, it means that the GraphQL API was not able to find the requested page and its layout details. Here is the log message to identify it; usually it is followed by a 404 page response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sitecore-jss:personalize skipped (personalize info not found)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Make sure that you request the correct page URL and correct language version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. No personalization configured&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If no personalised variants are configured for the requested page, the "no personalization configured" message appears in the log. It means that the GraphQL API was able to find the page and its layout, but the layout does not contain any personalised page variants.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sitecore-jss:personalize skipped (no personalization configured)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify that the page you are testing indeed contains personalised variants. If the requested page does not have any personalisation rules, then this message is expected and may not require any fixes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Browser ID generation failed&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The browser ID is used for identifying requests from the same visitors. If a browser ID has been previously generated, it is stored in a cookie and sent to the server with every page request.&lt;br&gt;
However, if it is the very first request without any cookies (or cookies have been cleared), then the Personalize middleware will attempt to generate a new browser ID for the current request. If something goes wrong during browser ID generation, the following message will be written to the log:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sitecore-jss:personalize skipped (browser id generation failed)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure that the correct Site Identifier, Target URL and Client Key are configured in the &lt;strong&gt;.env&lt;/strong&gt; file as they are needed for browser ID generation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. No variant identified&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the Personalize middleware sends a request but fails to identify any matching audience for the current visitor, the "no variant identified" message appears:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sitecore-jss:personalize skipped (no variant identified)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Double check that the personalisation rule is correct and does match your current visitor audience. &lt;/p&gt;

&lt;p&gt;If you are not sure what is wrong with the rule, try simplifying it as much as possible to identify which part of it is not working as expected. For example, if a rule combines multiple conditions, consider testing each condition individually to find which one is failing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Invalid variant&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This scenario occurs when an audience is identified for the current visitor, but the Personalize middleware is unable to find a corresponding page variant in the page layout. It can be caused by outdated page definitions or mismatches between environments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sitecore-jss:personalize skipped (invalid variant)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If there are mismatches between content in XM Cloud and audience definitions, save personalisation rules in XM Cloud to synchronise them with Personalize API.&lt;/p&gt;

&lt;p&gt;If you are seeing this error in a local development environment and want to test personalisation there, consider copying content from the environment where these personalisation rules were created.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other API errors
&lt;/h2&gt;

&lt;p&gt;If you are seeing any other errors related to personalisation, pay attention to the logged API requests and responses. For example, an incorrect CM hostname or API key would trigger an error.&lt;br&gt;
In some cases it can be useful to grab request details from the logs and run the request in a tool like Postman to see how it executes and what it returns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GraphQL API request:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sitecore-jss:personalize request: {    
url: 'http://cm/sitecore/api/graph/edge',    
headers: { sc_apikey: '...' },    
query: '\n query($siteName: String!, $language: String!, $itemPath: String!) {\n layout(site: $siteName, routePath: $itemPath, language: $language) {\n item {\n id\n version\n personalization {\n variantIds\n }\n }\n }\n }\n ',    
variables: { siteName: 'Test', itemPath: '/', language: 'en' }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
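&lt;p&gt;For example, the logged GraphQL request above can be rebuilt in a small script and replayed with fetch or pasted into Postman; the endpoint and “&lt;strong&gt;sc_apikey&lt;/strong&gt;” below are placeholders to be taken from your own logs:&lt;/p&gt;

```typescript
// Rebuilds the logged personalize GraphQL request so it can be replayed
// outside the app. Endpoint and API key are placeholders from your own logs.
const personalizeQuery = `
  query($siteName: String!, $language: String!, $itemPath: String!) {
    layout(site: $siteName, routePath: $itemPath, language: $language) {
      item { id version personalization { variantIds } }
    }
  }`;

function buildPersonalizeRequest(endpoint: string, apiKey: string) {
  return {
    url: endpoint,
    method: 'POST' as const,
    headers: { 'content-type': 'application/json', sc_apikey: apiKey },
    body: JSON.stringify({
      query: personalizeQuery,
      variables: { siteName: 'Test', itemPath: '/', language: 'en' },
    }),
  };
}

const req = buildPersonalizeRequest('http://cm/sitecore/api/graph/edge', '...');
console.log(JSON.parse(req.body).variables.siteName); // "Test"
```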



&lt;p&gt;&lt;strong&gt;Personalize API request:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sitecore-jss:personalize request: {   
url: 'https://api-engage-eu.sitecorecloud.io/v2/callFlows',    
headers: { content-type: 'application/json’, user-agent: ‘...’ },   timeout: 10000,   
method: 'POST',   
body: '{"clientKey":“...","pointOfSale":"default","channel":"WEB", "browserId":“...","friendlyId":"embedded_77735a5b0ac9441caa5f00428e47500_en", "params":{"referrer":"about:client","utm":{"campaign":“test_campaign", "content":null,"medium":null,"source":null}}}'
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Successful response
&lt;/h2&gt;

&lt;p&gt;When debug-level logging is enabled, successful responses from GraphQL and Personalize APIs will be saved too. Examples of successful responses and their data format are shown below. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GraphQL API response:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sitecore-jss:personalize response: {    
layout: {       
   item: {          
      id: '77735A5b0AC9441CAA5f00428E47500',          
      version: 1,          
      personalization: {             
         variantIds: [ '25026aae743c4de2a5f54effc47f5a5c' ]    
      }       
   }    
}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Personalize API response:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sitecore-jss:personalize response: {    
status: 200,    
statusText: 'OK',    
headers: { ... },    
url: 'https://api-engage-eu.sitecorecloud.io/v2/callFlows',    
redirected: false,    
data: { variantId: '25026aae743c4de2a5f54effc47f5a5c' }  
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, if you see the following "&lt;strong&gt;Personalize middleware end&lt;/strong&gt;" message, congratulations, your website returned some personalised content! &lt;/p&gt;

&lt;p&gt;You can use the logged variant ID to make sure that correct audience was selected and the expected variant is displayed on the website:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sitecore-jss:personalize personalize middleware end: {    
rewritePath: '/_variantId_0dd7b00680be49c6815ca4d0793a36da/_site_Test/About',    
browserId: '53bda419-2228-4a4b-a406-5e08a453f9e9',    
headers: {    
   set-cookie: 'BID_{CDP_client_key}=53bda419-2228-4a4b-a406-5e08a453f9e9;
   ...,    
   x-middleware-cache: 'no-cache',    
   x-middleware-rewrite: 'http://localhost:3000/_variantId_0dd7b00680be49c6815ca4d0793a36da/_site_Test/About',    
   x-sc-rewrite: '/_variantId_0dd7b00680be49c6815ca4d0793a36da/_site_Test/About' 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hopefully, this guide helped you to understand the common issues in XM Cloud tracking and personalisation, and you found ideas on how to approach and resolve them. &lt;/p&gt;

&lt;p&gt;Please do let me know if you want to learn more about how XM Cloud personalisation works inside out and I will cover it in my next post!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
