<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bella Xiang</title>
    <description>The latest articles on DEV Community by Bella Xiang (@bellaxiang).</description>
    <link>https://dev.to/bellaxiang</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F250133%2F1a8b027d-d7eb-4bd3-b274-0053966e5067.jpg</url>
      <title>DEV Community: Bella Xiang</title>
      <link>https://dev.to/bellaxiang</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bellaxiang"/>
    <language>en</language>
    <item>
      <title>Top 5 Open Source AI Image Generators: Free Tools for Creative Projects</title>
      <dc:creator>Bella Xiang</dc:creator>
      <pubDate>Fri, 02 Aug 2024 08:46:54 +0000</pubDate>
      <link>https://dev.to/bellaxiang/top-5-open-source-ai-image-generators-free-tools-for-creative-projects-iln</link>
      <guid>https://dev.to/bellaxiang/top-5-open-source-ai-image-generators-free-tools-for-creative-projects-iln</guid>
      <description>&lt;p&gt;AI technology has been rapidly transforming the landscape of creative industries. According to a recent survey, &lt;a href="https://www.forbes.com/advisor/business/software/ai-in-business/" rel="noopener noreferrer"&gt;over 50% of respondents believe&lt;/a&gt; that AI can significantly enhance written content by improving quality, creativity, and efficiency. This indicates the immense potential AI-driven solutions hold for various content creation contexts. In the advertising and marketing sector alone, 37% of professionals have already integrated AI into their workflows, showcasing a growing trend towards AI adoption in creative fields.&lt;/p&gt;

&lt;p&gt;Businesses embracing AI can expect a substantial revenue increase ranging from 6% to 10%. Moreover, creatives are increasingly recognizing the pivotal role of AI in idea generation processes. While traditional methods still dominate the final outcome phase of projects, there is a clear shift towards leveraging AI for innovative ideation.&lt;/p&gt;

&lt;p&gt;Open-source tools play a crucial role in democratizing access to advanced technologies like AI image generators. With 83% of creatives having already adopted AI into their practices, the significance of &lt;a href="https://en.wikipedia.org/wiki/Open-source_software" rel="noopener noreferrer"&gt;open-source platforms&lt;/a&gt; cannot be overstated. These tools not only foster creativity but also encourage collaboration and innovation within the creative community.&lt;/p&gt;

&lt;p&gt;In this blog, I will delve into five top open-source AI image generators that are revolutionizing creative projects. From exploring their unique features to understanding how they empower artists and creators, this journey will provide valuable insights into the world of AI-generated imagery.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. &lt;a href="https://stability.ai/" rel="noopener noreferrer"&gt;Stable Diffusion&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why Stable Diffusion Stands Out
&lt;/h3&gt;

&lt;p&gt;When it comes to AI image generation, &lt;strong&gt;Stable Diffusion&lt;/strong&gt; emerges as a standout tool in the creative realm. Its &lt;a href="https://zapier.com/blog/best-ai-image-generator/" rel="noopener noreferrer"&gt;open-source nature empowers users&lt;/a&gt; with technical acumen to seamlessly download and operate it locally. This accessibility opens doors for a diverse range of applications, from crafting artistic portraits to generating historical renditions and architectural visualizations.&lt;/p&gt;

&lt;p&gt;The flexibility offered by &lt;strong&gt;Stable Diffusion&lt;/strong&gt; is truly a game-changer. Users have the freedom to tailor their models according to specific requirements, enabling them to unleash their creativity without constraints. This adaptability sets &lt;strong&gt;Stable Diffusion&lt;/strong&gt; apart, making it a preferred choice for those seeking customized solutions in image generation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Started with Stable Diffusion
&lt;/h3&gt;

&lt;p&gt;Embarking on your journey with &lt;strong&gt;Stable Diffusion&lt;/strong&gt; requires understanding its basic requirements and setup process. To begin, ensure you have the necessary technical skills to navigate through the installation procedure smoothly. Whether you are an experienced user or a novice exploring AI tools, &lt;strong&gt;Stable Diffusion&lt;/strong&gt; offers an intuitive interface that caters to varying levels of expertise.&lt;/p&gt;

&lt;p&gt;To kickstart your experience with &lt;strong&gt;Stable Diffusion&lt;/strong&gt;, consider leveraging its API and Clipdrop service for enhanced functionality. These additional features complement the core capabilities of &lt;strong&gt;Stable Diffusion&lt;/strong&gt;, providing users with a comprehensive toolkit for creating visually stunning outputs.&lt;/p&gt;
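
&lt;p&gt;For intuition, the forward-diffusion idea that underlies Stable Diffusion can be sketched in a few lines of plain Python. This is a toy illustration only: real pipelines operate on image latents with a learned denoising network, and the schedule values below are illustrative, not the model's actual configuration.&lt;/p&gt;

```python
import math
import random

# Toy illustration of forward diffusion: a clean signal is gradually mixed
# with Gaussian noise over T timesteps, and the model is trained to reverse
# this process. Conceptual sketch only -- not the real Stable Diffusion code.

def noise_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; returns cumulative signal-retention factors."""
    alphas_cum = []
    prod = 1.0
    for t in range(T):
        beta = beta_start + (beta_end - beta_start) * t / (T - 1)
        prod *= (1.0 - beta)
        alphas_cum.append(prod)
    return alphas_cum

def diffuse(x0, t, alphas_cum, rng=random):
    """Sample x_t = sqrt(a_t) * x0 + sqrt(1 - a_t) * noise."""
    a = alphas_cum[t]
    return [math.sqrt(a) * v + math.sqrt(1 - a) * rng.gauss(0, 1) for v in x0]

alphas = noise_schedule(T=1000)
x0 = [1.0] * 8                    # a tiny "image" of constant pixels
late = diffuse(x0, 999, alphas)   # by the last step the signal is mostly noise
print(alphas[0], alphas[999])
```

&lt;p&gt;By the final timestep the cumulative retention factor is close to zero, meaning almost pure noise remains; image generation runs this process in reverse, denoising step by step from random noise toward an image matching the prompt.&lt;/p&gt;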

&lt;h2&gt;
  
  
  2. &lt;a href="https://pixart-alpha.github.io/" rel="noopener noreferrer"&gt;PixArt-Alpha&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Unleashing Creativity with PixArt-Alpha
&lt;/h3&gt;

&lt;p&gt;PixArt-Alpha stands at the forefront of innovation in the realm of AI-driven image generation. This groundbreaking Transformer-based Text-to-Image (T2I) diffusion model is poised to redefine the standards of photorealistic synthesis. The fusion of cutting-edge technology and artistic expression culminates in a tool that transcends traditional boundaries, offering creators a canvas where imagination knows no limits.&lt;/p&gt;

&lt;h4&gt;
  
  
  Key Features and Capabilities
&lt;/h4&gt;

&lt;p&gt;PixArt-Alpha's prowess lies in its ability to transform textual prompts into visually striking images with remarkable realism. By harnessing the power of advanced algorithms, this tool empowers users to bring their ideas to life seamlessly. The model's adaptability ensures that even intricate details are captured faithfully, resulting in outputs that captivate and inspire.&lt;/p&gt;

&lt;p&gt;Incorporating PixArt-Alpha into your creative workflow unlocks a world of possibilities. From generating lifelike landscapes to crafting fantastical creatures, the versatility of this tool knows no bounds. Artists and designers alike can explore new horizons, pushing the boundaries of visual storytelling with each stroke of creativity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Examples of Creative Projects
&lt;/h3&gt;

&lt;p&gt;PixArt-Alpha's impact reverberates across various artistic endeavors, showcasing its transformative influence on digital artistry. Renowned artists and enthusiasts alike have embraced this tool to breathe life into their visions. Whether recreating historical masterpieces or envisioning futuristic landscapes, PixArt-Alpha serves as a catalyst for unparalleled creativity.&lt;/p&gt;

&lt;p&gt;One notable testimonial captures the essence of PixArt-Alpha's significance:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Junsong Chen&lt;/strong&gt;, a visionary in the creative industry, emphasizes, "PixArt-α is not just an evolution; it’s a revolution."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This sentiment is echoed by countless users who have experienced firsthand the revolutionary capabilities of PixArt-Alpha in reshaping the landscape of digital art creation.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. &lt;a href="https://bpatrik.github.io/pigallery2/" rel="noopener noreferrer"&gt;PiGallery&lt;/a&gt; - A Self-Hosted AI Image Generator
&lt;/h2&gt;

&lt;p&gt;In the realm of &lt;a href="https://en.wikipedia.org/wiki/Self-hosting_(web_services)" rel="noopener noreferrer"&gt;&lt;strong&gt;self-hosted AI image generators&lt;/strong&gt;&lt;/a&gt;, &lt;strong&gt;PiGallery&lt;/strong&gt; emerges as a versatile tool catering to the needs of creators seeking autonomy and control over their image generation processes. The decision to opt for a self-hosted solution stems from the desire for enhanced privacy, customization options, and seamless integration within existing workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Choose a Self-Hosted AI Image Generator
&lt;/h3&gt;

&lt;p&gt;The allure of a &lt;strong&gt;self-hosted AI image generator&lt;/strong&gt; lies in the myriad benefits it offers to users. By hosting the tool locally, creators can safeguard sensitive data and intellectual property while ensuring compliance with data privacy regulations. Additionally, self-hosting grants individuals the freedom to customize settings, plugins, and themes according to their preferences, fostering a tailored user experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Exploring PiGallery's Features
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;PiGallery&lt;/strong&gt; distinguishes itself through its advanced features that streamline image analysis and organization. Leveraging &lt;a href="https://en.wikipedia.org/wiki/Machine_learning" rel="noopener noreferrer"&gt;machine learning capabilities&lt;/a&gt;, &lt;strong&gt;PiGallery2&lt;/strong&gt; can &lt;a href="https://bpatrik.github.io/pigallery2/" rel="noopener noreferrer"&gt;extract face regions&lt;/a&gt; from photo metadata, enabling users to categorize and search images based on facial recognition. Moreover, its geo-tagging functionality facilitates geographical sorting of images, enhancing accessibility and navigation within extensive image libraries.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Face region extraction for efficient categorization&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Geo-tagging for geographical image organization&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
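
&lt;p&gt;A rough sketch of the kind of metadata processing described above: extracting face regions from XMP-style metadata and bucketing photos by GPS coordinates. The metadata layout and function names here are hypothetical, chosen for illustration rather than taken from PiGallery2's actual schema.&lt;/p&gt;

```python
# Illustrative sketch of gallery-style metadata processing. The dict shapes
# below are made up for this example and are not PiGallery2's real schema.

def extract_faces(metadata):
    """Return (name, region) pairs from an XMP-like region list."""
    faces = []
    for region in metadata.get("regions", []):
        if region.get("type") == "Face":
            faces.append((region["name"], region["area"]))
    return faces

def bucket_by_location(photos, precision=1):
    """Group photos by rounded (lat, lon) so nearby shots cluster together."""
    buckets = {}
    for photo in photos:
        lat, lon = photo["gps"]
        key = (round(lat, precision), round(lon, precision))
        buckets.setdefault(key, []).append(photo["file"])
    return buckets

meta = {"regions": [{"type": "Face", "name": "Alice",
                     "area": {"x": 0.4, "y": 0.3, "w": 0.1, "h": 0.15}}]}
photos = [{"file": "a.jpg", "gps": (47.61, -122.33)},
          {"file": "b.jpg", "gps": (47.62, -122.35)}]
print(extract_faces(meta))
print(bucket_by_location(photos))
```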

&lt;h2&gt;
  
  
  4. DreamStudio by Stability AI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  User-Friendly Interface for Creatives
&lt;/h3&gt;

&lt;p&gt;DreamStudio, developed by Stability AI, stands out as a &lt;a href="https://www.zdnet.com/article/best-ai-image-generator/" rel="noopener noreferrer"&gt;customizable AI image generator&lt;/a&gt; tailored to meet the diverse needs of users and developers. What sets DreamStudio apart is its intuitive user interface that simplifies the image creation process. With fields dedicated to size adjustments, style preferences, negative prompts, and image inputs, users can fine-tune their settings to achieve the precise visual rendition they envision.&lt;/p&gt;

&lt;h4&gt;
  
  
  Ease of Use and Image Generation Options
&lt;/h4&gt;

&lt;p&gt;Navigating through DreamStudio's interface is a seamless experience, thanks to its &lt;a href="https://builtin.com/artificial-intelligence/dreamstudio" rel="noopener noreferrer"&gt;user-centric design&lt;/a&gt;. The platform offers a range of customization options that empower users to explore various creative avenues effortlessly. Whether adjusting parameters for color schemes, textures, or thematic elements, DreamStudio provides a comprehensive toolkit for crafting unique and captivating visuals.&lt;/p&gt;

&lt;h3&gt;
  
  
  DreamStudio in Action
&lt;/h3&gt;

&lt;p&gt;Real-world applications of DreamStudio underscore its versatility and impact across different industries. From digital art enthusiasts seeking innovative tools to professionals in need of quick yet high-quality image generation solutions, DreamStudio caters to a broad spectrum of users. The tool's adaptability allows individuals to delve deep into AI technology, experiment with diverse prompts, and witness firsthand the transformative capabilities of &lt;a href="https://en.wikipedia.org/wiki/Generative_adversarial_network" rel="noopener noreferrer"&gt;generative AI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Incorporating feedback from users and developers alike, Stability AI continues to &lt;a href="https://stability.ai/news/stablestudio-open-source-community-driven-future-dreamstudio-release" rel="noopener noreferrer"&gt;refine DreamStudio's features&lt;/a&gt; and functionalities. By prioritizing user experience and innovation, Stability AI reinforces its commitment to driving advancements in open-source AI development. As more creators embrace the possibilities offered by DreamStudio, the tool solidifies its position as a go-to resource for those venturing into the realm of AI-driven image generation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Seamless customization options&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Versatile applications across industries&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. &lt;a href="https://github.com/lllyasviel/Fooocus" rel="noopener noreferrer"&gt;Fooocus&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Power of Offline Functionality
&lt;/h3&gt;

&lt;p&gt;In the realm of AI image generation tools, &lt;strong&gt;Fooocus&lt;/strong&gt; shines brightly with its unique offline functionality. The significance of offline capabilities cannot be overstated, especially for users in scenarios where internet connectivity may be limited or unreliable. &lt;strong&gt;Fooocus&lt;/strong&gt; offers a seamless experience by allowing users to create stunning images without being tethered to continuous online access. This feature not only enhances convenience but also ensures uninterrupted creativity, empowering users to unleash their artistic vision anytime, anywhere.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why Offline Capabilities Matter
&lt;/h4&gt;

&lt;p&gt;The ability to operate &lt;strong&gt;Fooocus&lt;/strong&gt; offline addresses a crucial need for creators who value independence and flexibility in their workflow. By eliminating dependency on constant internet connectivity, &lt;strong&gt;Fooocus&lt;/strong&gt; provides a reliable solution for generating images on the go. Whether you are traveling, working in remote locations, or simply prefer working offline, this feature caters to diverse user preferences and requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting the Most Out of Fooocus
&lt;/h3&gt;

&lt;p&gt;To maximize your experience with &lt;strong&gt;Fooocus&lt;/strong&gt; and achieve realistic photo generation results, consider implementing the following tips:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Utilize simple prompts: Opt for clear and concise textual inputs to guide image creation effectively.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Experiment with styles: Explore different artistic styles and themes within &lt;strong&gt;Fooocus&lt;/strong&gt; to diversify your creative outputs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fine-tune settings: Adjust parameters such as color schemes, textures, and resolutions to refine the details of your generated images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Embrace iteration: Iteratively refine your prompts and settings to discover new possibilities and enhance the quality of your creations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
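
&lt;p&gt;The prompt-crafting tips above can be sketched as a small helper. The function names and style strings are made up for this example; Fooocus ships its own style-preset system.&lt;/p&gt;

```python
# Small illustrative helper for the prompt-crafting tips above: keep the
# subject simple, then iterate over style variations. Names are invented
# for this sketch and are not part of the Fooocus API.

def build_prompt(subject, styles=(), details=()):
    """Compose a clear, comma-separated prompt from simple parts."""
    parts = [subject.strip()]
    parts.extend(s.strip() for s in styles)
    parts.extend(d.strip() for d in details)
    return ", ".join(p for p in parts if p)

def iterate_prompts(subject, style_sets):
    """Yield one prompt per style set, supporting quick experimentation."""
    for styles in style_sets:
        yield build_prompt(subject, styles)

base = build_prompt("a lighthouse at dusk", ["photorealistic"], ["soft light"])
variants = list(iterate_prompts("a lighthouse at dusk",
                                [["watercolor"], ["cinematic", "35mm film"]]))
print(base)
print(variants)
```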

&lt;p&gt;By incorporating these strategies into your workflow, you can harness the full potential of &lt;strong&gt;Fooocus&lt;/strong&gt; and elevate your image generation endeavors to new heights. Whether you are a beginner seeking intuitive tools or an experienced creator looking for innovative features, &lt;strong&gt;Fooocus&lt;/strong&gt; offers a versatile platform for unleashing your imagination with ease.&lt;/p&gt;

&lt;p&gt;With its user-friendly interface, &lt;a href="https://education.civitai.com/generative-ai-art-with-fooocus-quickstart-guide/" rel="noopener noreferrer"&gt;low system requirements&lt;/a&gt;, and focus on simplicity without compromising quality, &lt;strong&gt;Fooocus&lt;/strong&gt; stands as a beacon of accessibility in the realm of AI image generators. Experience the beauty of effortless image creation today with &lt;strong&gt;Fooocus&lt;/strong&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  MyScaleDB: Powering the Next Generation of AI Image Generators
&lt;/h2&gt;

&lt;p&gt;While this article focuses on open-source AI image generators, it's worth considering the role a robust vector database plays in building such tools; one powerful option is to integrate MyScaleDB as the underlying vector database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://myscale.com/product/" rel="noopener noreferrer"&gt;MyScaleDB&lt;/a&gt;'s strengths lie in its ability to efficiently handle the high-dimensional vector data that is integral to AI image generation models. Built on ClickHouse, MyScale excels in storing the structured and unstructured data in a unified system. By leveraging MyScaleDB as the data storage and retrieval engine, developers can take advantage of its advanced MSTG indexing and SQL vector joint querying capabilities to achieve lightning-fast performance, even with massive datasets.&lt;/p&gt;

&lt;p&gt;Moreover, MyScaleDB's SQL-based interface provides a familiar and accessible means of interacting with the underlying data, enabling developers to leverage their existing SQL knowledge and tools. This can significantly simplify the integration process and reduce the learning curve for those transitioning to AI-powered image generation projects. In addition, with MyScaleDB's support for text-to-SQL and self-querying, users of AI image tools can interact with the database directly in natural language, which benefits the user experience.&lt;/p&gt;
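
&lt;p&gt;As a sketch of what the joint SQL-vector query style looks like in practice, the snippet below builds a filtered vector-search statement as a string. The table and column names are hypothetical, and the exact query syntax should be checked against MyScaleDB's documentation; the point is that vector similarity and metadata filters compose as ordinary SQL.&lt;/p&gt;

```python
# Sketch of a joint SQL-vector query. Table/column names are hypothetical;
# the distance() ORDER BY pattern mirrors the SQL vector-search style
# described above, with metadata filters as plain WHERE clauses.

def vector_search_sql(table, vec_col, query_vec, top_k=10, where=None):
    """Build a filtered vector-search query as a SQL string."""
    vec_literal = "[" + ", ".join(f"{v:.4f}" for v in query_vec) + "]"
    filter_clause = f"WHERE {where} " if where else ""
    return (
        f"SELECT id, distance({vec_col}, {vec_literal}) AS dist "
        f"FROM {table} {filter_clause}"
        f"ORDER BY dist ASC LIMIT {top_k}"
    )

sql = vector_search_sql(
    "images", "embedding", [0.12, 0.5, 0.33],
    top_k=5, where="style = 'portrait'")
print(sql)
```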

&lt;p&gt;As the demand for AI-powered image generation continues to grow, solutions like MyScaleDB will play an increasingly pivotal role in enabling developers to build cutting-edge applications that push the boundaries of what's possible. If you are interested in creating an AI image generator, MyScale offers free storage for 5 million vectors in its development pod, with which you can experience all the advanced features. You are welcome to join our &lt;a href="https://discord.gg/D2qpkqc4Jq" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; to discuss your ideas with us.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>tools</category>
    </item>
    <item>
      <title>MyScaleDB, the Revolutionary SQL Vector Database, Goes Open-Source</title>
      <dc:creator>Bella Xiang</dc:creator>
      <pubDate>Wed, 03 Apr 2024 10:16:02 +0000</pubDate>
      <link>https://dev.to/bellaxiang/myscaledb-the-revolutionary-sql-vector-database-goes-open-source-2i5b</link>
      <guid>https://dev.to/bellaxiang/myscaledb-the-revolutionary-sql-vector-database-goes-open-source-2i5b</guid>
      <description>&lt;p&gt;&lt;a href="https://myscale.com/"&gt;MyScaleDB&lt;/a&gt;, the cutting-edge SQL vector database, is thrilled to announce that it is now &lt;a href="https://github.com/myscale/myscaledb"&gt;open-source&lt;/a&gt; as of March 28. This move marks a significant milestone for the AI developer community because it provides a powerful tool to build and scale AI applications like never before.&lt;/p&gt;

&lt;p&gt;MyScaleDB is a high-performance, scalable, and cost-effective database that harnesses SQL queries to accelerate vector search and processing. Our team of experienced database engineers has worked tirelessly to create a solution that enables every developer to build production-grade GenAI applications with powerful and familiar SQL.&lt;/p&gt;

&lt;p&gt;We believe that the open-sourcing of MyScaleDB gives developers the keys to unlock the full potential to handle the complexities of today’s ever-changing AI world. You’ll have the freedom to customize and enhance the database to suit your specific needs, whether you are building an AI chatbot, a recommendation system, a natural language processing application, or any other Generative AI product and solutions.&lt;/p&gt;

&lt;p&gt;Here are some of the key benefits of using MyScaleDB in your AI projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fully SQL-Compatible: 

&lt;ul&gt;
&lt;li&gt;Fast, powerful, and efficient vector search, filtered search, and SQL-vector join queries&lt;/li&gt;
&lt;li&gt;Use SQL with vector-related functions to interact with MyScaleDB. No need to learn complex new tools or frameworks – stick with what you know and love.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Production-Ready for AI applications:

&lt;ul&gt;
&lt;li&gt;A single platform to manage and process structured data, text, vector, JSON, geospatial, time-series data, and more.&lt;/li&gt;
&lt;li&gt;Improved RAG accuracy by combining vectors with rich metadata and performing high-precision, high-efficiency filtered search at any ratio.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Unmatched performance and scalability:

&lt;ul&gt;
&lt;li&gt;MyScaleDB leverages cutting-edge OLAP database architecture and advanced vector algorithms for lightning-fast vector operations.&lt;/li&gt;
&lt;li&gt;Scale your applications effortlessly and cost-effectively as your data grows.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Linpeng Tang, CTO of MyScale&lt;/strong&gt;: "We're thrilled to put the power of MyScaleDB into the hands of developers worldwide. By open-sourcing our technology, we aim to foster innovation and collaboration within the AI developer community, ultimately leading to groundbreaking solutions in AI data management and analytics.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wen Dai, General Manager of Solutions Architecture, Greater China, AWS&lt;/strong&gt;: "Vector data processing is a critical part for LLM infrastructure, while SQL can provide significant scalability and convenience to application developers. MyScale has made notable contributions in this area. With its open source availability, developers will have options to leverage the value of structured data to work with different LLMs for diversified use cases, for better performance, lower cost, and faster innovation paces.”&lt;/p&gt;

&lt;p&gt;Ready to start building with MyScaleDB? Head over to our &lt;a href="https://github.com/myscale/myscaledb"&gt;GitHub&lt;/a&gt; repository and dive in! We can't wait to see what you'll create.&lt;/p&gt;

&lt;p&gt;We're committed to continuously improving and evolving MyScaleDB to meet the ever-changing needs of the AI industry. Join us on this exciting journey and participate in the AI data management revolution!&lt;/p&gt;

&lt;p&gt;For the latest updates and to connect with fellow MyScaleDB developers, follow us on &lt;a href="https://twitter.com/MyScaleDB"&gt;Twitter&lt;/a&gt; or join our &lt;a href="https://discord.gg/x4g5FKtJ6E"&gt;Discord&lt;/a&gt; community. Let's build the future of AI together!&lt;/p&gt;

</description>
      <category>vectordatabase</category>
      <category>sql</category>
      <category>opensource</category>
      <category>ai</category>
    </item>
    <item>
      <title>Chaos Engineering: Efficient Way to Improve System Availability</title>
      <dc:creator>Bella Xiang</dc:creator>
      <pubDate>Mon, 19 Jun 2023 06:03:44 +0000</pubDate>
      <link>https://dev.to/bellaxiang/chaos-engineering-efficient-way-to-improve-system-availability-2m5k</link>
      <guid>https://dev.to/bellaxiang/chaos-engineering-efficient-way-to-improve-system-availability-2m5k</guid>
      <description>&lt;p&gt;&lt;strong&gt;Author: &lt;a href="https://github.com/moomman"&gt;Zhao Jinyang&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Resilience is a crucial requirement for ShardingSphere-Proxy, an essential database infrastructure. Testing and verifying resilience can be efficiently achieved through the use of chaos engineering methodology. To support customized chaos engineering, the &lt;a href="https://shardingsphere.apache.org/oncloud/"&gt;ShardingSphere-on-Cloud&lt;/a&gt; project is designing and implementing a new &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;CustomResourceDefinition&lt;/a&gt; (CRD) called Chaos. This post provides a practical description of CRD’s concept and features, helping the community better understand its potential benefits.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is chaos engineering
&lt;/h1&gt;

&lt;p&gt;System availability is a critical metric for evaluating service reliability. Numerous methods can help ensure high availability, including resilience engineering and related techniques. One such technique is chaos engineering, which involves introducing software faults into production systems to enhance availability.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://principlesofchaos.org/"&gt;Principles of Chaos&lt;/a&gt; (2019), the definition of chaos engineering is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Chaos Engineering is the discipline of experimenting on a system to build confidence in the system’s capability to withstand turbulent conditions in production.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In other words, chaos engineering is a practice that aims to enhance system robustness by detecting potential weaknesses in software systems early, ultimately preventing major disruptions or failures.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why is chaos engineering needed
&lt;/h1&gt;

&lt;p&gt;A system's complexity can be characterized as linear or nonlinear, reflecting how changes in the system's input affect its output.&lt;/p&gt;

&lt;p&gt;A linear system is typically predictable. There are many examples of linear systems in nature, such as simple mathematical functions and physical definitions.&lt;/p&gt;

&lt;p&gt;In contrast, the output of a nonlinear system cannot be accurately calculated. In a large distributed program, components interact with each other, and we cannot determine if expected output can be achieved under various inputs.&lt;/p&gt;

&lt;p&gt;Most programs today are increasingly complex. In common cloud environments, coordinating the various components (Kubernetes, along with the services running on it, such as Istio, Envoy, and other software infrastructure) is becoming more challenging.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjdha2kh28dctbdlzume.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjdha2kh28dctbdlzume.png" alt="Image description" width="514" height="733"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The complexity and rapid changes inherent to many systems often leave developers with a narrow understanding of the overall picture. For example, the developers behind a shopping-mall system may not be familiar with the technical details of the infrastructure they have adopted. As complexity increases, any single person's understanding of the model built by the system becomes less accurate. Hence, gaining a complete comprehension of a complex system is not realistic.&lt;/p&gt;

&lt;p&gt;Chaos is inherent and describes an unknown state in complex systems. &lt;strong&gt;Chaos engineering is used to discover chaos in complex systems, learn the behavior of the system, and develop the ability to respond to failures and restore the system to a steady state.&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  The guidelines and practical ways of chaos engineering
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Formulate a hypothesis about steady-state
&lt;/h2&gt;

&lt;p&gt;Every experiment begins with a hypothesis, often taking the form of “even in XYZ circumstances, the system remains in a steady state.” This principle emphasizes the establishment of hypotheses based on defining steady states. Therefore, we should define various indicators of the system’s normal state based on long-term monitoring of the production environment and focus on measurable outputs, rather than internal properties of the system.&lt;/p&gt;

&lt;p&gt;When identifying a steady state, it&amp;#8217;s often essential to consider the global outputs of the system, such as running logs, performance logs, alerts, and program behavior, and abstract them into steady-state conditions. Once experimental variables (faults) have been introduced, these steady-state conditions should change as expected.&lt;/p&gt;

&lt;p&gt;When the system is in the steady state we defined, we should consider that the system can provide services normally to the outside world. In addition, monitoring the steady state is also important so that the system can recover to the steady state in a short period of time.&lt;/p&gt;
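
&lt;p&gt;The principle above can be sketched as a minimal experiment harness: define the steady state over measurable outputs, inject a fault, and check that the system returns to steady state. The service here is a stand-in object for illustration, not a real deployment or chaos tool.&lt;/p&gt;

```python
# Minimal sketch of a chaos experiment driven by a steady-state hypothesis.
# The Service class simulates a deployment; real experiments would probe
# production metrics such as error rates, latency, and alerts.

class Service:
    def __init__(self):
        self.error_rate = 0.0
    def probe(self):
        return {"error_rate": self.error_rate}
    def inject_fault(self):
        self.error_rate = 0.5      # simulated degradation
    def recover(self):
        self.error_rate = 0.0      # stand-in for self-healing

def steady(metrics, max_error_rate=0.01):
    """Steady-state hypothesis expressed over measurable outputs."""
    return metrics["error_rate"] <= max_error_rate

def run_experiment(service):
    assert steady(service.probe()), "system not steady before experiment"
    service.inject_fault()
    degraded = not steady(service.probe())   # the fault should be observable
    service.recover()
    recovered = steady(service.probe())      # system should return to steady state
    return {"degraded": degraded, "recovered": recovered}

print(run_experiment(Service()))
```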

&lt;h2&gt;
  
  
  Introducing diverse real-world events
&lt;/h2&gt;

&lt;p&gt;We ought to introduce real events that we care about, such as reproducing faults that have occurred in the production environment: cache avalanche, service degradation, and so on.&lt;/p&gt;

&lt;p&gt;There is no need to introduce every behavior that leads to the same fault symptoms, such as occupying all the memory, CPU, or disk of a service instance, or &amp;#8216;killing&amp;#8217; the instance, all of which make the system respond to requests badly. Testing should focus on the system&amp;#8217;s behavior after a fault occurs, rather than on how to trigger the fault.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiments in the production environment
&lt;/h2&gt;

&lt;p&gt;When conducting experiments, we can learn about the relevant behaviors of the system and establish confidence in the system. If we conduct experiments in a test environment, we can only establish confidence in that specific test environment. If there are differences between the production environment and the test environment, we cannot establish confidence in the production environment.&lt;/p&gt;

&lt;p&gt;This is because a complex system is a whole: environmental differences between the testing and production environments can render test-environment experiments meaningless, causing a &amp;#8220;&lt;a href="https://zh.wikipedia.org/wiki/%E9%95%BF%E9%9E%AD%E6%95%88%E5%BA%94"&gt;Bullwhip effect&lt;/a&gt;&amp;#8221;. However, conducting experiments in the production environment may affect users of the system and cause losses. We need to make trade-offs in the production environment, letting the experimental tools mature in a quasi-production environment before routing a small portion of production traffic through the experiments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automate experiments
&lt;/h2&gt;

&lt;p&gt;When massive experiment sets must be tested, automating the process is more efficient than manually setting up experiment environments, introducing faults, and gathering results. Automated experiments save time, run continuously, and can cover a larger number of experiment sets.&lt;/p&gt;

&lt;p&gt;Experiments also need to be repeated: hypotheses are not always true, and they can expire as the software iterates, so regression experiments should be conducted periodically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Minimize the blast radius
&lt;/h2&gt;

&lt;p&gt;Safe experiment methods can reduce the risk to the production environment, such as using traffic shadowing or selecting a suitable time period. Measuring an indicator in a small variable group against a small control group keeps the result meaningful while limiting the blast radius.&lt;/p&gt;

&lt;h1&gt;
  
  
  Chaos maturity model
&lt;/h1&gt;

&lt;p&gt;The chaos maturity model provides a &lt;a href="https://www.oreilly.com/content/chaos-engineering/#cmm_map_image"&gt;model map&lt;/a&gt; that measures different types of chaos engineering practice based on their position along two axes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2dnwgeg741omh4anp3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2dnwgeg741omh4anp3v.png" alt="Image description" width="800" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are two axes on the map, adoption on the X-axis and sophistication on the Y-axis, which can be explored separately:&lt;/p&gt;

&lt;h2&gt;
  
  
  Adoption
&lt;/h2&gt;

&lt;p&gt;As chaos engineering matures within an organization, it can reach a level where robustness validation alone significantly affects the compliance process. Initial adoption, however, generally starts from scratch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sophistication
&lt;/h2&gt;

&lt;p&gt;Sophistication can be measured along different dimensions: providing consultation services and providing a set of tools. Given the diversity of software infrastructure, no single tool can abstract sophisticated chaos experiments across all environments and apply them in practice. Thus, &lt;strong&gt;chaos engineering practice began with massive labor input, from which customized solutions were gradually developed&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Another way to understand sophistication is to consider at which system level the experiment variables are introduced. Experiments typically start at the infrastructure level: during the initial stages, the common approach is to kill pods or virtual machines. As the tools become more sophisticated, chaos injection logic can be introduced into the target system itself, impacting the requests between services.&lt;/p&gt;

&lt;p&gt;Additionally, when experimental variables affect business logic, we can observe more complex experiments. For instance, returning a feasible but unexpected response to a service request can lead programs down different code paths. Experiments thus progress from the infrastructure layer to the application layer, and then to the business logic layer. Fine-grained experiments, such as those that tend to trigger latent faults in the business logic layer, should be prioritized.&lt;/p&gt;
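&lt;p&gt;As a rough illustration of a business-logic-level variable, the sketch below wraps a toy service handler and, at a configurable rate, substitutes a well-formed but unexpected response (all names and the response shape are hypothetical):&lt;/p&gt;

```python
import random

def inject_response_fault(handler, rate=0.2, rng=random.Random(42)):
    """Wrap a service handler; with probability `rate`, return a
    well-formed but unexpected response instead of the real one."""
    def wrapped(request):
        if rng.random() < rate:
            # Feasible shape, surprising content: an empty result set.
            return {"status": "ok", "items": []}
        return handler(request)
    return wrapped

# A toy downstream service the experiment targets.
def list_orders(request):
    return {"status": "ok", "items": [{"order_id": 1}]}

chaotic = inject_response_fault(list_orders, rate=1.0)  # always inject for demo
print(chaotic({"user_id": 1}))  # {'status': 'ok', 'items': []}
```

Because the injected response is syntactically valid, only a caller whose business logic anticipates it will survive the experiment.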

&lt;h1&gt;
  
  
  Continuous verification
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;“&lt;a href="https://www.oreilly.com/library/view/chaos-engineering/9781492043850/"&gt;Continuous verification (CV) is a discipline of proactive experimentation, implemented as tooling that verifies system behaviors.&lt;/a&gt;” — Casey Rosenthal&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Continuous verification tools are a prime example of sophistication in the chaos maturity model. CV, like CI/CD, arose from the need to operate increasingly complex systems. Due to resource constraints, system developers cannot afford to verify every internal implementation detail, and must instead focus on validating that the system’s output meets expectations. This preference for validation over verification is why CV works, and its adoption is a sign of successfully managing a complex system.&lt;/p&gt;

&lt;p&gt;There are at least three types of continuous verification: feature testing, data artifacts, and correctness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feature testing&lt;/strong&gt;: verification is based on various performance indicators (concurrency, latency deviation, execution speed, etc.) and on observation of actual production traffic, from which test reports are established and acted upon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data artifacts&lt;/strong&gt;: databases and storage applications have various requirements for the characteristics of writing and retrieving data, such as transactional consistency, idempotence, and data isolation levels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correctness&lt;/strong&gt;: not all correctness manifests as a certain state or ideal attribute. In some cases, the interaction between different components must be governed by interface contracts or agreements. When an interface request returns a seemingly correct result that falls outside the caller’s judgement logic, unexpected errors may occur. The root cause of such issues is code that is consistent within each layer at the logical level but inconsistent between layers.&lt;/p&gt;
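&lt;p&gt;A tiny, hypothetical illustration of such a cross-layer inconsistency: the callee’s contract allows a “not found” result, while the caller’s logic silently assumes it never happens, so a seemingly valid value carries the wrong meaning:&lt;/p&gt;

```python
# Layer A: a lookup service whose contract allows "not found".
def find_user(user_id, db={1: "alice"}):
    return db.get(user_id)  # returns None when absent: still a "correct" result

# Layer B: a caller whose logic implicitly assumes the user always exists.
def greeting(user_id):
    name = find_user(user_id)
    return f"hello, {name}"  # never anticipated name being None

print(greeting(2))  # "hello, None": a valid string with the wrong meaning
```

Each layer is internally consistent; the fault only appears in the interaction between them, which is exactly what correctness-oriented CV tries to surface.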

&lt;h1&gt;
  
  
  Open-source chaos engineering platform
&lt;/h1&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://litmuschaos.io/"&gt;Litmus Chaos&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Litmus Chaos is a chaos engineering platform that provides cross-cloud services. It’s a CNCF open-source project that many organizations have adopted. &lt;a href="https://litmuschaos.io/"&gt;Litmus Chaos&lt;/a&gt;’s mission is to help Kubernetes SREs and developers find weaknesses both in the Kubernetes platform itself and in applications running on Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://chaos-mesh.org/"&gt;Chaos Mesh&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Chaos Mesh is a chaos engineering platform open-sourced by PingCAP. It can orchestrate complex failure scenarios and provides comprehensive fault simulation types, allowing users to simulate the faults that might occur in production and testing environments and to identify potential failures. Chaos Mesh also provides comprehensive visual tools that help beginners conveniently run and monitor their own chaos scenarios. Chaos Mesh is built on Kubernetes CRDs and mainly includes three components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chaos Dashboard: the visualization component of Chaos Mesh. It provides a user-friendly Web UI that allows users to design and monitor chaos experiments and to manage RBAC permissions.&lt;/li&gt;
&lt;li&gt;Chaos Controller Manager: the core logical component of Chaos Mesh, which schedules the Chaos CRs designed by users. The component includes many CRD controllers, such as the PodChaos controller and the Workflow controller.&lt;/li&gt;
&lt;li&gt;Chaos Daemon: the main execution component of Chaos Mesh. Chaos Daemon runs as a DaemonSet and holds privileged access by default (which can be disabled). This component generally interferes with network devices, file systems, and kernels by entering the target Pod’s namespaces.&lt;/li&gt;
&lt;/ul&gt;
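&lt;p&gt;To make the workflow concrete, here is a minimal PodChaos definition of the kind the Chaos Controller Manager schedules and the Chaos Daemon executes (the resource name, namespaces, and labels are illustrative placeholders):&lt;/p&gt;

```yaml
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: pod-kill-demo        # placeholder experiment name
  namespace: chaos-testing
spec:
  action: pod-kill           # kill the selected pod(s)
  mode: one                  # pick one pod at random from the selection
  selector:
    namespaces:
      - app-namespace        # placeholder target namespace
    labelSelectors:
      "app": "web"           # placeholder target label
```

Applying a manifest like this with `kubectl apply -f` starts an infrastructure-level experiment; more sophisticated fault types (network, I/O, kernel) follow the same CRD pattern.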

&lt;h2&gt;
  
  
  &lt;a href="https://chaosblade.io/"&gt;Chaos Blade&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Chaos Blade is a chaos engineering project designed and open-sourced by Alibaba in 2019, which includes the chaos engineering experiment tool Chaos Blade and the platform Chaosblade-box. It helps enterprises solve high-availability issues during their cloud-native transformation through chaos engineering.&lt;/p&gt;

&lt;p&gt;ChaosBlade supports three major platforms and applications written in four programming languages, covers over 200 experiment scenarios, and exposes over 3,000 parameters, allowing fine-grained control of the experiment scope.&lt;/p&gt;

&lt;p&gt;ChaosBlade-Box supports the management of experiment tools: in addition to managing Chaos Blade, it also aggregates experiment tools from other platforms such as LitmusChaos.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;To introduce chaos engineering into a system, we can refer to the chaos maturity model and start with simple inputs. In the case of our community, we can agree upon a date for the developers of the system’s various components to perform a fault test together and record the results, enhancing contributors’ sense of participation and the perceived importance of chaos engineering. We’ll then observe system behavior, define steady states, and design reasonable chaos experiment plans. These experiments can be conducted in pre-production or production environments to discover and learn new behaviors of the system and enhance the community’s ability to handle faults. Afterwards, we can design automated experiments and use regression testing to ensure the correctness of experimental hypotheses. By the way, Chaos is coming in version 0.3.0 soon, so stay tuned!&lt;/p&gt;

&lt;h1&gt;
  
  
  Reference
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://principlesofchaos.org/"&gt;Principles of Chaos Engineering&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://litmuschaos.io/"&gt;LitmusChaos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://chaos-mesh.org/"&gt;Chaos Mesh: A Powerful Chaos Engineering Platform for Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://chaosblade.io/"&gt;Chaos Blade&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://zh.wikipedia.org/wiki/%E9%95%BF%E9%9E%AD%E6%95%88%E5%BA%94"&gt;Bullwhip effect&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.oreilly.com/content/chaos-engineering/#cmm_map_image"&gt;CMM Map&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.oreilly.com/library/view/chaos-engineering/9781492043850/"&gt;Chaos Engineering (Rosenthal, C, Jones,N, 2020)&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>opensource</category>
      <category>database</category>
      <category>shardingsphere</category>
      <category>chaosengineering</category>
    </item>
    <item>
      <title>How to Run SQL Trace with ShardingSphere-Agent</title>
      <dc:creator>Bella Xiang</dc:creator>
      <pubDate>Fri, 09 Jun 2023 05:28:56 +0000</pubDate>
      <link>https://dev.to/bellaxiang/how-to-run-sql-trace-with-shardingsphere-agent-3489</link>
      <guid>https://dev.to/bellaxiang/how-to-run-sql-trace-with-shardingsphere-agent-3489</guid>
      <description>&lt;h4&gt;
  
  
  Author: &lt;a href="https://github.com/jiangML"&gt;Jiang Maolin&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://shardingsphere.apache.org/"&gt;Apache ShardingSphere&lt;/a&gt;, a data service platform that follows the Database Plus concept for distributed database systems, offers a range of features, including data sharding, read/write splitting, data encryption, and shadow database. In production environment, especially in data-sharding scenarios, SQL tracing is critical for monitoring and analyzing slow queries and abnormal executions. Therefore, a thorough understanding of SQL rewriting and query execution is crucial.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is ShardingSphere-Agent
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://shardingsphere.apache.org/document/current/en/reference/observability/"&gt;ShardingSphere-Agent&lt;/a&gt; provides an observable framework for ShardingSphere. It is implemented based on Java Agent technology, using Byte Buddy to modify the target bytecode and weave them into data collection logic. Metrics, tracing and logging functions are integrated into the agent through plugins to obtain observable data of system status. Among them, the tracing plugin is used to obtain the tracing information of SQL parsing and SQL execution, which can help users analyze SQL trace when using &lt;a href="https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/"&gt;Apache ShardingSphere-JDBC&lt;/a&gt; or &lt;a href="https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/"&gt;Apache ShardingSphere-Proxy&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This post will take ShardingSphere-Proxy as an example to explain how to use ShardingSphere-Agent for SQL tracing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two Basic Concepts You Need to Know
&lt;/h2&gt;

&lt;p&gt;Before getting started, here are two important concepts to keep in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Span: the basic unit in a trace. A span is created for each call in the trace and identified by a unique ID. Spans can contain customized information such as descriptions, timestamps, key-value pairs, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Trace: a collection of spans with a tree structure. In ShardingSphere-Proxy, a trace corresponds to the full execution process of a SQL statement.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When running a SQL statement in ShardingSphere-Proxy, it goes through parsing, routing, rewriting, execution, and merging. Currently, tracing has been implemented in two critical steps: parsing and execution — with execution oftentimes being the focus. In the execution stage, Proxy will connect to the physical database to execute the actual SQL. Therefore, the information obtained during this stage provides important evidence for troubleshooting issues and fully reflects the correspondence between logical SQL and actual SQL after rewriting.&lt;/p&gt;

&lt;p&gt;In ShardingSphere-Proxy, a trace consists of three types of spans:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;Span
   &lt;/td&gt;
   &lt;td&gt;Description
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;code&gt;/ShardingSphere/rootInvoke/&lt;/code&gt;
   &lt;/td&gt;
   &lt;td&gt;This span indicates the complete execution of an SQL statement, and you can view the amount of time spent on executing an SQL
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;code&gt;/ShardingSphere/parseSQL/&lt;/code&gt;
   &lt;/td&gt;
   &lt;td&gt;This span indicates the parsing stage of the SQL execution. You can view the parsing time of an SQL and the SQL statements. (It is not available when a &lt;code&gt;PreparedStatement&lt;/code&gt; is used.)
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;code&gt;/ShardingSphere/executeSQL/&lt;/code&gt;
   &lt;/td&gt;
   &lt;td&gt;This span indicates that the rewritten SQL is executed, and the time spent on execution is also available. (This span is not available if the SQL doesn’t need to be executed in the backend physical database.)
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  How to use ShardingSphere-Agent for SQL tracing
&lt;/h2&gt;

&lt;p&gt;For the convenience of viewing the tracing data, Zipkin or Jaeger is usually used to collect and display the tracing data. Currently, ShardingSphere-Agent supports reporting trace data to both components. Next, let’s use the sharding scenario as an example to explain how to report data and analyze the SQL trace.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring ShardingSphere-Proxy
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Download Proxy from the &lt;a href="https://shardingsphere.apache.org/document/current/en/downloads/"&gt;official website&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create &lt;code&gt;demo_ds_0&lt;/code&gt; and &lt;code&gt;demo_ds_1&lt;/code&gt; under the MySQL database as the storage units &lt;code&gt;ds_0&lt;/code&gt; and &lt;code&gt;ds_1&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Start Proxy, and connect to Proxy using a MySQL client tool; create logical database &lt;code&gt;sharding_db&lt;/code&gt;, and register the storage units under this database using DistSQL (Distributed SQL). DistSQL is the specific SQL language for Apache ShardingSphere. It is used in exactly the same way as standard SQL, and is used to provide SQL-level operational capabilities for incremental functions. For details, please refer to the &lt;a href="https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/distsql/"&gt;DistSQL official document&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
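&lt;p&gt;The registration step described above can be sketched with DistSQL along these lines (the host, port, and credentials are placeholders, not values from the original setup):&lt;/p&gt;

```sql
REGISTER STORAGE UNIT ds_0 (
    URL="jdbc:mysql://127.0.0.1:3306/demo_ds_0",
    USER="root",
    PASSWORD="root"
), ds_1 (
    URL="jdbc:mysql://127.0.0.1:3306/demo_ds_1",
    USER="root",
    PASSWORD="root"
);
```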

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oEd34Cw5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/2023_06_07_how_to_run_sql_trace_with_shardingsphere2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oEd34Cw5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/2023_06_07_how_to_run_sql_trace_with_shardingsphere2.png" alt="" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use DistSQL to create the sharding rule &lt;code&gt;t_order&lt;/code&gt;, set &lt;code&gt;ds_0&lt;/code&gt; and &lt;code&gt;ds_1&lt;/code&gt; as its storage units, and set the number of shards to 4.&lt;/li&gt;
&lt;/ul&gt;
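&lt;p&gt;A sharding rule like the one described above can be sketched in DistSQL as follows (a minimal example assuming the built-in MOD algorithm; the exact statement in the screenshots may differ):&lt;/p&gt;

```sql
CREATE SHARDING TABLE RULE t_order (
    STORAGE_UNITS(ds_0, ds_1),
    SHARDING_COLUMN=order_id,
    TYPE(NAME="MOD", PROPERTIES("sharding-count"="4"))
);
```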

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--83UauUnd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/2023_06_07_how_to_run_sql_trace_with_shardingsphere3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--83UauUnd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/2023_06_07_how_to_run_sql_trace_with_shardingsphere3.png" alt="" width="800" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create table &lt;code&gt;t_order&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xslgsaf8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/2023_06_07_how_to_run_sql_trace_with_shardingsphere4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xslgsaf8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/2023_06_07_how_to_run_sql_trace_with_shardingsphere4.png" alt="" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BTYqMkkT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/2023_06_07_how_to_run_sql_trace_with_shardingsphere5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BTYqMkkT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/2023_06_07_how_to_run_sql_trace_with_shardingsphere5.png" alt="" width="702" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, there will be tables &lt;code&gt;t_order_0&lt;/code&gt; and &lt;code&gt;t_order_2&lt;/code&gt; created in the physical database &lt;code&gt;demo_ds_0&lt;/code&gt;, and &lt;code&gt;t_order_1&lt;/code&gt; and &lt;code&gt;t_order_3&lt;/code&gt; tables in the physical database &lt;code&gt;demo_ds_1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;After ShardingSphere-Proxy is well configured, the next step is to introduce how to report SQL trace data to Zipkin and Jaeger through ShardingSphere-Agent.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reporting to Zipkin
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Deploy Zipkin (please refer to the &lt;a href="https://zipkin.io/pages/quickstart.html"&gt;official website&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Configure &lt;code&gt;agent.yaml&lt;/code&gt; to export data to Zipkin
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;plugins:
 tracing:
   OpenTelemetry:
     props:
       otel.service.name: "shardingsphere" # the service name configured
       otel.traces.exporter: "zipkin" # Use zipkin exporter
       otel.exporter.zipkin.endpoint: "http://localhost:9411/api/v2/spans" # the address where zipkin receives data
       otel.traces.sampler: "always_on" # sampling setting
       otel.metrics.exporter: "none" # close OpenTelemetry metric collection
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Restart Proxy and Agent after stopping Proxy (&lt;code&gt;--agent&lt;/code&gt;means enabling Agent)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./bin/stop.sh
./bin/start.sh --agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Use a MySQL client tool to connect to the Proxy and execute the following queries — &lt;code&gt;insert&lt;/code&gt;, &lt;code&gt;select&lt;/code&gt;, &lt;code&gt;update&lt;/code&gt;, and &lt;code&gt;delete&lt;/code&gt; in sequence.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ee7SRjBV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/2023_06_07_how_to_run_sql_trace_with_shardingsphere6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ee7SRjBV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/2023_06_07_how_to_run_sql_trace_with_shardingsphere6.png" alt="" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visit &lt;a href="http://127.0.0.1:9411/zipkin/"&gt;http://127.0.0.1:9411/zipkin/&lt;/a&gt; (the Zipkin UI), and you can see 4 traces, exactly matching the number of SQL queries executed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SSQXzdTf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SSQXzdTf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace7.png" alt="" width="800" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s analyze the trace of the insert query. After finding the trace, you can see the execution details of this query. The Tags information in the &lt;code&gt;/shardingsphere/parsesql/&lt;/code&gt; span shows that the parsed SQL is consistent with the SQL executed on the client.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SfylVDmr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfylVDmr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace8.png" alt="" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are 4 &lt;code&gt;/shardingsphere/executesql/&lt;/code&gt; spans shown in the span table. After reviewing the details, it is found that the following two SQL statements were executed in the storage unit &lt;code&gt;ds_0&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;insert into t_order_0 (order_id, user_id, address_id, status) VALUES (4, 4, 4, 'OK')
insert into t_order_2 (order_id, user_id, address_id, status) VALUES (2, 2, 2, 'OK')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kpB4y2Ne--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kpB4y2Ne--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace9.png" alt="" width="800" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zyBP0YI2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zyBP0YI2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace10.png" alt="" width="800" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following two SQL statements are executed in the storage unit &lt;code&gt;ds_1&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;insert into t_order_1 (order_id, user_id, address_id, status) VALUES (1, 1, 1, 'OK')
insert into t_order_3 (order_id, user_id, address_id, status) VALUES (3, 3, 3, 'OK')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4IgaB1ua--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4IgaB1ua--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace11.png" alt="" width="800" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VhdgZVYC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VhdgZVYC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace12.png" alt="" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then log in to the physical database to check the corresponding data (after executing the insert query)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pi7p3Rp5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pi7p3Rp5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace13.png" alt="" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3RSi08yX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3RSi08yX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace14.png" alt="" width="800" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because the &lt;code&gt;t_order&lt;/code&gt; table is partitioned into 4 shards and rows with &lt;code&gt;order_id&lt;/code&gt; 1 to 4 were inserted, one record lands in each of the &lt;code&gt;t_order_0&lt;/code&gt;, &lt;code&gt;t_order_1&lt;/code&gt;, &lt;code&gt;t_order_2&lt;/code&gt;, and &lt;code&gt;t_order_3&lt;/code&gt; tables. As a result, there are 4 &lt;code&gt;/shardingsphere/executesql&lt;/code&gt; spans. The displayed SQL trace is consistent with the actual execution results, so you can view the time spent on each step through the spans and see the specific execution of each SQL through the &lt;code&gt;/shardingsphere/executesql/&lt;/code&gt; spans.&lt;/p&gt;
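&lt;p&gt;The routing observed above can be reproduced with a small sketch, assuming MOD sharding with 4 table shards spread over 2 storage units (the function and its defaults are illustrative, not ShardingSphere code):&lt;/p&gt;

```python
def route(order_id, table_shards=4, storage_units=("ds_0", "ds_1")):
    """Sketch of MOD sharding: order_id picks the table shard,
    and the shard index picks the storage unit."""
    table = f"t_order_{order_id % table_shards}"
    unit = storage_units[order_id % len(storage_units)]
    return unit, table

for oid in (1, 2, 3, 4):
    print(oid, "->", route(oid))
# 1 -> ('ds_1', 't_order_1')
# 2 -> ('ds_0', 't_order_2')
# 3 -> ('ds_1', 't_order_3')
# 4 -> ('ds_0', 't_order_0')
```

This matches the trace: the even-numbered shards end up in `ds_0` and the odd-numbered ones in `ds_1`, one insert per shard.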

&lt;p&gt;The following are the trace details of the select, update, and delete queries, which are likewise consistent with the actual execution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--84Ad_zJB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--84Ad_zJB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace15.png" alt="" width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q_pBUrtW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q_pBUrtW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace16.png" alt="" width="800" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Yym27foW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Yym27foW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace17.png" alt="" width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Reporting to Jaeger
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Deploy Jaeger (please refer to the &lt;a href="https://www.jaegertracing.io/"&gt;official website&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Deploy Proxy&lt;/li&gt;
&lt;li&gt;Configure &lt;code&gt;agent.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;plugins:
 tracing:
   OpenTelemetry:
     props:
       otel.service.name: "shardingsphere" # the service name configured
       otel.traces.exporter: "jaeger" # Use jaeger exporter
       otel.exporter.jaeger.endpoint: "http://localhost:14250" # the address where jaeger receives data
       otel.traces.sampler: "always_on" # sampling setting
       otel.metrics.exporter: "none" # close OpenTelemetry metric collection
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Restart Proxy and Agent after stopping Proxy (&lt;code&gt;--agent&lt;/code&gt; means enabling Agent)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./bin/stop.sh
./bin/start.sh --agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Log into Proxy and execute SQL queries under the &lt;code&gt;sharding_db&lt;/code&gt; database (the same SQL queries as those executed in the Zipkin example)&lt;/li&gt;
&lt;li&gt;From &lt;a href="http://127.0.0.1:16686/"&gt;http://127.0.0.1:16686/&lt;/a&gt; (the Jaeger UI address), you will see 4 traces, the same as the number of executed SQL queries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E65iZwxq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E65iZwxq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace18.png" alt="" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since the executed SQL queries are the same as those in the Zipkin example, the trace data should also be the same. As an example, we will use the trace from the insert query.&lt;/p&gt;

&lt;p&gt;As the following picture shows, there are one parseSQL span and four executeSQL spans.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DLSuH4rY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DLSuH4rY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace19.png" alt="" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Storage unit &lt;code&gt;ds_0&lt;/code&gt; has executed the following two SQL statements&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;insert into t_order_0 (order_id, user_id, address_id, status) VALUES (4, 4, 4, 'OK')
insert into t_order_2 (order_id, user_id, address_id, status) VALUES (2, 2, 2, 'OK')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wVLCYQZH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wVLCYQZH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace20.png" alt="" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Storage unit &lt;code&gt;ds_1&lt;/code&gt; has executed the following two SQL statements&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;insert into t_order_1 (order_id, user_id, address_id, status) VALUES (1, 1, 1, 'OK')
insert into t_order_3 (order_id, user_id, address_id, status) VALUES (3, 3, 3, 'OK')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H5ZcGbRc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H5ZcGbRc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://shardingsphere.apache.org/blog/img/trace21.png" alt="" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By examining the number of spans, the SQL parsing results, and the execution process, we can conclude that the entire SQL trace matches expectations.&lt;/p&gt;
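&lt;p&gt;The routing seen in the traces is consistent with a common modulo-based inline sharding rule. The Python sketch below is an illustration only (the real routing is defined by the cluster's sharding algorithm, which this article does not show); it reproduces the observed storage-unit and table assignments assuming &lt;code&gt;database = order_id % 2&lt;/code&gt; and &lt;code&gt;table = order_id % 4&lt;/code&gt;:&lt;/p&gt;

```python
# Hypothetical reconstruction of the routing observed in the traces,
# assuming a typical inline sharding rule on order_id.
def route(order_id: int) -> tuple[str, str]:
    """Return the (storage unit, physical table) an order row is routed to."""
    return f"ds_{order_id % 2}", f"t_order_{order_id % 4}"

# The four INSERTs from the traced example, keyed by order_id.
observed = {
    1: ("ds_1", "t_order_1"),
    2: ("ds_0", "t_order_2"),
    3: ("ds_1", "t_order_3"),
    4: ("ds_0", "t_order_0"),
}

# Every traced statement lands exactly where the modulo rule predicts.
for order_id, expected in observed.items():
    assert route(order_id) == expected
print("all routes match the traced SQL")
```

&lt;p&gt;If the spans in Zipkin disagreed with such a prediction, that would be the first signal that the sharding rule or the route stage is not behaving as configured.&lt;/p&gt;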

&lt;h2&gt;
  
  
  Use Sampling Rate
&lt;/h2&gt;

&lt;p&gt;Sampling is common when the volume of trace data in a production environment is large. The configuration below sets the sampling ratio to 0.01, i.e. roughly 1% of traces are kept. OpenTelemetry Exporters are used to export the data here; please refer to the &lt;a href="https://github.com/open-telemetry/opentelemetry-java/tree/main/sdk-extensions/autoconfigure"&gt;OpenTelemetry Exporters documentation&lt;/a&gt; for the detailed parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;plugins:
 tracing:
   OpenTelemetry:
     props:
       otel.service.name: "shardingsphere"
       otel.metrics.exporter: "none"
       otel.traces.exporter: "zipkin"
       otel.exporter.zipkin.endpoint: "http://localhost:9411/api/v2/spans"
       otel.traces.sampler: "traceidratio"
       otel.traces.sampler.arg: "0.01"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
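&lt;p&gt;Conceptually, a &lt;code&gt;traceidratio&lt;/code&gt; sampler makes a deterministic per-trace decision by comparing (part of) the trace ID against a bound derived from the ratio, so about the configured fraction of traces is exported. The following Python sketch illustrates the idea; it is a simplification, not the actual OpenTelemetry implementation:&lt;/p&gt;

```python
import random

def should_sample(trace_id: int, ratio: float) -> bool:
    """Deterministically keep a trace iff its ID falls below ratio * ID-space size.

    Sketch of trace-ID-ratio sampling over a 64-bit ID space; the real
    OpenTelemetry sampler works similarly on the lower bytes of the trace ID.
    """
    bound = int(ratio * (1 << 64))
    return trace_id % (1 << 64) < bound

# With uniformly random 64-bit trace IDs, roughly 1% pass at ratio 0.01.
random.seed(42)
ids = [random.getrandbits(64) for _ in range(100_000)]
sampled = sum(should_sample(t, 0.01) for t in ids)
print(f"sampled {sampled} of {len(ids)} traces")
```

&lt;p&gt;Because the decision depends only on the trace ID, every service that sees the same trace makes the same choice, so sampled traces stay complete end to end.&lt;/p&gt;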



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;SQL tracing allows developers and DBAs to quickly diagnose and locate performance bottlenecks in applications. By collecting tracing data through ShardingSphere-Agent and visualizing it with tools such as Zipkin and Jaeger, you can analyze how much time is spent on each storage node, which helps improve the stability and robustness of the application and ultimately enhances the user experience.&lt;/p&gt;

&lt;p&gt;Finally, you’re welcome to join the &lt;a href="https://app.slack.com/huddle/T026JKU2DPF/C027BBHUJ80"&gt;ShardingSphere Slack channel&lt;/a&gt; to discuss your questions, suggestions, or ideas about ShardingSphere and ShardingSphere-Agent.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>tutorial</category>
      <category>opensource</category>
      <category>database</category>
    </item>
  </channel>
</rss>
