<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: jessielin</title>
    <description>The latest articles on DEV Community by jessielin (@jessielin).</description>
    <link>https://dev.to/jessielin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F965414%2Ff308efaa-ac6a-40ba-9d2d-5a2d24cdba62.jpeg</url>
      <title>DEV Community: jessielin</title>
      <link>https://dev.to/jessielin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jessielin"/>
    <language>en</language>
    <item>
      <title>Experiment workload performance impact by number of Connections</title>
      <dc:creator>jessielin</dc:creator>
      <pubDate>Fri, 09 Jun 2023 15:47:51 +0000</pubDate>
      <link>https://dev.to/jessielin/experiment-workload-performance-impact-by-number-of-connections-458e</link>
      <guid>https://dev.to/jessielin/experiment-workload-performance-impact-by-number-of-connections-458e</guid>
      <description>&lt;h1&gt;
  
  
  Motivation:
&lt;/h1&gt;

&lt;p&gt;As a Cockroach Enterprise Architect, I often help customers tune workload performance before their projects launch in Production, and connection pool sizing is one of the knobs I'm asked about most often.&lt;/p&gt;

&lt;p&gt;The Cockroach Labs documentation on &lt;a href="https://www.cockroachlabs.com/docs/v22.2/connection-pooling#sizing-connection-pools"&gt;Sizing connection pools&lt;/a&gt; states:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Many workloads perform best when the maximum number of active connections is between 2 and 4 times the number of CPU cores in the cluster.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The number above applies to ACTIVE connections, meaning connections with a statement actively being executed, not total connections. Hence customers sometimes ask me about the optimal connection pool size for their applications.&lt;/p&gt;
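&lt;p&gt;As a back-of-the-envelope illustration (the 16-vCPU figure is an assumption matching the 4-node &lt;code&gt;m5d.xlarge&lt;/code&gt; cluster used in the experiments below), the guideline works out to:&lt;/p&gt;

```shell
# Rule-of-thumb sizing for ACTIVE connections (not total pool size):
# 2x to 4x the total vCPU count of the cluster.
CORES=16                  # 4 nodes x 4 vCPUs each (assumed)
LOW=$((2 * CORES))
HIGH=$((4 * CORES))
echo "target active connections: $LOW to $HIGH"
```

&lt;p&gt;For this cluster, that gives a target of 32 to 64 active connections, which is where the experiments below start.&lt;/p&gt;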

&lt;p&gt;The short answer, as you might have guessed, is "it depends". A longer answer is that it depends on the workload characteristics (CPU/IO/Network) and the SLA requirements. In this blog post, I'd like to explain the reasoning and run experiments with two workloads to illustrate the concept. I hope it helps you tune your workload on CockroachDB and get into Production quickly and successfully.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview:
&lt;/h1&gt;

&lt;p&gt;We selected CockroachDB's built-in TPCC and KV workloads and three scenarios to demonstrate how workload characteristics impact the choice of connection pool size. We varied the number of connections and compared resource utilization and performance metrics to show how to best choose the pool size. When this parameter is chosen well, we achieve a balance of resource utilization, latency, and throughput.&lt;/p&gt;

&lt;p&gt;You'll find that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The optimal connection pool size, and the number of active connections in use, depend on the workload.&lt;/li&gt;
&lt;li&gt;There are specific metrics to watch that help you find the sweet spot for connection pool size.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Experiment Design:
&lt;/h1&gt;

&lt;p&gt;The &lt;a href="https://www.cockroachlabs.com/docs/stable/recommended-production-settings.html#connection-pooling"&gt;Production Checklist&lt;/a&gt; explains:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Creating the appropriate size pool of connections is critical to gaining maximum performance in an application. Too few connections in the pool will result in high latency as each operation waits for a connection to open up. But adding too many connections to the pool can also result in high latency as each connection thread is being run in parallel by the system. The time it takes for many threads to complete in parallel is typically higher than the time it takes a smaller number of threads to run sequentially.&lt;/p&gt;
&lt;/blockquote&gt;
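&lt;p&gt;A toy CPU-sharing model makes this concrete (this is only an illustration, not CockroachDB's actual scheduler): once the number of in-flight queries exceeds the core count, latency keeps growing while throughput stays flat.&lt;/p&gt;

```shell
# With c cores and per-query service time s, n in-flight queries run at
# latency ~ s while n is at most c, and ~ n*s/c once the cores are saturated.
CORES=16; S_MS=10         # illustrative numbers, not measured values
for N in 8 16 32 64; do
  if [ "$N" -le "$CORES" ]; then LAT=$S_MS; else LAT=$((N * S_MS / CORES)); fi
  QPS=$((N * 1000 / LAT)) # throughput = concurrency / latency
  echo "conns=$N latency_ms=$LAT qps=$QPS"
done
```

&lt;p&gt;Past 16 in-flight queries, QPS stays at 1600 while latency doubles with each doubling of connections, which is exactly the diminishing-returns pattern the checklist describes.&lt;/p&gt;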

&lt;p&gt;From the above, we expect to see the following: when too few connections are in use, the cluster has few in-flight queries (or active connections) and low CPU usage, and the workload has high latency from waiting on connections. When the application opens too many connections, the cluster has too many in-flight queries (or active connections), the CPU is overloaded, and latency is high from waiting on CPU.&lt;/p&gt;

&lt;p&gt;The good news is that all the metrics above can easily be monitored in CockroachDB's built-in DBConsole. You can run experiments and find the optimal connection pool size for your workload.&lt;/p&gt;

&lt;p&gt;Also note that the recommended number of Active Connections assumes the workload is CPU bound, which is true for many OLTP workloads. In this experiment we chose TPCC as the first workload to test, since it's a popular OLTP benchmark.&lt;/p&gt;

&lt;p&gt;On the other hand, not all workloads are CPU bound. In the experiment, we use KV, a simple key-value access workload, as an I/O-intensive example. CockroachDB uses an LSM tree to quickly flush data to disk, so workloads are less likely to become IO bound, but if disk IOPS are not provisioned properly, it can be an issue.&lt;/p&gt;

&lt;p&gt;Lastly, to simulate a network-bound workload, we deployed a client on the other side of the US continent to compare connection pool size configurations and performance.&lt;/p&gt;

&lt;h1&gt;
  
  
  Implementation:
&lt;/h1&gt;

&lt;p&gt;We set up a 4-node cluster in AWS us-east-2 using &lt;code&gt;m5d.xlarge&lt;/code&gt; instances with local SSDs, and another node in the same region to run the workloads using the &lt;code&gt;cockroach workload&lt;/code&gt; tool, connecting to the cluster via haproxy installed on that node. In the third scenario, the client is deployed in us-west-1.&lt;br&gt;
CockroachDB version: v22.2.6&lt;/p&gt;

&lt;h1&gt;
  
  
  Testing / validation:
&lt;/h1&gt;

&lt;h2&gt;
  
  
  TPCC test with 500 warehouses
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KFH2srlQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9jau5zpk2uzz564nhj01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KFH2srlQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9jau5zpk2uzz564nhj01.png" alt="Image description" width="800" height="441"&gt;&lt;/a&gt;&lt;br&gt;
Since the cluster has 16 cores, following the guideline we started with 32 connections, then increased to 64. At this stage, the number of open transactions and active connections is far less than the number of connections. Efficiency (a metric the TPCC benchmark measures) is high, and CPU utilization is low. Though P50 and P90 are low, PMax is 50 times higher than P90. This extreme tail latency is a sign of connection starvation. Interestingly, the number of active connections is higher than in the other scenarios.&lt;/p&gt;

&lt;p&gt;Next, we increased the number of connections to 125: Efficiency increased and latency dropped. Most noticeably, PMax fell by 90%. We further increased to 250 connections and saw diminishing returns: Efficiency and P50 stayed flat, but tail latency rose slightly.&lt;/p&gt;

&lt;p&gt;However, 500 connections seems to be the sweet spot, where CPU utilization and throughput are among the highest while latency is the lowest across the board.&lt;/p&gt;

&lt;p&gt;Lastly, when we pushed the number of connections to 1000, CPU utilization increased, but Efficiency dropped and tail latency rose. This indicates the application uses the additional connections to send queries, but they queue up and not much more gets done.&lt;/p&gt;

&lt;p&gt;If this were a real workload, we could try pool sizes of 700 or 400 to explore further, but for this exercise 500 connections is the best choice. It may be related to the fact that the number of warehouses we test is also 500. Nevertheless, the number of open transactions and active connections is still far below the baseline of 32 (2 times the number of CPU cores). This may indicate the client has become the bottleneck, as a single &lt;code&gt;m5d.xlarge&lt;/code&gt; node may not be able to send more queries.&lt;/p&gt;

&lt;h2&gt;
  
  
  KV test
&lt;/h2&gt;

&lt;p&gt;Compared with TPCC, KV is a much simpler workload based on key-value pairs. We expect KV to have higher throughput and to be more IO bound.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--61SC06_D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fv1hu6lz79cph3y4592q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--61SC06_D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fv1hu6lz79cph3y4592q.png" alt="Image description" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similarly, we started with 32 connections. QPS is much higher and latency much lower than the TPCC workload, as the queries are much simpler. At this stage, the number of Active Connections is at 30 and CPU utilization is already over 80%; both metrics are higher than TPCC's. We also notice IO throughput is about twice as high as TPCC's, confirming the expectation above. But PMax is 100 times P50, which is a warning sign.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LumpbpDK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ec7o1baqrqwinyr8gh3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LumpbpDK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ec7o1baqrqwinyr8gh3y.png" alt="Image description" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we increased the number of connections to 64: the number of Active Connections jumped to 62, and CPU utilization and throughput increased slightly. P50 almost doubled, but PMax dropped dramatically, indicating that doubling the connection pool size was effective in reducing PMax.&lt;/p&gt;

&lt;p&gt;We further increased the number of connections to 125: CPU utilization rose dramatically, P50 almost doubled again, and tail latency increased 10x. The same holds for 250 connections.&lt;/p&gt;

&lt;p&gt;In this exercise the best connection pool size is likely around 64. We also see that the numbers of open transactions and active statements are almost the same as the number of connections, close to 64, which is 4 times the number of CPUs.&lt;/p&gt;

&lt;h2&gt;
  
  
  KV test - network bottleneck
&lt;/h2&gt;

&lt;p&gt;Lastly, to simulate a network-bound workload, we moved the driver node to the us-west region, as opposed to the same region as in scenarios 1 &amp;amp; 2.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HbfKHuFv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xtsm13xzlbaob7wk7wvl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HbfKHuFv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xtsm13xzlbaob7wk7wvl.png" alt="Image description" width="800" height="425"&gt;&lt;/a&gt;&lt;br&gt;
Since we know the network latency will be much longer, using more connections will likely address the bottleneck. We started with 250 connections, but unlike scenario 2, the number of Open Transactions is only 75 and the number of Active Connections is 63; both are far less than the number of connections. QPS is 5K, half of scenario 2, even though there are plenty of connections available to send more work. When we increased the number of connections above 500, QPS got close to the same level as scenario 2, while CPU utilization was still lower than in the previous example. Also note that the number of Active Statements is now 85-90% of the number of Open Transactions.&lt;/p&gt;

&lt;p&gt;Inter-region network latency is about 50 ms round trip, and application latency is slightly over 50 ms, as expected. Also note that tail latency more than doubled when we increased the number of connections to 2000. Given that us-east to us-west latency is about 50 ms, the sweet spot is likely about 500 connections, where mean latency is low, tail latency is reasonable, and CPU utilization and throughput are reasonably high.&lt;/p&gt;
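&lt;p&gt;This sweet spot is consistent with a rough Little's-law estimate (in-flight requests = throughput x latency). Assuming a target of about 10K QPS, roughly scenario 2's level, at a 50 ms round trip:&lt;/p&gt;

```shell
# Little's law: in-flight requests = throughput x per-request latency.
RTT_MS=50                 # observed us-east to us-west round trip
TARGET_QPS=10000          # assumed target, roughly scenario 2's QPS
CONNS=$((TARGET_QPS * RTT_MS / 1000))
echo "connections needed: about $CONNS"
```

&lt;p&gt;Each connection carries at most one in-flight request, so a network-bound client needs far more connections than the CPU-based rule of thumb suggests.&lt;/p&gt;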

&lt;h1&gt;
  
  
  Take-away:
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;Ultimately, performance tuning is about addressing bottlenecks iteratively: as we add more connections, the bottleneck may move to another part of the system (CPU in this case). So we want to find a sweet spot with multiple metrics considered together, and err slightly toward over-provisioning to absorb unexpected peaks.&lt;/li&gt;
&lt;li&gt;Understand the workload characteristics (CPU/Network/IO) and experiment with the number of connections to tune the workload. Check out the &lt;a href="https://www.cockroachlabs.com/docs/stable/recommended-production-settings.html#connection-pooling"&gt;Production Checklist&lt;/a&gt; and other documents and blog posts to build confidence that the workload is ready for its Production launch!&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>cockroachdb</category>
      <category>performance</category>
      <category>connectionpool</category>
    </item>
    <item>
      <title>Use AWS Certificate Manager (ACM) to simplify UI certification management for CockroachDB</title>
      <dc:creator>jessielin</dc:creator>
      <pubDate>Fri, 04 Nov 2022 20:45:01 +0000</pubDate>
      <link>https://dev.to/jessielin/use-aws-certificate-manager-acm-to-simplify-ui-certification-management-for-cockroachdb-knh</link>
      <guid>https://dev.to/jessielin/use-aws-certificate-manager-acm-to-simplify-ui-certification-management-for-cockroachdb-knh</guid>
      <description>&lt;p&gt;Recently a customer asked me how to use AWS Certificate Manager(ACM) to manage certifications for self-hosted CockroachDB clusters. I looked into it and would like to share tips and tricks below. Please feel free to comment below to let me know know what other topics you like to see!&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem statement:
&lt;/h2&gt;

&lt;p&gt;A secure CockroachDB cluster requires TLS/HTTPS to access the DBConsole, and we currently recommend using Let’s Encrypt and &lt;a href="https://www.cockroachlabs.com/docs/v22.1/create-security-certificates-custom-ca.html#accessing-the-db-console-for-a-secure-cluster"&gt;uploading UI Certs to cockroach nodes&lt;/a&gt;. When certificates expire, this requires additional administrative effort to rotate and maintain them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution:
&lt;/h2&gt;

&lt;p&gt;AWS ACM can issue and auto-renew certificates when using DNS validation, which reduces administrative overhead. AWS NLB TLS termination handles TLS decryption between the browser and the NLB, and re-encryption between the NLB and the CockroachDB nodes, meeting CockroachDB's requirements. The AWS NLB documentation isn't very clear on how to configure this, so we're adding screenshots here.&lt;/p&gt;

&lt;p&gt;The other benefit is we now only need to manage Node Certs on the server side.&lt;/p&gt;

&lt;h2&gt;
  
  
  Walkthrough:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Request a public certificate in ACM for a domain name. I used &lt;code&gt;jessielin.xxxx.dev&lt;/code&gt; in this case.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LjGmDcJh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i02ems0yu9kc1w4gbq4l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LjGmDcJh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i02ems0yu9kc1w4gbq4l.png" alt="Image description" width="880" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Create a Network Load Balancer with 2 listeners: one for DBConsole access and one for SQL access.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J3H0WNk6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oa7jnm8ldjdoncu1oi0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J3H0WNk6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oa7jnm8ldjdoncu1oi0h.png" alt="Image description" width="880" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;a. For DBConsole access, add a TLS listener and Target Group, using the cert issued by ACM. The Target Group port should be the port number specified in &lt;code&gt;--http-addr&lt;/code&gt;. With a TLS listener and a TLS target group, AWS NLB will decrypt and re-encrypt the traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S8p-XWPk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/olz9ccuotjp6tgdiiccp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S8p-XWPk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/olz9ccuotjp6tgdiiccp.png" alt="Image description" width="880" height="521"&gt;&lt;/a&gt;&lt;br&gt;
Target Group&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FuChqGKb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/za5eapqd6hh29oszhr7x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FuChqGKb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/za5eapqd6hh29oszhr7x.png" alt="Image description" width="880" height="719"&gt;&lt;/a&gt;&lt;br&gt;
Healthcheck&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fXVthjIG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d8brxcik4fm8mj71tr4j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fXVthjIG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d8brxcik4fm8mj71tr4j.png" alt="Image description" width="880" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;b. For SQL access, add a TCP listener and forward it to the SQL port. The &lt;a href="https://www.cockroachlabs.com/docs/v22.1/deploy-cockroachdb-on-aws.html#step-4-set-up-load-balancing"&gt;official document&lt;/a&gt; explains it very well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xQVwHrVw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4jpbzq3od9grpqkja9b7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xQVwHrVw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4jpbzq3od9grpqkja9b7.png" alt="Image description" width="880" height="725"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Add the LB hostname and IP address to the Node certs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;a. Use &lt;code&gt;openssl x509 -in certs/node.crt -text&lt;/code&gt; to find the existing nodes' hostnames and IP addresses.&lt;/p&gt;

&lt;p&gt;b. The LB hostname is required in the Common Name or Subject Alternative Name fields of the certificate, as documented &lt;a href="https://www.cockroachlabs.com/docs/stable/authentication.html#using-digital-certificates-with-cockroachdb"&gt;here&lt;/a&gt;. To add the LB hostname and IP address to the list, use &lt;code&gt;cockroach cert create-node&lt;/code&gt; to create new certs as &lt;a href="https://www.cockroachlabs.com/docs/v22.1/cockroach-cert.html#create-the-certificate-and-key-pairs-for-nodes"&gt;documented&lt;/a&gt; and redistribute them to all nodes. Validate that the new certs are loaded correctly from the DBConsole.&lt;/p&gt;
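&lt;p&gt;As a quick local sanity check, you can rehearse the SAN inspection with a throwaway self-signed cert (the &lt;code&gt;lb.example.com&lt;/code&gt; hostname and &lt;code&gt;10.0.0.5&lt;/code&gt; address are hypothetical placeholders, not values from this setup; requires OpenSSL 1.1.1 or later for &lt;code&gt;-addext&lt;/code&gt;):&lt;/p&gt;

```shell
# Issue a throwaway self-signed cert whose SAN list includes a
# load balancer hostname and IP (placeholders for illustration).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout node.key -out node.crt -subj "/CN=node" \
  -addext "subjectAltName=DNS:node1.internal,DNS:lb.example.com,IP:10.0.0.5" 2>/dev/null

# Inspect the SAN list, as you would for the real node.crt.
openssl x509 -in node.crt -noout -ext subjectAltName
```

&lt;p&gt;The output should list &lt;code&gt;DNS:lb.example.com&lt;/code&gt; and &lt;code&gt;IP Address:10.0.0.5&lt;/code&gt; alongside the node's own name.&lt;/p&gt;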

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HZ4vrtU6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x63apbwantajy90gnmrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HZ4vrtU6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x63apbwantajy90gnmrx.png" alt="Image description" width="880" height="787"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Create an A record in Route 53 to point &lt;code&gt;jessielin.xxxx.dev&lt;/code&gt; at the NLB&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Yp3beQ98--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqcwjyqo6mv7kvpzfzlr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Yp3beQ98--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqcwjyqo6mv7kvpzfzlr.png" alt="Image description" width="880" height="819"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Voilà, there you have it!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--89tSKO6z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4gli4r391zk8easwmigz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--89tSKO6z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4gli4r391zk8easwmigz.png" alt="Image description" width="880" height="862"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SWnspmN_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m2sunkgkinp4ivl5omj7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SWnspmN_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m2sunkgkinp4ivl5omj7.png" alt="Image description" width="880" height="1061"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cockroachdb</category>
      <category>aws</category>
      <category>acm</category>
    </item>
  </channel>
</rss>
