<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Underdog Sports</title>
    <description>The latest articles on DEV Community by Underdog Sports (@underdogsports).</description>
    <link>https://dev.to/underdogsports</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F3898%2F913bd177-3253-4972-9c90-a2d07caad73c.png</url>
      <title>DEV Community: Underdog Sports</title>
      <link>https://dev.to/underdogsports</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/underdogsports"/>
    <language>en</language>
    <item>
      <title>AWS Outpost and RDS: Reslotting Checklist</title>
      <dc:creator>Regis Wilson</dc:creator>
      <pubDate>Fri, 26 Jul 2024 00:35:48 +0000</pubDate>
      <link>https://dev.to/underdogsports/aws-outpost-and-rds-reslotting-checklist-4o0j</link>
      <guid>https://dev.to/underdogsports/aws-outpost-and-rds-reslotting-checklist-4o0j</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;At Underdog, we use Amazon Web Services’ (AWS) Relational Database Service (RDS) and Elastic Kubernetes Service (EKS), among many other services, to power our sports betting applications. Most readers will already be familiar with these services; if you aren’t, I will briefly explain the terms and technologies as we go. What sets this post apart is a newer AWS service we’ve been using in conjunction with RDS and EKS: AWS Outpost.&lt;/p&gt;

&lt;p&gt;To meet United States state gaming regulations, we use Outpost to satisfy data and application residency requirements in particular states where we operate. Outpost delivers compute and storage services close to where we operate, even when that location is outside Amazon’s many regional data centers.&lt;/p&gt;

&lt;p&gt;This post details some of the challenges we recently faced running and operating these services together, in the hope that our experience will enlighten you, dear reader, if you ever need to venture down this road. If you are already familiar with configuring and operating these services together, you might also enjoy reading it with the pleasure and satisfaction of &lt;em&gt;schadenfreude&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Regions and Outpost
&lt;/h2&gt;

&lt;p&gt;If you are unfamiliar with Outpost (as I was initially), you may wonder what the service is and why Underdog chose to use it. In simple terms, AWS regions are spread across the globe so that cloud services can sit close to major population centers. Within the United States, there are regions in several states, for example Virginia (us-east-1), Ohio (us-east-2), and Oregon (us-west-2), among others.&lt;/p&gt;

&lt;p&gt;What happens if Underdog wants to colocate cloud services within the borders of a state that has no AWS region? There are many options for independently hosting our own hardware and networking gear, but we wanted to keep the automation and speed of deploying Infrastructure as a Service (IaaS) via Application Programming Interfaces (APIs) and so-called Infrastructure as Code (IaC). This is where the promise of AWS Outpost comes into play.&lt;/p&gt;

&lt;p&gt;With AWS Outpost, customers like us can order capacity from AWS and deploy it into the hosting facility of our choice, with connectivity via AWS Direct Connect or VPN, to provide services in the particular location where we need to operate. Outpost capacity is inherently limited, and it is not “instantly” deployable or provisioned like other services, but at least we do not need to directly manage infrastructure, networking, security, or software ourselves. Any services provisioned inside the Outpost are managed by the same AWS APIs and dashboards we already use. Most importantly, the resources inside the Outpost can be shared by our staging and production AWS accounts and Virtual Private Clouds (VPCs) in a relatively seamless and integrated way, letting us operate from a location outside an established AWS region.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Familiar With Outpost
&lt;/h2&gt;

&lt;p&gt;As with all AWS services, it is important to know a service’s strengths, weaknesses, and limitations. It is also important to understand how one or more services may (or may not) interoperate! This is where the majority of our problems originated as we provisioned our applications in a remote environment. Provisioning Elastic Compute Cloud (EC2) instances for use as EKS nodegroups is a well-understood problem and works well with Outpost configurations. What was less easy to understand or configure was the RDS integration with Outpost, and we ran into issues with basics like choosing instance sizes and using Outpost instances with RDS.&lt;/p&gt;

&lt;p&gt;The first issue we encountered was choosing the correct capacity and sizing for both compute and RDS instances. If you are spoiled by AWS’ amazing depth and breadth of instance sizes, architectures, and variety, you will need to reorganize your thinking around a fixed set of capacity and architecture limitations for your Outpost racks and/or servers. As an example, let’s say you have one &lt;a href="https://aws.amazon.com/outposts/rack/hardware-specs/" rel="noopener noreferrer"&gt;rack&lt;/a&gt; with four &lt;code&gt;m5.24xlarge&lt;/code&gt; hosts of “raw” capacity. You could subdivide that capacity as you saw fit. Let’s say you started conservatively (as we did) with far-too-large instance sizes: eight &lt;code&gt;m5.12xlarge&lt;/code&gt; instances spread across staging, production, RDS, and EKS as follows (please note all drawings are for illustrative purposes only and should not be relied upon for factual reference):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fader3b58mtn92rji48r1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fader3b58mtn92rji48r1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
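&lt;p&gt;The raw-capacity arithmetic behind that subdivision can be sketched in a few lines. The vCPU counts come from AWS’ published m5 instance specs; the slotting numbers themselves are illustrative, not our real configuration:&lt;/p&gt;

```ruby
# Back-of-the-envelope vCPU accounting for one Outpost rack.
# vCPU counts per the m5 instance specs; slotting numbers are illustrative.
VCPUS = { "m5.24xlarge" => 96, "m5.12xlarge" => 48, "m5.8xlarge" => 32 }

def total_vcpus(slotting)
  slotting.sum { |type, count| VCPUS.fetch(type) * count }
end

raw_capacity = total_vcpus({ "m5.24xlarge" => 4 })  # one rack, four hosts
initial_plan = total_vcpus({ "m5.12xlarge" => 8 })  # our conservative first cut

puts raw_capacity   # 384
puts initial_plan   # 384: the first cut exactly fills the rack
```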

&lt;p&gt;This seemed like a reasonable starting point for us and would allow us plenty of capacity and resources to operate without worrying about needing more vertical scaling capacity. With this setup, we were able to successfully launch services relatively quickly and simply, all options considered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Right Size Scaling
&lt;/h2&gt;

&lt;p&gt;If you are familiar with AWS services, you may now be thinking to yourself, as we later did, “Whoa, this seems like a lot of capacity to use for RDS and EKS!” The truth is that with more time, insight, and experience, we might have come up with a much better provisioning scheme to right-size the capacity for each use case. The engineering issue isn’t so much the large over-allocation of capacity (though that is a concern) as the downstream operational issues: keeping spare capacity on hand, being able to migrate or upgrade services and versions, and being able to scale horizontally (instead of vertically) to meet demand.&lt;/p&gt;

&lt;p&gt;After some post-launch issues were settled, we analyzed the data and came up with a much more reasonable allocation scheme that would suit our needs now and in the future. We settled on something that looked more like the following drawing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvlh0jmho99o0l6lakry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvlh0jmho99o0l6lakry.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will notice that we have much more reasonably-sized (but still beefy) &lt;code&gt;m5.8xlarge&lt;/code&gt; RDS databases. We also have enough capacity to add more RDS replicas (or new primaries even), and plenty of application-worthy EKS worker nodes for redundancy and failover. Not only that, but we now have way more free “spare” capacity for future needs as either smaller RDS databases or as beefier EKS workloads emerge.&lt;/p&gt;

&lt;p&gt;Armed with this new information, we let our AWS representatives know about our plans and future configuration. We had a good idea of the plan of operations and had laid out a strategy for performing the changes “in place” with a reasonable amount of downtime during a maintenance window.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Laid Plans of Mice and Men
&lt;/h2&gt;

&lt;p&gt;Experts in RDS and Outpost may already see the issue we were about to face and will be chuckling to themselves, but this was the original plan for migrating from the old configuration to the new one. We had consciously chosen the shortest possible maintenance window in which to reslot the entire Outpost rack, without causing undue issues for either staging or production. We did not have multiple Outpost racks available to work with, but that is something we will definitely consider in the future.&lt;/p&gt;

&lt;p&gt;See if you can spot the issue:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Initiate downtime maintenance window in the application by shutting down all application services and issuing maintenance page notifications&lt;/li&gt;
&lt;li&gt;Temporarily stop staging RDS instances and EKS worker nodes&lt;/li&gt;
&lt;li&gt;Take snapshots for disaster recovery purposes&lt;/li&gt;
&lt;li&gt;Temporarily stop production RDS instances and EKS worker nodes&lt;/li&gt;
&lt;li&gt;Take snapshots for disaster recovery purposes&lt;/li&gt;
&lt;li&gt;AWS support will apply new Outpost reslotting configuration&lt;/li&gt;
&lt;li&gt;Restart staging and production RDS instances with new sizes&lt;/li&gt;
&lt;li&gt;Restart staging and production EKS worker nodes and join to clusters&lt;/li&gt;
&lt;li&gt;Test and validate the applications&lt;/li&gt;
&lt;li&gt;End the maintenance window and allow normal operations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you spotted the issue in step 7 labeled “restart staging and production RDS instances”, congratulations! For everyone else, you can follow along and learn from our experience when you attempt to do this yourself.&lt;/p&gt;

&lt;h2&gt;
  
  
  You Can't Modify a Stopped DB Instance.
&lt;/h2&gt;

&lt;p&gt;This statement from the AWS documentation should be tattooed on the forehead of any RDS practitioner – in reverse, so they can read it in the mirror. Or perhaps both forward and reverse, for people who look at the tattoo and for the wearer looking at it in the mirror. The issue we immediately faced as we tried to start the new instances was that the previous &lt;code&gt;db.m5.12xlarge&lt;/code&gt; instance class was no longer available in our Outpost configuration, so we could not start the RDS instances. We also could not convert the instances to the new &lt;code&gt;db.m5.8xlarge&lt;/code&gt; instance class that did exist in the new configuration, because the databases were shut down!&lt;/p&gt;
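&lt;p&gt;The ordering constraint can be sketched as a toy state machine. This is an illustrative sketch only, not the AWS SDK; the class and method names here are made up:&lt;/p&gt;

```ruby
# A toy model of the constraint that bit us: the RDS ModifyDBInstance call
# is only valid while the instance is running, never while it is stopped.
class ToyRdsInstance
  attr_reader :state, :instance_class

  def initialize(instance_class)
    @instance_class = instance_class
    @state = :available
  end

  def stop!
    @state = :stopped
  end

  def start!
    @state = :available
  end

  def modify_class!(new_class)
    # Mirrors the real API error: you can't modify a stopped DB instance.
    raise "InvalidDBInstanceState" if @state == :stopped
    @instance_class = new_class
  end
end

db = ToyRdsInstance.new("db.m5.12xlarge")
db.stop!                             # step 4 of our plan
begin
  db.modify_class!("db.m5.8xlarge")  # too late: the instance is stopped
rescue RuntimeError => e
  puts e.message                     # InvalidDBInstanceState
end
db.start!                            # only possible if the old class still has capacity
db.modify_class!("db.m5.8xlarge")    # the resize has to happen while running
```

The fatal detail is the last two lines: starting the instance again requires the old instance class to still exist in the rack, which is exactly what the reslot had removed.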

&lt;p&gt;I’m not exaggerating much when I say I briefly thought we had made a fatal mistake and were going to be down in production for hours doing disaster recovery. In these situations it is very important not to panic but to stay calm, talk through your options, and decide on a safe course of corrective action.&lt;/p&gt;

&lt;p&gt;Fortunately, we had the following in our favor, which you should also have at your disposal if you attempt anything like this. We made sure an AWS representative and AWS support engineers were on the call while our maintenance window was active and our own team was engaged. This gave us real-time feedback on the reslotting process, answers to RDS and Outpost questions from AWS, and (critically) the ability to reconfigure the capacity online while we were trying to salvage our operations. If you do not have enterprise support, you will most likely not be able to resolve a situation like this; nor should you attempt anything like this without it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Failure is Not an Option
&lt;/h2&gt;

&lt;p&gt;We quickly broke out our calculators, slide rules, and pocket pens to come up with an emergency configuration that would let us run both the &lt;code&gt;db.m5.12xlarge&lt;/code&gt; and &lt;code&gt;db.m5.8xlarge&lt;/code&gt; target instance capacity &lt;em&gt;at the same time&lt;/em&gt;. It was like the scene in &lt;em&gt;Apollo 13&lt;/em&gt; where Ed Harris declares that we’ve never lost a database in the cloud, and we were going to use everything at our disposal to find a solution. We came up with the following configuration to solve our issue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6st31soqh1gk700lbeuq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6st31soqh1gk700lbeuq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fortunately, we had enough “spare” capacity to configure as interim RDS instances. Later on, AWS could reslot the unused capacity back into our spare pool as needed. There was a huge sigh of relief as the configuration was reslotted and the databases were started, modified to the new instance class, then rebooted at the correct target size!&lt;/p&gt;
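&lt;p&gt;The interim check amounts to verifying that the old and new database fleets fit in the rack at the same time. A sketch, with hypothetical fleet sizes (our real counts differed):&lt;/p&gt;

```ruby
# Can the old db.m5.12xlarge fleet and the new db.m5.8xlarge fleet coexist
# during the cutover? vCPU counts per the m5 specs; fleet sizes are made up.
VCPUS = { "m5.24xlarge" => 96, "m5.12xlarge" => 48, "m5.8xlarge" => 32 }
RACK_VCPUS = VCPUS["m5.24xlarge"] * 4   # one rack of raw capacity

def fits?(slotting, capacity = RACK_VCPUS)
  used = slotting.sum { |type, count| VCPUS.fetch(type) * count }
  capacity - used >= 0
end

# Hypothetical interim phase: two old DB hosts still up while two new ones start.
puts fits?({ "m5.12xlarge" => 2, "m5.8xlarge" => 2 })   # true: spare capacity covers it
puts fits?({ "m5.12xlarge" => 8, "m5.8xlarge" => 8 })   # false: full old and new fleets can't coexist
```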

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In summary, we learned quite a bit and hope that you have too, if you have any plans for Outpost capacity in your future. In no particular order these lessons are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always plan, check your plan, recheck your plan, and have a backup plan&lt;/li&gt;
&lt;li&gt;Always work closely with your AWS support engineers and representatives to avoid problems like this where you can, and arrange in advance for them to be available when you need them&lt;/li&gt;
&lt;li&gt;Always stay calm and consider your options. Stick to the plan but react appropriately when circumstances change&lt;/li&gt;
&lt;li&gt;Read your documentation and pay close attention to every detail as it impacts your planned path&lt;/li&gt;
&lt;li&gt;When migrating capacity, ensure enough spare capacity is available before, during, and after your migration&lt;/li&gt;
&lt;li&gt;Use multiple phases of the migration plan where possible; consider initial phases, interim phases, and final phases&lt;/li&gt;
&lt;li&gt;Please, please, please give your AWS support people and representatives a big show of appreciation for their hard work, dedication, and help!&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  About the Author
&lt;/h3&gt;

&lt;p&gt;Regis is a staff platform engineer at Underdog. He has designed, built, and operated cloud-native architectures since 2015.&lt;/p&gt;

&lt;h3&gt;
  
  
  We're Hiring!
&lt;/h3&gt;

&lt;p&gt;If you want to work on exciting projects like these with exciting people like me, please check out our &lt;a href="https://underdogfantasy.com/careers" rel="noopener noreferrer"&gt;hiring page&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Image Credit
&lt;/h3&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@tdederichs?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Torsten Dederichs&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/multicolored-direction-signage-beside-black-shed-5bokmbXK6vA?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awsrds</category>
      <category>awsoutpost</category>
      <category>devops</category>
    </item>
    <item>
      <title>Ruby YJIT at Underdog</title>
      <dc:creator>Jerrod Carpenter</dc:creator>
      <pubDate>Sat, 22 Apr 2023 18:29:17 +0000</pubDate>
      <link>https://dev.to/underdogsports/ruby-yjit-at-underdog-5629</link>
      <guid>https://dev.to/underdogsports/ruby-yjit-at-underdog-5629</guid>
      <description>&lt;p&gt;Here at Underdog, many of our projects are Rails services. So when &lt;a href="https://www.ruby-lang.org/en/news/2022/12/25/ruby-3-2-0-released/"&gt;Ruby 3.2.0&lt;/a&gt; was released, we were eager to put it through its paces and see how much performance could be gained with this latest version! In this post, we'll discuss some critical changes in the release and how they affect performance. &lt;/p&gt;

&lt;p&gt;Per tradition, &lt;a href="https://www.ruby-lang.org/en/news/2022/12/25/ruby-3-2-0-released/"&gt;Ruby 3.2.0&lt;/a&gt; was released this past Christmas. By all accounts, this is a step forward for the language, and with it came the following optimizations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://shopify.engineering/ruby-variable-width-allocation"&gt;Variable width memory allocation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Object shapes&lt;/li&gt;
&lt;li&gt;&lt;a href="https://shopify.engineering/yjit-just-in-time-compiler-cruby"&gt;Production-ready YJIT compilation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Shopify has taken a leadership role in pushing Ruby forward, and this release is proof. They were running YJIT in production days before the release, and given that level of rigor, we were comfortable pushing 3.2.0 to our own production environment after some brief testing in our staging environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  How We Upgraded
&lt;/h2&gt;

&lt;p&gt;We used Ruby’s official Debian-based images for our application containers when we decided to enable YJIT. Unfortunately, &lt;a href="https://github.com/docker-library/ruby/pull/400#issuecomment-1369740381"&gt;YJIT support was not available in those images at the time&lt;/a&gt;, so we switched our container images to Ruby’s Alpine-based images instead. We had been using the Debian-based images for stability’s sake, but we had already run into issues with out-of-date packages due to Debian’s conservative packaging philosophy, so switching to Alpine brought additional benefits.&lt;/p&gt;

&lt;p&gt;The workload here was small. All we had to do was change our Dockerfile, update some system-level dependency names between the two distros, and then add the &lt;code&gt;RUBY_YJIT_ENABLE&lt;/code&gt; environment variable to our application containers’ Kubernetes manifests. Once we flipped that value to &lt;code&gt;1&lt;/code&gt; via deployment, we were able to exec into a container and confirm YJIT was enabled with &lt;code&gt;ruby --version&lt;/code&gt;.&lt;/p&gt;
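&lt;p&gt;You can also confirm YJIT from inside the running process. A small sketch (the helper name is ours; &lt;code&gt;RubyVM::YJIT&lt;/code&gt; is only defined on builds compiled with YJIT support, so we guard for it):&lt;/p&gt;

```ruby
# Runtime check that YJIT actually came on after setting RUBY_YJIT_ENABLE=1.
# RubyVM::YJIT is absent on builds compiled without YJIT, so guard for it.
def yjit_enabled?
  defined?(RubyVM::YJIT) ? RubyVM::YJIT.enabled? : false
end

puts RUBY_VERSION
puts yjit_enabled?   # true only when YJIT was enabled before boot
```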

&lt;h2&gt;
  
  
  Our Results
&lt;/h2&gt;

&lt;p&gt;We had a few third-party dependencies we needed to update to support Ruby 3.2.0, but after those changes, things went smoothly. We had no issues or surprises over the weekend, and overall we saw roughly a 10% drop in response time:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ACPkUEOA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pe02jqjc7gx9bcmtr7wf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ACPkUEOA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pe02jqjc7gx9bcmtr7wf.png" alt="graph of comparisons of response times before and after YJIT upgrade" width="567" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our p50s saw about a 9% improvement, which grew to 12.5% at p98. We found this intriguing, and consistent with how we expected this update to affect Rails: our quickest APIs spend most of their time on database IO, so there’s less “Ruby time” to optimize, whereas our worst-performing APIs do more logic in Ruby or serialize larger objects. We expected to see the most significant improvement in these Ruby-intensive requests, and we backed this up by cherry-picking an API we knew was Ruby-intensive.&lt;/p&gt;
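&lt;p&gt;For reference, the percentile comparisons boil down to simple relative-improvement arithmetic (the latencies below are made-up round numbers, not our production figures):&lt;/p&gt;

```ruby
# Relative improvement between before/after latencies at a given percentile.
def improvement_pct(before_ms, after_ms)
  ((before_ms - after_ms) / before_ms.to_f * 100).round(1)
end

puts improvement_pct(200, 182)   # 9.0  -- a p50-sized gain
puts improvement_pct(800, 700)   # 12.5 -- a p98-sized gain
```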

&lt;p&gt;Below is our Drafts API, which serializes draft entries and picks. It serializes more data than it should and runs a little more logic than it should, but it has stayed within our SLOs, so we have yet to optimize it.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QrYAdejD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3l9q6pzj56hsyrhsfil.png" alt="draft endpoint response times before" width="394" height="296"&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6q4AgY5q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4kur9ufcig97atekixpf.png" alt="draft endpoint response times after" width="380" height="288"&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This improvement is staggering. In this comparison, you can see that the improvement was greatest for the worst-performing scenarios — which is amazing!&lt;/p&gt;

&lt;p&gt;At Underdog, we also process stats and grade contests with background jobs via Sidekiq. We were curious whether our job execution times improved as well… and they absolutely did!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C2AV56Mt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xhjyuhsphb2fkqz90ez1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C2AV56Mt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xhjyuhsphb2fkqz90ez1.png" alt="graph of comparisons of execution times for sidekiq jobs before and after YJIT upgrade" width="573" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The p90 looks a little strange, but other than that, we saw a significant improvement. This makes sense — we expected enhancements here as well because it’s common to have Ruby-heavy processes turned into background jobs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;It is important to keep projects up to date to take advantage of the newest releases, especially given how much free performance this one delivered and how easy it was for Underdog to upgrade to 3.2.0. For some time, Ruby has promised performance gains that produced mixed results through the lens of a Rails deployment, but 3.2.0 is a huge step forward. The Ruby Core team hit it out of the park, and we’re lucky to have such a strong team behind the language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Want to learn more about our team here at Underdog? Check out our &lt;a href="https://underdogfantasy.com/careers?utm_source=DevTo&amp;amp;utm_medium=blog+&amp;amp;utm_campaign=UDLife"&gt;culture and careers page here&lt;/a&gt; and see how we are changing the game!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
