<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: koki kitamura</title>
    <description>The latest articles on DEV Community by koki kitamura (@koukikitamura).</description>
    <link>https://dev.to/koukikitamura</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F684261%2F67ef131b-8dd3-4705-b54d-e28df9b60de6.jpg</url>
      <title>DEV Community: koki kitamura</title>
      <link>https://dev.to/koukikitamura</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/koukikitamura"/>
    <language>en</language>
    <item>
      <title>Server can handle 10 million users</title>
      <dc:creator>koki kitamura</dc:creator>
      <pubDate>Sun, 01 May 2022 14:10:26 +0000</pubDate>
      <link>https://dev.to/koukikitamura/server-can-handle-10-million-users-10h8</link>
      <guid>https://dev.to/koukikitamura/server-can-handle-10-million-users-10h8</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;I created a highly scalable API server that can handle 10 million users: an SNS like Twitter.&lt;br&gt;
The implementation is published on &lt;a href="https://github.com/koukikitamura/scalable-twitter" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The development environment is as follows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node 16.14&lt;/li&gt;
&lt;li&gt;Express 4.17.3&lt;/li&gt;
&lt;li&gt;DynamoDB 2012-08-10&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The functional requirements are as follows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Post a tweet&lt;/li&gt;
&lt;li&gt;Post a comment on a tweet&lt;/li&gt;
&lt;li&gt;Follow a user&lt;/li&gt;
&lt;li&gt;Get the timeline&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Services with hundreds of millions of users, such as Facebook, Amazon, and YouTube, need to handle a lot of traffic. A common approach to handling heavy traffic is scale-out rather than scale-up. Scale-up is expensive because it relies on high-performance servers, and a single server has an inherent performance limit.&lt;/p&gt;

&lt;p&gt;Let's talk about scale-out. An application can be broadly divided into three layers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Client layer&lt;/li&gt;
&lt;li&gt;Server layer&lt;/li&gt;
&lt;li&gt;Database layer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When handling a large amount of traffic, the server layer only processes data; it does not store it. That makes the server layer easy to scale out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz848w0lvbbvoytc9l2xv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz848w0lvbbvoytc9l2xv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The database layer, on the other hand, becomes harder to keep consistent and available once scale-out distributes the data across nodes. You also need logic to decide which node stores which data, and data must be relocated when nodes are added or removed. Since RDBs do not provide these features, we will use NoSQL.&lt;/p&gt;

&lt;p&gt;Typical databases that support scale-out include BigTable, HBase, DynamoDB, and Cassandra.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Database&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;BigTable, HBase&lt;/td&gt;
&lt;td&gt;Reads always return consistent, up-to-date data. On the other hand, data cannot be read while a lock is held for an update.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DynamoDB, Cassandra&lt;/td&gt;
&lt;td&gt;Data is always accessible. On the other hand, stale data may be read while replicas are being synchronized.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Since we are building an API server for an SNS, availability matters more than consistency. Therefore, we use DynamoDB.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is DynamoDB?
&lt;/h2&gt;

&lt;p&gt;DynamoDB is a key-value database. You create tables, and each table stores items. Each item has a key and a value.&lt;/p&gt;

&lt;p&gt;An item's key consists of a partition key and an optional sort key. The partition key determines which node in the DynamoDB cluster stores the item. The sort key acts like a table index and is used for sorting.&lt;/p&gt;

&lt;p&gt;An item's value can hold multiple attribute/value pairs, and the attributes can differ from item to item.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3jekb4mxgk89h4k51z7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3jekb4mxgk89h4k51z7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DynamoDB queries are limited: items can basically be narrowed down only by partition key and sort key. A query on any other attribute must examine every item, so it slows down as the number of items grows.&lt;/p&gt;

&lt;p&gt;To treat another attribute as a partition key, use a GSI (Global Secondary Index). To treat another attribute as a sort key, use an LSI (Local Secondary Index).&lt;/p&gt;
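&lt;p&gt;As a sketch of how a GSI changes a query, the parameter object passed to DynamoDB's Query operation might look like this. The index and attribute names (GSI-1, DataValue, DataType) follow the design later in this article; the table name is an assumption for illustration.&lt;/p&gt;

```javascript
// Sketch: Query parameters that use a GSI to look up items by an
// attribute other than the table's partition key.
function buildGetUserByUsernameQuery(username) {
  return {
    TableName: "scalable-twitter", // assumed table name
    IndexName: "GSI-1",
    KeyConditionExpression: "DataValue = :name AND DataType = :type",
    ExpressionAttributeValues: {
      ":name": username,
      ":type": "userProfile",
    },
  };
}

console.log(buildGetUserByUsernameQuery("koki").IndexName); // GSI-1
```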
&lt;h2&gt;
  
  
  Database Design
&lt;/h2&gt;

&lt;p&gt;Database design for DynamoDB differs from RDB design. The query flexibility of an RDB lets you design normalized tables first, without considering how the data will be accessed. DynamoDB supports only limited query patterns, so you first determine the data access patterns and then design tables around them. Specifically, we will proceed in the following steps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Modeling&lt;/li&gt;
&lt;li&gt;Create use case list&lt;/li&gt;
&lt;li&gt;Design Table&lt;/li&gt;
&lt;li&gt;Create query definition&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Modeling
&lt;/h3&gt;

&lt;p&gt;The ER diagram is as follows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut1ihoxkqwcu72o3ue34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut1ihoxkqwcu72o3ue34.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The timeline shows the tweets of the users you follow. In an SNS, how fast the timeline loads has a major impact on usability, so we will look for a database design that serves the timeline quickly.&lt;/p&gt;
&lt;h3&gt;
  
  
  Read Heavy / Write Light on the timeline
&lt;/h3&gt;

&lt;p&gt;With a normalized table design, writing data when tweeting is light, because data is written only to the Tweets table. Reading the timeline, however, is heavy. The main flow for reading the timeline is as follows.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get the IDs of the users being followed&lt;/li&gt;
&lt;li&gt;Get the tweets of each followed user&lt;/li&gt;
&lt;li&gt;Merge the retrieved tweets&lt;/li&gt;
&lt;li&gt;Sort the merged tweets&lt;/li&gt;
&lt;/ol&gt;
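&lt;p&gt;These four steps can be sketched in application code as follows (a minimal in-memory sketch for illustration, not the actual implementation):&lt;/p&gt;

```javascript
// Read Heavy / Write Light: the four timeline steps, in memory.
const follows = [
  { followerId: "alice", followeeId: "bob" },
  { followerId: "alice", followeeId: "carol" },
];
const tweets = [
  { userId: "bob", text: "hi", postDate: 1 },
  { userId: "carol", text: "yo", postDate: 3 },
  { userId: "bob", text: "bye", postDate: 2 },
];

function getTimeline(userId) {
  // 1. Get the IDs of the users being followed.
  const followeeIds = follows
    .filter((f) => f.followerId === userId)
    .map((f) => f.followeeId);
  // 2-3. Get and merge each followee's tweets.
  const merged = tweets.filter((t) => followeeIds.includes(t.userId));
  // 4. Sort the merged tweets, newest first.
  return merged.sort((a, b) => b.postDate - a.postDate);
}

console.log(getTimeline("alice").map((t) => t.text)); // [ 'yo', 'bye', 'hi' ]
```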

&lt;p&gt;The SQL for getting the timeline is as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; 
  &lt;span class="n"&gt;tweets&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;
  &lt;span class="n"&gt;userId&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;followeeId&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;follows&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;followerId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'user id'&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;
  &lt;span class="n"&gt;postDate&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this method, the more users you follow, the heavier the timeline read becomes. This is a Read Heavy / Write Light approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Read Light / Write Heavy on the timeline
&lt;/h3&gt;

&lt;p&gt;Now consider a Read Light / Write Heavy technique. If you create a Timelines table, reading the timeline is just a query against that table. In exchange, when a user tweets, the tweet must be written to the timeline of every follower.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focb1lluh5licjlrnkpfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focb1lluh5licjlrnkpfk.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The SQL for getting the timeline is as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT
  *
FROM
  timelines
WHERE
  userId = 'user id'
ORDER BY
  tweetPostDate DESC
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will use this Read Light / Write Heavy method.&lt;/p&gt;
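&lt;p&gt;The fan-out-on-write idea can be sketched with in-memory maps standing in for the Timelines table (an illustrative sketch, not the repository's code):&lt;/p&gt;

```javascript
// Read Light / Write Heavy: fan-out on write, with in-memory maps
// standing in for the Timelines table.
const followers = new Map(); // userId -> array of follower ids
const timelines = new Map(); // userId -> array of tweets

function follow(followerId, followeeId) {
  if (!followers.has(followeeId)) followers.set(followeeId, []);
  followers.get(followeeId).push(followerId);
}

function postTweet(userId, text, postDate) {
  const tweet = { userId, text, postDate };
  // Heavy write: copy the tweet into every follower's timeline.
  for (const followerId of followers.get(userId) ?? []) {
    if (!timelines.has(followerId)) timelines.set(followerId, []);
    timelines.get(followerId).push(tweet);
  }
  return tweet;
}

function getTimeline(userId) {
  // Light read: a single lookup, sorted newest first.
  return (timelines.get(userId) ?? [])
    .slice()
    .sort((a, b) => b.postDate - a.postDate);
}

follow("alice", "bob"); // alice follows bob
postTweet("bob", "hello", 1);
postTweet("bob", "world", 2);
console.log(getTimeline("alice").map((t) => t.text)); // [ 'world', 'hello' ]
```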

&lt;h3&gt;
  
  
  Create use case list
&lt;/h3&gt;

&lt;p&gt;Create a list of data use cases based on the functional requirements to clarify how the data will be accessed.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Entity&lt;/th&gt;
&lt;th&gt;UseCase&lt;/th&gt;
&lt;th&gt;Screen&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tweet&lt;/td&gt;
&lt;td&gt;getTimelineByUserId&lt;/td&gt;
&lt;td&gt;Home&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;User&lt;/td&gt;
&lt;td&gt;getUserByUserName&lt;/td&gt;
&lt;td&gt;User Detail&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Follow&lt;/td&gt;
&lt;td&gt;getFolloweesByUserId&lt;/td&gt;
&lt;td&gt;User Detail&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Follow&lt;/td&gt;
&lt;td&gt;getFollowersByUserId&lt;/td&gt;
&lt;td&gt;User Detail&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Follow&lt;/td&gt;
&lt;td&gt;getCountFolloweeByUserId&lt;/td&gt;
&lt;td&gt;User Detail&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Follow&lt;/td&gt;
&lt;td&gt;getCountFollowerByUserId&lt;/td&gt;
&lt;td&gt;User Detail&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tweet&lt;/td&gt;
&lt;td&gt;getTweetsByUserId&lt;/td&gt;
&lt;td&gt;User Detail&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tweet&lt;/td&gt;
&lt;td&gt;getTweetByTweetId&lt;/td&gt;
&lt;td&gt;Tweet Detail&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Comment&lt;/td&gt;
&lt;td&gt;getCommentsByTweetId&lt;/td&gt;
&lt;td&gt;Tweet Detail&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Design Table
&lt;/h3&gt;

&lt;p&gt;We will design the table and indexes based on the use case list. DynamoDB supports only limited query patterns, but a technique called &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-gsi-overloading.html" rel="noopener noreferrer"&gt;GSI overloading&lt;/a&gt; allows for flexible queries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8en7ffzmjjvb0vxgp5cy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8en7ffzmjjvb0vxgp5cy.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Include the ID in the sort key, and generate IDs whose order matches record creation time. Posts can then be sorted by date without an LSI.&lt;/p&gt;
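&lt;p&gt;For example, a query for a user's tweets could combine the partition key with a begins_with condition on the sort key, and read items in descending order because larger IDs are newer. Attribute names follow the query definitions below; the table name is an assumption for illustration.&lt;/p&gt;

```javascript
// Sketch: Query parameters for getTweetsByUserId using the primary key.
function buildGetTweetsQuery(userId) {
  return {
    TableName: "scalable-twitter", // assumed table name
    KeyConditionExpression: "ID = :id AND begins_with(DataType, :prefix)",
    ExpressionAttributeValues: {
      ":id": userId,
      ":prefix": "tweet",
    },
    ScanIndexForward: false, // newest first, since IDs increase over time
  };
}

console.log(buildGetTweetsQuery("user-123").ScanIndexForward); // false
```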

&lt;h3&gt;
  
  
  Create query definition
&lt;/h3&gt;

&lt;p&gt;Finally, write out the query conditions. The database access code will be implemented based on these.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Entity&lt;/th&gt;
&lt;th&gt;UseCase&lt;/th&gt;
&lt;th&gt;Parameters&lt;/th&gt;
&lt;th&gt;Table / Index&lt;/th&gt;
&lt;th&gt;Key Condition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tweet&lt;/td&gt;
&lt;td&gt;getTimelineByUserId&lt;/td&gt;
&lt;td&gt;{ UserId }&lt;/td&gt;
&lt;td&gt;Primary Key&lt;/td&gt;
&lt;td&gt;Query (ID=UserId AND begins_with(DataType, timeline))&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;User&lt;/td&gt;
&lt;td&gt;getUserByUserName&lt;/td&gt;
&lt;td&gt;{Username}&lt;/td&gt;
&lt;td&gt;GSI-1&lt;/td&gt;
&lt;td&gt;Query (DataValue=Username AND DataType=userProfile)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Follow&lt;/td&gt;
&lt;td&gt;getFolloweesByUserId&lt;/td&gt;
&lt;td&gt;{UserId}&lt;/td&gt;
&lt;td&gt;Primary key&lt;/td&gt;
&lt;td&gt;Query (ID=userId AND begins_with(DataType, followee))&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Follow&lt;/td&gt;
&lt;td&gt;getFollowersByUserId&lt;/td&gt;
&lt;td&gt;{UserId}&lt;/td&gt;
&lt;td&gt;Primary Key&lt;/td&gt;
&lt;td&gt;Query (ID=userId AND begins_with(DataType, follower))&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Follow&lt;/td&gt;
&lt;td&gt;getCountFolloweeByUserId&lt;/td&gt;
&lt;td&gt;{UserId}&lt;/td&gt;
&lt;td&gt;Primary Key&lt;/td&gt;
&lt;td&gt;Select COUNT / Query (ID=userId AND begins_with(DataType, followee))&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Follow&lt;/td&gt;
&lt;td&gt;getCountFollowerByUserId&lt;/td&gt;
&lt;td&gt;{UserId}&lt;/td&gt;
&lt;td&gt;Primary Key&lt;/td&gt;
&lt;td&gt;Select COUNT / Query (ID=userId AND begins_with(DataType, follower))&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tweet&lt;/td&gt;
&lt;td&gt;getTweetsByUserId&lt;/td&gt;
&lt;td&gt;{UserId}&lt;/td&gt;
&lt;td&gt;Primary Key&lt;/td&gt;
&lt;td&gt;Query (ID=userId AND begins_with(DataType, tweet))&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tweet&lt;/td&gt;
&lt;td&gt;getTweetByTweetId&lt;/td&gt;
&lt;td&gt;{TweetId}&lt;/td&gt;
&lt;td&gt;GSI-1&lt;/td&gt;
&lt;td&gt;Query (DataValue=tweetId AND begins_with(DataType, tweet))&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Comment&lt;/td&gt;
&lt;td&gt;getCommentsByTweetId&lt;/td&gt;
&lt;td&gt;{TweetId}&lt;/td&gt;
&lt;td&gt;GSI-1&lt;/td&gt;
&lt;td&gt;Query (DataValue=tweetId AND begins_with(DataType, comment))&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Design API Server
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Software Design
&lt;/h3&gt;

&lt;p&gt;The design is based on Domain-Driven Design (DDD); directory names match the layers.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Directory Name&lt;/th&gt;
&lt;th&gt;DDD Layer&lt;/th&gt;
&lt;th&gt;Components&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;src/domain&lt;/td&gt;
&lt;td&gt;Domain Layer&lt;/td&gt;
&lt;td&gt;Entity / Value Object / Repository Interface&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;src/application&lt;/td&gt;
&lt;td&gt;Application Layer&lt;/td&gt;
&lt;td&gt;Application Service / Serializer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;src/infrastructure&lt;/td&gt;
&lt;td&gt;Infrastructure Layer&lt;/td&gt;
&lt;td&gt;Repository / AWS Config&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;src/presentation&lt;/td&gt;
&lt;td&gt;Presentation Layer&lt;/td&gt;
&lt;td&gt;API Server&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  ID generation method
&lt;/h3&gt;

&lt;p&gt;We need IDs whose order matches record creation time. A numbering table could generate such IDs, but it does not scale. Instead, we use &lt;a href="https://blog.twitter.com/engineering/en_us/a/2010/announcing-snowflake" rel="noopener noreferrer"&gt;Snowflake&lt;/a&gt;, a scalable ID generation method.&lt;/p&gt;

&lt;p&gt;This method splits a bit string into three parts; the ID is the decimal value of that bit string.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Part&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Epoch time&lt;/td&gt;
&lt;td&gt;The number of seconds elapsed since a chosen reference time.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sequence&lt;/td&gt;
&lt;td&gt;Counts up each time an ID is generated and is reset every second.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Node number&lt;/td&gt;
&lt;td&gt;The number assigned to each node.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A Node.js implementation of Snowflake is as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@src/config&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;dateToUnixTime&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./time&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;workerIDBits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sequenceBits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Use snowflake&lt;/span&gt;
&lt;span class="c1"&gt;// See: https://blog.twitter.com/engineering/en_us/a/2010/announcing-snowflake&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;IdGenerator&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kr"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;workerId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kr"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;lastGenerateAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kr"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;sequence&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;workerId&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;workerId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;snowflakeWorkerId&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lastGenerateAt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;dateToUnixTime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sequence&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;now&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;dateToUnixTime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;now&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lastGenerateAt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sequence&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sequence&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lastGenerateAt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;now&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;// The bit operators ('&amp;lt;&amp;lt;' and '|' ) can handle numbers within&lt;/span&gt;
    &lt;span class="c1"&gt;// the range of signed 32 bit integer.&lt;/span&gt;
    &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nx"&gt;now&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;workerIDBits&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;sequenceBits&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;workerId&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="nx"&gt;sequenceBits&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sequence&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
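&lt;p&gt;To check the layout, an ID can be decoded back into its three parts with division and modulo. This decoder is a hypothetical helper, not part of the repository; the bit widths match the class above (10 worker bits, 12 sequence bits), and plain arithmetic is used because JavaScript's bitwise operators only handle 32-bit integers.&lt;/p&gt;

```javascript
// Decode a Snowflake-style ID into epoch seconds, worker ID, and sequence.
const workerIDBits = 10;
const sequenceBits = 12;

function decodeId(id) {
  const sequence = id % 2 ** sequenceBits;
  const workerId = Math.floor(id / 2 ** sequenceBits) % 2 ** workerIDBits;
  const epochSeconds = Math.floor(id / 2 ** (workerIDBits + sequenceBits));
  return { epochSeconds, workerId, sequence };
}

// An ID built from epoch 1651414226, worker 7, sequence 42.
const id = 1651414226 * 2 ** (workerIDBits + sequenceBits) + 7 * 2 ** sequenceBits + 42;
console.log(decodeId(id)); // { epochSeconds: 1651414226, workerId: 7, sequence: 42 }
```

&lt;p&gt;Because the epoch occupies the most significant bits, IDs generated later always compare greater, which is what makes sort-key ordering match creation time.&lt;/p&gt;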



&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Is the user's profile information duplicated?
&lt;/h3&gt;

&lt;p&gt;Yes, it is duplicated. When a profile is updated, trigger a Lambda function from DynamoDB Streams to propagate the change asynchronously and keep the copies eventually consistent.&lt;/p&gt;
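&lt;p&gt;A sketch of that propagation step: turn a DynamoDB Streams record into the update operations for the duplicated items. The record shape follows the Streams event format, but the helper, item keys, and attribute names are illustrative assumptions; the real Lambda would also query DynamoDB to find the duplicates and apply the updates.&lt;/p&gt;

```javascript
// Sketch: build update operations for duplicated profile data from a
// DynamoDB Streams MODIFY record. duplicateKeys is assumed to be the
// list of items that embed the user's profile.
function buildProfileUpdates(record, duplicateKeys) {
  if (record.eventName !== "MODIFY") return [];
  const newImage = record.dynamodb.NewImage;
  return duplicateKeys.map((key) => ({
    Key: key,
    UpdateExpression: "SET username = :u",
    ExpressionAttributeValues: { ":u": newImage.username.S },
  }));
}

const record = {
  eventName: "MODIFY",
  dynamodb: { NewImage: { username: { S: "koki" } } },
};
const updates = buildProfileUpdates(record, [{ ID: "tweet-1", DataType: "tweet" }]);
console.log(updates.length); // 1
```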

&lt;h3&gt;
  
  
  Isn't writing heavy when a user with many followers tweets?
&lt;/h3&gt;

&lt;p&gt;Yes, it is expensive. Only for users with a very large number of followers do you need countermeasures, such as skipping the write to each follower's timeline and instead merging that user's tweets dynamically when the timeline is read.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shouldn't we cache?
&lt;/h3&gt;

&lt;p&gt;We should. But it is not too late to decide after monitoring reveals the bottlenecks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, I explained how to create a highly scalable API server. Just keep in mind that aggressive performance optimization can backfire when there is no actual performance problem.&lt;/p&gt;

&lt;p&gt;The implementation is published on &lt;a href="https://github.com/koukikitamura/scalable-twitter" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, so please take a look.&lt;/p&gt;

</description>
      <category>node</category>
      <category>dynamodb</category>
      <category>express</category>
    </item>
    <item>
      <title>Data Size Real World</title>
      <dc:creator>koki kitamura</dc:creator>
      <pubDate>Sun, 20 Feb 2022 13:45:13 +0000</pubDate>
      <link>https://dev.to/koukikitamura/real-data-size-world-4ijp</link>
      <guid>https://dev.to/koukikitamura/real-data-size-world-4ijp</guid>
      <description>&lt;p&gt;Applications are currently used by people all over the world. The applications have the following features.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have millions of users&lt;/li&gt;
&lt;li&gt;Store large amounts of data, from terabytes to exabytes (TB-EB)&lt;/li&gt;
&lt;li&gt;Require latencies from milliseconds (ms) down to microseconds (μs)&lt;/li&gt;
&lt;li&gt;Handle millions of requests per second&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sizes in the TB-EB range can be hard to grasp, because numbers with that many digits rarely appear in daily life.&lt;/p&gt;

&lt;p&gt;Software always needs hardware to run on, and the hardware must suit the software's characteristics. Understanding data sizes helps you choose memory and storage, whether you are selecting an on-premises server, a virtual server instance type, or a PC for home use.&lt;/p&gt;

&lt;p&gt;Today, the advent of cloud computing has blurred the border between application engineers and infrastructure engineers, and many application teams build their own infrastructure in the cloud. Understanding data sizes lets you choose the right hardware yourself.&lt;/p&gt;

&lt;p&gt;In this article, we'll build a sense of scale by looking at the data sizes of various things.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data size unit
&lt;/h2&gt;

&lt;p&gt;The unit of data size is the byte. Today, 1 byte is defined as 8 bits.&lt;/p&gt;

&lt;p&gt;Because data sizes involve many digits, prefixes are used to abbreviate them. The International System of Units (SI) defines the following prefixes:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Symbol&lt;/th&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Factor&lt;/th&gt;
&lt;th&gt;Power&lt;/th&gt;
&lt;th&gt;EN&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;k&lt;/td&gt;
&lt;td&gt;kilo&lt;/td&gt;
&lt;td&gt;10^3&lt;/td&gt;
&lt;td&gt;10^3&lt;/td&gt;
&lt;td&gt;thousand&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;M&lt;/td&gt;
&lt;td&gt;mega&lt;/td&gt;
&lt;td&gt;10^3k&lt;/td&gt;
&lt;td&gt;10^6&lt;/td&gt;
&lt;td&gt;million&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;G&lt;/td&gt;
&lt;td&gt;giga&lt;/td&gt;
&lt;td&gt;10^3M&lt;/td&gt;
&lt;td&gt;10^9&lt;/td&gt;
&lt;td&gt;billion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;T&lt;/td&gt;
&lt;td&gt;tera&lt;/td&gt;
&lt;td&gt;10^3G&lt;/td&gt;
&lt;td&gt;10^12&lt;/td&gt;
&lt;td&gt;trillion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;P&lt;/td&gt;
&lt;td&gt;peta&lt;/td&gt;
&lt;td&gt;10^3T&lt;/td&gt;
&lt;td&gt;10^15&lt;/td&gt;
&lt;td&gt;quadrillion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;E&lt;/td&gt;
&lt;td&gt;exa&lt;/td&gt;
&lt;td&gt;10^3P&lt;/td&gt;
&lt;td&gt;10^18&lt;/td&gt;
&lt;td&gt;quintillion&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;On the other hand, the data size prefix is as follows:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Symbol&lt;/th&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Factor&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;B&lt;/td&gt;
&lt;td&gt;byte&lt;/td&gt;
&lt;td&gt;8bit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;KB&lt;/td&gt;
&lt;td&gt;kilo byte&lt;/td&gt;
&lt;td&gt;1024B&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MB&lt;/td&gt;
&lt;td&gt;mega byte&lt;/td&gt;
&lt;td&gt;1024KB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GB&lt;/td&gt;
&lt;td&gt;giga byte&lt;/td&gt;
&lt;td&gt;1024MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TB&lt;/td&gt;
&lt;td&gt;tera byte&lt;/td&gt;
&lt;td&gt;1024GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PB&lt;/td&gt;
&lt;td&gt;peta byte&lt;/td&gt;
&lt;td&gt;1024TB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;EB&lt;/td&gt;
&lt;td&gt;exa byte&lt;/td&gt;
&lt;td&gt;1024PB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Computers treat data as binary numbers, so each data size prefix is a factor of 1024, which is 2 to the 10th power, rather than 1000. The notation KiB (instead of KB) exists to distinguish binary prefixes from SI prefixes, but this article uses KB.&lt;/p&gt;
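&lt;p&gt;A small helper makes the 1024-per-step rule concrete (a sketch for illustration):&lt;/p&gt;

```javascript
// Format a byte count with binary prefixes (each step is 1024 = 2^10),
// matching the table above.
const prefixes = ["B", "KB", "MB", "GB", "TB", "PB", "EB"];

function formatBytes(bytes) {
  let value = bytes;
  let i = 0;
  while (value >= 1024) {
    if (i === prefixes.length - 1) break; // stop at EB
    value = value / 1024;
    i = i + 1;
  }
  return value.toFixed(1) + prefixes[i];
}

console.log(formatBytes(1024)); // 1.0KB
console.log(formatBytes(5.8 * 1024 * 1024)); // 5.8MB
```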

&lt;h2&gt;
  
  
  Website data size
&lt;/h2&gt;

&lt;p&gt;According to &lt;a href="https://httparchive.org/reports/page-weight"&gt;Page Weight&lt;/a&gt;, the data size of a website component is as follows:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Total&lt;/td&gt;
&lt;td&gt;1.96MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HTML&lt;/td&gt;
&lt;td&gt;31.4KB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CSS&lt;/td&gt;
&lt;td&gt;68.9KB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;JavaScript&lt;/td&gt;
&lt;td&gt;452KB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Font&lt;/td&gt;
&lt;td&gt;119KB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image&lt;/td&gt;
&lt;td&gt;956KB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Video&lt;/td&gt;
&lt;td&gt;2.07 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The aggregation period is January 2017 through January 2022, and the target is mobile sites.&lt;/p&gt;

&lt;p&gt;Note that these are total sizes per page, not sizes per file. The JavaScript figure may be larger than you expect; that is because it includes not only your own code but also external packages such as frameworks and libraries.&lt;/p&gt;

&lt;h2&gt;
  
  
  File size
&lt;/h2&gt;

&lt;p&gt;Sizes vary with file contents, so treat these numbers as rough references. Sizes and formats were converted from the sources as needed. These numbers are also not a ranking of file formats; the appropriate format depends on the characteristics of each file.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;th&gt;cf.&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Image - small JPG (size 320 x 320)&lt;/td&gt;
&lt;td&gt;21.0KB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.instagram.com/p/CZZyelKpmYf/"&gt;https://www.instagram.com/p/CZZyelKpmYf/&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image - small PNG (size 320 x 320)&lt;/td&gt;
&lt;td&gt;137KB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.instagram.com/p/CZZyelKpmYf/"&gt;https://www.instagram.com/p/CZZyelKpmYf/&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image - small WebP (size 320 x 320)&lt;/td&gt;
&lt;td&gt;16KB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.instagram.com/p/CZZyelKpmYf/"&gt;https://www.instagram.com/p/CZZyelKpmYf/&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image - large JPG (size 1036 x 1036)&lt;/td&gt;
&lt;td&gt;187KB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.instagram.com/p/CVoHltuF7_e/"&gt;https://www.instagram.com/p/CVoHltuF7_e/&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image - large PNG (size 1036 x 1036)&lt;/td&gt;
&lt;td&gt;1.38MB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.instagram.com/p/CVoHltuF7_e/"&gt;https://www.instagram.com/p/CVoHltuF7_e/&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image - large WebP (size 1036 x 1036)&lt;/td&gt;
&lt;td&gt;148KB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.instagram.com/p/CVoHltuF7_e/"&gt;https://www.instagram.com/p/CVoHltuF7_e/&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Audio - music MP3 (playback time 3:01)&lt;/td&gt;
&lt;td&gt;5.80MB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://pixabay.com/music/beats-dont-you-think-lose-16073/"&gt;https://pixabay.com/music/beats-dont-you-think-lose-16073/&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Movie - short MP4 720p (playback time 0:09)&lt;/td&gt;
&lt;td&gt;851KB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.youtube.com/shorts/Wm3F8kF9WAE"&gt;https://www.youtube.com/shorts/Wm3F8kF9WAE&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Movie - short WebM 720p (playback time 0:09)&lt;/td&gt;
&lt;td&gt;1.10MB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.youtube.com/shorts/Wm3F8kF9WAE"&gt;https://www.youtube.com/shorts/Wm3F8kF9WAE&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Movie - short GIF 720p (playback time 0:09)&lt;/td&gt;
&lt;td&gt;3.50MB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.youtube.com/shorts/Wm3F8kF9WAE"&gt;https://www.youtube.com/shorts/Wm3F8kF9WAE&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Document - PDF (4 pages)&lt;/td&gt;
&lt;td&gt;150KB&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Document - DOC (4 pages)&lt;/td&gt;
&lt;td&gt;100KB&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Document - XLSX (1000 rows)&lt;/td&gt;
&lt;td&gt;140KB&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Document - PPT (3 pages)&lt;/td&gt;
&lt;td&gt;248KB&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Application - Firefox 97.0.1 (Mac)&lt;/td&gt;
&lt;td&gt;364MB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.google.com/chrome/"&gt;https://www.google.com/chrome/&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Application - Discord 0.0.265 (Mac)&lt;/td&gt;
&lt;td&gt;193MB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://discord.com/"&gt;https://discord.com/&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Application - Zoom 5.1.1 (Mac)&lt;/td&gt;
&lt;td&gt;52.5MB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://zoom.us/"&gt;https://zoom.us/&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Application - Xcode 13.2.1 (Mac)&lt;/td&gt;
&lt;td&gt;32.1GB&lt;/td&gt;
&lt;td&gt;&lt;a href="https://developer.apple.com/xcode/"&gt;https://developer.apple.com/xcode/&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Hardware capacity
&lt;/h2&gt;

&lt;p&gt;Typical data sizes of hardware memory and storage are shown in the following table. Where there is no single standard, a representative value is given as a guide.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Memory - AWS EC2 instance t2.micro&lt;/td&gt;
&lt;td&gt;1GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory - AWS EC2 instance T2&lt;/td&gt;
&lt;td&gt;0.5GB ~ 32GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory - AWS EC2 instance M5&lt;/td&gt;
&lt;td&gt;8GB ~ 384GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory - MacBook Pro 13 inch 2020&lt;/td&gt;
&lt;td&gt;8GB ~ 16GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory - MacBook Pro 14 inch 2021&lt;/td&gt;
&lt;td&gt;16GB ~ 64GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory - iPhone (1st generation)&lt;/td&gt;
&lt;td&gt;128MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory - iPhone (13 Pro max)&lt;/td&gt;
&lt;td&gt;6GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage - AWS EBS Provisioned HDD&lt;/td&gt;
&lt;td&gt;125GB ~ 16TB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage - AWS EBS Provisioned IOPS SSD&lt;/td&gt;
&lt;td&gt;4GB ~ 16TB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage - AWS RDS SSD&lt;/td&gt;
&lt;td&gt;20GB ~ 64TB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage - MacBook Pro 13 inch 2020 SSD&lt;/td&gt;
&lt;td&gt;256GB ~ 2TB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage - MacBook Pro 14 inch 2021 SSD&lt;/td&gt;
&lt;td&gt;1TB ~ 8TB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage - iPhone (1st generation)&lt;/td&gt;
&lt;td&gt;4 ~ 16GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage - iPhone (13 Pro max)&lt;/td&gt;
&lt;td&gt;128GB ~ 1TB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage - Floppy Disk&lt;/td&gt;
&lt;td&gt;720KB ~ 1.44MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage - Compact Disk&lt;/td&gt;
&lt;td&gt;650 ~ 700MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage - DVD&lt;/td&gt;
&lt;td&gt;4.7GB ~ 8.5GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage - Blu-ray&lt;/td&gt;
&lt;td&gt;25GB ~ 128GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage - USB memory&lt;/td&gt;
&lt;td&gt;32GB ~ 256GB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Real application data volume
&lt;/h2&gt;

&lt;p&gt;According to &lt;a href="https://www.domo.com/learn/infographic/data-never-sleeps-9"&gt;Data Never Sleeps&lt;/a&gt;, real applications create data at the following rates:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Volume per minute&lt;/th&gt;
&lt;th&gt;Volume per day&lt;/th&gt;
&lt;th&gt;Volume per year&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Twitter tweet&lt;/td&gt;
&lt;td&gt;575K tweet/min&lt;/td&gt;
&lt;td&gt;828M tweet/day&lt;/td&gt;
&lt;td&gt;302G tweet/year&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Instagram photo&lt;/td&gt;
&lt;td&gt;65K photo/min&lt;/td&gt;
&lt;td&gt;93.6M photo/day&lt;/td&gt;
&lt;td&gt;34.2G photo/year&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Slack message&lt;/td&gt;
&lt;td&gt;148K message/min&lt;/td&gt;
&lt;td&gt;213M message/day&lt;/td&gt;
&lt;td&gt;77.8G message/year&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Next, let's look at this in bytes. Assuming a tweet has 100 characters and one character is 1 byte, one tweet is 0.1KB. One Instagram photo is assumed to be 0.1MB. Assuming a Slack message has 50 characters and one character is 1 byte, one message is 0.05KB. Based on these assumptions, the yearly sizes are as follows.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Size per year&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Twitter tweet&lt;/td&gt;
&lt;td&gt;30.2TB/year&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Instagram photo&lt;/td&gt;
&lt;td&gt;3.42PB/year&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Slack message&lt;/td&gt;
&lt;td&gt;3.89TB/year&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
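&lt;p&gt;The table above can be reproduced with a few lines of arithmetic. This is a sketch using decimal (1000-based) factors for simplicity; the per-item sizes are the assumptions stated above:&lt;/p&gt;

```python
# Assumed per-item sizes: 0.1KB per tweet, 0.1MB per photo, 0.05KB per message.
tweets_per_year = 302e9
photos_per_year = 34.2e9
messages_per_year = 77.8e9

tweet_tb = tweets_per_year * 0.1e3 / 1e12        # bytes -> TB
photo_pb = photos_per_year * 0.1e6 / 1e15        # bytes -> PB
message_tb = messages_per_year * 0.05e3 / 1e12   # bytes -> TB

print(f"{tweet_tb:.1f}TB/year, {photo_pb:.2f}PB/year, {message_tb:.2f}TB/year")
# 30.2TB/year, 3.42PB/year, 3.89TB/year
```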

&lt;p&gt;If you operate such a service for a year, terabytes of data or more accumulate. It is difficult to store this amount of data on a single database server, so you need to scale out and store it across multiple servers. Such a setup is called a distributed database.&lt;/p&gt;

&lt;p&gt;There are two ways to build a distributed database: the Master/Slave (replication) method and the partitioning method. The Master/Slave method addresses high read traffic, not large data volume. For a large amount of data, we should use partitioning.&lt;/p&gt;

&lt;p&gt;Most RDBs are not designed for partitioning, so maintaining a partitioned RDB is costly. If you need a partitioned distributed database, consider one designed for partitioning, such as DynamoDB.&lt;/p&gt;
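&lt;p&gt;The core idea of partitioning can be sketched in a few lines: hash a partition key and use the hash to pick a shard, which is similar in spirit to how DynamoDB distributes items by partition key (the function and fixed shard count below are illustrative, not a real client API):&lt;/p&gt;

```python
import hashlib

NUM_SHARDS = 4  # illustrative; real systems rebalance as shards are added

def shard_for(partition_key):
    # Hash the key so items spread evenly across shards, and so the same
    # key (e.g. the same user's tweets) always routes to the same shard.
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

print(shard_for("user#123") == shard_for("user#123"))  # True: routing is deterministic
```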

</description>
      <category>database</category>
      <category>rdb</category>
    </item>
    <item>
      <title>E2E Test Automation with Autify</title>
      <dc:creator>koki kitamura</dc:creator>
      <pubDate>Wed, 11 Aug 2021 12:28:01 +0000</pubDate>
      <link>https://dev.to/koukikitamura/e2e-test-automation-with-autify-1fp9</link>
      <guid>https://dev.to/koukikitamura/e2e-test-automation-with-autify-1fp9</guid>
      <description>&lt;h2&gt;
  
  
  The need for software testing
&lt;/h2&gt;

&lt;p&gt;Software is used all around us: in smartphones, personal computers, home appliances, and automobiles. Software has become an integral part of our lives, and we take it for granted that it works properly. Therefore, if the software does not work properly, the company that developed it risks losing trust, time, and money.&lt;/p&gt;

&lt;h2&gt;
  
  
  The need for automation
&lt;/h2&gt;

&lt;p&gt;Software testing is the act of actually operating the implemented software and detecting defects. Detecting every defect would require operating the software under all possible conditions, but that is impractical because human and time resources are limited. Against this background, the need for software test automation is increasing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Software test type
&lt;/h2&gt;

&lt;p&gt;Tests are divided into unit tests, integration tests, and E2E tests (system tests).&lt;br&gt;
Unit tests cover program components (functions, classes, modules, packages, etc.) and are usually written by the developer. Integration tests cover system components (API server, microservices, frontend) and are also usually run by the developer. System tests operate the system in the same way as an actual user. Developers also check the behavior, but it is common for a QA engineer other than the developer to own the more complex test design and guarantee system quality.&lt;/p&gt;

&lt;p&gt;Unit tests and integration tests are automated by many teams, with test code running in CI/CD. On the other hand, although there are tools for automating system tests (Selenium, Puppeteer, Cypress, etc.), few teams have continuous system test automation working well.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Test&lt;/th&gt;
&lt;th&gt;Team&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Unit Test&lt;/td&gt;
&lt;td&gt;Developer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Integration Test&lt;/td&gt;
&lt;td&gt;Developer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;System Test&lt;/td&gt;
&lt;td&gt;QA Engineer&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  System test cases
&lt;/h2&gt;

&lt;p&gt;Many teams have a QA team separate from the engineering team, and engineers often do not design test cases or perform system tests themselves. For engineers, a system test often brings to mind only a vague image of manipulating the UI and touching features to verify that the system works. In reality, for each screen, various input value patterns are decided one by one and the results are carefully verified against expectations.&lt;/p&gt;
&lt;h2&gt;
  
  
  Test cases to automate
&lt;/h2&gt;

&lt;p&gt;The role of automated system testing is not to guarantee the quality of new features, but to catch regressions in existing features. Since the purpose of automation is to reduce man-hours, you should automate only the test cases for which automation actually reduces man-hours. Such test cases must satisfy the following inequality:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(Manual test cost per time)  x (The number of implementations) &amp;gt; (The cost of starting automation) + (Maintenance costs)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The (left side) minus (right side) of this formula is the man-hours saved. Apart from the number of executions, the terms differ little between test cases, so they can be treated as constants. In other words, the man-hour reduction depends mainly on how many times a test is performed. Tests for a new feature are run intensively before the feature is released, but much less often after that. Regression tests, on the other hand, are performed regularly at every release, so automating them is expected to reduce man-hours. Agile development repeats the cycle of development, test, and release, so it has a high affinity with system test automation. System testing does not automate everything; at most about 30% of all test cases are automated, and manual human testing does not go away.&lt;/p&gt;
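&lt;p&gt;As a back-of-the-envelope sketch, the inequality can be evaluated with hypothetical numbers (all figures below are made up for illustration):&lt;/p&gt;

```python
def saved_man_hours(manual_cost_per_run, runs, automation_cost, maintenance_cost):
    # Left side minus right side of the inequality above:
    # a positive result means automating this test case pays off.
    return manual_cost_per_run * runs - (automation_cost + maintenance_cost)

# Hypothetical: 0.5h per manual run, executed 40 times,
# 8h to automate the case, 4h of maintenance over its lifetime.
print(saved_man_hours(0.5, 40, 8, 4))  # 8.0 (hours saved)
```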

&lt;p&gt;In addition to regression tests, the following candidates may be considered for automation. Automate little by little while checking whether it actually reduces man-hours.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Intensive testing of the core of the product&lt;/li&gt;
&lt;li&gt;Wide and shallow test of basic functions&lt;/li&gt;
&lt;li&gt;Testing for unstable and buggy features&lt;/li&gt;
&lt;li&gt;Tests that could not be carried out due to insufficient man-hours&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tools used to automate system testing
&lt;/h2&gt;

&lt;p&gt;There are many tools for automating system tests, but we use the SaaS &lt;a href="https://autify.com/"&gt;Autify&lt;/a&gt;, which makes it easy for non-engineers to create and maintain automated test cases.&lt;/p&gt;

&lt;p&gt;Autify can create test cases by recording browser operations, with no need to write scripts. A Chrome extension is used to record the operations. Entering the URL on the recording screen opens a new window for recording.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kCbED5Zu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jluhccs5bv4wtjxc7qwl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kCbED5Zu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jluhccs5bv4wtjxc7qwl.png" alt="recording_start_scren" width="880" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can save the test case by performing the operation you want to test in a new window and pressing the save button at the bottom left.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9Ig7KqHT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ns3dg8ha297rem2kjth.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9Ig7KqHT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ns3dg8ha297rem2kjth.png" alt="recodding_menu" width="561" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Autify has four concepts.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scenario&lt;/li&gt;
&lt;li&gt;Step group&lt;/li&gt;
&lt;li&gt;Test plan&lt;/li&gt;
&lt;li&gt;Test Result&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A scenario is the smallest unit of test execution and corresponds to one test case: for example, logging in to the application, posting an image, or sending a message to another user. A scenario consists of multiple steps, divided by actions such as pressing a button or a URL transition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kl7VixkY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hv34yyo1glfc8oj8emjr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kl7VixkY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hv34yyo1glfc8oj8emjr.png" alt="scenaio_detail" width="880" height="353"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Step groups let multiple scenarios share a common sequence of steps. However, there is a restriction: a step group can only be used at the beginning of a scenario.&lt;/p&gt;

&lt;p&gt;Test plans let you group scenarios and run multiple tests together. A test plan can also run in multiple execution environments and on a regular schedule.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MgvGFz1V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/682o2d717lviwq3402io.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MgvGFz1V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/682o2d717lviwq3402io.png" alt="test_plan_detail" width="880" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The test result is the execution result of the scenario or test plan. You can see where it failed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Works with CI / CD
&lt;/h2&gt;

&lt;p&gt;Suppose the release runs in a GitHub Actions workflow named Deploy. We want the system test workflow to start when the Deploy workflow finishes. To see the test results and notify Slack of success or failure, we need to wait until Autify's test plan has completed. I use &lt;a href="https://github.com/koukikitamura/autify-cli"&gt;autify-cli&lt;/a&gt; as the CLI for that. Autify also has its own Slack notification feature, but I send the notification from GitHub Actions so that it can be separated per test plan.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: e2e-test

on:
  workflow_run:
    workflows:
      - Deploy
    types:
      - completed

jobs:
  autify:
    name: E2E Test
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [12.x]

    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    steps:
      - name: Check out source code
        uses: actions/checkout@v1
      - name: Install Autify CLI
        run: |
          curl -LSfs https://raw.githubusercontent.com/koukikitamura/autify-cli/main/scripts/install.sh | \
            sudo sh -s -- \
              --git koukikitamura/autify-cli \
              --target autify-cli_linux_x86_64 \
              --to /usr/local/bin
      - name: Run Autify
        run: |
          response=$(atf run --project-id=${AUTIFY_PROJECT_ID} --plan-id=${AUTIFY_TEST_PLAN_ID} --spinner=false)
          test_result_id=$(jq '.id' &amp;lt;&amp;lt;&amp;lt; "${response}")
          status=$(jq -r '.status' &amp;lt;&amp;lt;&amp;lt; "${response}")

          echo "TEST_RESULT_ID=${test_result_id}" &amp;gt;&amp;gt; $GITHUB_ENV
          echo "TEST_RESULT_STATUS=${status}" &amp;gt;&amp;gt; $GITHUB_ENV
        env:
          AUTIFY_PERSONAL_ACCESS_TOKEN: ${{ secrets.AUTIFY_PERSONAL_ACCESS_TOKEN }}
          AUTIFY_PROJECT_ID: ${{ secrets.AUTIFY_PROJECT_ID }}
          AUTIFY_TEST_PLAN_ID: 99999
      - name: Notify Slack
        run: |
          .github/workflows/notify_slack.sh "E2E Test has finished. Result is ${TEST_RESULT_STATUS}. \`https://app.autify.com/projects/${AUTIFY_PROJECT_ID}/results/${TEST_RESULT_ID}\`"
        env:
          AUTIFY_PROJECT_ID: ${{ secrets.AUTIFY_PROJECT_ID }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Cases where system test automation fails
&lt;/h2&gt;

&lt;p&gt;Be aware of the following typical cases where system test automation fails.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The people doing the automation lack testing expertise&lt;/li&gt;
&lt;li&gt;Parts that do not reduce man-hours are automated&lt;/li&gt;
&lt;li&gt;Maintaining the automated tests consumes large man-hours&lt;/li&gt;
&lt;li&gt;Automation becomes the goal itself&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Automated testing aims to catch regressions; manual testing for quality assurance will never go away entirely.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
