<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dawei Ma</title>
    <description>The latest articles on DEV Community by Dawei Ma (@madawei2699).</description>
    <link>https://dev.to/madawei2699</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F94573%2Fc7f44eed-09c4-4719-a5e6-9736394526cd.png</url>
      <title>DEV Community: Dawei Ma</title>
      <link>https://dev.to/madawei2699</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/madawei2699"/>
    <language>en</language>
    <item>
      <title>SQLite Renaissance</title>
      <dc:creator>Dawei Ma</dc:creator>
      <pubDate>Sat, 25 Feb 2023 14:54:57 +0000</pubDate>
      <link>https://dev.to/madawei2699/sqlite-renaissance-37mf</link>
      <guid>https://dev.to/madawei2699/sqlite-renaissance-37mf</guid>
      <description>&lt;ul&gt;
&lt;li&gt;The Story of SQLite&lt;/li&gt;
&lt;li&gt;Architecture of SQLite&lt;/li&gt;
&lt;li&gt;
The Renaissance of SQLite

&lt;ul&gt;
&lt;li&gt;Serverless / Edge Computing&lt;/li&gt;
&lt;li&gt;Browser-compatible&lt;/li&gt;
&lt;li&gt;Client/Server&lt;/li&gt;
&lt;li&gt;OLAP&lt;/li&gt;
&lt;li&gt;Distributed&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Summary&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Story of SQLite
&lt;/h2&gt;

&lt;p&gt;When I first encountered SQLite, I was struck by its extraordinary testing discipline: the library itself spans over 150,000 lines of source code, backed by over 90 million lines of test code and scripts. The author, Dwayne Richard Hipp, is a perfectionist who not only developed the underlying storage engine and parser, but also created the source hosting tool &lt;a href="https://www2.fossil-scm.org/home/doc/trunk/www/index.wiki"&gt;Fossil&lt;/a&gt;, as well as most of the libraries and tools that SQLite depends on, largely from scratch.&lt;/p&gt;

&lt;p&gt;Richard even built the web server, &lt;a href="https://sqlite.org/althttpd/doc/trunk/althttpd.md"&gt;Althttpd&lt;/a&gt;, that runs the official SQLite website: all of its code lives in a single C file, with no dependencies beyond the standard C library. The SQLite website's database is itself powered by SQLite, and each dynamic page is rendered in about 0.01 seconds despite running over 200 SQL queries.&lt;/p&gt;

&lt;p&gt;It's remarkable that this development model has been successful, considering that the vast majority of the code was written by Richard alone. Although the code is open source, the project does not accept contributions from the community. In fact, &lt;a href="https://github.com/libsql/libsql"&gt;libSQL&lt;/a&gt; is a fork created precisely to offer an open-contribution version of SQLite.&lt;/p&gt;

&lt;p&gt;To learn more about the history of SQLite, check out this podcast: &lt;a href="https://corecursive.com/066-sqlite-with-richard-hipp/"&gt;The Untold Story of SQLite&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture of SQLite
&lt;/h2&gt;

&lt;p&gt;SQLite is a unique database software that functions differently from most other database systems, such as MySQL, SQL Server, PostgreSQL, or Oracle, which have a client/server architecture. In the client/server model, the client communicates with the database server via a specific protocol, and the server receives and processes the client's requests before returning the results.&lt;/p&gt;

&lt;p&gt;Unlike these systems, SQLite runs in-process with the application, functioning as a library, and an entire database is a single file stored on disk. This gives it another advantage: SQLite is notably fast, especially when executing small SQL queries. That speed is what allows SQLite's official website to assemble its dynamic data from over 200 SQL statements per page, without the N+1 query overhead of network communication that other databases incur.&lt;/p&gt;
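The in-process cost model can be sketched with Python's built-in sqlite3 module (the table name and data are illustrative): hundreds of tiny queries stay cheap because each one is a function call, not a network round-trip.

```python
import sqlite3

# In-process means the "server" is a library call: no sockets, no
# protocol round-trips, so many tiny queries per page remain cheap.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO pages (title) VALUES (?)",
                 [(f"page-{i}",) for i in range(200)])
conn.commit()

# Issue 200 separate small queries, as the SQLite website does.
titles = [conn.execute("SELECT title FROM pages WHERE id = ?", (i,)).fetchone()[0]
          for i in range(1, 201)]
print(len(titles))  # 200
```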

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YzR8ONSm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://img.bmpi.dev/e1ecd55c-bb43-40f4-84c9-d2f28f9a855d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YzR8ONSm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://img.bmpi.dev/e1ecd55c-bb43-40f4-84c9-d2f28f9a855d.png" alt="" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture of SQLite is illustrated below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bDIebiwR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://img.bmpi.dev/9ea571d8-d75c-694e-2d4f-a35b979afa9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bDIebiwR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://img.bmpi.dev/9ea571d8-d75c-694e-2d4f-a35b979afa9e.png" alt="" width="880" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture of SQLite is composed of three main parts: the compiler, the virtual machine, and the storage engine. When an application initiates a query request, the SQL statement is parsed by the compiler, and bytecode is generated. Finally, the virtual machine executes the bytecode, calling the storage engine's interface to read or write data.&lt;/p&gt;

&lt;p&gt;The compiler consists of the parser and the code generator: the parser turns SQL statements into an abstract syntax tree (AST), and the code generator, which includes the query planner, turns the AST into bytecode.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.sqlite.org/opcode.html"&gt;Virtual Machine&lt;/a&gt; is a register-based VM responsible for executing the bytecode produced by the code generator. Readers interested in this part can read this article: &lt;a href="https://fly.io/blog/sqlite-virtual-machine/"&gt;How the SQLite Virtual Machine Works&lt;/a&gt;.&lt;/p&gt;
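You can observe the compiler/VM split yourself: prefixing a statement with EXPLAIN makes SQLite return the VDBE bytecode program instead of executing it. A minimal sketch with Python's sqlite3 module (the table is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

# EXPLAIN returns the bytecode program the code generator emitted,
# one row per VDBE instruction: (addr, opcode, p1, p2, p3, p4, p5, comment).
program = conn.execute("EXPLAIN SELECT x FROM t WHERE x > 1").fetchall()
opcodes = [row[1] for row in program]
print(opcodes)  # a list of opcode names; every program contains a Halt
```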

&lt;p&gt;The storage engine's main responsibility is reading or writing data. It includes the B-tree, Pager, and OS Interface (also called VFS).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;B-tree: SQLite stores indexes as B-trees and table data as B+trees. For readers interested in this topic, I recommend reading this article: &lt;a href="https://fly.io/blog/sqlite-internals-btree/"&gt;SQLite Internals: Pages &amp;amp; B-trees&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Pager: Serves as an abstraction layer between the B-tree module and the VFS module. It reads, writes, and caches disk pages, giving SQLite atomicity, isolation, and durability. The Pager offers two concurrency modes: Rollback Journal and Write-Ahead Log. Write-Ahead Log mode scales better, allowing reads to proceed concurrently with a write. Only one writer may update the write-ahead log file at a time; configuring &lt;a href="https://sqlite.org/c3ref/busy_timeout.html"&gt;&lt;code&gt;busy_timeout&lt;/code&gt;&lt;/a&gt; lets multiple writers wait their turn, but execution remains serialized. For more information, refer to these articles: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://fly.io/blog/sqlite-internals-rollback-journal/"&gt;How SQLite helps you do ACID&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://fly.io/blog/sqlite-internals-wal/"&gt;How SQLite Scales Read Concurrency&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;OS Interface (VFS): To provide portability across operating systems, SQLite uses an abstraction layer called VFS. VFS provides methods for opening, reading, writing, and closing disk files, as well as other operating system-specific functionality. For readers interested in this topic, I recommend reading the &lt;a href="https://www.sqlite.org/vfs.html"&gt;SQLite documentation on VFS&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
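The two pager modes can be toggled with pragmas. A minimal sketch using Python's sqlite3 module (the file path and table are illustrative); note that WAL requires an on-disk database:

```python
import os
import sqlite3
import tempfile

# WAL mode needs an on-disk database (an in-memory one cannot use WAL).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# Switch the pager to write-ahead logging, and tell writers how long
# (in milliseconds) to wait when another connection holds the write lock.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
conn.execute("PRAGMA busy_timeout=5000")
print(mode)  # wal

conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# While WAL is active, a second connection can read concurrently.
reader = sqlite3.connect(path)
row = reader.execute("SELECT x FROM t").fetchone()
print(row[0])  # 1
```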

&lt;p&gt;Above is a brief introduction to the SQLite architecture. If you want to further understand the internal implementation details, you can read this open-source e-book: &lt;a href="https://www.compileralchemy.com/books/sqlite-internals/"&gt;SQLite Internals: How The World's Most Used Database Works&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This understanding will help you appreciate the following open-source projects that creatively utilize SQLite, showcasing its versatility.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Renaissance of SQLite
&lt;/h2&gt;

&lt;p&gt;SQLite, a software that is over 20 years old, is often seen as a database for simple local storage or testing, and rarely used in production. However, there are several interesting projects that have revived SQLite and have been hotly discussed in forums such as Hacker News. Let's take a look at some of these projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Serverless / Edge Computing
&lt;/h3&gt;

&lt;p&gt;Jamstack architecture for &lt;a href="https://dev.to/en/dev/guide-to-serverless/"&gt;serverless&lt;/a&gt; applications involves publishing static pages to a CDN and using APIs to provide dynamic updates. This architecture can greatly enhance scalability for business systems. However, there are limitations to this approach. Data must be stored in a separate hosted database, which can be expensive, and the database can become a performance bottleneck. This is particularly true when business systems are deployed to multiple regions. But what if the business system instance and the database were running on the same server? This approach has the potential to eliminate the network overhead of a single-node database, offering a more efficient solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lZC3w_Xz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://img.bmpi.dev/3e89fa80-a573-38a2-4bfc-016b8ff6747d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lZC3w_Xz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://img.bmpi.dev/3e89fa80-a573-38a2-4bfc-016b8ff6747d.png" alt="" width="880" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SQLite is a serverless database in the original sense: it runs in the same process as the application, so it avoids the network communication cost that client/server databases pay. However, a common problem must be addressed when multiple instances read and write the same database. One possible solution is &lt;a href="https://github.com/superfly/litefs"&gt;LiteFS&lt;/a&gt;, a project developed by the author of &lt;a href="https://github.com/benbjohnson/litestream"&gt;Litestream&lt;/a&gt; after joining Fly.io.&lt;/p&gt;

&lt;p&gt;Litestream takes over SQLite's checkpointing process so that it controls the &lt;code&gt;wal&lt;/code&gt; log file. When the database operates in Write-Ahead Log mode, Litestream can continuously copy &lt;code&gt;wal&lt;/code&gt; frames to a backup location such as S3, enabling streaming online backup of SQLite database files. For more information, see this &lt;a href="https://litestream.io/how-it-works/"&gt;document&lt;/a&gt; detailing how it works.&lt;/p&gt;

&lt;p&gt;LiteFS goes a step further than Litestream by presenting SQLite with a FUSE-based file system that acts as its VFS layer. LiteFS replicates data at the page level: it collects the pages touched by a transaction, packages them into a file in the LTX format, and ships that package to read-only nodes over HTTP.&lt;/p&gt;

&lt;p&gt;In a LiteFS cluster, only the primary node can write data; read-only nodes forward the primary's address to clients so that writes are directed to the primary. The primary is elected by obtaining distributed leases from Consul to reach consensus, and a static primary node can also be configured.&lt;/p&gt;

&lt;p&gt;For details about LiteFS's architecture, please refer to this &lt;a href="https://fly.io/docs/litefs/how-it-works/"&gt;article&lt;/a&gt;. A deployment case can be found in this &lt;a href="https://news.ycombinator.com/item?id=34267434"&gt;thread&lt;/a&gt;, which discusses a migration from a Postgres cluster to distributed SQLite on LiteFS.&lt;/p&gt;

&lt;p&gt;Cloudflare has also released a similar commercial solution, &lt;a href="https://developers.cloudflare.com/d1/"&gt;Cloudflare D1&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Browser-compatible
&lt;/h3&gt;

&lt;p&gt;SQLite can be run in the browser using WebAssembly (WASM) technology, and there are two projects that enable the use of SQL within the browser.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/sql-js/sql.js/"&gt;sql.js&lt;/a&gt; allows JavaScript to download the SQLite database file into the browser's memory via network request. SQL can then be used to retrieve data results from the SQLite database file all within the browser.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/jlongster/absurd-sql"&gt;absurd-sql&lt;/a&gt; is similar to sql.js, but it can use the browser's IndexedDB as persistent storage and can read and write SQLite database files.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;What are the advantages of using SQLite in the browser? Take this open-source project of mine, &lt;a href="https://github.com/bmpi-dev/invest-alchemy"&gt;Invest Alchemy&lt;/a&gt;, as an example. It is an ETF portfolio management system that manages multiple ETF portfolios, with each portfolio's data stored in its own SQLite database. These database files live in AWS S3. Every day, a scheduled program downloads all the SQLite databases from the S3 bucket, updates the portfolio data, and uploads the files back to S3.&lt;/p&gt;

&lt;p&gt;When a user views a portfolio page, such as this &lt;a href="https://money.bmpi.dev/portfolio?t=robot_dma_v02&amp;amp;p=dma_11_22"&gt;portfolio&lt;/a&gt;, the page first downloads the SQLite database from S3 into the browser's memory on initial render. It then uses sql.js to initialize the database, runs multiple SQL queries to get the data, and renders the page.&lt;/p&gt;

&lt;p&gt;The advantage of this architecture is that the browser needs only one download request to get all the data for the entire portfolio. With a traditional database, each query would have to travel over the network, which adds cost and increases page load time.&lt;/p&gt;

&lt;p&gt;Finally, SQLite's single database file approach brings good isolation. For example, in Invest Alchemy, one database represents one portfolio. It is possible to store all the personal data of one user in one database and then store these databases in different directories in AWS S3 so that the data of different users can be well isolated.&lt;/p&gt;
&lt;/blockquote&gt;
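The one-database-per-portfolio idea can be sketched in a few lines (the names and schema are hypothetical): each user's data lives in its own file, which can then be uploaded to its own S3 prefix.

```python
import os
import sqlite3
import tempfile

# One database file per user/portfolio: isolation comes from the file
# system itself, and each file can be synced to its own S3 prefix.
root = tempfile.mkdtemp()
for user in ("alice", "bob"):
    os.makedirs(os.path.join(root, user))
    conn = sqlite3.connect(os.path.join(root, user, "portfolio.db"))
    conn.execute("CREATE TABLE holdings (ticker TEXT, qty REAL)")
    conn.execute("INSERT INTO holdings VALUES (?, ?)", (f"{user}-etf", 1.0))
    conn.commit()
    conn.close()

users = sorted(os.listdir(root))
print(users)  # ['alice', 'bob']
```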

&lt;h3&gt;
  
  
  Client/Server
&lt;/h3&gt;

&lt;p&gt;SQLite can also be used in a Client/Server architecture, although doing so gives up some of SQLite's advantages and adds network overhead. Nevertheless, it can be useful in certain scenarios, such as serving a read-only data source or acting as a data cache, where that overhead is minimal.&lt;/p&gt;

&lt;p&gt;Here are a few projects that demonstrate the use of SQLite in a client-server architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/benbjohnson/postlite"&gt;Postlite&lt;/a&gt;: A web proxy library that supports PostgreSQL's communication protocol and uses SQLite as storage on the backend.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/psanford/sqlite3vfshttp"&gt;sqlite3vfshttp&lt;/a&gt;: A SQLite VFS that supports accessing SQLite database files via HTTP protocol. Compared to sql.js, which needs to download the whole SQLite database file, this library only needs to specify the &lt;code&gt;HTTP range&lt;/code&gt; header by the client to get the data of the specified range. In a large database file, this optimization can save a lot of network overhead.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://sqlite.org/cloudsqlite/doc/trunk/www/index.wiki"&gt;Cloud Backed SQLite&lt;/a&gt;: Officially supported cloud SQLite, supports Azure and GCP, can read or write directly through Storage Client database without downloading the whole database.&lt;/li&gt;
&lt;/ul&gt;
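To see why range requests are enough, note that a SQLite file is self-describing: the first 100 bytes form a fixed header. The sketch below mimics what a `Range: bytes=0-99` fetch would return, reading the magic string and page size (offsets per the SQLite file-format documentation):

```python
import os
import sqlite3
import struct
import tempfile

# Create a small database file to inspect.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()
conn.close()

# The first 100 bytes are the file header: exactly what an HTTP client
# would get back from a request with "Range: bytes=0-99".
with open(path, "rb") as f:
    header = f.read(100)

magic = header[:16]                                # magic string
page_size = struct.unpack(">H", header[16:18])[0]  # big-endian u16 at offset 16
print(magic)      # b'SQLite format 3\x00'
print(page_size)  # e.g. 4096
```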

&lt;h3&gt;
  
  
  OLAP
&lt;/h3&gt;

&lt;p&gt;SQLite is commonly used as an OLTP database but is rarely used for OLAP since its table data storage is row-based rather than column-based. To fill this gap, &lt;a href="https://github.com/duckdb/duckdb"&gt;DuckDB&lt;/a&gt; was developed, which has a similar architecture to SQLite but uses columnar storage, making it perfect for OLAP scenarios.&lt;/p&gt;

&lt;p&gt;However, starting from version &lt;a href="https://www.sqlite.org/releaselog/3_38_0.html"&gt;3.38.0&lt;/a&gt;, SQLite has improved the performance of large analytic query statements using Bloom Filters, which is also designed to support OLAP scenarios. More information on this optimization and a comparison with DuckDB can be found in &lt;a href="https://www.cidrdb.org/cidr2022/papers/p56-prammer.pdf"&gt;this paper&lt;/a&gt;.&lt;/p&gt;
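To illustrate the idea behind that optimization, here is a toy Bloom filter, not SQLite's actual implementation: membership tests can return false positives but never false negatives, so a miss lets a join skip a table or index lookup entirely.

```python
import hashlib

# Toy Bloom filter: k hash functions set/check k positions in a bit
# array. Lookups may report false positives, never false negatives.
class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per position, for clarity

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item):
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
for key in (10, 20, 30):
    bf.add(key)
print(bf.might_contain(20))  # True (it was definitely added)
```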

&lt;h3&gt;
  
  
  Distributed
&lt;/h3&gt;

&lt;p&gt;SQLite's in-process architecture seems far removed from distributed databases, which are a complex field in their own right. Distributed databases offer elastic scaling and high availability, but because distributed transactions are so complex, conventional practice is to scale a single machine vertically as far as possible rather than scale horizontally (sharding). Nevertheless, there are some impressive projects that have brought SQLite into the realm of distributed systems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/rqlite/rqlite"&gt;rqlite&lt;/a&gt; is a distributed database that uses &lt;a href="https://raft.github.io/"&gt;Raft&lt;/a&gt; to solve the problem of achieving consensus among nodes in a cluster. In this architecture, writes are performed through the Leader node, and the other replica nodes can pass write requests to the Leader node. Meanwhile, reads can be done by any node, making this a &lt;code&gt;Leader-Replica&lt;/code&gt; style of distributed architecture.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, because writes are still performed by the Leader node and there is additional overhead due to consistency checking, rqlite does not significantly improve write throughput compared to the In-Process standalone SQLite.&lt;/p&gt;

&lt;p&gt;The rqlite data API is exposed over HTTP, which makes it a Client/Server style of architecture. To achieve inter-node synchronization, rqlite replicates commands between nodes: when a write command is sent to the Leader node and committed to the Raft log, the Leader copies the write command to the other nodes.&lt;/p&gt;

&lt;p&gt;If you're interested in learning more about rqlite's design, you can check out the documentation available here: &lt;a href="https://rqlite.io/docs/design/"&gt;Rqlite Design&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/losfair/mvsqlite"&gt;mvsqlite&lt;/a&gt;: The subtlety of this project lies in its use of &lt;a href="https://github.com/apple/foundationdb"&gt;FoundationDB&lt;/a&gt; as the VFS layer of SQLite. It makes great use of the distributed features provided by FoundationDB, such as optimistic lock-free concurrency, distributed transactions, synchronous asynchronous replication, and backup and recovery, to achieve a distributed SQLite with MVCC support. This includes not only &lt;code&gt;Leader-Replica&lt;/code&gt;, but also parallelization of multi-node writes, which increases the throughput of writes and enables time travel at the database level (Time travel) thanks to its implementation of MVCC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To learn more about the implementation details of mvsqlite, we recommend reading these two articles by the author: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://su3.io/posts/mvsqlite"&gt;Turning SQLite into a distributed database&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://su3.io/posts/mvsqlite-2"&gt;Storage and transaction in mvSQLite&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Compared with LiteFS, which also distributes SQLite through the VFS module, mvsqlite additionally requires deploying a &lt;code&gt;FoundationDB&lt;/code&gt; cluster and stateless &lt;code&gt;mvstore&lt;/code&gt; instances, resulting in higher deployment and O&amp;amp;M costs.&lt;br&gt;
  Interestingly, FoundationDB used SQLite as its SSD storage engine before &lt;code&gt;7.0.0&lt;/code&gt;; after that, FoundationDB implemented its own storage engine, &lt;a href="https://youtu.be/nlus1Z7TVTI"&gt;Redwood&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/vlcn-io/cr-sqlite"&gt;cr-sqlite&lt;/a&gt;: To achieve ultimate consistency, the previous project, rqlite, used a consensus algorithm to select a particular leader. However, what if multiple writers need to make changes to the same database at the same time without conflict? Fortunately, a data structure called &lt;a href="https://crdt.tech/"&gt;CRDT&lt;/a&gt; (Conflict-free Replicated Data Type) exists in the real-time collaboration field to solve this problem. The cr-sqlite project has ingeniously integrated CRDT into SQLite through its &lt;a href="https://www.sqlite.org/loadext.html"&gt;runtime extension&lt;/a&gt;, resulting in the same cluster multi-node concurrent writing feature as mvsqlite. For more information, I recommend reading the author's article on &lt;a href="https://tantaman.com/2022-08-23-why-sqlite-why-now.html"&gt;Why SQLite? Why Now?&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/Expensify/Bedrock"&gt;Bedrock&lt;/a&gt;: Bedrock is an advanced web and distributed transaction layer built with the help of SQLite. It functions as a distributed relational database system that has been designed primarily for replicating offsite data. Using a peer-to-peer style of distributed architecture, information is eventually stored on a private blockchain across all nodes. 🤯&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Bedrock uses the Paxos consensus algorithm to elect a Leader from the cluster, and the Leader coordinates two-phase commit transactions across the nodes. Its &lt;a href="https://bedrockdb.com/synchronization.html"&gt;synchronization engine&lt;/a&gt; is built on &lt;a href="https://bedrockdb.com/blockchain.html"&gt;blockchain&lt;/a&gt; technology.&lt;/p&gt;

&lt;p&gt;Each node contains an internal table called the journal, with three columns: id, query, and hash. Every time a query is committed to the database, a new row is appended to the journal, recording the query and a hash computed from the query and the most recent row.&lt;/p&gt;
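The journal's hash chain can be sketched as follows (the schema follows the description above; the exact hash rule is an illustrative assumption):

```python
import hashlib
import sqlite3

# Sketch of a Bedrock-style journal: each committed query is appended
# with a hash chained to the previous row, so two nodes can compare
# their newest (id, hash) pairs to detect a fork.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE journal (id INTEGER PRIMARY KEY, query TEXT, hash TEXT)")

def append(query):
    # Hash of the new entry depends on the previous entry's hash.
    row = conn.execute("SELECT hash FROM journal ORDER BY id DESC LIMIT 1").fetchone()
    prev = row[0] if row else ""
    h = hashlib.sha256((prev + query).encode()).hexdigest()
    conn.execute("INSERT INTO journal (query, hash) VALUES (?, ?)", (query, h))
    conn.commit()
    return h

h1 = append("INSERT INTO t VALUES (1)")
h2 = append("UPDATE t SET x = 2")
print(h1 != h2)  # True: each entry depends on the entire history before it
```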

&lt;p&gt;When a server joins a cluster, the newest hashes and ids are exchanged. If two servers disagree on the hash for a given id, they recognize that they "forked" at some point and stop exchanging messages. In such situations, the leader decides which fork becomes the master branch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;It is no secret that SQLite has become a highly innovative project in many areas. This is in large part due to the quality of its source code: almost 100 million lines are dedicated to testing and verifying its robustness, while its minimalist architecture keeps the library itself to just over 150,000 lines. That compactness lets individual developers innovate on it far more easily than on databases like MySQL (over 4 million lines), Oracle (over 10 million lines), or Postgres (over 1 million lines).&lt;/p&gt;

&lt;p&gt;This simplicity also makes SQLite a great platform for experimentation; developers can implement a wide range of modifications with relative ease. As edge computing and serverless deployments push workloads out to CDNs, SQLite, as a lightweight relational database, will only see more use cases and more innovation flow into it. Moreover, since the codebase is straightforward, anyone wishing to learn database internals can get up to speed quickly.&lt;/p&gt;

</description>
      <category>database</category>
      <category>distributedsystems</category>
      <category>sqlite</category>
    </item>
    <item>
      <title>Adventures in K8S Cloud Native App Development</title>
      <dc:creator>Dawei Ma</dc:creator>
      <pubDate>Thu, 11 Nov 2021 15:47:18 +0000</pubDate>
      <link>https://dev.to/aws-builders/adventures-in-k8s-cloud-native-app-development-1g36</link>
      <guid>https://dev.to/aws-builders/adventures-in-k8s-cloud-native-app-development-1g36</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Note: This post is a pairing between the author and &lt;a href="https://copilot.github.com/"&gt;GitHub Copilot&lt;/a&gt;. Copilot did about 5% of the work on this post. The author also partially documented Copilot's work in this &lt;a href="https://twitter.com/madawei2699/status/1458313535792955393"&gt;Tweet&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As an amateur indie developer, I don't invest much time or money in development, so I have two very basic requirements for my &lt;a href="https://dev.to/dev/tech-stack-of-side-project/"&gt;side project tech stack&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The tech stack can greatly improve my development efficiency.&lt;/li&gt;
&lt;li&gt;The tech stack does not require a lot of money.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By the first point, I mean choosing a more efficient tech stack: the programming language, its ecosystem, the architecture, and so on.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I tried &lt;a href="https://dev.to/dev/tech-stack-of-side-project/#%E7%BC%96%E7%A8%8B%E8%AF%AD%E8%A8%80"&gt;Elixir&lt;/a&gt; because it runs on the Erlang platform, which offers a powerful concurrency model and expressive syntax, and lets me take advantage of the Erlang/OTP ecosystem to bring a product online more efficiently.&lt;/li&gt;
&lt;li&gt;I tried &lt;a href="https://dev.to/dev/guide-to-serverless/"&gt;Serverless&lt;/a&gt; because many personal products have few initial users, so renting a VPS the traditional way wastes resources; the elastic scaling and availability of serverless are far stronger than a VPS, so I use it to build small products.&lt;/li&gt;
&lt;li&gt;I tried &lt;a href="https://dev.to/dev/tech-stack-of-side-project/#devops%E7%9B%B8%E5%85%B3"&gt;IaC&lt;/a&gt; because it builds infrastructure declaratively and keeps it under version control, so my investment in infrastructure is made once and I don't have to repeat manual steps to provision a server environment each time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The second point means I avoid large, resource-intensive tech stacks and look for cheap, lightweight alternatives. It also means I'm not chasing a high SLA; cost-effectiveness is my main goal. Choosing an industrial-grade tech stack while sacrificing some availability keeps the cost acceptable on one hand and gives me industrial-grade scalability on the other. This is reflected in my cost analysis of serverless, where I choose affordable service components and the most cost-effective billing model.&lt;/p&gt;

&lt;p&gt;My initial impression of K8S was that it met neither of these requirements, which is why I never tried it on a side project. That changed when I came across this long post: &lt;a href="https://twitter.com/madawei2699/status/1381417261425065986"&gt;The Architecture Behind A One-Person Tech Startup&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The author describes his experience using K8S on his personal projects, which is resource intensive but brings great scalability and reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Evolution History
&lt;/h2&gt;

&lt;p&gt;Application architectures, in terms of composition pattern, are divided into monolithic and distributed architectures.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Primitive distributed architectures actually predate monoliths: early computers were all so underpowered that no single machine could meet the expanding demand for computing, which drove the first distributed designs. As single-machine performance improved, and because those early distributed technologies were very complex, monolithic architectures became popular for a long time, until single machines could no longer keep up with the explosive growth of information that human society needed to compute.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Distributed architectures provide higher performance, higher availability, and higher scalability by coordinating the use of the computing power of multiple computers. However, due to its complexity, the evolution of distributed architectures is further divided into these phases.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Phase 1: &lt;a href="https://icyfenix.cn/architecture/architect-history/soa.html"&gt;Service-Oriented Architecture (SOA)&lt;/a&gt;. SOA was an architectural model that, for its time, addressed the main problems of distributed services concretely and systematically. However, it required application developers to spend a lot of time and effort learning the framework itself, and its architectural design was complex and too costly to gain wide adoption.&lt;/li&gt;
&lt;li&gt;Phase 2: &lt;a href="https://icyfenix.cn/architecture/architect-history/microservices.html"&gt;Microservice Architecture&lt;/a&gt;. Microservices is an architectural style that builds a single application as a combination of many small services organized around business capabilities rather than specific technical standards. This resolves the complexity of SOA and lets developers focus more on business logic. The problem is that business developers must still handle distributed-systems concerns themselves: service discovery, error tracing, load balancing, transport communication, and so on.&lt;/li&gt;
&lt;li&gt;Phase 3: &lt;a href="https://icyfenix.cn/architecture/architect-history/post-microservices.html"&gt;Cloud Native Architecture&lt;/a&gt;. Cloud-native architecture moves beyond tackling the problems that microservices cannot solve at the software level alone, combining software and hardware (software-defined computing, networking, and storage) to address the general problems of distributed architecture together. Using containers, virtualization, immutable infrastructure, service meshes, and declarative APIs, K8S provides out-of-the-box elastic scaling, service discovery, configuration management, service gateways, load balancing, service security, monitoring and alerting, fault tolerance, and more. These technologies enable the construction of &lt;strong&gt;loosely coupled systems&lt;/strong&gt; that are &lt;strong&gt;fault-tolerant&lt;/strong&gt;, &lt;strong&gt;easy to manage&lt;/strong&gt; and &lt;strong&gt;easy to observe&lt;/strong&gt;. Combined with reliable automation, cloud-native technologies let engineers make &lt;strong&gt;frequent and predictable major changes&lt;/strong&gt; to a system with ease.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cloud Native Era
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;As the virtualized infrastructure expands from containers of individual services to clusters of services, communication networks and storage facilities consisting of multiple containers, the line between software and hardware is blurred. Once virtualized hardware is able to keep up with the flexibility of software, technical issues that are not business-related can be stripped away from the software and silently resolved within the hardware infrastructure, allowing software to focus solely on the business and truly "build teams and products around business capabilities". (From &lt;a href="https://icyfenix.cn/architecture/architect-history/post-microservices.html"&gt;Phoenix Architecture/Post-Microservices Era&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Kubernetes(K8S)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  K8S Design
&lt;/h4&gt;

&lt;p&gt;K8S defines a DSL in which users describe &lt;strong&gt;every resource&lt;/strong&gt; (e.g., compute, network, storage, routing, secrets, certificates) used in a distributed system architecture &lt;strong&gt;declaratively&lt;/strong&gt;. Once the user declares the desired state of these resources, K8S automatically creates and manages them.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Resources is an extremely common term in Kubernetes. Broadly speaking, every aspect of the Kubernetes system that you can touch is abstracted into a resource: resources for workloads (Pod, ReplicaSet, Service, ...), resources for storage (Volume, PersistentVolume, Secret, ...), resources representing policies (SecurityContext, ResourceQuota, LimitRange, ...), resources representing identities (ServiceAccount, Role, ClusterRole, ...), and so on. The "everything is a resource" design is an essential prerequisite for Kubernetes to be able to implement declarative APIs, and Kubernetes uses resources as a vehicle to build a domain-specific language that encompasses both abstract elements (e.g., policies, dependencies, permissions) and physical elements (e.g., software, hardware, networks). The collection of descriptions of resource states, from the state of the entire cluster or even a cluster federation down to a memory area or a small number of processor cores, through the relationship of resource usage between different layers, together forms a panoramic picture of the working operation of an information system. (From &lt;a href="https://icyfenix.cn/immutable-infrastructure/schedule/hardware-schedule.html"&gt;Phoenix Architecture/Unchangeable Infrastructure&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Benefits
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Ability to build fault-tolerant, easy-to-observe applications&lt;/li&gt;
&lt;li&gt;Enables applications to be managed in a unified manner&lt;/li&gt;
&lt;li&gt;Enables applications to scale resiliently&lt;/li&gt;
&lt;li&gt;One-click application installation and deployment (Helm)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Disadvantages
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;High resource costs. Both the Master and Worker nodes of K8S require a certain amount of compute resources.&lt;/li&gt;
&lt;li&gt;K8S redefines many abstract technical concepts and has a high learning curve.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cost analysis of K8S hosted on cloud platforms
&lt;/h3&gt;

&lt;p&gt;My choice among Kubernetes hosting solutions on different cloud platforms is based mainly on cost. This &lt;a href="https://github.com/bmpi-dev/bmpi-tech-starter/blob/main/infra/k8s-cost.md"&gt;K8S Cluster Cost Compare&lt;/a&gt; document provides a cost analysis of Kubernetes hosting on different cloud platforms (AWS/Azure/GCP/DigitalOcean/Vultr).&lt;/p&gt;

&lt;p&gt;I ended up with DigitalOcean, the cheapest platform: the Master Control Plane (Basic) is free, a Worker node with 2 cores and 4GB RAM in the Singapore region costs $20/month, and the Load Balancer adds $10/month, for a total of $30/month.&lt;/p&gt;

&lt;p&gt;Since the Worker node must run some K8S system services such as kube-proxy and core-dns (12 pods in all), half of the Worker node's memory is already taken, leaving about 2GB for user applications.&lt;/p&gt;
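&lt;p&gt;You can check where that memory goes with standard kubectl commands; the node name below is a placeholder:&lt;/p&gt;

```shell
# List the K8S system pods (kube-proxy, core-dns, ...) and where they run.
kubectl get pods -n kube-system -o wide

# Show how much CPU/memory is already requested on a node; the node name
# is a placeholder -- get the real names with `kubectl get nodes`.
kubectl describe node your-worker-node-name | grep -A 8 "Allocated resources"
```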

&lt;h2&gt;
  
  
  Application Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--e7SkOnTE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://img.bmpi.dev/4c613ca1-0874-d09b-79ad-243cd926bfba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--e7SkOnTE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://img.bmpi.dev/4c613ca1-0874-d09b-79ad-243cd926bfba.png" alt="" width="880" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above diagram shows the development and deployment process of this cloud-native application and the architecture of each internal service deployed by K8S.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Development and deployment process. Pushing code to GitHub triggers two actions.

&lt;ul&gt;
&lt;li&gt;Vercel detects changes to the front-end code and, if there are any, automatically deploys them to Vercel's CDN.&lt;/li&gt;
&lt;li&gt;GitHub Actions detects changes to the back-end code, builds the image and publishes it to GitHub Packages, then automatically creates a new K8S Deployment and redeploys the back-end service.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Request flow processing. When a user visits the website, DNS is resolved by Cloudflare and the browser sends two requests:

&lt;ul&gt;
&lt;li&gt;Vercel side: the browser pulls static page resources.&lt;/li&gt;
&lt;li&gt;K8S side: after the K8S Load Balancer evaluates the Ingress rule, the request is forwarded to the ExternalName Service in the default Namespace, then to the backend service's Service (in the free4chat Namespace), and finally from that Service to the Container of one of the Pods. That Container is our backend business application.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;p&gt;The final result is available at: &lt;a href="https://www.free4.chat/"&gt;Online version&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The source code is available at: &lt;a href="https://github.com/madawei2699/free4chat/tree/k8s"&gt;code repository&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-requisites
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Pre-requisite knowledge
&lt;/h4&gt;

&lt;p&gt;If you don't know anything about K8S, you can start by watching this high-quality introductory video: &lt;a href="https://youtu.be/X48VuDVv0do"&gt;Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]&lt;/a&gt;. Make sure you understand the basic K8S concepts: Namespace, Deployment, Service, Pod, Node, Ingress before you get into the real thing.&lt;/p&gt;

&lt;h4&gt;
  
  
  Pre-requisites
&lt;/h4&gt;

&lt;p&gt;You will need to register the following accounts first.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/"&gt;DigitalOcean&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cloudflare.com/"&gt;Cloudflare&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;A domain name.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And install the following software.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kubectl&lt;/li&gt;
&lt;li&gt;doctl&lt;/li&gt;
&lt;li&gt;helm&lt;/li&gt;
&lt;/ul&gt;
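&lt;p&gt;On macOS the three tools can be installed with Homebrew (package names assumed; use your platform's package manager otherwise):&lt;/p&gt;

```shell
# Install the CLI tools (Homebrew package names assumed).
brew install kubectl doctl helm

# Sanity-check the installations.
kubectl version --client
doctl version
helm version
```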

&lt;h3&gt;
  
  
  Project directory structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
├── .github
│   └── workflows
│       └── workflow.yml
├── Makefile
├── backend
├── frontend
└── infra
    ├── Dockerfile.backend
    ├── k8s
    │   ├── free4chat-svc.yaml
    │   ├── ingress-free4-chat.yaml
    │   ├── ingress_nginx_svc.yaml
    │   └── production_issuer.yaml
    └── tools
        └── nsenter-node.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The overall project is divided into frontend, backend, and infra parts; this article focuses on the infra part, i.e., the K8S deployment. The infra part does not use an IaC tool, because K8S configuration is itself declarative YAML, and if you are not using other cloud hosting services there is no need to add the complexity of IaC. The CI/CD part is handled by GitHub Actions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dockerfile
&lt;/h3&gt;

&lt;p&gt;The backend service is a Golang application, and the packaged &lt;a href="https://github.com/madawei2699/free4chat/blob/k8s/infra/Dockerfile.backend"&gt;Dockerfile&lt;/a&gt; is here. I also made a simple configuration of &lt;a href="https://github.com/madawei2699/free4chat/blob/k8s/backend/Makefile"&gt;Makefile&lt;/a&gt; for compiling the backend service. Local deployment of backend services using Docker can use this &lt;a href="https://github.com/madawei2699/free4chat/blob/k8s/Makefile"&gt;Makefile&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring K8S
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Creating a K8S Cluster
&lt;/h4&gt;

&lt;p&gt;Creating a K8S Cluster on DigitalOcean is very simple: just select the region (depending on where your users are located) and the specifications of the Worker Nodes (depending on your cost budget) to create a Cluster.&lt;/p&gt;
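&lt;p&gt;The same cluster can also be created from the command line with doctl. The sketch below assumes the Singapore region slug and a 2-core/4GB node size; verify the valid values with &lt;code&gt;doctl kubernetes options regions&lt;/code&gt; and &lt;code&gt;doctl kubernetes options sizes&lt;/code&gt;:&lt;/p&gt;

```shell
# Sketch: create a one-node cluster in the Singapore region.
# The region slug and node size are assumptions -- check them with
# `doctl kubernetes options regions` and `doctl kubernetes options sizes`.
doctl kubernetes cluster create free4chat-cluster \
  --region sgp1 \
  --node-pool "name=worker-pool;size=s-2vcpu-4gb;count=1"
```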

&lt;h4&gt;
  
  
  Connecting to a K8S Cluster
&lt;/h4&gt;

&lt;p&gt;Configure K8S using &lt;code&gt;doctl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;doctl kubernetes cluster kubeconfig save use_your_cluster_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
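&lt;p&gt;You can then verify that kubectl is pointed at the new cluster:&lt;/p&gt;

```shell
# Confirm the connection and list the Worker nodes.
kubectl cluster-info
kubectl get nodes
```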



&lt;h4&gt;
  
  
  Namespace
&lt;/h4&gt;

&lt;p&gt;Namespace is the mechanism K8S uses to isolate groups of resources within a single cluster. For example, we can create a separate Namespace for each business in the same cluster and keep that business's Pods, Services, Deployments, and other resources under it; deleting all of a business's resources is then just a matter of deleting its Namespace.&lt;/p&gt;

&lt;p&gt;By default, K8S has a kube-system Namespace, which holds K8S's own resources, and a default Namespace, which holds resources created without an explicit Namespace.&lt;/p&gt;
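&lt;p&gt;For example, you can list the Namespaces in a cluster and the system resources that kube-system holds:&lt;/p&gt;

```shell
# List all Namespaces in the cluster.
kubectl get namespaces

# Show the K8S system pods that live in kube-system.
kubectl get pods -n kube-system
```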

&lt;h4&gt;
  
  
  Backend Service
&lt;/h4&gt;

&lt;p&gt;First create a Namespace for the backend service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace free4chat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then create a backend Service template &lt;a href="https://github.com/madawei2699/free4chat/blob/k8s/infra/k8s/free4chat-svc.yaml"&gt;free4chat-svc.yaml&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apifree4chat&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8888&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apifree4chat&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apifree4chat&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apifree4chat&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apifree4chat&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;IMAGE&amp;gt;&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;250m"&lt;/span&gt;
          &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500Mi"&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1000m"&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8888&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The reason this is a template is the &lt;code&gt;&amp;lt;IMAGE&amp;gt;&lt;/code&gt; placeholder in the image field, which GitHub Actions replaces with the real image at deploy time.&lt;/p&gt;

&lt;p&gt;This template defines a Deployment and a Service resource. The Deployment specifies the CPU and memory requests and limits for the Pod instances, the number of replicas, the port mapping, and the container image; the Service defines the domain name and port for accessing the backend service inside the Cluster.&lt;/p&gt;

&lt;p&gt;Finally, GitHub Actions deploys this Service and Deployment to the &lt;code&gt;free4chat&lt;/code&gt; namespace in the K8S Cluster.&lt;/p&gt;
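&lt;p&gt;One plausible way to perform that placeholder substitution is a simple sed replace before applying the manifest. The sketch below uses a minimal stand-in file and a hypothetical image tag; the real workflow's commands may differ:&lt;/p&gt;

```shell
# Minimal stand-in for the real template (infra/k8s/free4chat-svc.yaml).
printf 'image: <IMAGE>\n' > /tmp/free4chat-svc.yaml

# Hypothetical image tag; the real one is produced by the CI build.
IMAGE="ghcr.io/madawei2699/free4chat:latest"

# Replace the placeholder; in CI the output would then be piped to
# `kubectl apply -n free4chat -f -`.
sed "s|<IMAGE>|$IMAGE|g" /tmp/free4chat-svc.yaml > /tmp/free4chat-deploy.yaml
cat /tmp/free4chat-deploy.yaml
```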

&lt;h4&gt;
  
  
  Ingress Controller
&lt;/h4&gt;

&lt;p&gt;How do you get external traffic to the backend service once it has a Service? This is what K8S Ingress does. The first step is to install an Ingress Controller, of which there are many (HAProxy, Nginx, Traefik, etc.); we choose Nginx here. Find the &lt;a href="https://marketplace.digitalocean.com/apps/nginx-ingress-controller"&gt;Nginx Ingress Controller&lt;/a&gt; in the DigitalOcean K8S administration interface and click Install.&lt;/p&gt;

&lt;p&gt;This automatically creates an ingress-nginx Namespace and a DigitalOcean Load Balancer, which costs $10/month and has its own IP address (available in the DigitalOcean administration interface). We will use this IP in the DNS configuration later.&lt;/p&gt;
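&lt;p&gt;The Load Balancer's external IP can also be read from the controller's Service:&lt;/p&gt;

```shell
# The EXTERNAL-IP column shows the Load Balancer's public address.
kubectl get svc -n ingress-nginx ingress-nginx-controller
```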

&lt;p&gt;Now we need to create an Ingress rule under the default Namespace to forward the Load Balancer traffic to the backend service; the configuration file is &lt;a href="https://github.com/madawei2699/free4chat/blob/k8s/infra/k8s/ingress-free4-chat.yaml"&gt;ingress-free4-chat.yaml&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-free4chat-ingress&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-prod&lt;/span&gt; &lt;span class="c1"&gt;# This is the ClusterIssuer for cert-manager, which is used to automatically generate SSL certificates&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;api.k.free4.chat&lt;/span&gt;
    &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-free4chat-tls&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api.k.free4.chat&lt;/span&gt; &lt;span class="c1"&gt;# Back-end service domain name&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/"&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apifree4chat&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apifree4chat&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ExternalName&lt;/span&gt; &lt;span class="c1"&gt;# Because the backend service is not in the default Namespace, it needs to be forwarded to the backend service in the apifree4chat Namespace through the ExternalName service.&lt;/span&gt;
  &lt;span class="na"&gt;externalName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apifree4chat.free4chat.svc.cluster.local&lt;/span&gt; &lt;span class="c1"&gt;# Back-end service domains across Namespace&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file generates two resources: an Ingress rule and a Service of type ExternalName. We will create them with kubectl after configuring the ClusterIssuer for cert-manager.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cert Manager (HTTPS)
&lt;/h4&gt;

&lt;p&gt;The SSL certificate for the domain is automatically generated and updated in K8S via Cert Manager, where we use the Let's Encrypt service to issue the certificate for us.&lt;/p&gt;

&lt;p&gt;First, install the Cert Manager application via Helm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm &lt;span class="nb"&gt;install &lt;/span&gt;cert-manager jetstack/cert-manager &lt;span class="nt"&gt;--namespace&lt;/span&gt; cert-manager &lt;span class="nt"&gt;--version&lt;/span&gt; v1.2.0 &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;installCRDs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
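&lt;p&gt;Before continuing, you can check that the cert-manager pods are up:&lt;/p&gt;

```shell
# All cert-manager pods should reach the Running state.
kubectl get pods -n cert-manager
```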



&lt;p&gt;After executing these commands you need to create a ClusterIssuer resource that issues SSL certificates for the production environment; the configuration file is &lt;a href="https://github.com/madawei2699/free4chat/blob/k8s/infra/k8s/production_issuer.yaml"&gt;production_issuer.yaml&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIssuer&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-prod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;acme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Email address used for ACME registration&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;your@email.com&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://acme-v02.api.letsencrypt.org/directory&lt;/span&gt;
    &lt;span class="na"&gt;privateKeySecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Name of a secret used to store the ACME account private key&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-prod-private-key&lt;/span&gt;
    &lt;span class="c1"&gt;# Add a single challenge solver, HTTP01 using nginx&lt;/span&gt;
    &lt;span class="na"&gt;solvers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On DigitalOcean, for Cert Manager's HTTP01 self-check to succeed, Pod-to-Pod communication must be enabled through the Nginx Ingress Controller. We do this by overriding the ingress-nginx-controller Service; the configuration file is &lt;a href="https://github.com/madawei2699/free4chat/blob/k8s/infra/k8s/ingress_nginx_svc.yaml"&gt;ingress_nginx_svc.yaml&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;true'&lt;/span&gt;
    &lt;span class="na"&gt;service.beta.kubernetes.io/do-loadbalancer-hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;k.free4.chat"&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;helm.sh/chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx-2.11.1&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.34.1&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/managed-by&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Helm&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;controller&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx-controller&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
  &lt;span class="na"&gt;externalTrafficPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Local&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;controller&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating resources
&lt;/h3&gt;

&lt;p&gt;After the above steps we have a set of declarative K8S resource configuration files; now it's time to actually create these resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; production_issuer.yaml &lt;span class="c"&gt;# Create the ClusterIssuer resource for issuing SSL certificates&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ingress_nginx_svc.yaml &lt;span class="c"&gt;# Create a Service resource that resolves Pod-Pod communication&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ingress-free4-chat.yaml &lt;span class="c"&gt;# Create ingress rule resource&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
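&lt;p&gt;After applying, the new resources and the certificate status can be inspected:&lt;/p&gt;

```shell
# Confirm the Ingress rule and the ExternalName Service exist.
kubectl get ingress api-free4chat-ingress
kubectl get svc apifree4chat

# Watch certificate issuance; the ACME HTTP01 challenge can take a
# minute or two to complete.
kubectl get certificate -A
kubectl describe clusterissuer letsencrypt-prod
```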



&lt;p&gt;After these commands are executed, two steps are still missing.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The creation of the backend service resource. We'll create this via GitHub Actions.&lt;/li&gt;
&lt;li&gt;DNS domain configuration. We will configure this on Cloudflare.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  DNS configuration
&lt;/h4&gt;

&lt;p&gt;Configuring DNS resolution on Cloudflare.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TgDE6N5t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://img.bmpi.dev/a55880f1-8f45-71d5-a682-65588d0eb4b1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TgDE6N5t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://img.bmpi.dev/a55880f1-8f45-71d5-a682-65588d0eb4b1.png" alt="" width="880" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reason for this is that we cannot create two records pointing to different IPs for the same primary domain at the same time, so we put the backend service on a separate subdomain to solve the problem.&lt;/p&gt;

&lt;p&gt;Finally, we create two CNAME records:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;api -&amp;gt; &lt;code&gt;api.k.free4.chat&lt;/code&gt;: the domain name of our backend service API.&lt;/li&gt;
&lt;li&gt;www -&amp;gt; &lt;code&gt;www.free4.chat&lt;/code&gt;: our main domain name.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this point we can access &lt;code&gt;https://www.free4.chat&lt;/code&gt;. But &lt;code&gt;https://api.k.free4.chat&lt;/code&gt; doesn't work yet, because the backend service hasn't been created yet. So next we need to create the backend service via GitHub Actions.&lt;/p&gt;
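&lt;p&gt;A quick smoke test from the command line confirms this state:&lt;/p&gt;

```shell
# The frontend should already respond over HTTPS.
curl -I https://www.free4.chat

# The backend will start responding once its Deployment is created by
# GitHub Actions in the next step.
curl -I https://api.k.free4.chat
```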

&lt;h3&gt;
  
  
  GitHub Workflow
&lt;/h3&gt;

&lt;p&gt;The benefit of creating the backend service via GitHub Actions is automation: whenever the backend code changes, GitHub Actions is triggered to build a new image and deploy a new version of the backend service.&lt;/p&gt;

&lt;p&gt;To create a GitHub Workflow, just create &lt;a href="https://github.com/madawei2699/free4chat/blob/k8s/.github/workflows/workflow.yml"&gt;.github/workflows/workflow.yml&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DO_K8S_Deploy&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
    &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;backend/src/**'&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;infra/Dockerfile.backend'&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.github/workflows/**'&lt;/span&gt;

&lt;span class="c1"&gt;# A workflow run is made up of one or more jobs that can run sequentially or in parallel.&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# This workflow contains a single job called "build".&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# The type of runner that the job will run on.&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="c1"&gt;# Steps represent a sequence of tasks that will be executed as part of the job&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

    &lt;span class="c1"&gt;# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout master&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@main&lt;/span&gt;

    &lt;span class="c1"&gt;# Install doctl.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install doctl&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;digitalocean/action-doctl@v2&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}&lt;/span&gt;

    &lt;span class="c1"&gt;# Build a Docker image of your application in your registry and tag the image with the $GITHUB_SHA.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build container image&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker build -t ghcr.io/madawei2699/apifree4chat:$(echo $GITHUB_SHA | head -c7) -f ./infra/Dockerfile.backend .&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Log in to GitHub Packages&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Push image to GitHub Packages&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker push ghcr.io/madawei2699/apifree4chat:$(echo $GITHUB_SHA | head -c7)&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Update deployment file&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TAG=$(echo $GITHUB_SHA | head -c7) &amp;amp;&amp;amp; sed -i 's|&amp;lt;IMAGE&amp;gt;|ghcr.io/madawei2699/apifree4chat:'${TAG}'|' $GITHUB_WORKSPACE/infra/k8s/free4chat-svc.yaml&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Save DigitalOcean kubeconfig with short-lived credentials&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;doctl kubernetes cluster kubeconfig save --expiry-seconds 600 ${{ secrets.CLUSTER_NAME }}&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to DigitalOcean Kubernetes&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubectl apply -f $GITHUB_WORKSPACE/infra/k8s/free4chat-svc.yaml -n free4chat&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Verify deployment&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubectl rollout status deployment/apifree4chat -n free4chat&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
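The tag-and-substitute steps in the workflow above can be tried locally. This is a minimal sketch; the commit SHA and the throwaway manifest file here are made up for illustration:

```shell
# Derive the short image tag the same way the workflow does:
# the first 7 characters of the commit SHA.
GITHUB_SHA=0123456789abcdef0123456789abcdef01234567
TAG=$(echo $GITHUB_SHA | head -c7)   # 0123456

# Patch the <IMAGE> placeholder in a throwaway deployment manifest.
echo 'image: <IMAGE>' > /tmp/free4chat-svc.yaml
sed -i 's|<IMAGE>|ghcr.io/madawei2699/apifree4chat:'${TAG}'|' /tmp/free4chat-svc.yaml
cat /tmp/free4chat-svc.yaml   # image: ghcr.io/madawei2699/apifree4chat:0123456
```

`kubectl apply` then picks up the patched manifest, so every push deploys an image tagged with its own commit.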



&lt;p&gt;The only thing you need to do in advance is create the &lt;code&gt;CLUSTER_NAME&lt;/code&gt; and &lt;code&gt;DIGITALOCEAN_ACCESS_TOKEN&lt;/code&gt; secrets in this repo's Actions secrets for GitHub Actions to use, where &lt;code&gt;DIGITALOCEAN_ACCESS_TOKEN&lt;/code&gt; is a DigitalOcean API token and &lt;code&gt;CLUSTER_NAME&lt;/code&gt; is the name of our Kubernetes cluster on DigitalOcean.&lt;/p&gt;

&lt;p&gt;This way, whenever a code update is pushed to GitHub, a new service (including front-end and back-end) is automatically built and published to Vercel and DigitalOcean K8S.&lt;/p&gt;

&lt;p&gt;At this point, our application is live!&lt;/p&gt;

&lt;h3&gt;
  
  
  One More Thing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Logging: A traditional ELK stack requires a lot of server resources and is not suitable for a lightweight cluster like ours. The simplest approach is to read the Pods' logs directly, and the &lt;a href="https://github.com/wercker/stern"&gt;stern&lt;/a&gt; tool helps us tail and query logs across multiple Pods.&lt;/li&gt;
&lt;li&gt;Monitoring and alerting: We can monitor our services by installing Prometheus and Grafana, and we can send alerts through Prometheus' Alert Manager. However, if the entire cluster is down, the monitoring and alerting service installed into the cluster will not work, so the best practice is to use an external monitoring and alerting service. This can be done using &lt;a href="https://newrelic.com/"&gt;New Relic&lt;/a&gt; or a similar service.&lt;/li&gt;
&lt;li&gt;Error tracking: Integrating &lt;a href="https://sentry.io/welcome/"&gt;Sentry&lt;/a&gt; will enable backend service error tracking.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;At this point we have built a K8S cluster from scratch (not counting the K8S master control plane). Let's step back and ask: what problems does K8S actually solve for us?&lt;/p&gt;

&lt;p&gt;Let's start with the &lt;a href="https://12factor.net/"&gt;12 Factor&lt;/a&gt; principles that are often considered in modern software development.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Codebase&lt;/li&gt;
&lt;li&gt;Dependencies&lt;/li&gt;
&lt;li&gt;Config&lt;/li&gt;
&lt;li&gt;Backing services&lt;/li&gt;
&lt;li&gt;Build, release, run&lt;/li&gt;
&lt;li&gt;Processes&lt;/li&gt;
&lt;li&gt;Port binding&lt;/li&gt;
&lt;li&gt;Concurrency&lt;/li&gt;
&lt;li&gt;Disposability&lt;/li&gt;
&lt;li&gt;Dev/prod parity&lt;/li&gt;
&lt;li&gt;Logs&lt;/li&gt;
&lt;li&gt;Admin processes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;K8S offers solutions to all of these factors, directly or indirectly, and the ecosystem around it allows engineers to build robust software that meets these software design factors at low cost.&lt;/p&gt;

&lt;p&gt;I think this is the reason why K8S is called the operating system on the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://mike.sg/2021/08/29/modern-web-hosting-for-personal-projects/"&gt;Modern Web Hosting for Personal Projects - Mike Cartmell's blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/architecting-applications-for-kubernetes"&gt;Architecting Applications for Kubernetes | DigitalOcean&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.doxsey.net/blog/kubernetes--the-surprisingly-affordable-platform-for-personal-projects/"&gt;Kubernetes: The Surprisingly Affordable Platform for Personal Projects | doxsey.net&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://downey.io/blog/skip-kubernetes-loadbalancer-with-hostport-daemonset/"&gt;Save Money and Skip the Kubernetes Load Balancer: Lowering Infrastructure Costs with Ingress Controllers, DNS, and Host Ports|downey.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://mike.sg/2021/08/31/digitalocean-kubernetes-without-a-load-balancer/"&gt;DigitalOcean Kubernetes Without a Load Balancer - Mike Cartmell's blog&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>k8s</category>
      <category>kubernetes</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Cloud IDE</title>
      <dc:creator>Dawei Ma</dc:creator>
      <pubDate>Tue, 24 Aug 2021 11:32:42 +0000</pubDate>
      <link>https://dev.to/aws-builders/cloud-ide-3l0k</link>
      <guid>https://dev.to/aws-builders/cloud-ide-3l0k</guid>
      <description>&lt;p&gt;A while back, the GitHub Twitter posted the following Tweet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F0dd20034-d46f-8910-a6dc-cf71802979f4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F0dd20034-d46f-8910-a6dc-cf71802979f4.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you press &lt;code&gt;.&lt;/code&gt; on any GitHub repo page, you are automatically redirected to &lt;code&gt;github.dev&lt;/code&gt;, a web version of VSCode that automatically clones the repo's code. In this web VSCode you can even install certain plugins (though not plugins that require external dependencies) to make reading the code easier. Because the site is official, VSCode is automatically bound to your GitHub account, so developers can read, edit, and commit code in it without opening a local IDE. There is also github1s, an open-source project with similar functionality.&lt;/p&gt;

&lt;p&gt;VSCode's team lead Erich Gamma (a co-author of JUnit and of Design Patterns, and an Eclipse architect) joined Microsoft in 2011 with the following job description.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Envision new paradigms for online developer tooling that will be as successful as the IDE has been for the desktop.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then came the birth of VSCode. We can say that VSCode was designed to be a cloud IDE from its inception.&lt;/p&gt;

&lt;p&gt;Why use a cloud IDE? Because the local development environment has several problems, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Environment differences: There are differences between Mac and Linux, and popular Linux kernel-based technologies such as Docker run worse on Mac, which degrades the whole experience.&lt;/li&gt;
&lt;li&gt;Performance issues: A local development machine usually also runs a lot of other software, such as office and communication apps, so there is little performance headroom left.&lt;/li&gt;
&lt;li&gt;Stability issues: A local machine takes a long time to restart, and re-establishing the development environment after each restart wastes even more time.&lt;/li&gt;
&lt;li&gt;Dependency issues: If the development environment relies on specific cloud infrastructure, network communication from a local machine can be troublesome, whereas a cloud host naturally sits in the same network environment as the other cloud infrastructure and is simple to set up.&lt;/li&gt;
&lt;li&gt;Network issues: Modern software development stands on the shoulders of giants; much software depends on a large number of libraries, frameworks, and runtimes, and downloading these dependencies requires fast network speeds. Cloud hosts generally have better network performance than a home or office network.&lt;/li&gt;
&lt;li&gt;Security issues: Code or keys kept in a local development environment risk leaking, for example when a developer's machine is stolen.&lt;/li&gt;
&lt;li&gt;Storage issues: Local machines have limited disk storage and do not scale well, while a cloud host's disks are easily expandable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The ultimate solution to these problems is to move the development environment to the cloud, and the most important requirement for developing in the cloud is good IDE support, which has created strong industry demand for cloud IDEs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud IDE
&lt;/h2&gt;

&lt;p&gt;Before we talk about cloud IDEs, let's review the main features of an IDE, shown in the following figure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fb203c603-cc39-3ee0-3ce4-9ed3d7022777.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fb203c603-cc39-3ee0-3ce4-9ed3d7022777.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A good IDE should, of course, make code pleasant to write and smooth to read. To achieve this, support for the following features is essential.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Excellent text editing, with support for custom keyboard layouts.&lt;/li&gt;
&lt;li&gt;Code hints, such as syntax highlighting, code navigation, and error hints.&lt;/li&gt;
&lt;li&gt;Debugging.&lt;/li&gt;
&lt;li&gt;Support for multiple programming languages.&lt;/li&gt;
&lt;li&gt;Code completion.&lt;/li&gt;
&lt;li&gt;Code refactoring.&lt;/li&gt;
&lt;li&gt;Extensibility, with support for user-defined or third-party plugins.&lt;/li&gt;
&lt;li&gt;A healthy ecosystem.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In desktop IDEs these features are not a problem, and many IDEs support them, such as Visual Studio, Eclipse, IntelliJ IDEA, NetBeans, and Xcode. But when it comes to running online, none of these older IDEs can make the leap.&lt;/p&gt;

&lt;p&gt;Early industry requirements for cloud IDEs were also not high, so cloud IDEs fell into roughly three broad categories, as follows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F6743ed9b-5ffa-3032-1e3e-12aa383e5ec6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F6743ed9b-5ffa-3032-1e3e-12aa383e5ec6.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Online editors. Web-based editors, mainly CodePen and JSFiddle, make it easy to develop front-end pages online, but this is far from the desktop IDE experience.&lt;/li&gt;
&lt;li&gt;Online &lt;a href="https://en.wikipedia.org/wiki/Read-eval-print_loop" rel="noopener noreferrer"&gt;REPL&lt;/a&gt;s, mainly Repl.it and Jupyter. A REPL is at most one of the many features a desktop IDE supports, and its use case is writing small validation-style snippets; it is still a long way from an engineered code-development experience.&lt;/li&gt;
&lt;li&gt;Cloud IDEs with limited functionality, mainly AWS Cloud9. These are already very good for code development, even using cloud infrastructure seamlessly, and are suitable for collaborative development at scale. However, they are generally not extensible; for example, some plugins cannot be installed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, GitHub Codespaces combines VSCode with Azure cloud servers to give developers a desktop-IDE experience, including the ability to install plugins. And in the &lt;code&gt;Integrated development environment (IDE)&lt;/code&gt; section of StackOverflow's &lt;a href="https://insights.stackoverflow.com/survey/2021" rel="noopener noreferrer"&gt;2021 Developer Survey&lt;/a&gt; (with more than 80,000 responses), VSCode was voted the most popular IDE with 71% of the votes (up from 50% in 2019).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fef162e34-50e9-e7bd-e1aa-1de4b80a27b8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fef162e34-50e9-e7bd-e1aa-1de4b80a27b8.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It seems that VSCode achieves the ultimate goal that cloud IDEs are trying to achieve: &lt;strong&gt;the same development experience as desktop IDEs&lt;/strong&gt;. The question here is why VSCode?&lt;/p&gt;

&lt;h2&gt;
  
  
  Why VSCode
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Good Design
&lt;/h3&gt;

&lt;p&gt;The VSCode remote development model is shown in the following figure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F4c5f0a02-139f-1bb5-5dbd-46c1af97f60a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F4c5f0a02-139f-1bb5-5dbd-46c1af97f60a.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This client/server architecture gives VSCode the ability to use remote servers or containers. The local VSCode is responsible only for the UI and theme rendering, while everything else, such as plugins, program runs, terminal processes, and debuggers, runs on the remote server. This separation of display from computation is a crucial point for implementing a cloud IDE.&lt;/p&gt;

&lt;p&gt;The client/server design also shows up in code hinting. By defining the &lt;code&gt;Language Server Protocol&lt;/code&gt; standard, the VSCode core does not need to parse the ASTs of multiple programming languages or implement multiple language parsers; it delegates these functions to each language's plugin, keeping the core very small and stable.&lt;/p&gt;
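For intuition, a Language Server Protocol message is just JSON-RPC framed with a Content-Length header over the plugin's stdio. A minimal sketch of the wire format (the request payload here is illustrative, not a complete LSP request):

```shell
# Frame a JSON-RPC request the way LSP requires: a Content-Length header,
# a blank line (CRLF CRLF), then the JSON body.
body='{"jsonrpc":"2.0","id":1,"method":"textDocument/definition","params":{}}'
printf 'Content-Length: %d\r\n\r\n%s' "${#body}" "$body"
```

The editor and the language server exchange every request (hover, completion, go-to-definition) in this one wire format, which is why a single server implementation can serve many editors.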

&lt;p&gt;The same design is reflected in the debugger and its &lt;code&gt;Debug Adapter Protocol&lt;/code&gt; standard.&lt;/p&gt;

&lt;p&gt;More analysis of the architecture can be found in my article &lt;a href="https://dev.to/dev/vscode-plugin-development-notes/"&gt;VSCode Plugin Development Notes&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cross Platform
&lt;/h3&gt;

&lt;p&gt;VSCode grew out of the &lt;a href="https://github.com/Microsoft/monaco-editor" rel="noopener noreferrer"&gt;monaco-editor&lt;/a&gt; online editor. As web software, it uses &lt;code&gt;Electron&lt;/code&gt; to achieve cross-platform support, so the desktop VSCode and the server-side web version of VSCode actually share one code base.&lt;/p&gt;

&lt;p&gt;Because it is web software, there is also a third-party repo, &lt;a href="https://github.com/cdr/code-server" rel="noopener noreferrer"&gt;code-server&lt;/a&gt;, which runs VSCode in the browser.&lt;/p&gt;

&lt;h3&gt;
  
  
  Open source
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/microsoft/vscode" rel="noopener noreferrer"&gt;VSCode&lt;/a&gt; Without open source, it could have ended very differently. It was Microsoft's gorgeous turnaround and enthusiastic embrace of open source that opened the door for VSCode to go global, or it could have ended up as one of Microsoft's many internal projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build Cloud IDE based on AWS with Pulumi
&lt;/h2&gt;

&lt;p&gt;Thanks to VSCode's open source and web features, we can quickly build a VSCode-based personal cloud IDE that is comparable to &lt;a href="https://github.com/features/codespaces" rel="noopener noreferrer"&gt;Github Codespaces&lt;/a&gt;, but much cheaper.&lt;/p&gt;

&lt;p&gt;My implementation can be found in this &lt;a href="https://github.com/bmpi-dev/code.bmpi.dev/tree/master/server" rel="noopener noreferrer"&gt;Repo&lt;/a&gt;. The architecture is as follows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fdafdc38a-8e97-7daa-d860-4ad78c4d182b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fdafdc38a-8e97-7daa-d860-4ad78c4d182b.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prerequisites.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An AWS account with the AWS CLI installed and AWS credentials configured locally. The AWS account needs permission to access EC2.&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://www.pulumi.com/" rel="noopener noreferrer"&gt;Pulumi&lt;/a&gt; account with a project created. (If you are not familiar with Pulumi, you can refer to this article: &lt;a href="https://dev.to/dev/pulumi-aws-serverless-hugo-site-vists/"&gt;Implementing static blog access statistics based on Serverless&lt;/a&gt;.)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Usage is very simple (thanks to the power of Pulumi and the AWS CLI).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/bmpi-dev/code.bmpi.dev.git
&lt;span class="nb"&gt;cd &lt;/span&gt;code.bmpi.dev/server
pulumi up &lt;span class="c"&gt;# Set up AWS EC2 with Pulumi&lt;/span&gt;
./run work &lt;span class="c"&gt;# Open remote VSCode&lt;/span&gt;
./run rest &lt;span class="c"&gt;# Shut down the remote VSCode&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;./run open_tunnel&lt;/code&gt; cannot connect while the server is still starting up, run it again once the server is up to establish the tunnel connection.&lt;/p&gt;

&lt;p&gt;You need to enter the VSCode login password on first access. To get it, execute &lt;code&gt;sh connect-server.sh&lt;/code&gt; and then run &lt;code&gt;cat ~/.config/code-server/config.yaml | grep password:&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can start using the remote VSCode by accessing &lt;code&gt;http://localhost:8888/&lt;/code&gt; through your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fdb6642d4-1224-d743-c881-314dd043e318.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fdb6642d4-1224-d743-c881-314dd043e318.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you don't need the environment for a while, remember to hibernate the cloud server via &lt;code&gt;./run rest&lt;/code&gt;. After the server is shut down, AWS no longer bills for the EC2 instance, only for the storage volumes, at a very low fee.&lt;/p&gt;

&lt;p&gt;If you don't need the environment at all and want to destroy all resources so that AWS doesn't continue to charge, just run &lt;code&gt;pulumi destroy&lt;/code&gt; to delete all AWS resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  About fees
&lt;/h2&gt;

&lt;p&gt;Take an AWS EC2 t2.medium instance (2 cores, 4 GB RAM, plus 50 GB storage) as an example. At 5 hours of development per day, 20 days a month (100 hours per month), the total cost is $0.0464 * 100 + $0.1 * 50 = $9.64. The same server configuration on GitHub Codespaces costs $21.50, 2.23 times our cost.&lt;/p&gt;
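The arithmetic is easy to double-check in a shell, using the per-hour and per-GB rates quoted above:

```shell
# 100 compute hours at $0.0464/hour plus 50 GB of storage at $0.10/GB-month.
awk 'BEGIN { printf "%.2f\n", 0.0464 * 100 + 0.1 * 50 }'            # prints 9.64
# Ratio to the $21.50 GitHub Codespaces price for a comparable machine.
awk 'BEGIN { printf "%.2f\n", 21.50 / (0.0464 * 100 + 0.1 * 50) }'  # prints 2.23
```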

&lt;h2&gt;
  
  
  The future of Cloud IDE
&lt;/h2&gt;

&lt;p&gt;The cloud IDE represents the future of an R&amp;amp;D model. The likely trends of this model are as follows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standardized development environment. A set of cloud IDE development environment can be standardized for batch configuration and used out of the box, significantly reducing the time consumption of developers in configuring the development environment.&lt;/li&gt;
&lt;li&gt;Customized development environment. The development environment can be customized to meet the needs of different types of projects.&lt;/li&gt;
&lt;li&gt;Elastic development environment. The configuration of the development environment relies on the automatic elastic expansion of the cloud service, and the configuration can be dynamically adjusted to meet the dynamic needs of the development environment for resource allocation.&lt;/li&gt;
&lt;li&gt;Intelligent development environment. Relying on the cloud server's machine learning analysis of specific code repositories, it can better achieve intelligent tips to assist development, similar to &lt;a href="https://copilot.github.com/" rel="noopener noreferrer"&gt;GitHub Copilot&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Secure development environment. The code and infrastructure configuration are stored on the cloud server, which can greatly reduce the risk of code or environment key leakage caused by developer negligence. With a good system security configuration of the cloud server, the security risk of the development environment can be reduced.&lt;/li&gt;
&lt;li&gt;Ready-to-use development environment. No need for a specific development machine, just a computer with a browser to access the cloud IDE to start development.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a nutshell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cloud IDE, Coding Anytime Anywhere.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://zhuanlan.zhihu.com/p/145981067" rel="noopener noreferrer"&gt;Large-scale IDE technology architecture from VSCode&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://holisticsecurity.io/2020/09/06/implementing-vscode-based-on-cloud-with-aws-cdk/" rel="noopener noreferrer"&gt;Implementing VSCode-based (Code-Server) on Cloud with AWS CDK&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>aws</category>
      <category>vscode</category>
      <category>pulumi</category>
      <category>tools</category>
    </item>
    <item>
      <title>Internationalization(i18n) and Localization(L10n)</title>
      <dc:creator>Dawei Ma</dc:creator>
      <pubDate>Thu, 22 Jul 2021 15:14:06 +0000</pubDate>
      <link>https://dev.to/madawei2699/international-i18n-and-localization-l10n-48d2</link>
      <guid>https://dev.to/madawei2699/international-i18n-and-localization-l10n-48d2</guid>
      <description>&lt;ul&gt;
&lt;li&gt;
Internationalization (i18n)

&lt;ul&gt;
&lt;li&gt;Problems to be solved for internationalization&lt;/li&gt;
&lt;li&gt;Text Encoding&lt;/li&gt;
&lt;li&gt;locale&lt;/li&gt;
&lt;li&gt;Language and Country Codes&lt;/li&gt;
&lt;li&gt;gettext&lt;/li&gt;
&lt;li&gt;Internationalization Process&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Localization (L10n)

&lt;ul&gt;
&lt;li&gt;Localization Process&lt;/li&gt;
&lt;li&gt;Develop a localization strategy&lt;/li&gt;
&lt;li&gt;Region and Language&lt;/li&gt;
&lt;li&gt;Add new region/language/service&lt;/li&gt;
&lt;li&gt;Incremental Localization&lt;/li&gt;
&lt;li&gt;Management of translation&lt;/li&gt;
&lt;li&gt;Localized implementation&lt;/li&gt;
&lt;li&gt;Localized multilingual implementation&lt;/li&gt;
&lt;li&gt;The Challenges of Localization&lt;/li&gt;
&lt;li&gt;Do you need to consider SEO&lt;/li&gt;
&lt;li&gt;Localization of product design&lt;/li&gt;
&lt;li&gt;Localization under Microservices&lt;/li&gt;
&lt;li&gt;Localized technical or business standards development&lt;/li&gt;
&lt;li&gt;Development Environment and Business Processes&lt;/li&gt;
&lt;li&gt;Static text processing&lt;/li&gt;
&lt;li&gt;Whether to store language and region settings&lt;/li&gt;
&lt;li&gt;Localization of back-end services&lt;/li&gt;
&lt;li&gt;Localization of third-party services and resources&lt;/li&gt;
&lt;li&gt;Release Process&lt;/li&gt;
&lt;li&gt;Localization under micro front-end architecture&lt;/li&gt;
&lt;li&gt;Localized testing&lt;/li&gt;
&lt;li&gt;Localization Platform&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;A successful product goes through many stages on its way to a global market; from the perspective of software development there are two main processes: internationalization and localization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F53875fc9-00ac-e8e2-8d91-06399755dcba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F53875fc9-00ac-e8e2-8d91-06399755dcba.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A locale is the use of a specific language or language variant within a country or geographic region; it determines the format and parsing of dates, times, numbers, and currencies, as well as measurement units and the translated names of time zones, languages, countries, and regions. &lt;strong&gt;Internationalization enables a piece of software to handle multiple locales; localization enables it to support a specific regional locale&lt;/strong&gt;. This means going global is a process of first internationalizing the software and then doing the localization work so that it supports a specific locale in a specific region.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Because the words are so long, they are often abbreviated as i18n (18 is the number of letters between the i and the n in "internationalization") and L10n respectively; the capital L distinguishes the abbreviation from the i in i18n and keeps the lowercase l from being confused with the digit 1. (&lt;a href="https://en.wikipedia.org/wiki/Internationalization_and_localization" rel="noopener noreferrer"&gt;Wikipedia&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Internationalization (i18n)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Problems to be solved for internationalization
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The ability to display text in the user's native language.&lt;/li&gt;
&lt;li&gt;The ability to enter text in the user's native language.&lt;/li&gt;
&lt;li&gt;The ability to process text in the user's native language in a specific encoding.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Text Encoding
&lt;/h3&gt;

&lt;p&gt;The Unicode character set can represent almost every known character, with code points ranging from 0 to 10FFFF (hexadecimal), which requires at least 21 bits of storage. The UTF-8 text encoding maps Unicode code points onto an 8-bit data stream and is backward compatible with ASCII data processing systems. UTF stands for Unicode Transformation Format.&lt;/p&gt;

&lt;p&gt;Since 2009, UTF-8 has been the dominant encoding form on the World Wide Web. As of November 2019, UTF-8 is used in 94.3% of all web pages (some of which are ASCII only, as it is a subset of UTF-8), and 96% of the top 1000 pages. Therefore, UTF-8 encoding is recommended for internationalization.&lt;/p&gt;
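&lt;p&gt;As a quick illustration (a minimal Python sketch, not from the original article), the variable-width nature of UTF-8 and its ASCII compatibility can be observed directly:&lt;/p&gt;

```python
# UTF-8 is variable-width: ASCII stays 1 byte, other scripts take 2-4 bytes.
for ch in ("A", "é", "中", "𐍈"):
    encoded = ch.encode("utf-8")
    print(ch, hex(ord(ch)), len(encoded), "byte(s)")
# A 0x41 1 byte(s) / é 0xe9 2 byte(s) / 中 0x4e2d 3 byte(s) / 𐍈 0x10348 4 byte(s)

# ASCII compatibility: a pure-ASCII string has identical bytes in both encodings.
assert "hello".encode("utf-8") == "hello".encode("ascii")
```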

&lt;p&gt;The article &lt;a href="https://mp.weixin.qq.com/s/j5hfWOBsOMYcQuMG36zqNA" rel="noopener noreferrer"&gt;Internationalization of IT products is never enough to "support English"&lt;/a&gt; notes that GBK-encoded text contains many characters that "look the same" but are actually slightly different; to save space, however, Unicode assigns them the same code point.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F274d1394-df55-c0cb-331c-635979581c65.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F274d1394-df55-c0cb-331c-635979581c65.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How can we distinguish these characters that share the same code point but should be displayed with different glyphs? This requires the help of the locale.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When counting Chinese characters, this is usually done by character form, i.e. the simplified, traditional, variant, new and old forms of a character representing the same sound and meaning are counted separately. This way of counting actually counts variants, so the number of glyphs included in large dictionaries has long been mistaken for the size of the Chinese character system. (&lt;a href="https://en.wikipedia.org/wiki/Grapheme" rel="noopener noreferrer"&gt;Wikipedia&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  locale
&lt;/h3&gt;

&lt;p&gt;A locale is the language environment of the software at runtime, comprising a Language, a Territory and a Codeset. It is written as &lt;code&gt;Language[_Territory][.Codeset]&lt;/code&gt;, e.g. &lt;code&gt;zh_CN.UTF-8&lt;/code&gt;. In Linux, a locale consists of the following parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LC_COLLATE: controls character sorting (collation).&lt;/li&gt;
&lt;li&gt;LC_CTYPE: controls character handling, such as case conversion and character classification.&lt;/li&gt;
&lt;li&gt;LC_MESSAGES: the format of prompt messages.&lt;/li&gt;
&lt;li&gt;LC_MONETARY: the format of currency.&lt;/li&gt;
&lt;li&gt;LC_NUMERIC: the format of numbers.&lt;/li&gt;
&lt;li&gt;LC_TIME: the format of time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your locale is en_US.UTF-8, you must switch it to zh_CN.UTF-8 to display Chinese correctly. On macOS, the supported locales are stored in the &lt;code&gt;/usr/share/locale&lt;/code&gt; directory.&lt;/p&gt;
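&lt;p&gt;The effect of the LC_* categories can be inspected from code. Below is a minimal Python sketch using the standard &lt;code&gt;locale&lt;/code&gt; module; whether a given locale name is installed depends on the host system, so the code falls back to the always-available "C" locale:&lt;/p&gt;

```python
import locale

def describe_locale(name):
    """Try to activate a locale and report its numeric conventions.
    Falls back to the portable "C" locale if `name` is unavailable."""
    try:
        locale.setlocale(locale.LC_ALL, name)
    except locale.Error:
        locale.setlocale(locale.LC_ALL, "C")
    conv = locale.localeconv()  # reflects LC_NUMERIC / LC_MONETARY
    return {
        "decimal_point": conv["decimal_point"],
        "thousands_sep": conv["thousands_sep"],
    }

info = describe_locale("en_US.UTF-8")
print(info["decimal_point"])  # "." in both en_US and the "C" fallback
```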

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fb40d3f61-046a-df48-735f-b27ec188a3e8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fb40d3f61-046a-df48-735f-b27ec188a3e8.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Language and Country Codes
&lt;/h3&gt;

&lt;p&gt;The same language may differ subtly between countries and regions; for example, there are differences between American and British English. The same country may also use multiple written forms; for example, China uses both simplified and traditional Chinese. In the introduction to the locale above, we saw the &lt;code&gt;language_region&lt;/code&gt; form used to express the exact language of a country.&lt;/p&gt;

&lt;p&gt;For countries and languages, ISO has developed the corresponding standard codes &lt;a href="https://en.wikipedia.org/wiki/ISO_3166-1" rel="noopener noreferrer"&gt;ISO 3166-1&lt;/a&gt; and &lt;a href="https://en.wikipedia.org/wiki/ISO_639-1" rel="noopener noreferrer"&gt;ISO 639-1&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Browsers use these codes to announce the languages they accept in the &lt;code&gt;Accept-Language&lt;/code&gt; HTTP header, for example: it, de-at, es, pt-br.&lt;/p&gt;
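&lt;p&gt;A server typically parses this header into language tags ordered by their quality ("q") values. A simplified Python sketch (real parsers also handle wildcards and malformed input):&lt;/p&gt;

```python
def parse_accept_language(header):
    """Parse an Accept-Language header into (tag, quality) pairs,
    highest quality first."""
    results = []
    for part in header.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            tag, q = piece.split(";q=", 1)
            results.append((tag.strip(), float(q)))
        else:
            results.append((piece, 1.0))  # no q-value means q=1.0
    return sorted(results, key=lambda tq: tq[1], reverse=True)

print(parse_accept_language("de-at,de;q=0.9,en;q=0.8"))
# [('de-at', 1.0), ('de', 0.9), ('en', 0.8)]
```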

&lt;h3&gt;
  
  
  gettext
&lt;/h3&gt;

&lt;p&gt;GNU gettext is the GNU Internationalization and Localization (i18n) library, which is often used to write multilingualization (M17N) programs. Many programming languages such as C, C++, Python, PHP, Rust, Elixir, etc. support the use of gettext from within the language.&lt;/p&gt;
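&lt;p&gt;Before looking at specific language flows, here is a minimal runtime usage sketch in Python. The domain name &lt;code&gt;myapp&lt;/code&gt; and the &lt;code&gt;locale/&lt;/code&gt; directory are assumptions for illustration; the catalog layout (locale/LANG/LC_MESSAGES/myapp.mo) is the standard gettext convention:&lt;/p&gt;

```python
import gettext

# With fallback=True, gettext returns an identity translation when no
# compiled .mo catalog is found, so the source string passes through unchanged.
t = gettext.translation("myapp", localedir="locale",
                        languages=["zh_CN"], fallback=True)
_ = t.gettext  # conventional alias used at every hard-coded string
print(_("Hello, world"))  # translated if a zh_CN catalog exists, else unchanged
```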

&lt;p&gt;The following is the flow of how Java calls gettext to complete internationalization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F4a7179b3-ccae-ca0e-8554-1cb19a753e7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F4a7179b3-ccae-ca0e-8554-1cb19a753e7d.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;xgettext scans the source code to extract the input strings for the i18n functions tr(), trc(), and trn() and creates a pot file containing all the source language strings. The object that the translator needs to work with is the .po file, which is generated by the msginit program from the .pot template file.&lt;/li&gt;
&lt;li&gt;msgmerge merges the strings into a po file containing the translations for a single locale.&lt;/li&gt;
&lt;li&gt;msgfmt is used to generate Java class files that inherit from the Java &lt;code&gt;ResourceBundle&lt;/code&gt; class.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following diagram shows the flow of internationalization in PHP using gettext.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F5da7482c-d121-958e-606e-ff36aded4a1f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F5da7482c-d121-958e-606e-ff36aded4a1f.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The directory structure of an Elixir project that implements i18n with gettext:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;priv/gettext
└─ en_US
|  └─ LC_MESSAGES
|     ├─ default.po
|     └─ errors.po
└─ it
   └─ LC_MESSAGES
      ├─ default.po
      └─ errors.po
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Internationalization Process
&lt;/h3&gt;

&lt;p&gt;The gettext workflow is typical of the process of making an application support internationalization.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configure the i18n framework. The framework selects the relevant language files from the language identifier of the system or, for web applications, the browser. For example, gettext uses files with the .mo suffix, while JavaScript typically uses .json files and Java uses .properties files.&lt;/li&gt;
&lt;li&gt;Extract the hard-coded source-language text, replacing each hard-coded string with a call to the i18n function. This extraction can be done manually or automatically with a program or plugin (for example, the i18next framework for JavaScript has i18next-scanner).&lt;/li&gt;
&lt;li&gt;Finally, implement &lt;strong&gt;localization&lt;/strong&gt;: translate the extracted files into the languages of the countries to be supported, whether manually, by machine translation, or through an integrated translation platform.&lt;/li&gt;
&lt;/ol&gt;
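&lt;p&gt;Step 1 of the process above can be sketched in Python for the .json case, assuming a hypothetical &lt;code&gt;locales/&lt;/code&gt; directory of per-language files (this layout is an illustration, not prescribed by any particular framework):&lt;/p&gt;

```python
import json
import pathlib

def load_catalog(lang, default="en"):
    """Pick the message catalog by language identifier, falling back to the
    default language, then to an empty catalog, when files are missing."""
    base = pathlib.Path("locales")
    for candidate in (lang, default):
        path = base / f"{candidate}.json"
        if path.exists():
            return json.loads(path.read_text(encoding="utf-8"))
    return {}

messages = load_catalog("de")  # contents of locales/de.json, if present
```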

&lt;h2&gt;
  
  
  Localization (L10n)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Localization Process
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F9adbc2f4-7fa4-0f67-046b-f135c4b117b7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F9adbc2f4-7fa4-0f67-046b-f135c4b117b7.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A typical localization flow is shown in the figure above. The parties involved are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dev Team: Developers internationalize the system, deploy a machine-translated multilingual version to the integration environment for testers, and can build an automated translation integration pipeline.&lt;/li&gt;
&lt;li&gt;Market Team: Confirms the product's target markets and supported languages, compiles the glossary of terms used in the product, and purchases professional translation services to finalize the multilingual versions. Large companies may have a dedicated globalization team for this work.&lt;/li&gt;
&lt;li&gt;Translation Management System (TMS): Manages the translation languages. A TMS generally offers APIs or SDKs that can be integrated into CI/CD pipelines to automate uploading and downloading source- and target-language files, and provides a management interface where translators revise and approve the final translations. It may offer multiple machine translation services as well as human translation, either purchased or produced through open-source collaboration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Develop a localization strategy
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Region and Language
&lt;/h4&gt;

&lt;p&gt;Begin with these basic upfront considerations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Business implications of region&lt;/li&gt;
&lt;li&gt;Default region of the user&lt;/li&gt;
&lt;li&gt;Default language of the locale&lt;/li&gt;
&lt;li&gt;Whether different regions use the same system&lt;/li&gt;
&lt;li&gt;Whether to support users to switch locales&lt;/li&gt;
&lt;li&gt;Whether the user can belong to multiple locales&lt;/li&gt;
&lt;li&gt;Whether there is a one-to-one relationship between locale and country&lt;/li&gt;
&lt;li&gt;Mapping between locale and language&lt;/li&gt;
&lt;li&gt;Is there a linkage between locale and language (can the user see all languages supported by all locales)&lt;/li&gt;
&lt;li&gt;Whether language switching needs to be saved to the user's personal information&lt;/li&gt;
&lt;li&gt;Whether the user's default language needs to be set by the user's environment language identifier (OS or browser)&lt;/li&gt;
&lt;li&gt;Whether the &lt;strong&gt;service is deployed in multiple geographies and whether the data is isolated in multiple geographies&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Add new region/language/service
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Whether the system can support adding new regions, and the process for doing so&lt;/li&gt;
&lt;li&gt;Whether the system can support adding new languages, and the process for doing so&lt;/li&gt;
&lt;li&gt;The process of localization when adding subservices to the system&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Incremental Localization
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The localization process for new pages or components that appear while localization is underway&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Management of translation
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Whether you need a translation management platform (TMS)&lt;/li&gt;
&lt;li&gt;Selection of Translation Management Platform&lt;/li&gt;
&lt;li&gt;Integration of translation management platform&lt;/li&gt;
&lt;li&gt;Whether to subscribe to professional translation services&lt;/li&gt;
&lt;li&gt;Development of collaborative processes with translation teams&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Localized implementation
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Whether each service of the system is localized by its own development team&lt;/li&gt;
&lt;li&gt;Whether there is a dedicated localization team to do localization&lt;/li&gt;
&lt;li&gt;The collaboration mode between localization team and each service development team

&lt;ul&gt;
&lt;li&gt;Whether localization is done by code Open PR&lt;/li&gt;
&lt;li&gt;How each service development team does incremental localization&lt;/li&gt;
&lt;li&gt;Synchronization of knowledge on localization among teams&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Development of technical standards for localization and promotion within the organization

&lt;ul&gt;
&lt;li&gt;Use of industry standard libraries (e.g. Unicode Common Locale Data Repository &lt;a href="http://cldr.unicode.org/" rel="noopener noreferrer"&gt;CLDR&lt;/a&gt;) for language-specific formats for dates, times, time zones, numbers, and currencies&lt;/li&gt;
&lt;li&gt;The locale identifier is in &lt;code&gt;language_region&lt;/code&gt; format, e.g. en_US for US English.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Localized multilingual implementation
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Distinguish languages by subdomain or by country-code top-level domain (ccTLD). For example: &lt;a href="https://en.wikipedia.org/" rel="noopener noreferrer"&gt;https://en.wikipedia.org/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Distinguish languages by URL path. For example: &lt;a href="https://localizejs.com/de/" rel="noopener noreferrer"&gt;https://localizejs.com/de/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Distinguish languages by URL query parameter (not SEO friendly). For example: &lt;a href="https://locize.com/?lng=de" rel="noopener noreferrer"&gt;https://locize.com/?lng=de&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Distinguish languages by a user-level language setting. For example: &lt;a href="https://myaccount.google.com/language" rel="noopener noreferrer"&gt;https://myaccount.google.com/language&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Distinguish languages by browser local storage. For example: &lt;a href="https://www.instagram.com/" rel="noopener noreferrer"&gt;https://www.instagram.com/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  The Challenges of Localization
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Ff655798a-fd23-5828-1b72-b6ecd6d83b7a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Ff655798a-fd23-5828-1b72-b6ecd6d83b7a.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The challenges of localization mainly arise from differences in language, culture, writing conventions and law across geographic areas, in the following categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Text encoding: For most Western European languages, a single-byte encoding (ASCII plus Latin-1 extensions) is sufficient. Languages that use non-Latin scripts (e.g. Russian, Chinese, Hindi and Korean), however, require a larger character encoding such as Unicode.&lt;/li&gt;
&lt;li&gt;Singular and plural: Languages form plurals differently. The plural represents a quantity that is "not one"; most commonly it covers numbers of two or greater, but in some languages it also covers fractions, zero or negative numbers, or there is a distinct form for exactly two.&lt;/li&gt;
&lt;li&gt;Image translation: Images with text need to be translated.&lt;/li&gt;
&lt;li&gt;Dynamic data (data from the API): Data passed from the back end to the front end for display needs to be localized. The challenge is tracing where this data originates: even when it comes via the back end, it may come from a database, a file, other internal services, or a third-party dependency.&lt;/li&gt;
&lt;li&gt;Icons: Some icons that are highly recognizable in one region may look completely unrecognizable to users in other regions or be something else.&lt;/li&gt;
&lt;li&gt;Name/address: The order of the last name and first name, and the order in which addresses are written. For example, in Chinese, the last name comes first, then the first name.&lt;/li&gt;
&lt;li&gt;Gender: Some languages such as French place a lot of emphasis on gender.&lt;/li&gt;
&lt;li&gt;Phone: The format of phone calls varies from country to country.&lt;/li&gt;
&lt;li&gt;Voice: Inappropriate voices or cues may be offensive, and some countries are sensitive to the gender of the voice.&lt;/li&gt;
&lt;li&gt;Color: The meaning of colors and shades varies by region or market; for example, in the U.S. stock market red indicates a decline, while in China's A-share market it indicates a rise.&lt;/li&gt;
&lt;li&gt;Units of Measure

&lt;ul&gt;
&lt;li&gt;Currency: Currency formatting must take into account the currency symbol, the position of the currency symbol and the position of the minus sign. Most currencies use the same decimal separator and thousands separator as the numbers in the regional or area setting. However, in some places this is not the case, for example in Switzerland, the decimal separator for the Swiss franc is a period.&lt;/li&gt;
&lt;li&gt;Date and time: Internationalizing dates and times involves not only geography (e.g. localized calendar representations such as day-of-week and month names) but also time zones (offsets from UTC/GMT). Time zones are defined politically as well as geographically: China geographically spans 5 time zones but uses a single unified one, and many countries observe daylight saving time, so the difference between Berlin time and Beijing time varies between 7 hours (winter time) and 6 hours (daylight saving time).&lt;/li&gt;
&lt;li&gt;Numbers: There are also differences in the way numbers are represented in different countries and regions. Factors that affect the representation of numbers include the representation of numeric characters, the representation of numeric symbols, the type of numbers, etc.&lt;/li&gt;
&lt;li&gt;Weight/length/physical units: Because of the differences in units, multiple geographical versions of the same set of data need to be converted.&lt;/li&gt;
&lt;li&gt;Business-related units of measurement: For example, different countries have different billing rules for their products. This requires business staff support to find out the corresponding position and give conversion rules.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Sentence length: German is usually longer than English, and Arabic requires more vertical space.&lt;/li&gt;

&lt;li&gt;Writing direction: In many languages it is left to right, but in Hebrew and Arabic it is right to left and in some Asian languages it is vertical.&lt;/li&gt;

&lt;li&gt;Punctuation: e.g. quotation marks ("…") in English, low-high quotation marks („…“) in German, and guillemets («…») in French.&lt;/li&gt;

&lt;li&gt;Line breaks/splits: The rules of Asian CJK (Chinese, Japanese, Korean) character set languages are completely different from those of Western languages. For example, unlike most Western written languages, Chinese, Japanese, Korean and Thai do not necessarily use spaces to separate one word from the next. Thai does not even use punctuation.&lt;/li&gt;

&lt;li&gt;Case conversion: English has case conversion, while Chinese has no case distinction.&lt;/li&gt;

&lt;li&gt;Legal related: e.g. GDPR using personal data of EU citizens.&lt;/li&gt;

&lt;li&gt;Politics-related: For example, localization involves the display of flags and maps, which can easily cause major accidents if not handled properly.&lt;/li&gt;

&lt;li&gt;Sorting methods: For example, English is sorted alphabetically, while Chinese can be sorted in pinyin.&lt;/li&gt;

&lt;/ul&gt;
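&lt;p&gt;The singular/plural challenge above is exactly what gettext's plural API addresses. A minimal Python sketch: with no catalog installed, &lt;code&gt;NullTranslations&lt;/code&gt; applies the English rule (singular when n is 1, plural otherwise), while real catalogs carry their own Plural-Forms rules (e.g. Russian has three forms, Chinese has one):&lt;/p&gt;

```python
import gettext

t = gettext.NullTranslations()  # identity translation with English plural rule
for n in (1, 2):
    # ngettext picks the singular or plural template based on n
    print(t.ngettext("%d file", "%d files", n) % n)
# 1 file
# 2 files
```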

&lt;h4&gt;
  
  
  Do you need to consider SEO?
&lt;/h4&gt;

&lt;p&gt;If you are localizing a consumer (toC) website, you need to consider search engine optimization (SEO). The guide &lt;a href="https://marketfinder.thinkwithgoogle.com/intl/en/guide/how-to-approach-i18n/#make-languages-easily-discoverable" rel="noopener noreferrer"&gt;How to approach an international strategy&lt;/a&gt; mentions several key points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you offer your site in multiple languages, use a single language for content and navigation on each page, and avoid side-by-side translations.&lt;/li&gt;
&lt;li&gt;Keep the content in each language on a separate URL and mark the language in the URL. For example, the URL &lt;code&gt;www.mysite.com/de/&lt;/code&gt; would tell the user that the page is in German.&lt;/li&gt;
&lt;li&gt;Declare each page's target language to Google via the hreflang annotation. For example, &lt;code&gt;&amp;lt;link rel="alternate" href="http://example.com" hreflang="en-us" /&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Do not translate only the template text, but also the content within the template.&lt;/li&gt;
&lt;li&gt;Do not rely exclusively on automatic translation, which can hurt the user experience.&lt;/li&gt;
&lt;li&gt;Do not use cookies or scripting techniques to switch languages; Google's crawlers cannot index such content properly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Localization of product design
&lt;/h4&gt;

&lt;p&gt;Using a more localized design for the same content in different geographies produces better results. The article &lt;a href="http://www.woshipm.com/pd/4404611.html" rel="noopener noreferrer"&gt;Internationalization and Localization of Product Design&lt;/a&gt; shows, for example, how Spotify presents song covers differently in different countries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F165e3e02-74a9-a776-cc5a-cf8ef85f8f46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F165e3e02-74a9-a776-cc5a-cf8ef85f8f46.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Localization under Microservices
&lt;/h3&gt;

&lt;p&gt;From an architectural point of view, the localization process for a monolithic application is relatively simple. However, many applications today are microservice architectures developed by multiple collaborating teams. If individual teams are responsible for localizing their respective services, a unified localization committee is needed to set technical standards for localization, covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The choice of language markers.&lt;/li&gt;
&lt;li&gt;The design of language switching across the front and back ends.&lt;/li&gt;
&lt;li&gt;The design of the translation automation process, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Alternatively, a dedicated localization team implements localization and takes responsibility for solving the problems above. The project I am involved in falls into the latter category: our team localized nearly a dozen microservice subsystems of a large system owned by several groups of teams, so coordinating such a cross-functional requirement (CFR) across multiple teams is a complex task.&lt;/p&gt;

&lt;h4&gt;
  
  
  Localized technical or business standards development
&lt;/h4&gt;

&lt;p&gt;Before implementing localization, it is important to settle the relevant technical and operational standards, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Internationalization implementation standards for different technology stacks in front and back ends. Since there may be multiple technology stacks in microservices, each with its own internationalization implementation, the development of implementation standards for different technology stacks can help to use the same implementation across services.&lt;/li&gt;
&lt;li&gt;The determination of locale markers.

&lt;ul&gt;
&lt;li&gt;Static text extracted on the front or back end can be stored in files named by language identifier, e.g. en.json for English static text and en_US.json for US-English-specific content (units of measure, dates, numbers, currency, etc.).&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;language_region&lt;/code&gt; format is used uniformly in remote service calls (front end to back end, or back end to other internal or external services), e.g. en_US requests the US English localized version.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Language specific formats for dates, times, time zones, numbers and currencies use industry standard libraries. For example, using libraries that implement the &lt;a href="https://en.wikipedia.org/wiki/Common_Locale_Data_Repository" rel="noopener noreferrer"&gt;CLDR&lt;/a&gt; standard.&lt;/li&gt;

&lt;li&gt;Dynamic data type identification. Identifying, for example, which data comes from internal systems (databases or files); which comes from external systems; whether these dynamic data have internationalization capabilities; and how to localize them in stages.&lt;/li&gt;

&lt;li&gt;Localization of documents. Localization of electronic documents (PDF) or emails generated by the back-office system. If these documents are sent to clients, also consider whether to generate documents in the client's language preference.&lt;/li&gt;

&lt;li&gt;The list of supported regions and languages, and the behavior for an unsupported one: for example, whether an error page is shown or the default region/language version is displayed.&lt;/li&gt;

&lt;li&gt;The default region and default language.&lt;/li&gt;

&lt;li&gt;Whether region and language have a binding relationship.&lt;/li&gt;

&lt;li&gt;Whether the language switch needs to be saved to the user's personal information.&lt;/li&gt;

&lt;/ul&gt;
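&lt;p&gt;One concrete piece of such a standard is how a requested locale marker is resolved against the supported list. The fallback policy below (exact match, then same-language match, then default) is a hypothetical sketch, not a prescribed standard:&lt;/p&gt;

```python
def resolve_locale(requested, supported, default="en_US"):
    """Resolve a language_region marker against the supported set:
    exact match first, then any locale with the same language, then default."""
    if requested in supported:
        return requested
    lang = requested.split("_")[0]
    for candidate in sorted(supported):  # sorted for deterministic choice
        if candidate.split("_")[0] == lang:
            return candidate
    return default

print(resolve_locale("en_GB", {"en_US", "zh_CN"}))  # en_US (language fallback)
```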

&lt;h4&gt;
  
  
  Development Environment and Business Processes
&lt;/h4&gt;

&lt;p&gt;In practice, the most time-consuming part of localization for our team was starting up the local environment. With so many services involved, slight differences in how different services are launched, and sometimes wrong setup documentation, we kept running into pitfalls while setting up the environment. In the end, our approach was to contact the development teams: each time we began pre-localization of a service, we asked its development team to help us set up the local environment.&lt;/p&gt;

&lt;p&gt;Another difficulty was our lack of understanding of the business. Since each service has a large number of components and pages, including dynamic data from different back-end sources, it was hard to figure everything out by ourselves. In the end, when pre-localizing a service, we would have business analysts from its development team walk us through the business processes involved.&lt;/p&gt;

&lt;h4&gt;
  
  
  Static text processing
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Sort through the static text on the front and back ends to identify which systems already have internationalization capability (i.e. the initial language version's locale files have been extracted and an internationalization library is set up).&lt;/li&gt;
&lt;li&gt;Identify where date, currency and number formats occur, and call the industry-standard libraries specified by the localization technical standards in those places.&lt;/li&gt;
&lt;li&gt;Define the incremental translation process for static text, so that text added after the system has been localized also flows through the localization process.&lt;/li&gt;
&lt;li&gt;Automate the integration with the translation platform: development teams use scripts or the CI/CD pipeline to automatically upload and download source- and target-language files.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Whether to store language and region settings
&lt;/h4&gt;

&lt;p&gt;Some internationalized sites have language or region switching designed as hyperlinks that allow users to access different language and region versions of the site, which do not require storing language or region configurations.&lt;/p&gt;

&lt;p&gt;Sites with user profile configuration generally offer to set the preferred language and region in the profile settings, so that users can synchronize the last set language or region when switching devices.&lt;/p&gt;

&lt;p&gt;If your site's users switch devices infrequently, a simple approach is to store these settings in the browser's local storage; when the user switches devices, the defaults are restored. The advantage of this design is its simplicity, and it is easy to migrate to other solutions later. The right choice depends on the specific business.&lt;/p&gt;

&lt;h4&gt;
  
  
  Localization of back-end services
&lt;/h4&gt;

&lt;p&gt;Localization of back-end services involves the following four components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Static text. This can be found by walking through the code for the relevant strings.&lt;/li&gt;
&lt;li&gt;Databases, caches or files. Initial data that does not meet localization needs can be found by walking through the database initialization scripts, but for dynamically stored data, tables must also be designed for multilingual storage. Resource files that require translation likewise need multilingual versions, along with adaptation of the code that uses them.&lt;/li&gt;
&lt;li&gt;Remote calls to other internal services (RPC). The locale markers for internal service calls are part of the localization technical standards; for example, a &lt;code&gt;locale = en_US&lt;/code&gt; HTTP header can request the US English version of a page.&lt;/li&gt;
&lt;li&gt;Generated documents (PDF or email). These combine the template's static text in the final language with rendered dynamic data. Especially when documents and emails are sent to users, they need to be generated in the user's language.&lt;/li&gt;
&lt;/ul&gt;
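&lt;p&gt;For static text, the usual pattern is a locale-keyed message catalog with a fallback to the default locale. A minimal sketch (the catalog contents and key names are illustrative):&lt;/p&gt;

```python
CATALOGS = {
    "en_US": {"greeting": "Hello", "signup": "Sign up"},
    "zh_CN": {"greeting": "你好"},
}
DEFAULT_LOCALE = "en_US"

def translate(key, locale):
    """Look up a message by key; fall back to the default locale when the
    requested locale or key is missing, and finally to the key itself."""
    catalog = CATALOGS.get(locale, {})
    if key in catalog:
        return catalog[key]
    return CATALOGS[DEFAULT_LOCALE].get(key, key)
```

&lt;p&gt;Here &lt;code&gt;translate("signup", "zh_CN")&lt;/code&gt; falls back to the English &lt;code&gt;"Sign up"&lt;/code&gt; because the Chinese catalog has no entry for that key.&lt;/p&gt;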

&lt;p&gt;If the technology stack of the back-end service is different, the localization team also needs to summarize the internationalization process for the different technology stacks of the back-end service and synchronize it with other development teams within the organization.&lt;/p&gt;

&lt;h4&gt;
  
  
  Localization of third-party services and resources
&lt;/h4&gt;

&lt;p&gt;Back-end remote calls may also reach external services. If you call an external service, first confirm whether it supports multiple languages; if it does, integrate it according to its integration documentation. If not, contact the external service provider to work out a support plan.&lt;/p&gt;

&lt;h4&gt;
  
  
  Release Process
&lt;/h4&gt;

&lt;p&gt;Since implementing localization involves changes to more than a dozen subservices, it can be gated behind &lt;a href="https://martinfowler.com/articles/feature-toggles.html" rel="noopener noreferrer"&gt;Feature Toggles&lt;/a&gt; that are turned on or off per environment. The tests affected by localization (unit tests, integration tests and UI tests) also need to be controlled via Feature Toggles so that each original service's test suite is minimally affected.&lt;/p&gt;

&lt;p&gt;Once all services have been localized, the localization Feature Toggles can be switched on across all services to bring the final version online.&lt;/p&gt;

&lt;p&gt;There are two designs to choose from regarding localized Feature Toggles.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralized Feature Toggles: for example, a central feature-configuration service that all localization-related services query for toggle state. The advantage is that localized features can be switched on and off in real time without a redeployment; the downside is less flexibility in controlling each service's localized features individually.&lt;/li&gt;
&lt;li&gt;Independent Feature Toggles: in contrast to the centralized design, each service sets its own localization toggle, which keeps services flexibly decoupled; the downside is that every switch requires a new release to bring that service online.&lt;/li&gt;
&lt;/ul&gt;
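&lt;p&gt;As an illustration of the independent design, a service can read its own localization toggle from an environment variable; a centralized design would instead query a shared configuration service. The variable naming below is a sketch, not taken from any particular codebase.&lt;/p&gt;

```python
import os

def is_feature_enabled(name, env=None, default=False):
    """Independent Feature Toggle: each service reads its own flag, here from
    an environment variable such as FEATURE_LOCALIZATION=true."""
    env = os.environ if env is None else env
    value = env.get(f"FEATURE_{name.upper()}", "")
    if not value:
        return default
    return value.lower() in ("1", "true", "on", "yes")
```

&lt;p&gt;Flipping the flag in this design means redeploying the service with a new environment value, which is exactly the trade-off noted above.&lt;/p&gt;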

&lt;h3&gt;
  
  
  Localization under micro front-end architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Ff57bbd44-f288-c81a-9f26-a55b767c6044.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Ff57bbd44-f288-c81a-9f26-a55b767c6044.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The figure above shows a website built with a micro front-end architecture: the interface is composed of five service pages, A/B/C/D/E. The language switch button lives in service A. When the user switches from English to Chinese, services B/C/D/E need to switch their respective interfaces to the Chinese version as well.&lt;/p&gt;

&lt;p&gt;One approach is to have service A initialize the internationalization (i18n) library instance and mount it on the browser's window object when the page loads; services B/C/D/E then use the instance initialized by service A. When the user switches languages, service A's i18n instance switches the language for all services.&lt;/p&gt;

&lt;p&gt;The locale files for each service can also be loaded into the browser uniformly by service A. The advantage of this approach is that we know when the last locale file has loaded, which means localization is initialized for every service on the page and the user can switch languages safely.&lt;/p&gt;

&lt;h3&gt;
  
  
  Localized testing
&lt;/h3&gt;

&lt;p&gt;Localization testing verifies that the application or website content meets the language, cultural and geographical requirements of a particular country or region.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F1b30e009-c4bb-0d8d-f148-82ac680671e9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F1b30e009-c4bb-0d8d-f148-82ac680671e9.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more details, see this article &lt;a href="https://levelup.gitconnected.com/localization-testing-9b8db20fb62f" rel="noopener noreferrer"&gt;Localization testing: why and how to do it&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Localization Platform
&lt;/h3&gt;

&lt;p&gt;A very important piece of localization is selecting a suitable translation management system (TMS), which generally offers the following features.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Glossary: A glossary of specialized brand terms or domain terms that can help translators more accurately translate specialized language related to a product or market.&lt;/li&gt;
&lt;li&gt;Translation Memory (TM): TM is a database for storing strings of previously translated content. Translations are reused for the same or similar content. This ensures consistency of translations.&lt;/li&gt;
&lt;li&gt;In-Context Editor: this editor crawls website pages so that translators can see the context of the whole page, which helps improve translation quality.&lt;/li&gt;
&lt;li&gt;Machine Translation: most TMS platforms integrate with machine-translation services (such as Google Translate), which can translate into the target language automatically and are convenient for developers.&lt;/li&gt;
&lt;li&gt;Human Translation: professional human translation services can be ordered from the TMS platform. There are also community options: Crowdin, for example, offers free localization collaboration for open-source projects, where anyone can contribute and the translations with the most votes are used first.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Major localization platforms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://crowdin.com/" rel="noopener noreferrer"&gt;Crowdin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://lokalise.com/" rel="noopener noreferrer"&gt;Lokalise&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://localizejs.com" rel="noopener noreferrer"&gt;localizejs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://phrase.com/" rel="noopener noreferrer"&gt;Phrase&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This concludes our introduction to internationalization and the basic localization process. Localization is a complex task, and the biggest difficulty is &lt;strong&gt;not knowing enough about the target language and culture&lt;/strong&gt;. I hope this article gives you more confidence to take on localization-related work.&lt;/p&gt;

</description>
      <category>i18n</category>
      <category>l10n</category>
      <category>localization</category>
      <category>translations</category>
    </item>
    <item>
      <title>Adventures in Serverless Application Development</title>
      <dc:creator>Dawei Ma</dc:creator>
      <pubDate>Sun, 14 Feb 2021 08:13:51 +0000</pubDate>
      <link>https://dev.to/aws-builders/adventures-in-serverless-application-development-f7g</link>
      <guid>https://dev.to/aws-builders/adventures-in-serverless-application-development-f7g</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;This article describes the entire process of developing an application based on AWS Serverless technology. The infrastructure is built using the Serverless Framework and Terraform. The core module of the system is a scheduled task that runs once a day: it fetches a set of ETF index fund price data via &lt;a href="https://tushare.pro/" rel="noopener noreferrer"&gt; Tushare &lt;/a&gt;, processes it to generate a trading-signal text, and stores the text in an S3 bucket. It then publishes a message to an AWS SNS topic, and users subscribed to the topic receive an email alert. The system's web page offers an email-subscription portal as well as access to the daily history of trading signals.&lt;/p&gt;

&lt;p&gt;This article covers the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build a Docker image to execute Core Service.&lt;/li&gt;
&lt;li&gt;Build the following AWS infrastructure for Core Service with Terraform:

&lt;ul&gt;
&lt;li&gt;Set up an ECR repository.&lt;/li&gt;
&lt;li&gt;Set up an ECS cluster using Fargate.&lt;/li&gt;
&lt;li&gt;Set up Fargate tasks.&lt;/li&gt;
&lt;li&gt;Set up CloudWatch scheduled tasks.&lt;/li&gt;
&lt;li&gt;Set up IAM permission roles.&lt;/li&gt;
&lt;li&gt;Set up SNS topics.&lt;/li&gt;
&lt;li&gt;Set up a VPC network.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Build the following AWS infrastructure for the API Service and the Web using the Serverless Framework:

&lt;ul&gt;
&lt;li&gt;Set up Lambda functions.&lt;/li&gt;
&lt;li&gt;Set up the API Gateway.&lt;/li&gt;
&lt;li&gt;Set up Route53.&lt;/li&gt;
&lt;li&gt;Set up CloudFront.&lt;/li&gt;
&lt;li&gt;Set up TLS certificates.&lt;/li&gt;
&lt;li&gt;Set up S3 buckets.&lt;/li&gt;
&lt;li&gt;Set up CloudFormation.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The final result is available at: &lt;a href="https://money.i365.tech/" rel="noopener noreferrer"&gt;Online version&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The source code is available at: &lt;a href="https://github.com/bmpi-dev/invest-alchemy" rel="noopener noreferrer"&gt;Code Repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The technology stack is as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fb1e5efb6-a764-ec0a-cbed-2573936fe7e3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fb1e5efb6-a764-ec0a-cbed-2573936fe7e3.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You need to register for the following account first:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS Account&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.serverless.com/" rel="noopener noreferrer"&gt;Serverless Account&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Background Knowledge
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Application Architecture Evolution History&lt;sup id="fnref1"&gt;1&lt;/sup&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F2d7c4b7a-ceaa-9a6d-0ea5-1e9151cbd5ac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F2d7c4b7a-ceaa-9a6d-0ea5-1e9151cbd5ac.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monolithic: small applications suitable for startups; good performance.&lt;/li&gt;
&lt;li&gt;Service-oriented (SOA): large applications suitable for complex enterprise business.&lt;/li&gt;
&lt;li&gt;Microservices: complex, elastically scalable applications for experienced teams.&lt;/li&gt;
&lt;li&gt;Serverless: low cost, suitable for background tasks; also suitable for applications with large numbers of users and for fast-growing applications that need to scale without limit.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Serverless
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Build and run applications without thinking about servers&lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Serverless computing, or serverless for short, is an execution model in which a cloud provider (AWS, Azure, or Google Cloud) is responsible for executing a piece of code by dynamically allocating resources and charging only for the resources used to run the code. The code typically runs in a stateless container and can be triggered by a variety of events, including HTTP requests, database events, queue services, monitoring alerts, file uploads, and scheduling events (cron tasks). The code sent to the cloud provider for execution typically takes the form of a function, and thus serverless computing is sometimes referred to as "function as a service" or FaaS.&lt;sup id="fnref3"&gt;3&lt;/sup&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Advantages
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;No server management costs&lt;/li&gt;
&lt;li&gt;Flexible scaling&lt;/li&gt;
&lt;li&gt;Pay only for actual usage&lt;/li&gt;
&lt;li&gt;Self-contained high availability and fault tolerance&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Disadvantages
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Poor cold start performance&lt;/li&gt;
&lt;li&gt;Complex monitoring and debugging&lt;/li&gt;
&lt;li&gt;Dependence on cloud vendors&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  DevOps
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.jamesbowman.me%2Fpost%2Fcdlandscape%2FContinuousDeliveryToolLandscape-fullsize.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.jamesbowman.me%2Fpost%2Fcdlandscape%2FContinuousDeliveryToolLandscape-fullsize.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a panorama of DevOps tools; I used only a small fraction of them in developing this application, and did not even include a testing step. Which of these tools to adopt should be weighed against the project's actual situation and applied flexibly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F48504c01-c2d0-e05a-1eda-d82b88f6496d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2F48504c01-c2d0-e05a-1eda-d82b88f6496d.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The application is divided into three main modules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Core Service: a scheduled background task that fetches fund price data, analyzes it to generate trading signals, and sends emails.&lt;/li&gt;
&lt;li&gt;API Service: provides the topic-subscription API.&lt;/li&gt;
&lt;li&gt;Web: provides the subscription entry page and a view of historical trading-signal records.&lt;/li&gt;
&lt;/ul&gt;
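&lt;p&gt;To give a flavor of what Core Service's signal generation involves, here is a minimal moving-average crossover sketch in plain Python. The window sizes and rules are illustrative only; the actual logic lives in &lt;code&gt;src/main.py&lt;/code&gt;.&lt;/p&gt;

```python
def moving_average(prices, window):
    """Simple moving average over the last `window` prices."""
    return sum(prices[-window:]) / window

def trading_signal(prices, short=11, long=22):
    """Return 'BUY' when the short MA crosses above the long MA,
    'SELL' when it crosses below, otherwise 'HOLD'."""
    if len(prices) < long + 1:
        return "HOLD"  # not enough history to detect a crossover
    prev_short = moving_average(prices[:-1], short)
    prev_long = moving_average(prices[:-1], long)
    cur_short = moving_average(prices, short)
    cur_long = moving_average(prices, long)
    if prev_short <= prev_long and cur_short > cur_long:
        return "BUY"
    if prev_short >= prev_long and cur_short < cur_long:
        return "SELL"
    return "HOLD"
```

&lt;p&gt;The daily task would run something like this over the prices fetched from Tushare, format the result as text, upload it to S3 and publish it to the SNS topic.&lt;/p&gt;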

&lt;h2&gt;
  
  
  Coding Implementation
&lt;/h2&gt;

&lt;p&gt;Since a variety of AWS cloud services are used below, see this article &lt;a href="https://wiki.bmpi.dev/#AWS%E5%90%84%E6%9C%8D%E5%8A%A1%E8%A7%A3%E9%87%8A:AWS%E5%90%84%E6%9C%8D%E5%8A%A1%E8%A7%A3%E9%87%8A%20AWS%E5%A5%BD%E6%96%87%20Index" rel="noopener noreferrer"&gt; AWS Services Explained &lt;/a&gt; to learn about them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
├── api &lt;span class="c"&gt;# api service&lt;/span&gt;
│   ├── serverless.yml
│   └── sns.js &lt;span class="c"&gt;# lambda function&lt;/span&gt;
├── core &lt;span class="c"&gt;# core service&lt;/span&gt;
│   ├── Dockerfile
│   ├── Infrastructure
│   │   └── tf-fargate
│   │       ├── cloudwatch.tf
│   │       ├── ecr.tf
│   │       ├── ecs.tf
│   │       ├── iam.tf
│   │       ├── main.tf
│   │       ├── output.tf
│   │       ├── sns.tf
│   │       ├── tasks
│   │       │   └── task_definition.json
│   │       ├── variables.tf
│   │       └── vpc.tf
│   ├── Makefile &lt;span class="c"&gt;# CLI entry&lt;/span&gt;
│   ├── requirements.txt
│   └── src
│       ├── fund.txt
│       └── main.py &lt;span class="c"&gt;# fargate task&lt;/span&gt;
└── web &lt;span class="c"&gt;# web service&lt;/span&gt;
    ├── binaryMimeTypes.js
    ├── client
    │   ├── assets
    │   │   └── styles
    │   │       └── global.less
    │   ├── components
    │   │   └── navbar.vue
    │   ├── layouts
    │   │   └── default.vue
    │   ├── pages
    │   │   └── index.vue
    │   └── plugins
    │       └── iview.js
    ├── index.js
    ├── nuxt.config.js
    ├── nuxt.js &lt;span class="c"&gt;# lambda function&lt;/span&gt;
    ├── package-lock.json
    ├── package.json
    ├── secrets_example.json
    ├── serverless.yml
    └── yarn.lock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Core Service
&lt;/h3&gt;

&lt;p&gt;Core Service runs on AWS Fargate, which is better suited than Lambda for long-running background tasks. Core Service is developed in Python; to run it in an AWS ECS environment, a Docker image is first built and then pushed to an AWS ECR repository.&lt;/p&gt;

&lt;h4&gt;
  
  
  Docker Image
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;FROM python:3.8-slim-buster

USER root
WORKDIR /tmp

&lt;span class="c"&gt;# for source&lt;/span&gt;
RUN &lt;span class="nb"&gt;rm&lt;/span&gt; /bin/sh &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /bin/bash /bin/sh

&lt;span class="c"&gt;# for compile&lt;/span&gt;
RUN  apt-get update &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; wget &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; build-essential &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="c"&gt;# for TA-Lib&lt;/span&gt;
RUN pip &lt;span class="nb"&gt;install &lt;/span&gt;numpy &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xvzf&lt;/span&gt; ta-lib-0.4.0-src.tar.gz &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;cd &lt;/span&gt;ta-lib/ &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  ./configure &lt;span class="nt"&gt;--prefix&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  make &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  make &lt;span class="nb"&gt;install
&lt;/span&gt;RUN &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; ta-lib ta-lib-0.4.0-src.tar.gz

&lt;span class="c"&gt;# set the working directory in the container&lt;/span&gt;
WORKDIR /code
&lt;span class="c"&gt;# copy the dependencies file to the working directory&lt;/span&gt;
COPY requirements.txt &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="c"&gt;# install dependencies&lt;/span&gt;
RUN pip3 &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;span class="c"&gt;# copy the content of the local src directory to the working directory&lt;/span&gt;
COPY src/ &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="c"&gt;# command to run on container start&lt;/span&gt;
CMD &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"python"&lt;/span&gt;, &lt;span class="s2"&gt;"./main.py"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One thing to keep in mind here is the choice of base image. We usually pick the alpine variant of a Python image, but alpine requires compiling many native binary packages from source, and those builds run into various errors, so I finally chose the buster variant, which is based on Debian. To learn more, see the article &lt;a href="https://pythonspeed.com/articles/alpine-docker-python/" rel="noopener noreferrer"&gt;《Using Alpine can make Python Docker builds 50× slower》&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After obtaining the API token for Tushare, you can build and run the Docker image locally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t invest-alchemy/core . # Build
docker run -t -i -e TUSHARE_API_TOKEN=xxxx invest-alchemy/core # Local Run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ECR repository is then created in AWS, and the locally built images can then be pushed to the ECR for use by ECS tasks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ecr get-login-password &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 | docker login &lt;span class="nt"&gt;--username&lt;/span&gt; AWS &lt;span class="nt"&gt;--password-stdin&lt;/span&gt; replace_with_your_ecr_addr.dkr.ecr.us-east-1.amazonaws.com &lt;span class="c"&gt;# Login ECR&lt;/span&gt;

docker build &lt;span class="nt"&gt;-t&lt;/span&gt; invest-alchemy/core &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="c"&gt;# Local Build&lt;/span&gt;

docker tag invest-alchemy/core:latest replace_with_your_ecr_addr.dkr.ecr.us-east-1.amazonaws.com/invest-alchemy/core:latest &lt;span class="c"&gt;# Add Tag&lt;/span&gt;

docker push replace_with_your_ecr_addr.dkr.ecr.us-east-1.amazonaws.com/invest-alchemy/core:latest &lt;span class="c"&gt;# Push to Remote Repo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Infrastructure as Code
&lt;/h4&gt;

&lt;p&gt;The next step is to build the required infrastructure (ECS/IAM/SNS/VPC/CloudWatch) via Terraform. The main reference for this part is &lt;a href="https://zoph.me/posts/2019-09-22-serverless-jobs-scheduling-using-aws-fargate/" rel="noopener noreferrer"&gt;《Serverless job scheduling using AWS Fargate》&lt;/a&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  ECR/ECS/Task
&lt;/h5&gt;

&lt;p&gt;See the source code for details; note that setting &lt;code&gt;capacity_provider&lt;/code&gt; to &lt;code&gt;FARGATE_SPOT&lt;/code&gt; can significantly reduce costs.&lt;/p&gt;

&lt;h5&gt;
  
  
  CloudWatch
&lt;/h5&gt;

&lt;p&gt;See the source code for details; note that &lt;code&gt;ecs_target&lt;/code&gt;/&lt;code&gt;network_configuration&lt;/code&gt; can use the default VPC network, but be sure to set &lt;code&gt;assign_public_ip&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt;, otherwise the container task will not be able to reach the external network.&lt;/p&gt;

&lt;h5&gt;
  
  
  VPC
&lt;/h5&gt;

&lt;p&gt;Use the AWS default VPC network. AWS Fargate can run in multiple network modes; the simplest, public subnet mode, is chosen here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fff9ced8e-daf9-b600-f3b8-033a3c78939e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.bmpi.dev%2Fff9ced8e-daf9-b600-f3b8-033a3c78939e.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For other scenes see &lt;a href="https://github.com/nathanpeck/aws-cloudformation-fargate" rel="noopener noreferrer"&gt;《CloudFormation Templates for AWS Fargate deployments》&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For further study you can read this article &lt;a href="https://cloudonaut.io/fargate-networking-101/" rel="noopener noreferrer"&gt;《Fargate networking 101》&lt;/a&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  IAM
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;######################### Role used by the container regulates what AWS services the task has access to, e.g. your application is using a DynamoDB, then the task role must give the task access to Dynamo.
resource "aws_iam_role" "ecs_service_role" {
  name               = "${var.project}_ecs_service_role_${var.env}"
  assume_role_policy = "${data.aws_iam_policy_document.ecs_service_assume_role_policy.json}"
}

resource "aws_iam_role_policy" "ecs_service_policy" {
  name   = "${var.project}_ecs_service_role_policy_${var.env}"
  policy = "${data.aws_iam_policy_document.ecs_service_policy.json}"
  role   = "${aws_iam_role.ecs_service_role.id}"
}

data "aws_iam_policy_document" "ecs_service_policy" {
  statement {
    effect = "Allow"
    resources = ["*"]
    actions = [
        "iam:ListPolicies",
        "iam:GetPolicyVersion"
    ]
  }
}

data "aws_iam_policy_document" "ecs_service_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role_policy_attachment" "ecs_service_role_policy_attachment" {
  role       = aws_iam_role.ecs_service_role.name
  policy_arn = "arn:aws:iam::aws:policy/AWSLambdaFullAccess" # https://gist.github.com/gene1wood/55b358748be3c314f956
}

######################### Role used by the container enables the service to e.g. pull the image from ECR, spin up or deregister tasks etc

resource "aws_iam_role" "ecs_task_execution_role" {
  name = "${var.project}_ecs_task_execution_role_${var.env}"

  assume_role_policy = &amp;lt;&amp;lt;EOF
{
 "Version": "2012-10-17",
 "Statement": [
   {
     "Action": "sts:AssumeRole",
     "Principal": {
       "Service": "ecs-tasks.amazonaws.com"
     },
     "Effect": "Allow",
     "Sid": ""
   }
 ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "ecs_task_execution_role_policy_attachment" {
  role       = aws_iam_role.ecs_task_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy" # https://gist.github.com/gene1wood/55b358748be3c314f956
}

######################### Role used for ECS Events

resource "aws_iam_role" "ecs_events_role" {
  name               = "${var.project}_ecs_events_role_${var.env}"
  assume_role_policy = "${data.aws_iam_policy_document.ecs_events_assume_role_policy.json}"
}

resource "aws_iam_role_policy_attachment" "ecs_events_role_policy" {
  policy_arn = "${data.aws_iam_policy.ecs_events_policy.arn}"
  role       = "${aws_iam_role.ecs_events_role.id}"
}

data "aws_iam_policy" "ecs_events_policy" {
  arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceEventsRole" # https://gist.github.com/gene1wood/55b358748be3c314f956
}

data "aws_iam_policy_document" "ecs_events_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type = "Service"
      identifiers = ["events.amazonaws.com"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three roles are defined here.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ecs_service_role: permissions for the container application. For example, Core Service pushes data to S3 and SNS, so this role needs those permissions.&lt;/li&gt;
&lt;li&gt;ecs_task_execution_role: permissions for ECS task execution. For example, ECS needs access to ECR in order to pull images.&lt;/li&gt;
&lt;li&gt;ecs_events_role: permissions for the CloudWatch scheduled task. To launch ECS tasks on schedule, it needs the &lt;code&gt;AmazonEC2ContainerServiceEventsRole&lt;/code&gt; policy.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Application Secrets
&lt;/h5&gt;

&lt;p&gt;Applications often depend on sensitive information, such as various API tokens. Core Service depends on the Tushare API token, which needs to be injected through Terraform. Here I refer to &lt;a href="https://blog.gruntwork.io/a-comprehensive-guide-to-managing-secrets-in-your-terraform-code-1d586955ace1" rel="noopener noreferrer"&gt;《A comprehensive guide to managing secrets in your Terraform code》&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The article provides these ways of managing sensitive information.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Environment variables&lt;/li&gt;
&lt;li&gt;Encrypted files (AWS KMS)&lt;/li&gt;
&lt;li&gt;Key repository (AWS Secrets manager)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Environment variables are the simplest of the three, while the other two carry some cost of use. Since the security requirements for this key are not high, the simple first approach is used here.&lt;/p&gt;

&lt;p&gt;First, define the variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;variable&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TUSHARE_API_TOKEN"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;description&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Tushare API Token from .env"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;type&lt;/span&gt;&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;string&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then inject this environment variable in the ECS task definition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;data&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"template_file"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"task"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;template&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${file("&lt;/span&gt;&lt;span class="err"&gt;./Infrastructure/tf-fargate/tasks/task_definition.json&lt;/span&gt;&lt;span class="s2"&gt;")}"&lt;/span&gt;&lt;span class="w"&gt;

  &lt;/span&gt;&lt;span class="err"&gt;vars&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;project&lt;/span&gt;&lt;span class="w"&gt;             &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${var.project}"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;aws_region&lt;/span&gt;&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${var.aws_region}"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;ecr_image_uri&lt;/span&gt;&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${var.ecr_image_uri}"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;TUSHARE_API_TOKEN&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${var.TUSHARE_API_TOKEN}"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;注入变量&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, inject this variable into the container at &lt;code&gt;task_definition.json&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"environment"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TUSHARE_API_TOKEN"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${TUSHARE_API_TOKEN}"&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This token must be entered manually each time a change is applied, so that the secret is never committed to the code repository.&lt;/p&gt;
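&lt;p&gt;Inside the container, the application can then read the injected token from the environment. A minimal sketch in Python (the variable name matches the task definition above; the helper name is my own):&lt;/p&gt;

```python
import os

def get_tushare_token():
    """Read the Tushare API token injected via the ECS task definition."""
    token = os.environ.get("TUSHARE_API_TOKEN")
    if not token:
        raise RuntimeError("TUSHARE_API_TOKEN is not set in the environment")
    return token
```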

&lt;h4&gt;
  
  
  Make Build Script
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight make"&gt;&lt;code&gt;&lt;span class="c"&gt;################ Config ########################
&lt;/span&gt;&lt;span class="nv"&gt;S3_BUCKET&lt;/span&gt; &lt;span class="o"&gt;?=&lt;/span&gt; invest-alchemy
&lt;span class="nv"&gt;AWS_REGION&lt;/span&gt; &lt;span class="o"&gt;?=&lt;/span&gt; us-east-1
&lt;span class="nv"&gt;ENV&lt;/span&gt; &lt;span class="o"&gt;?=&lt;/span&gt; dev
&lt;span class="nv"&gt;ECR&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; 745121664662.dkr.ecr.us-east-1.amazonaws.com/invest-alchemy-core-ecr-dev &lt;span class="c"&gt;# ECR Repository Example: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/{project_name&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-ecr-&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="c"&gt;################################################
&lt;/span&gt;
&lt;span class="c"&gt;################ Artifacts Bucket ##############
&lt;/span&gt;&lt;span class="nl"&gt;artifacts&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Creation of artifacts bucket"&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;aws s3 mb s3://&lt;span class="p"&gt;$(&lt;/span&gt;S3_BUCKET&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;aws s3api put-bucket-encryption &lt;span class="nt"&gt;--bucket&lt;/span&gt; &lt;span class="p"&gt;$(&lt;/span&gt;S3_BUCKET&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--server-side-encryption-configuration&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="s1"&gt;'{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;aws s3api put-bucket-versioning &lt;span class="nt"&gt;--bucket&lt;/span&gt; &lt;span class="p"&gt;$(&lt;/span&gt;S3_BUCKET&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nt"&gt;--versioning-configuration&lt;/span&gt; &lt;span class="nv"&gt;Status&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Enabled
&lt;span class="c"&gt;################################################
&lt;/span&gt;

&lt;span class="nl"&gt;build-docker&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"run aws ecr get-login --region &lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;AWS_REGION&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; first"&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="p"&gt;$(&lt;/span&gt;PROJECT&lt;span class="p"&gt;)&lt;/span&gt; .
    &lt;span class="p"&gt;@&lt;/span&gt;docker tag &lt;span class="p"&gt;$(&lt;/span&gt;PROJECT&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;$(&lt;/span&gt;ECR&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;docker push &lt;span class="p"&gt;$(&lt;/span&gt;ECR&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;################ Terraform #####################
&lt;/span&gt;
&lt;span class="nl"&gt;init&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; .env
    &lt;span class="p"&gt;@&lt;/span&gt;terraform init &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"bucket=&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;S3_BUCKET&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"key=&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;PROJECT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/terraform.tfstate"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        ./Infrastructure/tf-fargate/

&lt;span class="nl"&gt;validate&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;terraform validate ./Infrastructure/tf-fargate/

&lt;span class="nl"&gt;plan&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;terraform plan &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"env=&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"project=&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;PROJECT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"description=&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;DESCRIPTION&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"aws_region=&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;AWS_REGION&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"artifacts_bucket=&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;S3_BUCKET&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        ./Infrastructure/tf-fargate/

&lt;span class="nl"&gt;apply&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;terraform apply &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"env=&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"project=&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;PROJECT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"description=&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;DESCRIPTION&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-compact-warnings&lt;/span&gt; ./Infrastructure/tf-fargate/

&lt;span class="nl"&gt;destroy&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"Are you sure that you want to destroy: '&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;PROJECT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;AWS_REGION&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;'? [yes/N]: "&lt;/span&gt; sure &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="p"&gt;$${&lt;/span&gt;sure:-N&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'yes'&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;terraform destroy ./Infrastructure/tf-fargate/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First run &lt;code&gt;make build-docker&lt;/code&gt; to build the image and push it to ECR, then run &lt;code&gt;make init&lt;/code&gt; to initialize Terraform, followed by &lt;code&gt;make validate &amp;amp;&amp;amp; make plan&lt;/code&gt; to verify that the infrastructure configuration has no problems. If everything looks good, run &lt;code&gt;make apply&lt;/code&gt; to provision the real infrastructure.&lt;/p&gt;

&lt;h4&gt;
  
  
  Workflow
&lt;/h4&gt;

&lt;p&gt;If there are changes to the system code, the following process can be repeated.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modify the code&lt;/li&gt;
&lt;li&gt;&lt;code&gt;make build-docker&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;make apply&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  API Service
&lt;/h3&gt;

&lt;p&gt;The API Service exposes a single endpoint for subscribing to SNS topics, which lets users subscribe to the topics published by the Core Service.&lt;/p&gt;

&lt;h4&gt;
  
  
  Serverless Framework
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
&lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;invest-alchemy&lt;/span&gt;
&lt;span class="na"&gt;org&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;madawei2699&lt;/span&gt;

&lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws&lt;/span&gt;
  &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::745121664662:role/invest-alchemy-lambda&lt;/span&gt;
  &lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nodejs12.x&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;

&lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;subscribe_sns&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sns.subscribe_sns&lt;/span&gt;
    &lt;span class="na"&gt;memorySize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;128&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Subscribe sns topic.&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;snsTopicArn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:sns:us-east-1:745121664662:trade-signal-topic&lt;/span&gt;
    &lt;span class="na"&gt;events&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;subscribe&lt;/span&gt;
          &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;post&lt;/span&gt;
          &lt;span class="na"&gt;cors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;integration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LAMBDA&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;subscribe_sns&lt;/code&gt; function defined here is implemented in JavaScript; the source code is available at &lt;a href="https://raw.githubusercontent.com/bmpi-dev/invest-alchemy/master/api/sns.js" rel="noopener noreferrer"&gt;sns.js&lt;/a&gt;, so we won't go over it here.&lt;/p&gt;

&lt;p&gt;Note that the code subscribes users to an SNS topic, so the function's role needs permission to subscribe to SNS. We specify the role in the Serverless configuration as &lt;code&gt;arn:aws:iam::745121664662:role/invest-alchemy-lambda&lt;/code&gt;, which has the &lt;code&gt;AWSLambdaBasicExecutionRole&lt;/code&gt; and &lt;code&gt;AmazonSNSFullAccess&lt;/code&gt; policies attached.&lt;/p&gt;

&lt;p&gt;Code executed in Lambda can import the AWS SDK directly without installing it, and there is no need to set AWS credentials: the function executes with the permissions of the role attached to it.&lt;/p&gt;
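&lt;p&gt;As an illustration of what such a handler does (this is a Python/boto3 analogue of the actual sns.js, shown for clarity rather than the deployed code; the &lt;code&gt;snsTopicArn&lt;/code&gt; environment variable matches the Serverless config above):&lt;/p&gt;

```python
import os

def build_subscribe_params(email, topic_arn):
    """Build the request parameters for an SNS email subscription."""
    return {"TopicArn": topic_arn, "Protocol": "email", "Endpoint": email}

def handler(event, context):
    # boto3 ships with the Lambda runtime, so no installation is needed;
    # credentials come implicitly from the execution role.
    import boto3
    sns = boto3.client("sns")
    sns.subscribe(**build_subscribe_params(event["email"], os.environ["snsTopicArn"]))
    return {"statusCode": 200}
```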

&lt;h3&gt;
  
  
  Web
&lt;/h3&gt;

&lt;p&gt;The web UI provides the pages where users sign up for email subscriptions. Vue.js and Nuxt.js are used in this module to build SEO-friendly server-side rendered pages. I mainly referred to the article &lt;a href="https://www.serverless.com/examples/aws-node-vue-nuxt-ssr" rel="noopener noreferrer"&gt;AWS | Vue Nuxt SSR&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web&lt;/span&gt;
&lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;invest-alchemy&lt;/span&gt;
&lt;span class="na"&gt;org&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;madawei2699&lt;/span&gt;

&lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws&lt;/span&gt;
  &lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nodejs12.x&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${self:custom.secrets.NODE_ENV}&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;
  &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="na"&gt;NODE_ENV&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${self:custom.secrets.NODE_ENV}&lt;/span&gt;

&lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;nuxt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;index.nuxt&lt;/span&gt;
    &lt;span class="na"&gt;memorySize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;256&lt;/span&gt;
    &lt;span class="na"&gt;events&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
          &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;any&lt;/span&gt;
          &lt;span class="na"&gt;cors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/{proxy+}&lt;/span&gt; 
          &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;any&lt;/span&gt;
          &lt;span class="na"&gt;cors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;serverless-apigw-binary&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;serverless-domain-manager&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;serverless-offline&lt;/span&gt;

&lt;span class="na"&gt;custom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;secrets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${file(secrets.json)}&lt;/span&gt;
  &lt;span class="na"&gt;apigwBinary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*/*'&lt;/span&gt;
  &lt;span class="na"&gt;customDomain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;domainName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${self:custom.secrets.DOMAIN}&lt;/span&gt;
    &lt;span class="na"&gt;basePath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;
    &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${self:custom.secrets.NODE_ENV}&lt;/span&gt;
    &lt;span class="na"&gt;createRoute53Record&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note here that the gateway integrates a custom domain name and TLS certificate: first configure the domain name in AWS Route 53, then request a TLS certificate in AWS Certificate Manager. Please refer to the article mentioned above for the detailed process.&lt;/p&gt;

&lt;p&gt;Once the certificate has been issued, you can run &lt;code&gt;sls create_domain&lt;/code&gt; to create the DNS records for the domain.&lt;/p&gt;

&lt;p&gt;Finally, run &lt;code&gt;npm run deploy&lt;/code&gt; to deploy to AWS. If you want to debug locally, you can run &lt;code&gt;npm run start-server&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Debugging / Logging
&lt;/h3&gt;

&lt;p&gt;Select the CloudWatch service in the AWS console to view the log groups and locate issues by analyzing the associated logs. If no logs are generated, you can check whether the scheduled task events fired by viewing the logs in &lt;code&gt;CloudTrail&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Issues That Need Attention
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Serverless Costing
&lt;/h3&gt;

&lt;p&gt;One of the major advantages of Serverless is that it is pay-as-you-go, which is cheaper than buying a separate VPS for timed tasks or low traffic sites. It also has extremely high availability and elastic scalability, which is not possible with a single VPS.&lt;/p&gt;

&lt;p&gt;To analyze the cost, you can use the AWS Billing service. The billed services we use (excluding services whose charges are negligible for this system, such as S3, VPC, and CloudFront) include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Gateway: $1 per 1 million requests.&lt;/li&gt;
&lt;li&gt;ECS Spot Fargate: $0.01289974 per vCPU/hour and $0.00141649 per GB of memory/hour.&lt;/li&gt;
&lt;li&gt;Lambda: $0.0000002083 per 128M RAM/100 ms.&lt;/li&gt;
&lt;li&gt;SNS: $2 per 100,000 email pushes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of the above are based on the US East region.&lt;/p&gt;

&lt;p&gt;We know from the CloudWatch logs that each request to the API and Web services takes about 1,200 ms of Lambda execution time, and the Core service runs for about 3 minutes per day. Assuming 200,000 requests per month, the Lambda cost is 200000 * 1200 / 100 * 0.0000002083 ≈ $0.5; the ECS cost is 3 * 30 / 60 * (0.01289974 + 0.00141649) ≈ $0.02; the API Gateway cost is 200000 / 1000000 * 1 = $0.2; and SNS, assuming 1,000 email subscribers, costs 1000 * 30 / 100000 * 2 = $0.6.&lt;/p&gt;

&lt;p&gt;Then the monthly cost is 0.5 + 0.02 + 0.2 + 0.6 = $1.32. Take a &lt;code&gt;vultr VPS&lt;/code&gt; for example: the cheapest plan with 1 vCPU + 512 MB is $2.50 a month, and that doesn't include the cost of sending emails.&lt;/p&gt;
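&lt;p&gt;The arithmetic can be reproduced with a short script (prices as listed above; the request and subscriber counts are assumptions, the ECS line assumes a 1 vCPU + 1 GB task, and the SNS line applies the $2 per 100,000 email rate):&lt;/p&gt;

```python
requests = 200_000           # API/Web requests per month (assumption)
lambda_ms_per_request = 1_200
lambda_price = 0.0000002083  # $ per 128 MB / 100 ms

lambda_cost = requests * lambda_ms_per_request / 100 * lambda_price
ecs_cost = 3 * 30 / 60 * (0.01289974 + 0.00141649)  # 3 min/day, 1 vCPU + 1 GB
apigw_cost = requests / 1_000_000 * 1                # $1 per million requests
sns_cost = 1_000 * 30 / 100_000 * 2                  # 1,000 subscribers, $2 per 100k emails

total = lambda_cost + ecs_cost + apigw_cost + sns_cost
print(round(total, 2))  # ≈ 1.32
```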

&lt;h3&gt;
  
  
  Cold Starts
&lt;/h3&gt;

&lt;p&gt;Background scheduled tasks are not sensitive to start-up time. If cold-start latency matters to you, functions can be kept warm by periodic invocations, but this also leads to higher costs. For further information, please refer to the article &lt;a href="https://medium.com/hackernoon/serverless-cold-starts-using-them-to-your-advantage-3dfdf9a0bc66" rel="noopener noreferrer"&gt;Solving Serverless Cold Starts with Advanced Tooling&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  VPC Price
&lt;/h3&gt;

&lt;p&gt;Most VPC features are free of charge; the exceptions are VPN, NAT Gateway, and Endpoints (Gateway endpoints are free, Interface endpoints are charged). Interface endpoints are quite expensive, so stop them whenever they are not needed; note that you also have to remove the related AWS services to stop being charged.&lt;/p&gt;

&lt;p&gt;Note that Endpoints are charged per availability zone, so an Endpoint created across 6 availability zones costs 6 times as much.&lt;/p&gt;
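&lt;p&gt;In other words, the monthly bill scales linearly with the zone count. A sketch (the hourly rate here is hypothetical; check current AWS pricing for your region):&lt;/p&gt;

```python
HOURS_PER_MONTH = 730
PRICE_PER_AZ_HOUR = 0.01  # hypothetical $/AZ-hour, not an official figure

def endpoint_monthly_cost(az_count):
    """Interface endpoint cost grows linearly with availability zones."""
    return PRICE_PER_AZ_HOUR * az_count * HOURS_PER_MONTH

print(endpoint_monthly_cost(6) / endpoint_monthly_cost(1))  # 6.0
```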

&lt;h2&gt;
  
  
  Reading Materials
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://serverless-stack.com/" rel="noopener noreferrer"&gt;Serverless Stack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://wiki.bmpi.dev/#Serverless%E5%A5%BD%E6%96%87:Serverless%E5%A5%BD%E6%96%87" rel="noopener noreferrer"&gt;Serverless Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;em&gt;References&lt;/em&gt;
&lt;/h4&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;&lt;a href="https://rubygarage.org/blog/monolith-soa-microservices-serverless" rel="noopener noreferrer"&gt;https://rubygarage.org/blog/monolith-soa-microservices-serverless&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;&lt;a href="https://aws.amazon.com/serverless/" rel="noopener noreferrer"&gt;https://aws.amazon.com/serverless/&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;&lt;a href="https://serverless-stack.com/chapters/zh/what-is-serverless.html" rel="noopener noreferrer"&gt;https://serverless-stack.com/chapters/zh/what-is-serverless.html&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>iac</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
