<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rob Pearson</title>
    <description>The latest articles on DEV Community by Rob Pearson (@robpearson).</description>
    <link>https://dev.to/robpearson</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F273925%2F7bbc68e8-16ee-4ad3-aed0-a667772d1c2d.jpg</url>
      <title>DEV Community: Rob Pearson</title>
      <link>https://dev.to/robpearson</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/robpearson"/>
    <language>en</language>
    <item>
      <title>Lessons learned porting Octopus Server to .NET Core 3.1</title>
      <dc:creator>Rob Pearson</dc:creator>
      <pubDate>Fri, 20 Mar 2020 00:04:18 +0000</pubDate>
      <link>https://dev.to/robpearson/lessons-learned-porting-octopus-server-to-net-core-3-1-4hm1</link>
      <guid>https://dev.to/robpearson/lessons-learned-porting-octopus-server-to-net-core-3-1-4hm1</guid>
      <description>&lt;p&gt;With the release of &lt;a href="https://octopus.com/blog/octopus-release-2020-1"&gt;Octopus 2020.1&lt;/a&gt;, Octopus Server now runs on .NET Core 3.1, which means it can be installed on Linux, Docker Containers, and Kubernetes. This was a significant effort, we already shared our &lt;a href="https://octopus.com/blog/octopus-cloud-1.0-reflections"&gt;reflections on the launch of Octopus Cloud 1.0&lt;/a&gt; and &lt;a href="https://octopus.com/blog/octopus-cloud-v2-why-kubernetes"&gt;why we chose Kubernetes, Linux, and .NET Core for Octopus Cloud 2.0&lt;/a&gt;, in this post, we share the benefits of the change and our top three lessons learned.&lt;/p&gt;

&lt;h2&gt;Benefits&lt;/h2&gt;

&lt;p&gt;Before we share the lessons learned, here are the benefits of porting Octopus Server to .NET Core:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modern development environment, framework, and tooling. &lt;/li&gt;
&lt;li&gt;Cross-platform support for Windows and Linux.&lt;/li&gt;
&lt;li&gt;Access to the mature ecosystem for running Linux containers in Kubernetes.&lt;/li&gt;
&lt;li&gt;Choice and flexibility: Our customers can choose to run Octopus on Windows or Linux, or they can use Octopus Cloud.&lt;/li&gt;
&lt;li&gt;Octopus Cloud has gained improved performance and reduced operating costs. See the links above for concrete numbers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This also makes our development environment more productive as our engineering teams now have the choice to develop on Windows or Linux. &lt;/p&gt;

&lt;h2&gt;Top three lessons learned&lt;/h2&gt;

&lt;p&gt;We learned a lot going through this process, but three lessons stand out.&lt;/p&gt;

&lt;h3&gt;1. Know and plan for differences between Windows and Linux&lt;/h3&gt;

&lt;p&gt;There are platform-specific differences in the implementations of .NET Core on Windows and Linux. Most of the issues were small and had easy workarounds, but we did find a few significant problems that are worth sharing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuration settings and the Windows registry&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To support both Windows and Linux platforms, we had to remove any Windows-specific code. Octopus Server started as a Windows product, and following platform conventions, it stored some configuration settings in the Windows registry, which is a problem on Linux.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: &lt;/p&gt;

&lt;p&gt;We shifted everything stored in the registry to the file system or the Octopus database. This was a simple task, but it took time and testing to get it right.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database performance problems&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The most significant problem we encountered was abysmal database performance due to differences in how database queries are handled on Windows and Linux. Octopus uses Microsoft SQL Server as its data store, and we discovered a &lt;a href="https://github.com/dotnet/SqlClient/issues/422"&gt;significant problem&lt;/a&gt; in the .NET Core SQL client library on Linux: with the &lt;code&gt;MultipleActiveResultSets&lt;/code&gt; setting set to &lt;code&gt;True&lt;/code&gt;, we got exceptions and database timeouts. The GitHub issue linked above shares the full details and a simple code sample to reproduce the problem. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Our short-term solution was to use the &lt;code&gt;MultipleActiveResultSets&lt;/code&gt; setting very sparingly. In general, we open two connections to our database: one with the setting enabled and one with it disabled. We primarily use the disabled connection and only fall back to the enabled one where it's required.&lt;/p&gt;
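&lt;p&gt;As a rough sketch of the two-connection approach (the server name and credentials below are placeholders, not Octopus's actual configuration):&lt;/p&gt;

```shell
#!/bin/bash
# Sketch: build two SQL Server connection strings that differ only in the
# MultipleActiveResultSets (MARS) flag. All values here are illustrative.
BASE="Server=octopus-db;Database=Octopus;User Id=octopus;Password=example"

# Default connection: MARS disabled, used for almost all queries.
CONN_DEFAULT="${BASE};MultipleActiveResultSets=False"

# Second connection: MARS enabled, used only where it is strictly required.
CONN_MARS="${BASE};MultipleActiveResultSets=True"

echo "${CONN_DEFAULT}"
echo "${CONN_MARS}"
```

&lt;p&gt;The point is simply that the MARS-enabled connection is the exception, not the default.&lt;/p&gt;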

&lt;p&gt;We have been working with Microsoft to help provide information to resolve the issue, and we hope to see a proper fix in the future. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authentication providers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We also needed to host the Octopus Server web host differently on each platform. We use &lt;code&gt;HttpSys&lt;/code&gt; on Windows and &lt;em&gt;Kestrel&lt;/em&gt; on Linux, and this made authentication challenging. Octopus needs to support multiple authentication schemes, including cookie-based authentication, the ability for users to log in and out, and multiple authentication providers enabled at once.&lt;/p&gt;

&lt;p&gt;The core issue we hit was that &lt;code&gt;HttpSys&lt;/code&gt; supports integrated authentication (i.e., Windows authentication), but it's a binary on/off setting for every endpoint in the host. This is inflexible, and it's a change from our non-.NET Core codebase: users could log in automatically, but they could never log out. &lt;/p&gt;

&lt;p&gt;Note: We don't use Kestrel on Windows because it doesn't support virtual directories, and we have customers who share the same port with other services. To maintain backwards compatibility, we decided to use &lt;code&gt;HttpSys&lt;/code&gt; on Windows only.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: &lt;/p&gt;

&lt;p&gt;We considered several options, but after reading through this &lt;a href="https://github.com/dotnet/aspnetcore/issues/5888"&gt;ASP.NET Core issue&lt;/a&gt;, we decided to follow the advice there and use two hosts: a standard web host, plus a second host that looks and behaves like a virtual directory off the main API site's root (i.e., &lt;code&gt;/integrate-challenge&lt;/code&gt;), which keeps it consistent with its location in earlier versions of Octopus Server. The second host has only that one route, and it initiates the login challenge with a 401 response when the user isn't already authenticated.&lt;/p&gt;

&lt;h3&gt;2. Sharpen your debugging skills for Linux and Docker&lt;/h3&gt;

&lt;p&gt;As we progressed through the .NET Core port, we also learned how to code, test, and debug problems with Windows, Windows Subsystem for Linux (WSL), Linux, and Docker. Historically, our team all developed on Windows, but this has evolved into individuals coding on Windows, Linux, and macOS, and as a result, we've learned several lessons:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Running as non-root or non-admin&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We found it much easier to run all builds and tests of Octopus Server on Linux as a non-root user to limit permission-based errors and problems. That said, we sometimes need to run commands as root using &lt;code&gt;sudo&lt;/code&gt; and then fix up the ownership of any files created by those commands with &lt;code&gt;chown&lt;/code&gt;, e.g., &lt;code&gt;sudo chown -R $user:$user ~/Octopus/MyInstance&lt;/code&gt;. This is a bit quick and dirty, but it does the job. &lt;/p&gt;

&lt;p&gt;We are planning to change this in the future so that Octopus runs as a user that is a member of a dedicated group, and during installation we configure the group owner of &lt;code&gt;/etc/octopus&lt;/code&gt; to be that group. &lt;/p&gt;
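&lt;p&gt;A minimal sketch of that planned layout, demonstrated against a scratch directory rather than the real &lt;code&gt;/etc/octopus&lt;/code&gt; so it can run without root (the group name and paths are illustrative):&lt;/p&gt;

```shell
#!/bin/bash
# Sketch of the planned permissions model using a temp directory. In a real
# install, CONF_DIR would be /etc/octopus and the group would be a dedicated
# service group (e.g., "octopus").
set -e
DEMO_ROOT="$(mktemp -d)"
CONF_DIR="${DEMO_ROOT}/etc/octopus"
mkdir -p "${CONF_DIR}"

# Real installer equivalent (requires root):
#   groupadd octopus && chgrp -R octopus /etc/octopus
# The setgid bit (the leading 2) makes new files inherit the directory's group.
chmod 2775 "${CONF_DIR}"

touch "${CONF_DIR}/octopus.config"
ls -ld "${CONF_DIR}"
```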

&lt;p&gt;&lt;strong&gt;Certificate management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our end-to-end (E2E) test suite runs a range of tests against the Octopus Server listening over HTTPS (i.e., port 443). This requires us to convert some self-signed certificates and import them into the local certificate store under &lt;code&gt;/etc/&lt;/code&gt; on a Linux box. To solve this problem, we wrote the following script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Setup certificates"&lt;/span&gt;
&lt;span class="nv"&gt;CERTS_PATH_DEST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/usr/local/share/ca-certificates"&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CERTS_PATH_DEST&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Creating &lt;/span&gt;&lt;span class="nv"&gt;$CERTS_PATH_DEST&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CERTS_PATH_DEST&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;fi
&lt;/span&gt;&lt;span class="nv"&gt;MY_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;dirname&lt;/span&gt; &lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$0&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;CERT_PFX_FILES&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;MY_PATH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/../Octopus.E2ETests/Universe/&lt;span class="k"&gt;*&lt;/span&gt;.pfx&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;CERT_PFX &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CERT_PFX_FILES&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;do
    &lt;/span&gt;&lt;span class="nv"&gt;FILE_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CERT_PFX&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="nv"&gt;CERT_CRT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CERTS_PATH_DEST&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;FILE_NAME&lt;/span&gt;&lt;span class="p"&gt;%.*&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.crt"&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CERT_CRT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Converting '&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CERT_PFX&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;' to '&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CERT_CRT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;'"&lt;/span&gt;
        openssl pkcs12 &lt;span class="nt"&gt;-in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CERT_PFX&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-clcerts&lt;/span&gt; &lt;span class="nt"&gt;-nokeys&lt;/span&gt; &lt;span class="nt"&gt;-out&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CERT_CRT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-passin&lt;/span&gt; pass:password
    &lt;span class="k"&gt;fi
done
&lt;/span&gt;update-ca-certificates
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Connecting to the SQL Server database&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Octopus uses Microsoft SQL Server as its database, and teams generally connect to it via integrated Windows authentication on Windows Servers. That doesn't work on Linux, so our solution was to switch to username- and password-based authentication. &lt;/p&gt;

&lt;p&gt;Further, we found our suite of end-to-end (E2E) tests runs much faster and more reliably with database connection pooling turned off. We haven't gotten to the bottom of this yet, but it's likely a platform-specific problem related to the database performance issue mentioned above.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debug Octopus Server on Linux with Visual Studio Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our team writes code with a variety of tools, including the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://visualstudio.microsoft.com/vs/"&gt;Visual Studio&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.jetbrains.com/rider/"&gt;JetBrains Rider&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://code.visualstudio.com"&gt;Visual Studio Code&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Currently, the most popular is &lt;a href="https://code.visualstudio.com/"&gt;Visual Studio Code&lt;/a&gt; with the &lt;a href="https://aka.ms/vscode-remote/download/extension"&gt;Remote Development extension&lt;/a&gt;. This extension is still in preview, but we find it works very well.&lt;/p&gt;

&lt;p&gt;With Visual Studio Code and the Remote Development extension, we can edit code and run, test, and debug applications in Linux (or a Docker container). Simply point VS Code at the folder that contains the Octopus Server code, and it's pretty much just F5 from there. It's that simple!&lt;/p&gt;
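&lt;p&gt;Under the hood, "F5" just needs a standard &lt;code&gt;coreclr&lt;/code&gt; launch configuration. A minimal &lt;code&gt;.vscode/launch.json&lt;/code&gt; sketch looks something like the following (the program path and task name are hypothetical, not the actual Octopus Server layout):&lt;/p&gt;

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": ".NET Core Launch (sketch)",
      "type": "coreclr",
      "request": "launch",
      "preLaunchTask": "build",
      "program": "${workspaceFolder}/bin/Debug/netcoreapp3.1/OctopusServer.dll",
      "cwd": "${workspaceFolder}"
    }
  ]
}
```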

&lt;h3&gt;3. Simplify with self-contained packages&lt;/h3&gt;

&lt;p&gt;Porting Octopus to .NET Core has allowed us to ship &lt;a href="https://www.hanselman.com/blog/MakingATinyNETCore30EntirelySelfcontainedSingleExecutable.aspx"&gt;self-contained packages&lt;/a&gt;, which brings multiple benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fewer dependencies&lt;/strong&gt;: Shipping a single self-contained executable means we no longer require .NET Core to be installed on the Octopus server. The result is reduced installation requirements that make Octopus easier to install. This is a big win.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved supportability&lt;/strong&gt;: In a nutshell, fewer dependencies make Octopus easier to install and support. There are fewer components and fewer things that can be accidentally changed. Shipping Docker container images for &lt;a href="https://hub.docker.com/r/octopusdeploy/octopusdeploy"&gt;Windows&lt;/a&gt; and Linux (coming soon) goes further still, as even more of the dependencies are built into the images. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modern software and tooling&lt;/strong&gt;: Using modern tools and frameworks enables our team to continue to innovate and ship software quickly with useful features for our customers. &lt;/li&gt;
&lt;/ul&gt;
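&lt;p&gt;For context, a self-contained single-file publish in .NET Core 3.x boils down to a few MSBuild properties; this is a generic sketch, not Octopus's actual project file:&lt;/p&gt;

```xml
&lt;!-- Sketch: properties for a self-contained, single-file publish
     (equivalent to: dotnet publish -r linux-x64 -p:PublishSingleFile=true) --&gt;
&lt;PropertyGroup&gt;
  &lt;RuntimeIdentifier&gt;linux-x64&lt;/RuntimeIdentifier&gt;
  &lt;SelfContained&gt;true&lt;/SelfContained&gt;
  &lt;PublishSingleFile&gt;true&lt;/PublishSingleFile&gt;
&lt;/PropertyGroup&gt;
```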

&lt;p&gt;Unfortunately, this also has some tradeoffs as moving to .NET Core 3.1 required us to &lt;a href="https://octopus.com/blog/raising-minimum-requirements-for-octopus-server"&gt;drop support&lt;/a&gt; for older operating systems, including Windows Server 2008-2012 and some Linux distros. Supporting older servers and browsers drains our time and attention, making it harder for us to innovate and move the Octopus ecosystem forward. &lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;We ported Octopus Server to .NET Core 3.1 so that the server can run on Linux, in Docker containers, and on Kubernetes. We made this change to reduce costs and increase the performance of our Octopus Cloud SaaS product, and it has been a great success. &lt;/p&gt;

&lt;p&gt;It wasn't a simple journey, but we learned a lot along the way:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Know and plan for differences between Windows and Linux&lt;/li&gt;
&lt;li&gt;Sharpen your debugging skills for Linux and Docker&lt;/li&gt;
&lt;li&gt;Simplify with self-contained packages&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>dotnet</category>
      <category>linux</category>
      <category>octopusdeploy</category>
    </item>
    <item>
      <title>MVPs and $100k AWS Bills: Reflections on the launch of Octopus Cloud 1.0</title>
      <dc:creator>Rob Pearson</dc:creator>
      <pubDate>Mon, 25 Nov 2019 11:45:34 +0000</pubDate>
      <link>https://dev.to/robpearson/mvps-and-100k-aws-bills-reflections-on-the-launch-of-octopus-cloud-1-0-125l</link>
      <guid>https://dev.to/robpearson/mvps-and-100k-aws-bills-reflections-on-the-launch-of-octopus-cloud-1-0-125l</guid>
      <description>&lt;p&gt;We (&lt;a href="https://octopus.com/" rel="noopener noreferrer"&gt;Octopus Deploy&lt;/a&gt;) are publishing a series about our engineering journey with Octopus Cloud. It’s the story of our v1 launch of Octopus Cloud on AWS, our $100K/month AWS bills, MVP’s and testing customer demand, spending 6 months of engineering effort and then running the service at a loss, spending another 9 months rebuilding it from the ground-up, and of all the considerations we made when rebuilding Octopus Cloud v2, including switching from AWS to Azure, going all-in on Kubernetes, and more.&lt;/p&gt;

&lt;p&gt;In this first post in the series, we’ll look at the design choices we made in v1, why it cost so much, and why we decided to start over.&lt;/p&gt;




&lt;p&gt;About a year ago &lt;a href="https://octopus.com/blog/announcing-octopus-cloud" rel="noopener noreferrer"&gt;we launched Octopus Cloud&lt;/a&gt;, a SaaS version of Octopus, as an experiment to see whether it would deliver significant value to our customers and simplify their deployments. We wanted to empower developers to focus on their deployment needs and leave managing the infrastructure to us, but we had no idea how difficult it would be to implement or how much time and money it would cost to get up and running.&lt;/p&gt;

&lt;p&gt;Overall, we think it has been a huge success; enough for us to invest the last year rebuilding almost the entire platform from the ground up. I was a part of the team that shipped the current version of Octopus Cloud, and I wanted to take some time to celebrate a few of our wins and reflect on the lessons we learned that have shaped our redesign.&lt;/p&gt;
&lt;h2&gt;Will anybody use it?&lt;/h2&gt;

&lt;p&gt;Going into this experiment, we knew there was interest in a cloud solution, but there were a lot of things we didn’t know: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How many customers will actually use it?&lt;/li&gt;
&lt;li&gt;What is it all going to cost?&lt;/li&gt;
&lt;li&gt;What should we charge for it? Is it going to cover the infrastructure costs?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We could guess most of these, but knowing the answers was impossible. Engineering time is very expensive too, and it’s time we could spend adding features to our self-hosted product. We didn’t want to spend years designing a perfectly-optimized cloud-native product if there was no demand for it. &lt;/p&gt;

&lt;p&gt;We decided to build an &lt;a href="https://en.wikipedia.org/wiki/Minimum_viable_product" rel="noopener noreferrer"&gt;MVP&lt;/a&gt; based on our best estimates and test the market that way. The goal was to launch something in 6 months and test whether the demand was there; if it wasn’t, we’d only have wasted 6 months. We chose to optimize for getting to market quickly rather than worrying about how much it would cost. &lt;/p&gt;

&lt;p&gt;Well, the demand was there. In the first few days, we had over 500 new cloud trials spin up. As customers came to our website and decided whether to trial Octopus self-hosted or Octopus Cloud, roughly half chose to trial Octopus in the cloud. &lt;/p&gt;
&lt;h2&gt;V1 architecture&lt;/h2&gt;

&lt;p&gt;To bring Octopus Cloud to market quickly, we did the simplest thing possible; we took our self-hosted Octopus Server product and bundled it into an EC2 instance for each customer that signed up. We had to make changes to the product, but mostly around permissions. &lt;/p&gt;

&lt;p&gt;We actually had an internal alpha of Octopus Cloud v1 ready within a month or two; I remember the team doing bug bashes to test our security. What took longer was all the steps required to make something we were happy for customers to use: hardening security, pentesting, planning for recovery, etc.&lt;/p&gt;

&lt;p&gt;To ensure there was no way for one user’s data to mingle with another, each cloud instance had its own dedicated VM, database, and a large number of security configurations to prevent any funny business. Here’s a diagram of what it all looked like. Note that we actually use Octopus Deploy to provision and deploy each Octopus Cloud v1 customer:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fbl6p4rdtgwjoytjbtc1n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fbl6p4rdtgwjoytjbtc1n.png" alt="Octopus Cloud 1.0 architecture diagram"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Service limits&lt;/h2&gt;

&lt;p&gt;You know that feeling when you're driving, and you seem to hit every red traffic light? That’s kinda what launching Octopus Cloud felt like; only the red lights were AWS service limits! We quickly learned that &lt;em&gt;everything&lt;/em&gt; in AWS has some kind of service limit, and we hit all of them. Customers would sign up, we’d hit a limit, we’d ask Amazon to increase it, we’d onboard more customers, and we’d hit another limit. Every time we thought we were in the clear, we’d hit another service limit we didn’t know about. &lt;/p&gt;

&lt;p&gt;This caused a few issues as we scaled, and at one point, we had to pause new signups while we tried to provision more headroom.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1015048915605831680-609" src="https://platform.twitter.com/embed/Tweet.html?id=1015048915605831680"&gt;
&lt;/iframe&gt;&lt;/p&gt;

&lt;h2&gt;Cloud stuff can be really expensive&lt;/h2&gt;

&lt;p&gt;An EC2 instance for every customer adds up, and as our databases were backed by Amazon RDS, we were limited to 30 databases per RDS instance. Add storage, network, etc. and we were spending over $100 per month to keep a single Octopus Cloud instance online. &lt;/p&gt;

&lt;p&gt;Octopus Cloud customers could start a free 30-day trial, which meant that those hundreds of trial signups per month, each of which cost us $100 to host, quickly added up. &lt;/p&gt;
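&lt;p&gt;The back-of-the-envelope math makes the problem obvious (using the rough figures above; the exact trial count varied month to month):&lt;/p&gt;

```shell
#!/bin/bash
# Rough illustration using the figures from the post: ~500 trial instances,
# each costing roughly $100/month to host.
TRIALS=500
COST_PER_INSTANCE=100
MONTHLY_TRIAL_COST=$((TRIALS * COST_PER_INSTANCE))
echo "Hosting ${TRIALS} trial instances costs roughly \$${MONTHLY_TRIAL_COST}/month"
```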

&lt;p&gt;We also didn’t have our pricing quite right. We initially launched Octopus Cloud with a $10/month starting tier, with a different pricing model from the one we currently use. Unfortunately, this was one of the most painful lessons we learned, because the deficit between what we were charging and what we were spending was magnified by the sheer number of people using Octopus Cloud; continued growth would only amplify the problem.&lt;/p&gt;

&lt;p&gt;Two months in, we started having serious conversations about our $100k USD monthly AWS spend, and the fact that we had very little revenue to offset that expense:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fsmhcsi6ktyhb38bps1wb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fsmhcsi6ktyhb38bps1wb.png" alt="Octopus Cloud AWS is $100,000 plus per month"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We should explain that &lt;a href="https://octopus.com/company" rel="noopener noreferrer"&gt;Octopus Deploy as a company&lt;/a&gt; is not publicly listed or VC funded. We’ve been proudly bootstrapped and profitable since Octopus 1.0, and we’ve always run the business conservatively and sustainably. We suddenly found ourselves with a new business that was costing us money at an alarming rate. &lt;/p&gt;

&lt;p&gt;We decided to take the lessons we were learning and start a huge body of work we called &lt;strong&gt;Cloud v2&lt;/strong&gt;; a reimagining of Octopus Cloud, built to scale sustainably. &lt;/p&gt;

&lt;p&gt;Even when we were building v1, the team knew it wasn’t the ideal architecture. Before v1 even launched, there were plenty of conversations in our Slack about whether we should port Octopus to Linux and run it on Kubernetes, run it on Windows within Kubernetes, or use HashiCorp’s Nomad. And all of this was back in early 2018, when everything was churning and not as mature as it is today. So the unit cost wasn’t really a surprise. &lt;/p&gt;

&lt;p&gt;What was surprising was the demand. If only a few customers had signed up each month, we could have easily worn the costs (and truth be told, we still can; Octopus self-hosted has great margins!). But with so many customers signing up, it became much more urgent. $100K+ per month is $1.2M+ over a year, which is plenty to justify spending engineering effort on bringing it down. &lt;/p&gt;

&lt;h2&gt;Starting over&lt;/h2&gt;

&lt;p&gt;We explored all kinds of options to bring down our costs. The initial plan was to iterate on the v1 architecture and look for savings, and we did make some progress there. Eventually, though, we concluded that the best way to dramatically reduce our costs (without sacrificing customer performance) involved some substantial architecture changes and, essentially, starting from scratch. &lt;/p&gt;

&lt;p&gt;In the rest of this series, we’ll go into each of the decisions we made when re-engineering Octopus Cloud for v2. These include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switching from AWS to Azure.&lt;/li&gt;
&lt;li&gt;Porting Octopus Server to Linux.&lt;/li&gt;
&lt;li&gt;Running Octopus in containers and using Kubernetes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Octopus Cloud v2, we’re also backing ourselves to learn and use a lot of technologies at the leading edge of hosting and orchestration. &lt;a href="https://github.com/OctopusDeploy/terraform-provider-octopusdeploy" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;, &lt;a href="https://hub.docker.com/r/octopusdeploy/octopusdeploy" rel="noopener noreferrer"&gt;containers&lt;/a&gt;, &lt;a href="https://docs.microsoft.com/en-us/azure/aks/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, and &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/" rel="noopener noreferrer"&gt;Azure Functions&lt;/a&gt; are just a few of the spaces we are currently working in. This approach brings its own challenges, but it will also feed into the next generation of Octopus tooling we can build with the expertise we acquire along the way. &lt;a href="https://en.wikipedia.org/wiki/Eating_your_own_dog_food" rel="noopener noreferrer"&gt;Drinking our own champagne&lt;/a&gt; has already improved a ton of the functionality within Octopus as we’ve become one of our own biggest customers. &lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Deciding to rebuild something from scratch is brutal. It can feel like your earlier attempt was wasted time and effort, and that work is being thrown out. It’s also very easy to go into an “MVP” project knowingly, but then become attached to it and assume it will be the final architecture. We needed to recognize that a second step was only possible because the first step was taken, and every lesson learned along the way was essential input. &lt;/p&gt;

&lt;p&gt;Building Octopus Cloud v2 has been very different from v1. Before, we were guessing and flying blind; this time, we have real data we can analyze to answer our questions: we know what users will spend, we know what things will cost, we know what sort of resource consumption to expect, and between those things, we know how to make a platform that will truly scale.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://octopus.com/blog/octopus-cloud-1.0-reflections" rel="noopener noreferrer"&gt;octopus.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>devops</category>
      <category>saas</category>
    </item>
  </channel>
</rss>
