<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Adam Szecówka</title>
    <description>The latest articles on DEV Community by Adam Szecówka (@aszecowka).</description>
    <link>https://dev.to/aszecowka</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F325152%2Fd38b6427-c4f7-4189-abd5-7156b44dd34a.jpg</url>
      <title>DEV Community: Adam Szecówka</title>
      <link>https://dev.to/aszecowka</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aszecowka"/>
    <language>en</language>
    <item>
      <title>My takeaways from Shape Up</title>
      <dc:creator>Adam Szecówka</dc:creator>
      <pubDate>Sun, 07 Feb 2021 18:05:11 +0000</pubDate>
      <link>https://dev.to/aszecowka/my-takeaways-from-shape-up-55a8</link>
      <guid>https://dev.to/aszecowka/my-takeaways-from-shape-up-55a8</guid>
      <description>&lt;p&gt;Ryan Singer, head of Strategy at Basecamp, in his book &lt;em&gt;Shape Up&lt;/em&gt;, describes an interesting approach for the development process used by Basecamp that focuses on shipping work that matters.&lt;/p&gt;

&lt;p&gt;Below you can find my favorite takeaways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use appetite, not estimates. Appetite defines how much time you want to invest in a given idea. It prevents situations where you spend plenty of time on a feature that improves your product only a little.&lt;/li&gt;
&lt;li&gt;The default response to any idea should be "Interesting, maybe someday". Ideas are cheap.&lt;/li&gt;
&lt;li&gt;Backlogs are a big weight we don't need to carry. Reviewing items in a huge backlog is a waste of time, and a huge backlog demotivates a team that feels constantly behind schedule.&lt;/li&gt;
&lt;li&gt;Define cool-down periods in your project. If you are doing Scrum, don't organize the Sprint Review and Sprint Planning back-to-back.&lt;/li&gt;
&lt;li&gt;No automatic extension of unfinished work. If something took longer than the appetite you defined, maybe it is not worth doing. Knowing that their work can be completely abandoned is also an additional stimulus for the team to decide on priorities: what is a must-have and what is only nice-to-have.&lt;/li&gt;
&lt;li&gt;Assign projects, not tasks. Talented people don’t like being treated like “code monkeys” or ticket takers.&lt;/li&gt;
&lt;li&gt;What to build first: something core, small, novel.
&lt;/li&gt;
&lt;li&gt;Use the hill diagram to visualize your progress. There's the uphill phase of figuring out what our approach is and what we're going to do. Then, once we can see all the work involved, there's the downhill phase of execution. Use gut feeling instead of concrete numbers like the number of finished tasks.&lt;/li&gt;
&lt;li&gt;When is it good enough to finish a project? Compare to the baseline (the initial state), not to the ideal solution, to avoid frustration.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The &lt;strong&gt;Shape Up&lt;/strong&gt; book is available for free at &lt;a href="https://basecamp.com/shapeup"&gt;https://basecamp.com/shapeup&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How we approached integration testing in Kubernetes, and why we stopped using Helm tests</title>
      <dc:creator>Adam Szecówka</dc:creator>
      <pubDate>Mon, 27 Jan 2020 16:57:21 +0000</pubDate>
      <link>https://dev.to/aszecowka/how-we-approached-integration-testing-in-kubernetes-and-why-we-stopped-using-helm-tests-4bn1</link>
      <guid>https://dev.to/aszecowka/how-we-approached-integration-testing-in-kubernetes-and-why-we-stopped-using-helm-tests-4bn1</guid>
      <description>&lt;p&gt;In Kubernetes, you often come across projects that are true mosaics of cloud-native applications. We don't meet too many stand-alone services in such a microservice architecture, as most of them have dependencies that aren't immediately obvious. Integrating such pieces and checking if they all work together may be a daunting challenge.&lt;/p&gt;

&lt;p&gt;Kubernetes projects usually consist of a number of Helm charts that you can roughly divide into these categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Charts of well-known open-source products, such as Istio or Jaeger, that provide service communication, tracing, and many other features you adopt following the "Don't reinvent the wheel" rule&lt;/li&gt;
&lt;li&gt;Charts with in-house components, such as Kubernetes controllers and microservices exposing REST or GraphQL APIs, that you develop to fill in the gaps not addressed yet by the external projects but required for your application to work&lt;/li&gt;
&lt;li&gt;Charts with full-blown solutions that can perfectly work on their own&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Such a mixture creates a web of dependencies. For example, imagine a situation in which all your components depend on the properly configured Istio. Upgrading it would be a nightmare without a set of automated integration tests. Such tests create a Kubernetes cluster, deploy your suite of services on it, and run integration tests. This allows you to check resource dependencies, provide consistent deployment order, and ensure all pieces of your puzzle fit together at all times.&lt;/p&gt;

&lt;p&gt;When thinking about a proper integration testing tool for your project, you also want an all-purpose solution that meets the needs of both developers and users. You want to deploy integration tests on any Kubernetes cluster: locally, to allow developers or system administrators to validate their work easily and immediately, and on clusters provisioned on cloud providers, to ensure users can run your application safely in their production environment.&lt;/p&gt;

&lt;h2&gt;Helm tests &amp;amp; related issues&lt;/h2&gt;

&lt;p&gt;When we started to work on Kyma, we had all those things in mind. We decided to define integration tests as &lt;a href="https://helm.sh/docs/topics/chart_tests/"&gt;&lt;strong&gt;Helm tests&lt;/strong&gt;&lt;/a&gt;. In this approach, a test is a &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/"&gt;Kubernetes Job&lt;/a&gt; with the &lt;code&gt;helm.sh/hook: test&lt;/code&gt; annotation. You place the test under the &lt;code&gt;templates&lt;/code&gt; directory of the given Helm chart. Helm creates such a test in a Kubernetes cluster, just like it does with any other resource.&lt;/p&gt;
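&lt;p&gt;For illustration, a minimal Helm test of that shape could look as follows; the Job name, image, and probed endpoint are made up for this sketch:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-example-test"
  annotations:
    "helm.sh/hook": test
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: test
          image: alpine:latest
          command: ["wget", "-qO-", "http://my-service:8080/healthz"]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Running &lt;code&gt;helm test&lt;/code&gt; for the release then creates the Job and reports whether it completed successfully.&lt;/p&gt;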

&lt;p&gt;The reason why we took this testing path was quite simple: we used Helm extensively in our project, and Helm's built-in testing tool was a natural choice. Also, writing Helm tests turned out to be quite easy.&lt;/p&gt;

&lt;p&gt;As our project grew, we came across a few obstacles that painfully hindered our work and couldn't be easily addressed with Helm tests at that time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running the whole suite of integration tests took ages, so we needed an easy way of selecting the tests we wanted to run.&lt;/li&gt;
&lt;li&gt;The number of flaky tests increased, and we wanted to ensure they were automatically rerun.&lt;/li&gt;
&lt;li&gt;We needed a way of verifying the tests' stability and detecting flaky tests.&lt;/li&gt;
&lt;li&gt;We wanted to run tests concurrently to reduce the overall testing time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point, we decided we needed a more powerful tool. Since we couldn't find a project that would serve all our needs, we developed one on our own.&lt;/p&gt;

&lt;h2&gt;The rise of Octopus&lt;/h2&gt;

&lt;p&gt;This is how &lt;a href="https://github.com/kyma-incubator/octopus/blob/master/README.md"&gt;Octopus&lt;/a&gt; was born. In short, Octopus is a Kubernetes controller which operates on two custom resources called &lt;a href="https://github.com/kyma-incubator/octopus/blob/master/docs/crd-test-definition.md"&gt;TestDefinition&lt;/a&gt; and &lt;a href="https://github.com/kyma-incubator/octopus/blob/master/docs/crd-cluster-test-suite.md"&gt;ClusterTestSuite&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;TestDefinition, as its name indicates, defines a test for a single component or a cross-component scenario. In the simplest case, you only have to provide a Pod template that specifies the image with the test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: testing.kyma-project.io/v1alpha1
kind: TestDefinition
metadata:
  labels:
    component: service-catalog
  name: test-example
spec:
  template:
    spec:
      containers:
        - name: test
          image: alpine:latest
          command:
            - "pwd"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;ClusterTestSuite, on the other hand, defines which tests to run on a cluster and how to do that. In the example, we define that we want to run all tests with the &lt;code&gt;component=service-catalog&lt;/code&gt; label, and we specify that every test should be executed once (&lt;code&gt;count=1&lt;/code&gt;). If a test from the suite fails, we will retry it once (&lt;code&gt;maxRetries=1&lt;/code&gt;). Tests can be executed concurrently, in the maximum number of two at a time (&lt;code&gt;concurrency=2&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: testing.kyma-project.io/v1alpha1
kind: ClusterTestSuite
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: testsuite-selected-by-labels
spec:
  count: 1
  maxRetries: 1
  concurrency: 2
  selectors:
    matchLabelExpressions:
      - component=service-catalog
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When Octopus notices a new ClusterTestSuite, it first calculates which tests should be executed and then schedules them according to the test suite specification. All information about the status of test execution is stored in the &lt;strong&gt;status&lt;/strong&gt; field. This status informs you which tests will be executed, which of them have already been executed, which succeeded or failed, which were retried, and how long the whole suite and each individual test took. See this example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: testing.kyma-project.io/v1alpha1
kind: ClusterTestSuite
metadata:
  name: testsuite-all
status:
  completionTime: 2019-03-06T12:37:20Z
  conditions:
  - status: "True"
    type: Succeeded
  results:
  - executions:
    - completionTime: 2019-03-06T12:37:20Z
      id: octopus-testing-pod-jz9qq
      podPhase: Succeeded
      startTime: 2019-03-06T12:37:15Z
    name: test-example
    namespace: default
    status: Succeeded
  startTime: 2019-03-06T12:37:15Z
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With Octopus, all test preparation steps come down to creating:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A test in the language of your choice (yes, Octopus is language-agnostic).&lt;/li&gt;
&lt;li&gt;A TestDefinition that specifies the image to use and the commands to run.&lt;/li&gt;
&lt;li&gt;A ClusterTestSuite that defines which tests to run on a cluster and how to run them.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In Kyma, we created integration jobs in the continuous integration tool called &lt;a href="https://github.com/kyma-project/test-infra/blob/master/prow/README.md"&gt;Prow&lt;/a&gt;. These Prow jobs are run before and after merging any changes to the &lt;code&gt;master&lt;/code&gt; branch. Upon triggering, a Prow job runs the &lt;a href="https://github.com/kyma-project/kyma/blob/master/installation/scripts/testing.sh"&gt;&lt;code&gt;testing.sh&lt;/code&gt;&lt;/a&gt; script that creates a ClusterTestSuite, builds a cluster, and runs all integration tests on it.&lt;/p&gt;
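&lt;p&gt;For example, a suite of the kind such a script creates to run every TestDefinition on the cluster can be as simple as the sketch below (omitting selectors makes Octopus run all tests, as described later; the script then waits for the suite's &lt;strong&gt;Succeeded&lt;/strong&gt; condition):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: testing.kyma-project.io/v1alpha1
kind: ClusterTestSuite
metadata:
  name: testsuite-all
spec:
  count: 1
  maxRetries: 1
  concurrency: 5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;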

&lt;h2&gt;Features &amp;amp; benefits&lt;/h2&gt;

&lt;p&gt;Migration from Helm tests to Octopus went smoothly and came down to minor modifications in Job definitions, such as changing them to the &lt;code&gt;TestDefinition&lt;/code&gt; kind and removing the Helm annotation from them. However, the benefits that Octopus gave us were massive and just the ones we expected:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Selective testing&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the ClusterTestSuite, you can define which tests you want to execute. You can select them by providing the &lt;strong&gt;labels&lt;/strong&gt; expression or listing TestDefinition names. If you don't list any, Octopus will run all tests by default. Selective testing is particularly helpful in a situation when you have 50 TestDefinitions on your cluster, but you want to check only the tests for the component you are working on. Thanks to selective testing, you can get feedback on your changes almost immediately.&lt;/p&gt;
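&lt;p&gt;To select by names instead of labels, the suite lists TestDefinitions explicitly. The exact shape of &lt;strong&gt;matchNames&lt;/strong&gt; below is a sketch based on the linked ClusterTestSuite documentation, so verify it against the CRD docs:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: testing.kyma-project.io/v1alpha1
kind: ClusterTestSuite
metadata:
  name: testsuite-selected-by-names
spec:
  count: 1
  selectors:
    matchNames:
      - name: test-example
        namespace: default
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;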

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Automatic retries of failed tests&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At one point, we had huge problems with flaky tests in Kyma. To merge a pull request, all 22 tests had to pass on a given Kubernetes cluster. If every test fails in only 2% of executions, the probability that all 22 tests pass is only 64%. Executing tests takes no longer than 20 minutes, but when you add the time required for creating a cluster and provisioning Kyma, the overall time doubles. You can imagine the frustration of developers who had to retrigger the whole Prow job because one test failed that was completely unrelated to the changes in their pull requests. By introducing retries through the &lt;strong&gt;maxRetries&lt;/strong&gt; parameter, we didn't solve the issues with flaky tests completely, but we managed to reduce the number of situations in which retriggering a Prow job was required.&lt;/p&gt;
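&lt;p&gt;The arithmetic behind those numbers is easy to check, and it also shows why a single retry helps so much (a quick sketch; the 2% flakiness rate is the figure from the paragraph above):&lt;/p&gt;

```python
# Probability that a full suite of 22 independent tests passes,
# when each test is flaky and fails 2% of the time.
flake_rate = 0.02
n_tests = 22

p_all_pass = (1 - flake_rate) ** n_tests
print(f"no retries: {p_all_pass:.1%}")  # roughly 64%

# With maxRetries=1, a test only fails the suite if it fails twice in a row.
p_fail_twice = flake_rate ** 2
p_all_pass_retry = (1 - p_fail_twice) ** n_tests
print(f"one retry:  {p_all_pass_retry:.1%}")  # over 99%
```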

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Running tests multiple times&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can add the &lt;strong&gt;count&lt;/strong&gt; field to the ClusterTestSuite to specify how many times every test should be executed. That can be particularly useful to detect flaky tests or ensure that a newly created test is stable.&lt;/p&gt;
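&lt;p&gt;For instance, to verify the stability of a newly added test, you could run every selected test ten times with no retries, so that every failure surfaces (the suite name here is illustrative):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: testing.kyma-project.io/v1alpha1
kind: ClusterTestSuite
metadata:
  name: testsuite-stability-check
spec:
  count: 10
  maxRetries: 0
  selectors:
    matchLabelExpressions:
      - component=service-catalog
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;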

&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;Full support for concurrent testing&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can add the &lt;strong&gt;concurrency&lt;/strong&gt; field to the ClusterTestSuite to define how many tests can be executed at the same time. In our integration Prow jobs, we define the ClusterTestSuite with &lt;strong&gt;concurrency&lt;/strong&gt; set to &lt;code&gt;5&lt;/code&gt;. All tests are executed in around 20 minutes; run one after another, they would take twice as long. Thanks to concurrency, we save time and money, developers get immediate feedback, and clusters created for executing tests are removed faster.&lt;/p&gt;

&lt;p&gt;You can also specify on the TestDefinition level that the given test must be excluded from running concurrently as part of the ClusterTestSuite (&lt;strong&gt;disableConcurrency&lt;/strong&gt;). That feature is useful when a test depends on resources that other tests in the suite also touch.&lt;/p&gt;
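&lt;p&gt;In the TestDefinition, that exclusion is a single field; sketched on the earlier example:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: testing.kyma-project.io/v1alpha1
kind: TestDefinition
metadata:
  name: test-exclusive
spec:
  disableConcurrency: true
  template:
    spec:
      containers:
        - name: test
          image: alpine:latest
          command:
            - "pwd"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;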

&lt;p&gt;In general, the concurrency level you define depends on the size of your cluster. Every Pod consumes some amount of CPU and memory, and you need to take both parameters into account when defining the concurrency level.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;&lt;strong&gt;Visibility&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Octopus gave us much more insight into test definitions and results than we had with Helm tests. After executing the ClusterTestSuite, you can easily analyze how much time each test takes and identify the problematic ones.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CLI&lt;/strong&gt; - We integrated Octopus with the Kyma Command Line Interface (CLI). This means that you can use simple commands to get test definitions (&lt;code&gt;kyma test definitions&lt;/code&gt;), run tests with selected flags (&lt;code&gt;kyma test run --concurrency=5 --max-retries=1&lt;/code&gt;), or watch the test execution status (&lt;code&gt;watch kyma test status&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://asciinema.org/a/287696"&gt;Here&lt;/a&gt; you can take a look at Kyma CLI in action.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dashboards&lt;/strong&gt; - We used the information available in the &lt;strong&gt;status&lt;/strong&gt; field of the ClusterTestSuite to visualize test details on Prow dashboards. In the below example, you can clearly see all details of the &lt;code&gt;post-master-kyma-gke-integration&lt;/code&gt; Prow job that builds our artifacts on a GKE cluster after every merge to the &lt;code&gt;master&lt;/code&gt; branch.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wuOPYSVk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/kyma-project/website/master/content/blog-posts/2020-01-16-integration-testing-in-k8s/test-status.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wuOPYSVk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/kyma-project/website/master/content/blog-posts/2020-01-16-integration-testing-in-k8s/test-status.png" alt="Testing time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Room for improvement&lt;/h2&gt;

&lt;p&gt;As much as we love Octopus and appreciate how it did the trick for us, we realize it's not perfect (yet). We already have a few ideas in mind that would improve it even more. For example, we would like to introduce validation for both ClusterTestSuite and TestDefinition custom resources and add new fields that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define the maximum duration for a ClusterTestSuite, after which test executions are interrupted and marked as failed (&lt;strong&gt;suiteTimeout&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;Indicate that a test shouldn't be executed (&lt;strong&gt;skip&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;Specify the maximum duration for a test, after which it is terminated and marked as failed (&lt;strong&gt;timeout&lt;/strong&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We track all our ideas for enhancement as &lt;a href="https://github.com/kyma-incubator/octopus/issues"&gt;GitHub issues&lt;/a&gt;, so you can easily refer to them for details.&lt;/p&gt;

&lt;p&gt;As an open-source project, we always welcome external contributions. If you wish, you can help us in many ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/kyma-incubator/octopus/issues"&gt;Pick&lt;/a&gt; one of the existing issues, and try to propose a solution for it in a pull request.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/kyma-incubator/octopus/issues/new/choose"&gt;Add&lt;/a&gt; your own issue with ideas for improving Octopus.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/kyma-incubator/octopus"&gt;Star&lt;/a&gt; Octopus to support us in the attempt to join the &lt;a href="https://github.com/ramitsurana/awesome-kubernetes#testing"&gt;awesome Kubernetes&lt;/a&gt; family of testing projects, where we believe Octopus could take pride of place.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have any questions or want to find out more about us, contact us on the &lt;a href="http://slack.kyma-project.io/"&gt;&lt;code&gt;#octopus&lt;/code&gt;&lt;/a&gt; Slack channel or visit our &lt;a href="https://kyma-project.io/"&gt;website&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But first of all, give &lt;a href="https://github.com/kyma-incubator/octopus"&gt;Octopus&lt;/a&gt; a try to see if it does the trick for you as well.&lt;/p&gt;




&lt;p&gt;Originally published at &lt;a href="https://kyma-project.io/blog/2020/1/16/integration-testing-in-k8s/"&gt;https://kyma-project.io/blog/2020/1/16/integration-testing-in-k8s/&lt;/a&gt; by Adam Szecowka and Karolina Zydek&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>testing</category>
    </item>
  </channel>
</rss>
