<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ash Moosa</title>
    <description>The latest articles on DEV Community by Ash Moosa (@amoosa).</description>
    <link>https://dev.to/amoosa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F140886%2Fbfccb1c4-a90c-4273-9d41-852c3be260fe.jpeg</url>
      <title>DEV Community: Ash Moosa</title>
      <link>https://dev.to/amoosa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/amoosa"/>
    <language>en</language>
    <item>
      <title>Using Apache Solr in Production — Bitbucket</title>
      <dc:creator>Ash Moosa</dc:creator>
      <pubDate>Thu, 22 Aug 2019 09:13:54 +0000</pubDate>
      <link>https://dev.to/atlassian/using-apache-solr-in-production-bitbucket-4p1d</link>
      <guid>https://dev.to/atlassian/using-apache-solr-in-production-bitbucket-4p1d</guid>
      <description>&lt;h3&gt;
  
  
  Using Apache Solr in Production — Bitbucket
&lt;/h3&gt;

&lt;p&gt;This is a guest post by &lt;a href="https://www.linkedin.com/in/pulkit-kedia-389a9a122"&gt;Pulkit Kedia&lt;/a&gt;, a backend engineer at Womaniya.&lt;/p&gt;

&lt;p&gt;Solr is a search engine built on top of Apache Lucene. Lucene stores documents (data) in an inverted index and exposes search and indexing functionality through a Java API. However, to use features like full-text search directly with Lucene, you would need to write the Java code yourself.&lt;/p&gt;
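
&lt;p&gt;To make the idea concrete, here is a minimal sketch (in Python, purely illustrative, not Lucene's implementation) of how an inverted index maps each term to the set of documents containing it:&lt;/p&gt;

```python
# Minimal inverted index: term -> set of document ids.
from collections import defaultdict

def build_inverted_index(docs):
    """docs is a mapping of doc_id to text."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, term):
    """Return the sorted ids of documents containing the term."""
    return sorted(index.get(term.lower(), set()))

docs = {1: "Solr is built on Lucene", 2: "Lucene uses an inverted index"}
index = build_inverted_index(docs)
print(search(index, "lucene"))  # both documents contain the term
```

&lt;p&gt;A term lookup is a single dictionary access rather than a scan over every document, which is what makes this structure fast for text search.&lt;/p&gt;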

&lt;p&gt;Solr builds a more advanced search server on top of Lucene. It offers more functionality, is designed for scalability, and comes loaded with features like pagination, sorting, faceting, auto-suggest, and spell check. Solr also offers trie-based variants of the numeric and date field types to speed up range queries: alongside the plain &lt;strong&gt;int&lt;/strong&gt; field there is a &lt;strong&gt;tint&lt;/strong&gt; field, which is the trie-encoded int field.&lt;/p&gt;
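
&lt;p&gt;The point of trie-encoded numeric fields is fast range queries. As a rough illustration (Python, using a sorted list and binary search as a stand-in for an actual trie), a range query becomes two lookups and a slice instead of a full scan:&lt;/p&gt;

```python
# Stand-in for trie/range indexing: binary search over sorted values.
import bisect

prices = sorted([3, 17, 42, 8, 99, 25, 64])

def range_query(values, low, high):
    """Return all values between low and high inclusive (values sorted)."""
    lo = bisect.bisect_left(values, low)
    hi = bisect.bisect_right(values, high)
    return values[lo:hi]

print(range_query(prices, 10, 70))  # [17, 25, 42, 64]
```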

&lt;p&gt;Solr is really fast for text searching/analyzing and credit goes to its inverted index structure. If your application requires extensive text searching, Solr is a good choice. Several companies like Netflix, Verizon, AT&amp;amp;T, and Qualcomm use Solr as their search engine. Even Amazon Cloudsearch which is a search engine service by AWS uses Solr internally.&lt;/p&gt;

&lt;p&gt;This article provides a method to deploy Solr in production and deals with creating Solr collections. If you are just starting with Solr, you should begin by building a Solr core. A core is a single-node Solr index with no shards or replicas, while a collection is made up of multiple shards and their replicas, each of which is itself a core.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementation
&lt;/h3&gt;

&lt;p&gt;In a distributed search, a collection is a logical index across multiple servers. The part of each server that runs a collection is called a core. So in a non-distributed search, a core and a collection are the same because there is only one server.&lt;/p&gt;

&lt;p&gt;In production, you should use a collection rather than a single Solr core, because one core won’t be able to hold production-scale data (unless you keep scaling vertically). Apache ZooKeeper coordinates the connection across the multiple servers.&lt;/p&gt;

&lt;p&gt;There are two ways you can set this up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run multiple Solr servers and run ZooKeeper on one of them&lt;/li&gt;
&lt;li&gt;Run ZooKeeper on a dedicated server and have all the Solr servers connect to it&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We’ll go through the second approach. The setup for the first is similar, but the second is more scalable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Solr
&lt;/h3&gt;

&lt;p&gt;Spin up three servers and install Solr on two of them (note: you can run any number of Solr servers; this example uses two Solr nodes plus one ZooKeeper node). To install Solr, install Java first, then download the desired version and untar it.&lt;/p&gt;

&lt;p&gt;Installation: wget &lt;a href="http://archive.apache.org/dist/lucene/solr/8.1.0/solr-8.1.0.tgz"&gt;http://archive.apache.org/dist/lucene/solr/8.1.0/solr-8.1.0.tgz&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Untar: tar -zxvf solr-8.1.0.tgz
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can start Solr from the &lt;strong&gt;/home/ubuntu/solr-8.1.0&lt;/strong&gt; folder with &lt;strong&gt;bin/solr start&lt;/strong&gt;, or from inside the bin folder with &lt;strong&gt;./solr start&lt;/strong&gt;. This starts Solr on port 8983, and you can verify it in the browser.&lt;/p&gt;

&lt;p&gt;Repeat the same steps to install Solr on your second server.&lt;/p&gt;

&lt;p&gt;Also remember to set up the list of IPs and hostnames on each server in /etc/hosts.&lt;/p&gt;

&lt;p&gt;For example :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IPv4 Public IP-solr-node-1 solr-node-1 IPv4 Public IP-solr-node-2 solr-node-2 IPv4 Public IP-zookeeper-node zookeeper-node
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Installing Zookeeper
&lt;/h3&gt;

&lt;p&gt;The third server runs only ZooKeeper; this is the server to which you will push configsets.&lt;/p&gt;

&lt;p&gt;Installation: wget &lt;a href="https://archive.apache.org/dist/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz"&gt;https://archive.apache.org/dist/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Untar : tar -zxvf zookeeper-3.4.9.tar.gz
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you like, you can add the path to zookeeper to the bashrc file.&lt;/p&gt;

&lt;p&gt;Next, the zookeeper-3.4.9 folder contains a sample configuration file that ships with ZooKeeper, &lt;strong&gt;zoo_sample.cfg&lt;/strong&gt;. Copy it in place and rename the copy to &lt;strong&gt;zoo.cfg&lt;/strong&gt;. The configuration file contains various parameters, such as &lt;strong&gt;dataDir&lt;/strong&gt;, which specifies the directory for storing snapshots of the in-memory database and transaction logs, and &lt;strong&gt;maxClientCnxns&lt;/strong&gt;, which limits the maximum number of client connections.&lt;/p&gt;

&lt;p&gt;Open zoo.cfg, uncomment &lt;strong&gt;autopurge.snapRetainCount=3&lt;/strong&gt; and &lt;strong&gt;autopurge.purgeInterval=1&lt;/strong&gt;, and set &lt;strong&gt;dataDir=data&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Next, start ZooKeeper.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/zkServer.sh start
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating A Configset
&lt;/h3&gt;

&lt;p&gt;Configsets are essentially the blueprint of the data to be stored. They live at &lt;strong&gt;server/solr/configsets&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You can create your own configset and use it to store your data. Change the &lt;strong&gt;managed-schema&lt;/strong&gt; file content to customise the config.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can modify the &lt;strong&gt;field&lt;/strong&gt; tags to define the data fields stored in each document&lt;/li&gt;
&lt;li&gt;You can use an existing type or create a new one by defining it with a &lt;strong&gt;fieldType&lt;/strong&gt; tag&lt;/li&gt;
&lt;li&gt;The id field is compulsory, so you cannot delete it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are many other things you can do in Solr, such as dynamic fields and copy fields. Explaining each of them is beyond the scope of this post; for more information, see the official &lt;a href="https://lucene.apache.org/solr/features.html"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you’ve created a config and made the config folder readable by Solr (e.g. &lt;strong&gt;chmod -R 777&lt;/strong&gt;), push the config to ZooKeeper.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/solr zk upconfig -n config\_folder\_name -d /solr-8.0.0/server/solr/configsets/config\_folder\_name/ -z zookeeper-node:2181
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After pushing the config, start Solr in SolrCloud mode on each Solr server. To get started with SolrCloud, refer to &lt;a href="https://lucene.apache.org/solr/guide/6_6/getting-started-with-solrcloud.html"&gt;this documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting to Zookeeper
&lt;/h3&gt;

&lt;p&gt;To connect each Solr node to ZooKeeper:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/solr start -cloud -s example/cloud/node1/solr/ -c -p 8983 -h solr-node-1 -z zookeeper-node:2181
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Solr stores the inverted index at example/cloud/node1/solr/, so you need to pass that path when connecting. ZooKeeper automatically distributes shards and replicas over the two Solr servers. When you add a document, a hash is computed from it and the document is stored in the corresponding shard; this coordination is handled through ZooKeeper.&lt;/p&gt;
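
&lt;p&gt;The routing idea can be sketched as follows (illustrative Python, not Solr’s exact algorithm; SolrCloud’s default router maps a hash of the document id onto per-shard hash ranges):&lt;/p&gt;

```python
# Illustrative document-to-shard routing: hash the document id and
# map it onto one of a fixed number of shards. SolrCloud's real
# compositeId router works on hash ranges, but the idea is the same.
import zlib

NUM_SHARDS = 2

def shard_for(doc_id):
    """Pick a shard index for a document id (stable across runs)."""
    h = zlib.crc32(doc_id.encode("utf-8"))
    return h % NUM_SHARDS

assignments = {doc: shard_for(doc) for doc in ["user-1", "user-2", "user-3"]}
print(assignments)
```

&lt;p&gt;Because the hash is deterministic, every node agrees on which shard owns a given document without any central lookup at query time.&lt;/p&gt;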

&lt;p&gt;To add data, POST documents to &lt;strong&gt;http://&amp;lt;server-ip&amp;gt;:8983/solr/&amp;lt;collection-name&amp;gt;/update?commit=true&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The IP can be that of any of the Solr servers; the data is automatically distributed among the shards.&lt;/p&gt;

&lt;p&gt;To query data, use &lt;strong&gt;http://&amp;lt;server-ip&amp;gt;:8983/solr/&amp;lt;collection-name&amp;gt;/select?q=&amp;lt;query&amp;gt;&lt;/strong&gt;&lt;/p&gt;
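
&lt;p&gt;As a sketch, here is how those two endpoints might be addressed from Python. The hostname and collection name (&lt;strong&gt;solr-node-1&lt;/strong&gt;, &lt;strong&gt;user&lt;/strong&gt;) are placeholder assumptions for your own deployment, and the requests are only constructed here, not sent:&lt;/p&gt;

```python
# Build the Solr update and select URLs described above. Host and
# collection name are placeholders; send with any HTTP client.
import json
from urllib.parse import urlencode

SOLR_HOST = "solr-node-1"   # any Solr server in the cluster
COLLECTION = "user"         # your collection name

def update_request(docs):
    """Return (url, body) for an add-documents POST request."""
    url = f"http://{SOLR_HOST}:8983/solr/{COLLECTION}/update?commit=true"
    return url, json.dumps(docs)

def select_url(query):
    """Return the URL for a select (search) GET request."""
    return f"http://{SOLR_HOST}:8983/solr/{COLLECTION}/select?" + urlencode({"q": query})

url, body = update_request([{"id": "1", "name": "Pulkit"}])
print(url)
print(select_url("name:Pulkit"))
```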

&lt;p&gt;Note: if you are using one of the Solr servers as the ZooKeeper host, all the above steps are the same, but replace the ZooKeeper IP with that Solr node’s IP and use port 9983 instead of 2181.&lt;/p&gt;

&lt;h3&gt;
  
  
  Troubleshooting
&lt;/h3&gt;

&lt;p&gt;Here are a couple common problems that may arise while setting up SolrCloud.&lt;/p&gt;

&lt;p&gt;After you have created the SolrCloud setup and are connecting to ZooKeeper, you may see an error saying that port 8983 or 7574 is already in use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;:fuser -k 8983/tcp -
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This finds the process bound to the port and kills it.&lt;/p&gt;

&lt;p&gt;Another error you may see is that SolrCloud cannot find the newly created configset.&lt;/p&gt;

&lt;p&gt;Solution: run &lt;strong&gt;chmod -R 777&lt;/strong&gt; on the new configset. The more secure approach is to chown the folder to the solr user.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Solr has a large community of experienced users and contributors and is more mature when compared to its competitors. Solr faces competition from Elasticsearch, which is open source and is also built on Apache Lucene. Elasticsearch is considered to be better at searching dynamic data such as log data while Solr handles static data better. In terms of scaling, while Elasticsearch has better in-built scalability features, with Zookeeper and SolrCloud, it’s easy to scale with Solr too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Author bio&lt;/em&gt;&lt;/strong&gt;&lt;em&gt;:&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/pulkit-kedia-389a9a122"&gt;&lt;em&gt;Pulkit Kedia&lt;/em&gt;&lt;/a&gt; &lt;em&gt;is a backend engineer with experience in cloud services, system design and creating scalable backend systems. He loves to learn and integrate new backend technologies.&lt;/em&gt; &lt;a href="https://bitbucket.org/account/signup/"&gt;Get started, it’s free&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://bitbucket.org/blog/using-apache-solr-in-production"&gt;&lt;em&gt;https://bitbucket.org&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on August 22, 2019.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>solr</category>
      <category>apache</category>
    </item>
    <item>
      <title>Automate Coverage Reports in Pull Requests with Bitbucket, Jenkins and SonarCloud — Bitbucket</title>
      <dc:creator>Ash Moosa</dc:creator>
      <pubDate>Tue, 16 Jul 2019 07:00:37 +0000</pubDate>
      <link>https://dev.to/atlassian/automate-coverage-reports-in-pull-requests-with-bitbucket-jenkins-and-sonarcloud-bitbucket-5hdc</link>
      <guid>https://dev.to/atlassian/automate-coverage-reports-in-pull-requests-with-bitbucket-jenkins-and-sonarcloud-bitbucket-5hdc</guid>
      <description>&lt;h3&gt;
  
  
  Automate Coverage Reports in Pull Requests with Bitbucket, Jenkins and SonarCloud — Bitbucket
&lt;/h3&gt;

&lt;p&gt;At &lt;a href="https://www.instaclustr.com/" rel="noopener noreferrer"&gt;Instaclustr&lt;/a&gt;, we’ve experienced significant growth in our team sizes that has been great for increasing the scope and speed of our development. The flip-side to this benefit is that with the increased velocity of projects came increased pressure on approvers to take time from their own tasks, and provide quality feedback at a faster pace.&lt;/p&gt;

&lt;p&gt;To address this, we overhauled our existing build systems to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automate static code analysis&lt;/li&gt;
&lt;li&gt;Expose important metrics (such as test coverage and whether tests have passed); and&lt;/li&gt;
&lt;li&gt;Surface those metrics to reviewers within pull requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F768%2F0%2A2l0tbuwUWTzl1Kkr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F768%2F0%2A2l0tbuwUWTzl1Kkr.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, our review workflow is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Developer creates a PR in Bitbucket, targeting the release branch&lt;/li&gt;
&lt;li&gt;Jenkins sees the creation of the PR and starts our build-and-test pipeline beginning with unit and system tests. If successful, the pipeline progresses through to our end-to-end tests. At each stage, coverage results are forwarded to SonarCloud for analysis&lt;/li&gt;
&lt;li&gt;When an approver views the PR, Bitbucket (via the SonarCloud widget) pulls in the code analysis results and provides relevant context&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Approvers can immediately know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Coverage of new code in the PR&lt;/li&gt;
&lt;li&gt;If there are any common code errors (e.g. not closing resources)&lt;/li&gt;
&lt;li&gt;That style guidelines have been followed (e.g. how deep is the inheritance tree)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Best of all, relatively few changes were required to implement this!&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Jenkins
&lt;/h3&gt;

&lt;p&gt;We use Jenkins as our build system, so we created a multibranch pipeline job that uses the &lt;a href="https://wiki.jenkins.io/display/JENKINS/Bitbucket+Branch+Source+Plugin" rel="noopener noreferrer"&gt;Bitbucket Branch Source Plugin&lt;/a&gt; to poll for any new or updated PRs targeting our release branch. The pipeline trigger can then be configured to scan every minute.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F768%2F0%2AxnqYkF3XV1Dt0_Gl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F768%2F0%2AxnqYkF3XV1Dt0_Gl.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once triggered, the job will run our test pipeline Jenkinsfile.&lt;/p&gt;

&lt;p&gt;The relevant parts of our Jenkinsfile are:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent test --fail-at-end -DskipTests=false -am

mvn sonar:sonar --batch-mode --errors " + "-pl ${context.env.TEST\_MODULES} -am " + "-Dsonar.projectKey=${Constants.SONARCLOUD\_PROJECT\_KEY} " + "-Dsonar.organization=${Constants.SONARCLOUD\_ORGANISATION} " + "-Dsonar.verbose=true " + "-Dsonar.host.url=${Constants.SONARCLOUD\_URL} " + "-Dsonar.login=${context.env.SONARCLOUD\_TOKEN} " + "-Dsonar.pullrequest.branch=${context.env.BRANCH\_NAME} " + "-Dsonar.pullrequest.base=${Constants.RELEASES\_BRANCH} " + "-Dsonar.pullrequest.key=${context.env.CHANGE\_ID}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. SonarCloud
&lt;/h3&gt;

&lt;p&gt;Uploaded reports will automatically register a new PR in SonarCloud and can be explored through the SonarCloud Console to show inline views of code issues, test coverage trends and whether the PR meets customisable Quality Gates. It’s important to note that these metrics are &lt;a href="https://sonarcloud.io/documentation/analysis/pull-request/" rel="noopener noreferrer"&gt;calculated against new code introduced by the PR&lt;/a&gt;, so developers don’t have to sort through an analysis of the entire codebase.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F768%2F0%2AanvUj3utHySXYjz5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F768%2F0%2AanvUj3utHySXYjz5.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. BitBucket
&lt;/h3&gt;

&lt;p&gt;At this point, we have Jenkins automatically testing PRs and SonarCloud providing analysis. To make it as easy as possible for approvers to see that information, we just need to enable the SonarCloud widget in our Bitbucket repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F768%2F0%2Aplqx5i7S2E1eYpJk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F768%2F0%2Aplqx5i7S2E1eYpJk.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, whenever a PR is created, the widget pulls in test coverage, bugs, and code smell metrics. Using the Bitbucket Jenkins plugin also means our PRs show a handy “build passed” status to let us know when a branch is successfully passing all test cases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;For Instaclustr, this setup has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplified the basic checks a reviewer would undertake by providing summary metrics for assessing the quality of a PR&lt;/li&gt;
&lt;li&gt;Highlighted test cases that don’t work in a parallel setting (e.g. incorrect usage of singleton objects)&lt;/li&gt;
&lt;li&gt;Highlighted poor test verification patterns (e.g. comparing pre- and post-test record counts rather than checking for the presence of a record)&lt;/li&gt;
&lt;li&gt;Highlighted, via code coverage, tests that aren’t being run (e.g. names not matching Surefire inclusion patterns)&lt;/li&gt;
&lt;li&gt;Highlighted simple code errors that can have significant impacts, such as not closing resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hopefully this has provided a quick guide to getting up and running with static code analysis; if you have any feedback, ideas or questions about this article, &lt;a href="mailto:alwyn@instaclustr.com"&gt;I would love to hear it&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Author bio: &lt;a href="mailto:alwyn@instaclustr.com"&gt;&lt;em&gt;Alwyn Davis&lt;/em&gt;&lt;/a&gt; &lt;em&gt;is a Senior Software Developer at&lt;/em&gt; &lt;a href="https://www.instaclustr.com/" rel="noopener noreferrer"&gt;&lt;em&gt;Instaclustr&lt;/em&gt;&lt;/a&gt; &lt;em&gt;where he has worked on multiple infrastructure and development projects, in addition to providing client-facing support and delivering consulting projects. He has focused on the development of Instaclustr’s client-facing management systems and the implementation of Cassandra, Spark, and Kafka deployment processes. He also has a background in technical consulting, with experience in search engine, database and CRM implementation, management and application development.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Love sharing your technical expertise? Learn more about the &lt;a href="https://bitbucket.org/product/write?utm_source=blog&amp;amp;utm_medium=post&amp;amp;utm_campaign=bottom-post" rel="noopener noreferrer"&gt;Bitbucket writing program&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bitbucket.org/account/admin/plans/" rel="noopener noreferrer"&gt;Scaling your Bitbucket team? Upgrade your plan here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://bitbucket.org/blog/automate-coverage-reports-in-pull-requests-bitbucket-jenkins-sonarcloud" rel="noopener noreferrer"&gt;&lt;em&gt;https://bitbucket.org&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on July 16, 2019.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>staticcodeanalysis</category>
      <category>sonarcloud</category>
      <category>continuousintegrati</category>
    </item>
    <item>
      <title>Software development with UML and modern Java</title>
      <dc:creator>Ash Moosa</dc:creator>
      <pubDate>Wed, 05 Jun 2019 17:29:52 +0000</pubDate>
      <link>https://dev.to/atlassian/software-development-with-uml-and-modern-java-bitbucket-1p9j</link>
      <guid>https://dev.to/atlassian/software-development-with-uml-and-modern-java-bitbucket-1p9j</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was written by Bitbucket user&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/aleksandar-radulovic/"&gt;&lt;em&gt;Aleksandar Radulović.&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With technology evolving fast, there is a need to write and maintain software more efficiently, and better communicate with team members. As developers, we rarely get to think about these things as we rush to meet deadlines.&lt;/p&gt;

&lt;p&gt;Current software development practices rarely include software modeling. Even when models are used, they are mostly used as a part of the documentation process and often seem more of a burden.&lt;/p&gt;

&lt;p&gt;The purpose of this article is to describe a different approach to software development that puts visual modeling and code generation into the heart of the development process. Visual software models put emphasis on communication and internal software design rather than simply making things work.&lt;/p&gt;

&lt;p&gt;I’ll describe how we use code generators to automate software development by using a UML model as a starting point for creating modern Java back-end applications using frameworks such as Spring, Spring Data and Hibernate.&lt;/p&gt;

&lt;p&gt;In order to understand the potential of this approach, we need to consider different cornerstones of the development process and the impact this approach has on them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unified Modeling Language (UML) — used to visualize software systems and flows&lt;/li&gt;
&lt;li&gt;Modern Java — essentials of the modern Java ecosystem&lt;/li&gt;
&lt;li&gt;Building the Software Product — faster prototyping and maintenance&lt;/li&gt;
&lt;li&gt;Team dynamics — better communication and faster onboarding&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1BtWRwef--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Akl2Kyu7RiNglbBu8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1BtWRwef--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2Akl2Kyu7RiNglbBu8.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s briefly go through each of these.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unified Modeling Language
&lt;/h3&gt;

&lt;p&gt;The Unified Modeling Language (UML) is a standardized, visual language for modeling software. It was developed with an ambitious intention: to provide software teams a standard way to visualize the design of a system and to improve the team’s understanding of the domain of the problem they were solving. Using UML, one can visually model concepts, processes, state machines, interactions or use cases.&lt;/p&gt;

&lt;p&gt;The approach we take in our day to day work is to use class diagrams for modeling domain concepts and relations between them and state machines for modeling process flows. We also document different model elements: classes, interfaces, attributes, etc. so that we can derive documentation from the model at any time, using different formats and structures: javadoc or Swagger, just to mention the two.&lt;/p&gt;

&lt;p&gt;Here is an example of a UML class diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JGMSAZCv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2ADnR7SagqqnSzwQZk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JGMSAZCv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2ADnR7SagqqnSzwQZk.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Modern Java
&lt;/h3&gt;

&lt;p&gt;Modern Java has a vibrant ecosystem. While it takes time to learn a programming language, adopting modern frameworks from that language’s ecosystem is an additional learning curve.&lt;/p&gt;

&lt;h4&gt;
  
  
  Declarative programming
&lt;/h4&gt;

&lt;p&gt;The emergence of declarative software development practices has silently opened new ways for model-driven development. Unlike the imperative programming flows that are inconvenient to specify using modeling techniques, declarative programming constructs describe structural aspects of the software that can naturally be represented by the class diagrams.&lt;/p&gt;

&lt;p&gt;Contemporary Java development heavily relies on declarative constructs: annotations most of all. For example, different frameworks, such as Hibernate and Jackson, use annotations to map object models to relational databases or to different export formats (JSON, XML, Protobuf, BSON, CSV). The Spring Framework, among many other things, brings great support for declarative development of RESTful endpoints and Spring Data introduces many essential constructs for abstracting data store access operations.&lt;/p&gt;

&lt;p&gt;Like other styles of programming, declarative programming comes at a cost: we introduce the complexities of different frameworks and libraries into our applications. While these dependencies add complexity to the project, they offer a return in developer productivity by letting developers focus on high-level objectives.&lt;/p&gt;

&lt;h4&gt;
  
  
  Code generators
&lt;/h4&gt;

&lt;p&gt;Declarative programming allows for code generation. Instead of having to write Java annotations by hand, it is enough to mark a class as persistent in the model and let the code generation tool create Java Persistence API (JPA) annotations for you. Instead of having to write lines and lines of JPA annotations, which can be cumbersome at times, code generation can do the magic without letting you bother with the details. Code generation is either built in to the UML tool you’re using or may be available as a plugin — it’s usually a one click process to go from UML to code.&lt;/p&gt;

&lt;p&gt;Here is a sample of the Java code generated from the UML model shown above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1QV41F05--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/988/0%2APVq0mZkB3q9szwc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1QV41F05--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/988/0%2APVq0mZkB3q9szwc7.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why use code generators?&lt;/strong&gt; Code generators translate the language of design (UML) into the language of implementation (Java). It brings automation to our development process, reducing overall development complexity and simplifying maintenance. We can be truly focused on modeling application concepts and services, the core abstractions we are dealing with, while the code generator synchronizes the model with the codebase. Further, it promotes the usage of best practices and significantly impacts the quality as well as the uniformity of the codebase.&lt;/p&gt;
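
&lt;p&gt;To make the mechanism concrete, here is a deliberately tiny, illustrative generator, written in Python rather than any real UML tool, that turns a minimal hand-written class model into a JPA-annotated Java skeleton. The model format and names are invented for this sketch:&lt;/p&gt;

```python
# Toy "model first" code generator: a minimal class model (a dict)
# is translated into a JPA-annotated Java class skeleton. Purely
# illustrative; real tools generate from a full UML model.

model = {
    "name": "Customer",
    "persistent": True,
    "attributes": [("id", "Long"), ("fullName", "String")],
}

def generate_java(cls):
    """Render a Java class skeleton from the model dict."""
    lines = []
    if cls["persistent"]:
        lines.append("@Entity")          # persistent class -> JPA entity
    lines.append("public class " + cls["name"] + " {")
    for attr_name, attr_type in cls["attributes"]:
        if attr_name == "id":
            lines.append("    @Id")      # mark the identifier field
        lines.append("    private " + attr_type + " " + attr_name + ";")
    lines.append("}")
    return "\n".join(lines)

print(generate_java(model))
```

&lt;p&gt;The developer marks the class as persistent once, in the model, and the generator emits the annotations; regenerating after a model change keeps the codebase in sync, which is exactly the automation described above.&lt;/p&gt;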

&lt;h3&gt;
  
  
  Software Product
&lt;/h3&gt;

&lt;p&gt;When the model is completed, the code generator creates a complete starting project that reflects our design — so we can focus on implementing business logic. When it comes to software maintenance, you can change the design and let the generator propagate changes to your codebase. This process of working with a software model and using a code generator allows for rapid prototyping, easier software maintenance and gives you better documentation of your product.&lt;/p&gt;

&lt;p&gt;The question that quickly arises when you start working with code generators is: how to synchronize changes that you introduce in the code with the model? Our answer to that question is simple: don’t do that. The model is a set of abstractions and it should be kept separate from implementation.&lt;/p&gt;

&lt;p&gt;This one-way transformation is typically referred to as the “model first” approach, because changes flow from the model to the code and never in the other direction.&lt;/p&gt;

&lt;p&gt;On the other hand, we still want to be able to modify the generated Java code. For that purpose, we rely on preserved sections within Java source files, that keep custom changes intact through multiple code generations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Development Team Now Has a Visual Map
&lt;/h3&gt;

&lt;p&gt;Team development and communication are often underestimated topics in the everyday hectic run towards achieving results.&lt;/p&gt;

&lt;p&gt;Using code generation brings the UML model to the heart of the software development process. The UML model of the product becomes a visual map that evolves as the work progresses. Having this map, different team members can understand the software better and have focused discussions. Onboarding of new team members is now much faster: instead of reading lines and lines of code, they rely upon a live map that communicates backbone ideas without implementation specific details.&lt;/p&gt;

&lt;p&gt;This visual software development technique changes the traditional responsibilities of the team members, promoting mutual understanding of the domain and improving team cohesion. When using model-driven development, the role of software developer comes closer to the role of a business analyst. On the other hand, a business analyst clearly understands how the software is being built and the relationships between domain concepts. Finally, QA engineers have a better understanding of the application, and all team members speak the same language.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;While it’s possible to use code generation and modeling to automate parts of software development, we do not see this often in practice — either due to lack of awareness or lack of resources to invest in reviewing and researching new ways of working. If the ideas expressed in this article get you interested in model-driven development, there are several ways to go further.&lt;/p&gt;

&lt;p&gt;There are multiple providers of low-code development solutions. Mendix is one of them and has a comprehensive &lt;a href="https://www.mendix.com/low-code-guide/"&gt;guide to low code development.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The best open-source example of this category of products is &lt;a href="https://www.jhipster.tech/"&gt;JHipster&lt;/a&gt;, a project that has been embraced by thousands of developers worldwide. The JHipster core team managed to connect experts from different areas of software development to make this amazing application generator.&lt;/p&gt;

&lt;p&gt;Our own endeavor is in extending StarUML, our preferred tool for software modeling, with a &lt;a href="https://archetypesoftware.com"&gt;plugin for code generation&lt;/a&gt; — this is the plugin used in the example in this post.&lt;/p&gt;

&lt;p&gt;Finally, no matter which tools and methodologies you use, software development is a people business and as such, it has many different sides that are difficult to measure and manage. Model-driven development cannot replace the lack of quality requirements, lack of empathy within the team, or lack of organizational culture in general. It complements agile development methodologies but does not replace them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Author bio:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;Aleksandar Radulović is a software developer and architect who developed&lt;/em&gt; &lt;a href="https://www.archetypesoftware.com/"&gt;&lt;em&gt;Rebel&lt;/em&gt;&lt;/a&gt;&lt;em&gt;, a code generator plugin for StarUML. When he’s not developing software, he enjoys reading classics like Shakespeare or dancing the tango. Connect with him on&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/aleksandar-radulovic/"&gt;&lt;em&gt;LinkedIn&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Love sharing your technical expertise? Learn more about the &lt;a href="https://bitbucket.org/product/write?utm_source=blog&amp;amp;utm_medium=post&amp;amp;utm_campaign=bottom-post"&gt;Bitbucket writing program&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://bitbucket.org/blog/software-development-with-uml-and-modern-java"&gt;&lt;em&gt;https://bitbucket.org&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on June 5, 2019.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>java</category>
      <category>softwaredevelopment</category>
      <category>programming</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>How Building An IDE Extension Changed The Way We Ship Code</title>
      <dc:creator>Ash Moosa</dc:creator>
      <pubDate>Wed, 15 May 2019 21:53:29 +0000</pubDate>
      <link>https://dev.to/atlassian/how-building-an-ide-extension-changed-the-way-we-ship-code-20oo</link>
      <guid>https://dev.to/atlassian/how-building-an-ide-extension-changed-the-way-we-ship-code-20oo</guid>
      <description>&lt;p&gt;When our team set out on the adventure of building the &lt;a href="https://marketplace.visualstudio.com/items?itemName=Atlassian.atlascode"&gt;Atlassian for VS Code extension&lt;/a&gt;, our mission was simple: create an MVP to test if using Bitbucket Cloud and Jira Software Cloud features inside of VS Code would make a better developer experience.&lt;/p&gt;

&lt;p&gt;To begin, we did what we all knew: scheduled planning meetings, had daily stand-ups, set up a Slack channel for all of the discussion that happens between meetings, tried to guess at release dates, and scheduled retros to discuss what went wrong and what went well.&lt;/p&gt;

&lt;p&gt;Over time we discovered that through the use of our own tool, we tackled a notoriously difficult problem: changing developer behavior in ways that make the team more productive while easing the administrative tasks that usually slow them down.&lt;/p&gt;

&lt;p&gt;Dogfooding our own extension has helped our team develop a better shared understanding of the code base, and integrate the non-coding tasks required for healthy project management into the dev-loop so tightly that it becomes the preferred way for developers to work.&lt;/p&gt;

&lt;p&gt;We’ve realized that if you build tooling that’s fun to use, this will all happen organically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Iterating in the Dark
&lt;/h3&gt;

&lt;p&gt;The thing about silos is that they’re usually dark inside.&lt;/p&gt;

&lt;p&gt;When we first started building our VS Code extension for Bitbucket and Jira users, we were working in a very familiar style where we each went into our “coding caves” for long periods of time and every once in a while came up for air to check the Slack channel we had set up.&lt;/p&gt;

&lt;p&gt;It was almost guaranteed that there was a message in the channel begging for someone to review a PR from an hour or two ago, if not more. Then, like any good team member, we’d stop what we were doing, open up Bitbucket and review the code in our browsers. We might add a comment here and there (to be forgotten by its author as soon as it was submitted), then certainly take a coffee break, and then come back to crawl into our caves again.&lt;/p&gt;

&lt;p&gt;Rinse, Repeat. The problem was, it still felt like separate people working on separate areas of the code base without a lot of cross-functional knowledge of how things worked. We were basically in the dark.&lt;/p&gt;

&lt;p&gt;To shed some light (pun intended) on some of the issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nobody likes sitting in a room for hours with a Jira board on the screen while somebody frantically types, trying to enter Jira issues while heated discussions are happening.&lt;/li&gt;
&lt;li&gt;Once coding commences, developers tend to only take the tasks within their comfortable domain knowledge (silos), and cross-functional learning is lost&lt;/li&gt;
&lt;li&gt;The lack of cross-functional code base knowledge means bugs and ideas about other parts of the code base are left to the developer who works on that area, without collaborative input&lt;/li&gt;
&lt;li&gt;Developers hate context switching, which in turn means:
&lt;ul&gt;
&lt;li&gt;bugs or features may be discovered, but new Jira issues don’t get entered for them&lt;/li&gt;
&lt;li&gt;pull requests are submitted, but nagging has to happen to get them reviewed&lt;/li&gt;
&lt;li&gt;comments may be entered on a pull request, but replies are either not followed up on or take days to get new responses&lt;/li&gt;
&lt;li&gt;updating Jira issues to reflect the state of the project is a chore that happens through nagging, thus leaving Jira in a state of lies&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Ensuring Jira units of work are linked to code in every branch and every commit is inconsistent at best and doesn’t happen at all in the worst cases&lt;/li&gt;
&lt;li&gt;There’s less collaboration when developers feel like someone else is the expert&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Out of the Caves
&lt;/h3&gt;

&lt;p&gt;One day, it happened. We had finally hit the point where we could authenticate against both Jira and Bitbucket and we had the essential features in place to start dogfooding our extension. That first developer exclaimed with triumph “Hey, I just approved that PR with our extension!” and we knew feasibility was no longer our mission…&lt;/p&gt;

&lt;p&gt;The new roadmap, almost entirely led by engineers with little oversight and lots of dogfooding, emerged: we would take a selfish approach and focus on making our own lives easier inside of VS Code, knowing that if we got the Jira and Bitbucket experience (mostly) right, we would have a positive impact on other development teams outside of Atlassian. If we have pain points in our development cycle, chances are other teams do as well.&lt;/p&gt;

&lt;p&gt;To illustrate some of the successes our team has enjoyed, let’s take a look at our new development practices which integrate the features of our extension to blur the lines between coding, Jira-ing and Bitbucket-ing. And what better place to start than… the middle.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Shared Cave with Really Nice Lighting
&lt;/h3&gt;


&lt;h4&gt;
  
  
  Jira Units of Work: Never Lose Track of Discoveries
&lt;/h4&gt;

&lt;p&gt;OK, let’s say I’m a developer… and I’m working on some code when I notice a bunch of comments littered with Jira issue keys (we’ll get to why that is in a bit). It’s a bit frustrating to see the keys and a small comment but not have access to the issue details. It’s even more frustrating to hop out of my IDE to paste the key into the Jira web interface just to see the details, and so I add a little TODO comment:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// TODO: add a "quick view" when hovering over Jira issue keys
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Even though there are various extensions to list/manage TODO comments, this is going to get lost if I don’t remember to bring it up at the next planning meeting, or stop what I’m doing, jump into the Jira web interface and create an issue for it.&lt;/p&gt;

&lt;p&gt;Using the “Create Jira Issue” code link that appears for customizable comment prefixes (TODO, BUG, FIXME, ISSUE, etc.), I can simply click on the link, create the issue and move along with my coding.&lt;/p&gt;

&lt;p&gt;To top it all off, the extension will update the comment and add the newly created issue key as a prefix so other developers can reference it.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// TODO: VSCODE-12324 - add a "quick view" when hovering over Jira issue keys
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5L_inGJg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2Auhoy2va6RncsmUal.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5L_inGJg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2Auhoy2va6RncsmUal.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Turning Ideas into Code: Starting Work
&lt;/h4&gt;

&lt;p&gt;Let’s say there’s another developer on the team that just finished a task and is looking for something to pick up. She sees the Jira issue we made above in her Jira Issue Tree within VS Code and opens it up. After reading through the details she decides she wants to work on it.&lt;/p&gt;

&lt;p&gt;Following our best coding practices, getting started on a new task is a little more complicated than just coding away. She needs to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a branch ensuring that the Jira issue key is in the branch name&lt;/li&gt;
&lt;li&gt;Link the local branch with a new upstream branch on Bitbucket&lt;/li&gt;
&lt;li&gt;Assign the issue to herself&lt;/li&gt;
&lt;li&gt;Transition the issue to an “In Progress” state so other developers and “project people” know what’s in flight&lt;/li&gt;
&lt;li&gt;Finally checkout the local branch and start coding&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a small list of things that need to be done, but after doing it a few (hundred) times, most developers tend to miss a few of these steps.&lt;/p&gt;

&lt;p&gt;Using the “Start Work On Issue” button provided on the Jira Details screen within VS Code, this can all happen in a single step.&lt;/p&gt;
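&lt;p&gt;The branch-creation step, for example, can derive a consistent branch name from the issue itself. A hypothetical TypeScript sketch (the naming convention is illustrative, not the extension’s exact behavior):&lt;/p&gt;

```typescript
// Build a branch name that embeds the Jira issue key, so Bitbucket and
// the extension can link the branch back to the issue automatically.
function branchNameFor(issueKey: string, summary: string): string {
  const slug = summary
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse anything non-alphanumeric
    .replace(/^-|-$/g, "");      // trim leading/trailing hyphens
  return `feature/${issueKey}-${slug}`;
}

// branchNameFor("VSCODE-12324", "Add a quick view")
// yields "feature/VSCODE-12324-add-a-quick-view"
```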

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OTAVSQBC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AzAtJWURQtBj3shuE.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OTAVSQBC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AzAtJWURQtBj3shuE.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Pro Tip: Make sure all commits contain the issue key&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So what’s this about putting issue keys in branch names?&lt;/p&gt;

&lt;p&gt;The idea is simple: if you link your Jira instance in your Bitbucket repository settings, Bitbucket will look for issue keys in your branch names and commit/PR comments and be able to link them in the Bitbucket UI, among other things.&lt;/p&gt;

&lt;p&gt;On top of the Bitbucket UI, the Atlassian for VS Code extension checks those same places for issue keys and “magically” gives you lists of issues related to your PRs within various UIs. You can now easily get a better picture of issue/code relationships right within VS Code.&lt;/p&gt;
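&lt;p&gt;That key matching boils down to a small regular expression: a Jira key is an uppercase project key, a hyphen and an issue number. A rough TypeScript sketch of the scan (the exact rules Bitbucket applies may differ):&lt;/p&gt;

```typescript
// Jira issue keys look like PROJECT-123.
const ISSUE_KEY_PATTERN = /[A-Z][A-Z0-9]+-\d+/g;

// Scan any text (branch name, commit message, PR description) and
// return the distinct issue keys it mentions.
function extractIssueKeys(text: string): string[] {
  return Array.from(new Set(text.match(ISSUE_KEY_PATTERN) ?? []));
}

// extractIssueKeys("feature/VSCODE-1234-issue-hover") yields ["VSCODE-1234"]
```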

&lt;p&gt;So how do you ensure every commit has a Jira issue key in the comment?&lt;/p&gt;

&lt;p&gt;Although it lives outside of the VS Code extension, we use a small Git script that automates the process by finding the issue key in your branch name and prepending it to all commit comments on that branch.&lt;/p&gt;

&lt;p&gt;Developers never have to type in the issue key in a comment as long as the branch contains the key.&lt;/p&gt;

&lt;p&gt;You can grab this nifty script and instructions from our &lt;a href="https://bitbucket.org/snippets/atlassian/qedp7d/prepare-commit-with-jira-issue"&gt;Bitbucket Snippet.&lt;/a&gt;&lt;/p&gt;
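&lt;p&gt;The heart of that script is a simple transform: find the issue key in the branch name and prepend it to the commit message unless a key is already present. A TypeScript sketch of the logic (the actual snippet is a Git hook; the function name is illustrative):&lt;/p&gt;

```typescript
const ISSUE_KEY = /[A-Z][A-Z0-9]+-\d+/;

// Prepend the branch's issue key to a commit message, unless the
// message already contains a key or the branch has none.
function prefixCommitMessage(branch: string, message: string): string {
  const key = branch.match(ISSUE_KEY)?.[0];
  if (!key || ISSUE_KEY.test(message)) {
    return message;
  }
  return `${key} ${message}`;
}

// prefixCommitMessage("feature/VSCODE-12-hover", "add hover")
// yields "VSCODE-12 add hover"
```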




&lt;p&gt;So our developer feels like her feature is complete and now needs to create a pull request. This is going to serve multiple purposes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Her teammates will do a code review to catch anything she might have missed&lt;/li&gt;
&lt;li&gt;Her teammates will have the opportunity to test out the new feature&lt;/li&gt;
&lt;li&gt;The entire team can discuss the approach and make any changes as needed&lt;/li&gt;
&lt;li&gt;The team has a chance to “sign off” on the pull request by approving it&lt;/li&gt;
&lt;li&gt;Finally, the pull request can be merged&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s take this step-by-step, first by creating a pull request.&lt;/p&gt;

&lt;p&gt;Creating a pull request traditionally meant hopping out of your IDE, navigating the Bitbucket UI, and creating the pull request. With our extension, this can now be done right within VS Code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0LCtojol--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AwKAwPeyu0f0aRMZ7.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0LCtojol--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AwKAwPeyu0f0aRMZ7.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A pull request has been created and now is typically when our developer takes (a much deserved) coffee break and when she gets back, checks to see if anyone’s looked at it. Usually the answer is no and so she needs to hop into Slack and gently request that her teammates take a look at the PR. Even this requires some luck that the other developers are in one of their “come up for air” moments.&lt;/p&gt;

&lt;p&gt;In our extension we wanted to remove as much nagging and waiting as possible, so when she created the pull request, her teammates got a small notification popup and her pull request showed up in all of their “Pull Request Tree” views, right within VS Code.&lt;/p&gt;

&lt;p&gt;Now her teammates not only know that there’s something to review, but they can see the detailed summary of the pull request, see any and all Jira issues that are related to it, and go through the individual file diffs right within their IDEs… you know, where developers like to look at code in whatever crazy theme they have decided to use that week.&lt;/p&gt;

&lt;p&gt;As they go through the details and the diffs, they can quickly add comments on a line-by-line basis, and since developers are notified as new comments are added within VS Code, the time spent waiting for replies is greatly reduced, effectively turning pull requests into a meaningful communication tool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_8abd8RB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AK_jR5ZMjBuih2-wY.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_8abd8RB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AK_jR5ZMjBuih2-wY.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Since using these pull request features, our team has enjoyed a huge acceleration in knowledge sharing, which in turn has helped break down our silos and turn each team member into a product expert instead of an expert of a smaller domain within the code base.&lt;/p&gt;

&lt;p&gt;We all have a deeper understanding of our entire code base and have also become a lot more collaborative simply through greater communication.&lt;/p&gt;




&lt;p&gt;Once each teammate finishes reviewing the pull request, they can simply click the approve button from the details screen within VS Code and go back to their coding. Similarly, our developer that submitted the PR can simply merge right from the details screen as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aNeXQzPf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AhjFJF4wMJ0FC25pl.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aNeXQzPf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/800/0%2AhjFJF4wMJ0FC25pl.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once merged, it’s time for a new task. Our developer can either go through the list of issues in the tree view like before, or now she can simply hover over an issue key she’s found in the code and use the new “Issue Quick View” feature which will allow her to get essential issue details, open up the full detail view and optionally start work on the issue.&lt;/p&gt;

&lt;h3&gt;
  
  
  Versioning is a Result, Not a Plan
&lt;/h3&gt;

&lt;p&gt;Our team has come a long way since deciding to test out using Bitbucket and Jira features inside of VS Code. We set out to create a useful tool to help ease some of the pain points within common coding cycles. What we ended up with was exactly that. What we didn’t plan on, was that through the use of our own tool, we would organically change the way we worked.&lt;/p&gt;

&lt;p&gt;The biggest change we’ve seen so far is that we no longer go into a room armed with a version number for the “next release”, give it a date, and pack it with things we think we can get done. Instead, we code, we discover bugs and features along the way, we are more collaborative in our development, and everyone now feels ownership of the entire code base and is eager to work on any part of it.&lt;/p&gt;

&lt;p&gt;Where we used to do a release at some scheduled time period, we now do many more releases as useful features are completed. We don’t wait to ship anything because “1.x is supposed to release at the end of the month”. In fact, we only have a single version we ever work towards, and its label is vNext. At any point in time, we can say “it’s time,” and minutes later we release and assign that snapshot in time an appropriate label.&lt;/p&gt;

&lt;h3&gt;
  
  
  When there’s no deadline, you’re never done
&lt;/h3&gt;

&lt;p&gt;Our team is moving faster than ever and we’ve already seen vast improvements to our coding cycles.&lt;/p&gt;

&lt;p&gt;By no means is our extension a “silver bullet” and as they say “results may vary”. Our team is passionate (and selfish) about continuously finding ways to use new tooling and new workflows to make every team the best they can be.&lt;/p&gt;

&lt;p&gt;We’re excited to learn how teams outside of Atlassian make use of these features and how we can improve.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=Atlassian.atlascode"&gt;Install the extension today&lt;/a&gt; and let us know what you think!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Written by Jonathan Doklovic: Jonathan has been working at Atlassian for almost a decade and is currently a Principal Developer on the Product Integrations team. He has worked on many Atlassian products as well as the core plugin system and Atlassian Connect.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://bitbucket.org/blog/how-atlassian-for-vscode-changed-the-way-we-ship-code"&gt;&lt;em&gt;https://bitbucket.org&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on May 15, 2019.&lt;/em&gt;&lt;/p&gt;




</description>
      <category>development</category>
      <category>programming</category>
      <category>webdev</category>
      <category>coding</category>
    </item>
    <item>
      <title>LAMP vs. MEAN: Which stack is right for you?</title>
      <dc:creator>Ash Moosa</dc:creator>
      <pubDate>Thu, 02 May 2019 16:57:51 +0000</pubDate>
      <link>https://dev.to/atlassian/lamp-vs-mean-which-stack-is-right-for-you-4he2</link>
      <guid>https://dev.to/atlassian/lamp-vs-mean-which-stack-is-right-for-you-4he2</guid>
      <description>&lt;p&gt;A web stack is a collection of software or technologies that are used to build a web application. Choices are plenty, but picking one can be hard.&lt;/p&gt;

&lt;p&gt;When chatting with co-workers, developers or customers, suggestions for what technologies and stacks to use couldn’t be more different. When I started off as a web developer, I went the usual way at that time: learning HTML &amp;amp; CSS, exploring some PHP — and of course MySQL. That was, if you were not using Java or ASP.NET, the technology stack of that time. Whether you wanted to host a blog, a bulletin board or become an image hoster — you would more often than not need these things: Linux, Apache, MySQL and PHP (LAMP).&lt;/p&gt;

&lt;p&gt;Here is a detailed overview of LAMP and the relatively new MEAN stack, currently the two most popular open source web stacks, along with a brief look at a few other stacks. Whichever stack you choose, &lt;a href="http://bitbucket.org/product"&gt;Bitbucket&lt;/a&gt; works with them all.&lt;/p&gt;

&lt;h3&gt;
  
  
  LAMP
&lt;/h3&gt;

&lt;p&gt;LAMP delivers a strong platform for developing and hosting large, performant web applications. With the biggest and oldest community, countless libraries and tools, you get great support and will find developers quite easily.&lt;/p&gt;

&lt;p&gt;Its integral components are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;L&lt;/strong&gt;inux (OS)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A&lt;/strong&gt;pache (Webserver)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;M&lt;/strong&gt;ySQL (Data persistence)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;P&lt;/strong&gt;HP (Programming language)&lt;/p&gt;

&lt;p&gt;There are also some derivatives of this stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LAMP (with Perl or Python instead of PHP)&lt;/li&gt;
&lt;li&gt;LAMP (with MongoDB instead of MySQL)&lt;/li&gt;
&lt;li&gt;WAMP (Windows as OS)&lt;/li&gt;
&lt;li&gt;MAMP (Mac OS X as OS)&lt;/li&gt;
&lt;li&gt;XAMPP (cross-platform: any OS + PHP + Perl + an FTP server)&lt;/li&gt;
&lt;li&gt;LAPP (PostgreSQL as database)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Pros:
&lt;/h4&gt;

&lt;p&gt;LAMP is kind of the dinosaur of web development, used by hundreds of thousands of companies and therefore maintained and supported very well. With endless modules, libraries and add-ons available you can adapt it to your company’s needs.&lt;/p&gt;

&lt;p&gt;Being Linux based, you will find help for any topic in the large open source community. MySQL is a very reliable and scalable solution. PHP is in version 7 and is also supported by a mature and big community. PHP is also very fast and integrates well with the rest of the stack.&lt;/p&gt;

&lt;p&gt;You can control the server and decide which versions and software you install, so you don’t have to depend on the client’s browser. It’s best if you have lots of server-side tasks.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cons:
&lt;/h4&gt;

&lt;p&gt;Because it’s easy to learn, there are a lot of developers out there who are not following best practices and building garbage apps. Starting with PHP is easy, but mastering it is hard. The same is true for security in these PHP apps. Some would also describe it as a scripting language instead of a real programming language because it’s not strongly typed and not pre-compiled. I’d recommend diving deeper into the pros and cons of PHP, Python or Perl.&lt;/p&gt;

&lt;p&gt;As for MySQL, other options are becoming more mature. NoSQL databases like MongoDB are popular among enterprises today due to their scalability. Plus, pure JavaScript stacks like MEAN gain more traction every year, and new developers might not be interested in learning all of LAMP’s skills.&lt;/p&gt;

&lt;h3&gt;
  
  
  MEAN
&lt;/h3&gt;

&lt;p&gt;Compared to LAMP, the MEAN stack is fairly new. One of its biggest differences is that MEAN is not dependent on a specific operating system — Node.js takes care of server-side execution. The MEAN Stack is especially recommended for JavaScript enthusiasts — as it uses JavaScript at all levels. This also makes it preferred by new developers.&lt;/p&gt;

&lt;p&gt;MongoDB is a popular and flexible document-based NoSQL database, in contrast to MySQL’s relational database system. Angular helps build progressive, modern web apps.&lt;/p&gt;

&lt;p&gt;Its components are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;M&lt;/strong&gt;ongoDB (Data persistence)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;E&lt;/strong&gt;xpress.js (server-side application framework)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A&lt;/strong&gt;ngular.js (client-side application framework)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;N&lt;/strong&gt;ode.js (server-side environment)&lt;/p&gt;

&lt;p&gt;This stack has some derivatives too:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MERN (React instead of Angular)&lt;/li&gt;
&lt;li&gt;MEEN (Ember.js instead of Angular)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Pros:
&lt;/h4&gt;

&lt;p&gt;Using JavaScript as the primary programming language is a huge advantage. Everything can be set up quickly and done in JS, which makes it much easier to find developers, and LAMP developers typically know JavaScript as well. MongoDB is very popular for its easy, schemaless data persistence and is faster than MySQL if you have a lot of read requests. The fact that Angular is maintained by Google is also a big plus: it receives new releases and features on a constant basis. Another huge advantage is the ability to easily build mobile or desktop apps, for example with Ionic. Code and components can easily be reused or added.&lt;/p&gt;
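&lt;p&gt;The code-reuse point is concrete: with JavaScript at every layer, the same module can validate input in the Angular client for instant feedback and again in the Express route handler as the source of truth. A hypothetical shared validator (all names and rules are illustrative):&lt;/p&gt;

```typescript
// A single validation module, importable by both the Angular client
// and the Express server, so the rules never drift apart.
interface SignupForm {
  email: string;
  password: string;
}

function validateSignup(form: SignupForm): string[] {
  const errors: string[] = [];
  const emailOk = /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(form.email);
  if (!emailOk) {
    errors.push("invalid email address");
  }
  const passwordOk = form.password.length > 7;
  if (!passwordOk) {
    errors.push("password must be at least 8 characters");
  }
  return errors;
}
```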

&lt;h4&gt;
  
  
  Cons:
&lt;/h4&gt;

&lt;p&gt;Like all new technologies, MEAN’s glamour is creating some hype. Developers fall for this hype and build their apps in JavaScript just because it’s trendy. Many of these libraries and frameworks are quite new, and new versions get released quickly, so maintaining your app can become quite a hassle. Since many technologies disappear after a few years, sustainability can become an issue. It’s also harder to maintain a clean code base and follow best practices as your app grows. Further, you have to rely on the client and the client’s available technologies: if you are targeting IE users, embedded systems, or low-end PCs, there may be usability issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  A few other stacks to consider:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;WISA&lt;/strong&gt; &lt;em&gt;Windows Server / IIS / Microsoft SQL Server / ASP.NET&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Not open source, but all components are from Microsoft, so it should work seamlessly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LAMP (With MongoDB)&lt;/strong&gt; &lt;em&gt;Linux, Apache, MongoDB, PHP&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;NoSQL Databases like MongoDB can also be used in a classic LAMP environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ruby Stack&lt;/strong&gt; &lt;em&gt;Ruby / Ruby on Rails / RVM (Ruby Version Manager) / SQLite&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This stack is losing popularity. Ruby on Rails was once a widely used framework, and the stack’s popularity has faded along with it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Java+Spring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Preferred by large enterprises and avoided by indie developers for its complexity, Spring offers a full-stack framework written in Java.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Django Stack&lt;/strong&gt; &lt;em&gt;Python / Django / Apache / MySQL&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The Django framework is loved by Python developers, delivers performance and is often referred to as an easy to learn stack.&lt;/p&gt;

&lt;h3&gt;
  
  
  Which stack is used more frequently?
&lt;/h3&gt;

&lt;p&gt;It’s hard to compare the popularity of stacks, but you can use &lt;a href="https://trends.google.com/trends/"&gt;Google Trends&lt;/a&gt; to compare programming languages and get a feel for what people are searching for. As the chart below shows, JavaScript is searched for more than PHP right now.&lt;/p&gt;

&lt;p&gt;I’d recommend checking development trends using Google’s trend tool from time to time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cdn-images-1.medium.com/max/666/1*0CzIomQQWlccHamS7Rxrlg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://cdn-images-1.medium.com/max/666/1*0CzIomQQWlccHamS7Rxrlg.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’d also suggest diving deeper into databases (SQL vs. NoSQL) to gain a basic understanding of the two concepts and make an informed choice.&lt;/p&gt;
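&lt;p&gt;To make the contrast concrete: a relational design normalizes an order into rows joined by foreign keys, while a document store keeps the whole order in one nested document. A small illustrative sketch (field names are made up):&lt;/p&gt;

```typescript
// Relational (SQL) shape: normalized rows, joined by foreign keys.
const orders = [{ id: 1, customerId: 7 }];
const orderItems = [
  { orderId: 1, sku: "tea", qty: 2 },
  { orderId: 1, sku: "mug", qty: 1 },
];

// Document (NoSQL, e.g. MongoDB) shape: one self-contained document.
const orderDoc = {
  _id: 1,
  customer: { id: 7 },
  items: [
    { sku: "tea", qty: 2 },
    { sku: "mug", qty: 1 },
  ],
};

// A read that needs a JOIN in SQL is a single document fetch here.
const skus = orderDoc.items.map((item) => item.sku); // ["tea", "mug"]
```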

&lt;h3&gt;
  
  
  So, how do you pick a stack?
&lt;/h3&gt;

&lt;p&gt;Picking a stack depends on many factors. If you are a developer or project owner, here are a few questions to ask yourself.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What kind of web-application am I planning to create?&lt;/li&gt;
&lt;li&gt;What is its expected lifetime?&lt;/li&gt;
&lt;li&gt;What technologies are available at my customer’s/client’s/cat’s/… infrastructure?&lt;/li&gt;
&lt;li&gt;How easy is it to find developers to maintain the application?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let me give you an example. Let’s say you have a website for listing used cars. It was developed using a LAMP stack a while ago. But your website lacks a back-end for used car dealers, where they can manage their listings on your website. Depending on your company size, time and budget, you have to ask yourself all the above questions. If you have a small team, it might make sense to extend your existing application in the LAMP environment, since your developers know the ecosystem and it’ll be much faster. If you have time and resources to spare, you could take another approach and extend your existing LAMP application with an API. Later, your team could focus on developing a small, standalone (M)EAN application that can easily be maintained, improved with new features and released on a much faster cycle.&lt;/p&gt;

&lt;p&gt;Another example: you want to build a newsletter platform, where people can sign up, upload mailing lists, compose mailings and so on. You could of course use MEAN, but you have large-scale and high-traffic potential. It may make more sense to use a LAMP stack as your foundation, since Linux, MySQL and Apache provide a stable, scalable environment with lots of community support for any conceivable problem. You will also have lots of server-side tasks and cron jobs and will encounter mailing topics like SMTP and so on. I would recommend a Linux environment customized to your needs in this case.&lt;/p&gt;

&lt;p&gt;Here is a summary of things to know/consider.&lt;/p&gt;

&lt;p&gt;MEAN&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single code base (JavaScript)&lt;/li&gt;
&lt;li&gt;Popular for modern web apps and hybrid apps&lt;/li&gt;
&lt;li&gt;Supported by large companies like Google&lt;/li&gt;
&lt;li&gt;Better for apps where a lot of the logic happens on the client’s side&lt;/li&gt;
&lt;li&gt;Harder to maintain long term due to the rapidly evolving JavaScript ecosystem&lt;/li&gt;
&lt;li&gt;Best for progressive web apps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LAMP&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Better for large applications&lt;/li&gt;
&lt;li&gt;More mature, huge community&lt;/li&gt;
&lt;li&gt;Well-established application frameworks like Symfony, Zend, Laravel&lt;/li&gt;
&lt;li&gt;Easier to follow standards and easier to keep code clean&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are new to programming and web development, ask yourself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is the easiest to learn for you and your team?&lt;/li&gt;
&lt;li&gt;What technologies are trending and which will win in the long run?&lt;/li&gt;
&lt;li&gt;If open source, could you imagine contributing to the project?&lt;/li&gt;
&lt;li&gt;Which technologies will serve you personally in the long term?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A great resource for finding answers about JavaScript technologies is the &lt;a href="https://stateofjs.com/"&gt;StateOfJS project&lt;/a&gt;, which conducts a yearly survey asking thousands of developers about their opinions on current technologies and salaries.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is a guest post for Bitbucket by Christoph Heike — “I’ve been developing web applications for over 10 years and currently run a web-development agency in Bonn, Germany. I’m also involved in an Amazon-based tech startup. My goal is to always deliver clean, sustainable and high-performance software solutions. Connect with me on&lt;/em&gt; &lt;a href="https://de.linkedin.com/in/christoph-heike-37ab626a"&gt;&lt;em&gt;LinkedIn&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Love sharing your technical expertise? Learn more about the &lt;a href="https://bitbucket.org/product/write?utm_source=blog&amp;amp;utm_medium=post&amp;amp;utm_campaign=bottom-post"&gt;Bitbucket writing program&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://bitbucket.org/blog/lamp-vs-mean-which-stack-is-right-for-you"&gt;&lt;em&gt;https://bitbucket.org&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on May 2, 2019.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>appdevelopment</category>
      <category>coding</category>
      <category>websitedevelopment</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Deploying an Angular app on a Google VM Using Bitbucket Pipelines</title>
      <dc:creator>Ash Moosa</dc:creator>
      <pubDate>Tue, 30 Apr 2019 17:27:56 +0000</pubDate>
      <link>https://dev.to/atlassian/deploying-an-angular-app-on-a-google-vm-using-bitbucket-pipelines-550b</link>
      <guid>https://dev.to/atlassian/deploying-an-angular-app-on-a-google-vm-using-bitbucket-pipelines-550b</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a guest post written for&lt;/em&gt; &lt;a href="https://bitbucket.org/product" rel="noopener noreferrer"&gt;&lt;em&gt;Bitbucket&lt;/em&gt;&lt;/a&gt; &lt;em&gt;by&lt;/em&gt; &lt;a href="https://twitter.com/surenkonathala" rel="noopener noreferrer"&gt;&lt;em&gt;Suren Konathala&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Angular is one of the most widely used JavaScript frameworks. But while builds are easy, developers face issues when configuring deployments and setting up CI/CD pipelines. This post outlines the steps required to deploy an Angular application to a Google VM using Bitbucket Pipelines.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are Pipelines?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://bitbucket.org/product/features/pipelines" rel="noopener noreferrer"&gt;Bitbucket Pipelines&lt;/a&gt; allows developers to configure continuous delivery (in the cloud) of source files to test/production servers. These pipelines are configured to connect to the production server using YML scripts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why I used Pipelines
&lt;/h3&gt;

&lt;p&gt;For users to be able to access the application, the source code needs to be deployed to a server. The server from which the web application is rendered/delivered to users is called a &lt;strong&gt;Production&lt;/strong&gt; server. Before the application reaches the production server, it goes through many iterations of development and testing. These iterations are usually deployed to a &lt;strong&gt;Development&lt;/strong&gt; server or a &lt;strong&gt;Staging&lt;/strong&gt; server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AMSGZbUT0CptVfWhK.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AMSGZbUT0CptVfWhK.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For an application to be deployed to each of the above servers, there are several steps involved that can get cumbersome.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy the code files to the server&lt;/li&gt;
&lt;li&gt;Run the build and deploy scripts&lt;/li&gt;
&lt;li&gt;Repeat the same on each server. Sometimes teams have multiple servers for each stage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Okay! The above has been automated to some extent by tools like Jenkins, but the drawback is that developers/admins have to install additional software on their servers, and learn to use and administer it.&lt;/p&gt;

&lt;p&gt;Pipelines simplifies this process and automates the entire build and deploy cycle, also known as CI/CD (Continuous Integration / Continuous Deployment). The best part of Bitbucket Pipelines is that applications can be built and deployed directly from the Bitbucket repository to any destination server.&lt;/p&gt;

&lt;p&gt;The entire copy, build and deploy process can be defined using simple YAML-based scripts without the need for any additional software. All we need to do is pick the Docker image (like Node.js, Java, etc.) for the pipeline to use to build the project and select the frequency (e.g. manual, or automatic when files are updated in the source repository). This saves a lot of time and resources for teams and organizations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tutorial: Configure an Angular Project on Bitbucket
&lt;/h3&gt;

&lt;p&gt;This tutorial covers how to deploy an Angular-based web application to a Google Cloud Virtual Machine (VM). The source code for the application is in a Bitbucket repository, and the VM is connected to using SSH keys.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Pre-requisites&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;External server (or VM) with private and public SSH keys. This will be the hosting machine for the website or web application.&lt;/li&gt;
&lt;li&gt;Repository on &lt;a href="https://bitbucket.org/account/signup/" rel="noopener noreferrer"&gt;Bitbucket&lt;/a&gt; with the project source files. These will be used to build &amp;amp; deploy to the server.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 1: Set up SSH Keys
&lt;/h3&gt;

&lt;p&gt;Under Bitbucket &amp;gt; Project source repository &amp;gt; Settings &amp;gt; Pipelines &amp;gt; SSH Keys&lt;/p&gt;

&lt;p&gt;Add the private and public keys. You need to get these from the external server you want to connect to.&lt;/p&gt;

&lt;p&gt;Add known hosts. This is the IP address of the external server you want to push the code to (in our example, a VM on Google Cloud).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AowrA1vugumgrEhoC.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AowrA1vugumgrEhoC.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Define the YAML deployment script
&lt;/h3&gt;

&lt;p&gt;Go to Project source repository &amp;gt; Pipelines &amp;gt; New pipeline and define the script. Here is an example script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sample build file # @author Suren Konathala 
# ----- 
image: node:8 
pipelines:   
  default: 
- step:      
     caches:        
     - node      
     script: # Modify the commands below to build your repository.
       - echo "$(ls -la)"        
       - npm install        
       - npm install -g @angular/cli        
       - ng build --prod        
       - echo "$(ls -la dist/)"        
       - scp -r dist/ user@34.73.227.137:/projects/commerce1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above script performs the required commands/steps to build an Angular project. Once done, it pushes/deploys the contents of the build files under the dist folder to an external server.&lt;/p&gt;

&lt;p&gt;In this example, we used an SCP command to push code to an external server. Since the SSH keys were already set up in Step 1, the pipeline can now connect to the server and copy the files over.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Run the pipeline
&lt;/h3&gt;

&lt;p&gt;Save and “run the pipeline”. You can see the running log and status of the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A180llwPX1K8hO7FK.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A180llwPX1K8hO7FK.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On successful completion, you can verify that the files were actually copied to the external server, either by testing your live application and checking that the latest updates are reflected, or by manually checking whether the files on the server were updated (no need to do this for every push).&lt;/p&gt;

&lt;h3&gt;
  
  
  Troubleshooting/Info
&lt;/h3&gt;

&lt;p&gt;To update the Node.js version used in the pipeline, change the version of the Docker node image (e.g. &lt;code&gt;node:8&lt;/code&gt;) in the YAML script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: node:8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To know what folder you are in and what files are being generated, you can use echo commands. Some examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "Starting the deployment..."
echo "$(ls -la)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;a href="https://stackoverflow.com/questions/55227229/how-can-we-ssh-to-a-google-cloud-vm-from-mac-terminal-using-public-key-generated" rel="noopener noreferrer"&gt;post&lt;/a&gt; shows how to add SSH Keys on a Linux server/VM.&lt;/p&gt;

&lt;p&gt;The source files for this project are shared &lt;a href="https://bitbucket.org/konathalasuren/angular-ionic-4-all-components-demo/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Learn more about building Angular projects &lt;a href="https://angular.io/guide/deployment" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is a guest post written for Bitbucket by Suren Konathala. “My mission is to simplify technology adoption for organizations. I’m a developer, architect, consultant and manager. I love to write and talk about technology. Connect with me on&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/ksurendra/" rel="noopener noreferrer"&gt;&lt;em&gt;LinkedIn&lt;/em&gt;&lt;/a&gt; &lt;em&gt;or&lt;/em&gt; &lt;a href="https://twitter.com/surenkonathala" rel="noopener noreferrer"&gt;&lt;em&gt;Twitter&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Love sharing your technical expertise? Learn more about the &lt;a href="https://bitbucket.org/product/write?utm_source=blog&amp;amp;utm_medium=post&amp;amp;utm_campaign=bottom-post" rel="noopener noreferrer"&gt;Bitbucket writing program&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://bitbucket.org/blog/deploying-an-angular-app-on-a-google-vm-using-bitbucket-pipelines" rel="noopener noreferrer"&gt;&lt;em&gt;https://bitbucket.org&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on April 30, 2019.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>frontenddev</category>
    </item>
    <item>
      <title>Bitbucket + Bitrise: Configuring Continuous Integration for an iOS app</title>
      <dc:creator>Ash Moosa</dc:creator>
      <pubDate>Mon, 29 Apr 2019 17:03:05 +0000</pubDate>
      <link>https://dev.to/atlassian/bitbucket-bitrise-configuring-continuous-integration-for-an-ios-app-34m1</link>
      <guid>https://dev.to/atlassian/bitbucket-bitrise-configuring-continuous-integration-for-an-ios-app-34m1</guid>
      <description>

&lt;p&gt;&lt;em&gt;This is a guest post written for Bitbucket by Ivan Parfenchuk. Ivan is an independent iOS and Ruby developer passionate about building delightful experiences. Connect with him on Twitter&lt;/em&gt; &lt;a href="https://twitter.com/gazebushka"&gt;&lt;em&gt;@gazebushka&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When iOS applications start growing, at some point it becomes essential to have a quick develop-release-test feedback loop. You can create this loop by doing everything manually, but it can be much quicker and more advanced if you use Continuous Integration (CI) tools.&lt;/p&gt;

&lt;p&gt;With a CI tool, you can build up a history of releases and quickly see which build contained what. You can run tests for every build automatically and catch some inevitable bugs. You can have consistency in your release notes. And you can streamline your release cycle by automating your checklists.&lt;/p&gt;

&lt;p&gt;Sound interesting? Let’s try to build this feedback loop using Bitbucket Webhooks, Bitrise and fastlane.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment flow
&lt;/h3&gt;

&lt;p&gt;The flow we are going to use for our Continuous Integration looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create &amp;amp; merge a Pull Request in Bitbucket&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://bitbucket.org/product"&gt;Bitbucket&lt;/a&gt; performs a “Webhook” HTTP request to Bitrise&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.bitrise.io/"&gt;Bitrise&lt;/a&gt; starts building the process and launches fastlane&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://fastlane.tools/"&gt;fastlane&lt;/a&gt; builds the app and sends it to App Store Connect&lt;/li&gt;
&lt;li&gt;App Store Connect processes the build, and it becomes available in TestFlight&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Bitbucket Webhooks and git branching model
&lt;/h3&gt;

&lt;p&gt;Each deployment starts with us creating a Pull Request.&lt;/p&gt;

&lt;p&gt;Let’s say your team uses the master git branch for code in a releasable state, and makes new releases by merging master into the release branch.&lt;/p&gt;

&lt;p&gt;The following section describes how to create a Webhook manually. However, if you use Bitrise, it can create a Webhook for you automatically, so, feel free to skip to the Bitrise section.&lt;/p&gt;

&lt;h3&gt;
  
  
  Manual Webhook configuration
&lt;/h3&gt;

&lt;p&gt;Next, let’s configure Bitbucket Webhooks so that whenever someone pushes to the release branch or merges a Pull Request into it, the Webhook is triggered.&lt;/p&gt;

&lt;p&gt;To do that, go to your Bitbucket repository and click “Settings” in the side menu. Then click “Webhooks” in the “Workflows” section, and then click “Add webhook.”&lt;/p&gt;

&lt;p&gt;Fill out Title, URL (see below), set Status to Active, and select “Choose from a full list of triggers” for Triggers. The triggers we are going to use are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Repository: Push&lt;/li&gt;
&lt;li&gt;Pull Request: Created, Updated&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To get the URL for our Webhook:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Head over to Bitrise and create a new app&lt;/li&gt;
&lt;li&gt;Open the Dashboard -&amp;gt; Your app -&amp;gt; Code tab&lt;/li&gt;
&lt;li&gt;Scroll to the Incoming Webhooks section and click Setup Manually&lt;/li&gt;
&lt;li&gt;Select “Bitbucket Webhooks” and copy the Webhook URL&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://cdn-images-1.medium.com/max/823/0*mGhsp9fuqfcOBHh6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://cdn-images-1.medium.com/max/823/0*mGhsp9fuqfcOBHh6.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Bitrise and automatic Webhook configuration
&lt;/h3&gt;

&lt;p&gt;Bitrise is a platform for Continuous Integration. You can configure different deployment “workflows” in it and have the Bitrise servers build and publish your application. Here are the steps to create a new deployment workflow for our CI setup.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/UeIekFewrKM"&gt; &lt;/iframe&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First, sign up for Bitrise, go to the Dashboard and click “Add New App”&lt;/li&gt;
&lt;li&gt;Select “Private” if you want your config and logs to stay private&lt;/li&gt;
&lt;li&gt;Select Bitbucket and connect it to your account&lt;/li&gt;
&lt;li&gt;Click “Auto-add SSH key” or configure SSH access manually&lt;/li&gt;
&lt;li&gt;Enter &lt;code&gt;release&lt;/code&gt; as branch name in the “Choose branch” step&lt;/li&gt;
&lt;li&gt;In Project build configuration select “fastlane”, and check that the Fastlane lane is set to &lt;code&gt;ios release&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Select the stack that you normally use to build your app or just the latest available Xcode/macOS and click Confirm.&lt;/li&gt;
&lt;li&gt;In the last step “Webhook setup” click “Register Webhook for me”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The last step creates a Webhook in Bitbucket, so you don’t have to do anything manually. You can head over to your Bitbucket repository and check the Webhook configuration in Settings -&amp;gt; Webhooks.&lt;/p&gt;

&lt;p&gt;In our setup, we are going to use fastlane to build and publish the app to App Store Connect.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fastlane configuration
&lt;/h3&gt;

&lt;p&gt;Fastlane is a set of tools for automating the development and release process.&lt;/p&gt;

&lt;p&gt;Follow this guide to install fastlane: &lt;a href="https://docs.fastlane.tools/getting-started/ios/setup"&gt;Setup — fastlane docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In short, you need to install Xcode development tools:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;xcode-select --install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;and then install fastlane via RubyGems&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gem install fastlane -NV
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;or via brew:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew cask install fastlane
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then open the working directory of your app in the Terminal and initialize fastlane.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /path/to/your/app 
fastlane init
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Select “3. 🚀 Automate App Store distribution”&lt;/p&gt;

&lt;p&gt;Then follow the configuration requests. Fastlane can create and configure the new App Id for you and create a sample deployment “lane.” A lane is just a collection of steps required to complete some scenario.&lt;/p&gt;

&lt;p&gt;Once the configuration is finished, let’s open the Fastfile which has been created and configure our first deployment script. Fastlane has a large set of tools for automating various processes like code signing, uploading screenshots, running tests and so on. However, we are going to start with a simple setup:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;default\_platform(:ios)  

platform :ios do   
  desc "Push a new release build to the App Store"   
  lane :release do     
    build\_app(scheme: "CITest")     
    upload\_to\_app\_store(force: true, skip\_metadata: true, skip\_screenshots: true)   
  end 
end
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The “release” lane will&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Perform a build of your project with &lt;code&gt;build_app(scheme: "CITest")&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Upload the resulting ipa file to App Store Connect with &lt;code&gt;upload_to_app_store&lt;/code&gt;. In this guide, we are skipping the fastlane metadata upload.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can test your setup by opening your project directory in Terminal and running &lt;code&gt;fastlane release&lt;/code&gt;:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /path/to/your/app/directory 
fastlane release
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Code-signing
&lt;/h3&gt;

&lt;p&gt;If you see problems with code-signing, start here: &lt;a href="https://docs.fastlane.tools/codesigning/troubleshooting"&gt;Troubleshooting — fastlane docs&lt;/a&gt;. You can use fastlane match to manage code signing, but be careful: if you have already generated Certificates and Provisioning Profiles, match can break things. However, if it’s a completely new setup or you don’t care much about the existing profiles, match is going to speed things up considerably.&lt;/p&gt;

&lt;p&gt;We’ll use fastlane match in our example:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /path/to/your/app/directory 
fastlane match development 
fastlane match adhoc 
fastlane match appstore
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then open Xcode, turn off Automatic code signing and select the provisioning profiles that match has generated.&lt;/p&gt;

&lt;p&gt;After that, we can add match to our Fastfile:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;default\_platform(:ios)

platform :ios do 
  desc "Push a new release build to the App Store" 
  lane :release do 
      match(type: "appstore", readonly: true) 
    build\_app(scheme: "CITest") 
    upload\_to\_app\_store(force: true, skip\_metadata: true, skip\_screenshots: true) 
  end 
end
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Finishing up the Bitrise setup
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://cdn-images-1.medium.com/max/1024/0*v17Nea3yHD2Jgdv6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://cdn-images-1.medium.com/max/1024/0*v17Nea3yHD2Jgdv6.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Bitrise builds the project on its servers, which don’t have any of the passwords and credentials required to code-sign and upload your app to App Store Connect. Therefore we’ll have to share some of them, specifically these two:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Login/password for App Store Connect user&lt;/li&gt;
&lt;li&gt;Password to decrypt your match repository (if you use match)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The App Store Connect user doesn’t have to be the one you use to control your apps. You can create a new user in App Store Connect, which only has access to the app you automate and has at least a Developer role.&lt;/p&gt;

&lt;p&gt;Once you have set up your new App Store Connect user, head over to Bitrise, open the Workflow Editor tab and then Secrets. Add two new secrets:&lt;/p&gt;

&lt;p&gt;ITUNES_CONNECT_USER and ITUNES_CONNECT_PASSWORD, with the App Store credentials for this new user. Also put the same password into a FASTLANE_PASSWORD secret.&lt;/p&gt;

&lt;p&gt;If you use match, add one more secret called MATCH_PASSWORD with the password you used to encrypt the match repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;That should be it. Try creating and merging a new Pull Request and see whether Bitrise triggers a new build. If everything goes well, you will see the new build in TestFlight and will be able to select it for your new iOS app version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cdn-images-1.medium.com/max/1024/0*mq3JtlN9KrLcFM_f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://cdn-images-1.medium.com/max/1024/0*mq3JtlN9KrLcFM_f.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is much more that you can do with automated deployments, such as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Running tests with fastlane scan&lt;/li&gt;
&lt;li&gt;Automated build number incrementation&lt;/li&gt;
&lt;li&gt;dSYM uploads to Crashlytics, Raygun, etc.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;However, start with simple things first. I hope this guide helps you!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Love sharing your technical expertise? Learn more about the&lt;/em&gt; &lt;a href="http://bitbucket.org/product/write"&gt;&lt;em&gt;Bitbucket writing program&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://bitbucket.org/blog/bitbucket-bitrise-configuring-continuous-integration-for-an-ios-app"&gt;&lt;em&gt;https://bitbucket.org&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on April 29, 2019.&lt;/em&gt;&lt;/p&gt;





</description>
      <category>ci</category>
      <category>programming</category>
      <category>ruby</category>
      <category>ios</category>
    </item>
    <item>
      <title>Searching DynamoDB: An indexer sidecar for Elasticsearch</title>
      <dc:creator>Ash Moosa</dc:creator>
      <pubDate>Thu, 28 Mar 2019 22:43:02 +0000</pubDate>
      <link>https://dev.to/atlassian/searching-dynamodb-an-indexer-sidecar-for-elasticsearch-24d8</link>
      <guid>https://dev.to/atlassian/searching-dynamodb-an-indexer-sidecar-for-elasticsearch-24d8</guid>
<description>&lt;p&gt;TL;DR:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DynamoDB is great, but partitioning and searching are hard&lt;/li&gt;
&lt;li&gt;We built alternator and migration-service to make life easier&lt;/li&gt;
&lt;li&gt;We open sourced a sidecar to index DynamoDB tables in Elasticsearch that you should totes use. &lt;a href="https://bitbucket.org/atlassian/dynamodb-elasticsearch-indexer" rel="noopener noreferrer"&gt;Here’s the code&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When we embarked on Bitbucket Pipelines more than three years ago, we had little experience using NoSQL databases. But as a small team looking to produce quality at speed, we decided on DynamoDB as a managed service with great availability and scalability characteristics. Three years on, we’ve learnt a lot about how to use (and how not to use) DynamoDB, and we’ve built some things along the way that might be useful to other teams or that could be absorbed by the ever-growing platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  To NoSQL or not to NoSQL, that is the question
&lt;/h3&gt;

&lt;p&gt;Right off the bat, relational databases aren’t werewolves and NoSQL isn’t a silver bullet. Relational databases have served large-scale applications for years, and they continue to scale well beyond many people’s expectations. Many teams in Atlassian continue to choose Postgres over DynamoDB, for example, and there are plenty of perfectly valid reasons to do so. Hopefully this blog will highlight some of the reasons to choose one technology over the other. At a high level they include considerations such as operational overhead, the expected size of your tables, data access patterns, data consistency and querying requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Partitions, partitions, partitions
&lt;/h3&gt;

&lt;p&gt;A good understanding of &lt;a href="https://shinesolutions.com/2016/06/27/a-deep-dive-into-dynamodb-partitions" rel="noopener noreferrer"&gt;how partitioning works&lt;/a&gt; is probably the single most important factor in being successful with DynamoDB, and is necessary to avoid the dreaded &lt;a href="https://cloudonaut.io/dynamodb-pitfall-limited-throughput-due-to-hot-partitions" rel="noopener noreferrer"&gt;hot partition problem&lt;/a&gt;. Getting this wrong could mean restructuring data, redesigning APIs, full table migrations or worse at some point in the future when the system has hit a critical threshold. Of course there is zero visibility into a table’s partitions — you can calculate them given a table’s throughput and size, but it’s inaccurate, cumbersome and, we’ve found, largely unnecessary if you’ve designed well distributed keys as the &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html" rel="noopener noreferrer"&gt;best practices developer guide&lt;/a&gt; suggests. Fortunately we took the time to understand partitioning from the get-go and have managed to avoid any of these issues. As an added bonus, we’re now able to utilize autoscaling without concern for partition boundaries, because requests remain evenly distributed even as partitions change and throughput is redistributed between them.&lt;/p&gt;
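&lt;p&gt;To make the point about well distributed keys concrete, here is a toy Python sketch of how a hashed partition key spreads items across partitions. The hash function and partition count are illustrative; DynamoDB’s internal hashing is not public:&lt;/p&gt;

```python
import hashlib
from collections import Counter

def partition_for(key, num_partitions):
    # Hash the partition key and map it onto one of the partitions,
    # mimicking how a hash-partitioned store assigns items.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# A high-cardinality key (here, 10,000 distinct ids) spreads evenly.
keys = ["pipeline-%05d" % i for i in range(10000)]
counts = Counter(partition_for(k, 8) for k in keys)
```

&lt;p&gt;With a well distributed key, every partition receives a similar share of the items, so no single partition becomes hot as the table’s throughput is divided between them.&lt;/p&gt;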

&lt;h3&gt;
  
  
  Throughput, bursting and throttling
&lt;/h3&gt;

&lt;p&gt;Reads and writes on DynamoDB tables are limited by the amount of &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ProvisionedThroughput.html" rel="noopener noreferrer"&gt;throughput capacity&lt;/a&gt; configured for the table. Throughput also determines how the table is partitioned, and it affects costs, so it’s worth ensuring you’re not over-provisioning. DynamoDB allows bursting above the throughput limit for a short period of time before it starts throttling requests, and while throttled requests can result in a failed operation in your application, we’ve found that this very rarely happens, thanks to the default retry configuration in the &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.SDKOverview.html" rel="noopener noreferrer"&gt;AWS SDK for DynamoDB&lt;/a&gt;. This is particularly reassuring because &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html" rel="noopener noreferrer"&gt;autoscaling&lt;/a&gt; in DynamoDB is delayed by design, allowing throughput to exceed capacity for long enough that throttling can occur.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;DynamoDB auto scaling modifies provisioned throughput settings only when the actual workload stays elevated (or depressed) for a sustained period of several minutes. The Application Auto Scaling target tracking algorithm seeks to keep the target utilization at or near your chosen value over the long term.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Sudden, short-duration spikes of activity are accommodated by the table’s built-in burst capacity.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You’ll still want to set alerts for when throughput is exceeded so you can monitor and act when necessary (e.g. there’s an upper limit to autoscaling), but we’ve found that burst capacity, default SDK retries, autoscaling and &lt;a href="https://www.youtube.com/watch?v=kMY0_m29YzU" rel="noopener noreferrer"&gt;adaptive throughput&lt;/a&gt; combine so effectively that intervention is seldom required.&lt;/p&gt;
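&lt;p&gt;The retry behaviour described above can be sketched in a few lines of Python. This is a simplified illustration of exponential backoff with jitter, not the actual AWS SDK implementation; the function names and the exception standing in for a throttling error are made up:&lt;/p&gt;

```python
import random
import time

def with_retries(operation, max_attempts=4, base_delay=0.05):
    # Retry a throttled call with exponential backoff plus jitter,
    # roughly the shape of the default retry policy in AWS SDKs.
    for attempt in range(max_attempts):
        try:
            return operation()
        except RuntimeError:  # stand-in for a throttling error
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Simulate a put_item call that is throttled twice, then succeeds.
attempts = {"count": 0}

def flaky_put_item():
    attempts["count"] += 1
    if attempts["count"] >= 3:
        return "ok"
    raise RuntimeError("throttled")

result = with_retries(flaky_put_item)
```

&lt;p&gt;Because short bursts of throttling are absorbed by retries like these, the application-level failure rate stays low even while autoscaling is still catching up.&lt;/p&gt;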

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F225%2F1%2AtVb7LXA5pY2fCWp6dBvZ0A.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F225%2F1%2AtVb7LXA5pY2fCWp6dBvZ0A.jpeg"&gt;&lt;/a&gt;&lt;a href="https://www.carid.com/quality-built/alternator.html" rel="noopener noreferrer"&gt;Source&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Alternator: an object-item mapping library for DynamoDB
&lt;/h3&gt;

&lt;p&gt;After Pipelines got the green light and refactoring of the horrendous alpha code began, one of the first things we built was alternator: an internal object-item mapping library for DynamoDB (similar to an ORM). Alternator abstracts the AWS SDK away from the application and provides annotation-based, reactive (RxJava — although currently still using the blocking AWS API under the covers until v2 of the SDK becomes stable) interfaces for interacting with DynamoDB. It also adds circuit breaking via Hystrix, removing much of the boilerplate code that was present in early versions of the system.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Table(
    name = “pipeline”,
    primaryKey = @PrimaryKey(hash = @HashKey(name = “uuid”))
)

@ItemConverter(PipelineItemConverter.class)
public interface PipelineDao {
    @PutOperation(conditionExpression = “attribute\_not\_exists(#uuid)”)
    Single&amp;lt;Pipeline&amp;gt; create(@Item Pipeline pipeline);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The migration service
&lt;/h3&gt;

&lt;p&gt;DynamoDB tables are of course schema-less, however that doesn’t mean you won’t need to perform migrations. Aside from a typical data migration to add or change attributes in a table, there are a number of features that can only be configured when a table is first created, such as local secondary indexes, which are useful for querying and sorting on attributes other than the primary key.&lt;/p&gt;

&lt;p&gt;The first few migrations in Pipelines involved writing bespoke code to move large quantities of data to new tables and synchronizing that with often complex changes in the application to support both old and new tables to avoid downtime. We learnt early on that having a migration strategy would remove a lot of that friction and so the migration service was born.&lt;/p&gt;

&lt;p&gt;The migration service is an internal service we developed for migrating data in DynamoDB tables. It supports two types of migrations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Same table migrations for adding to or amending data in an existing table.&lt;/li&gt;
&lt;li&gt;Table to table migrations for moving data to a new table.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Migrations work by scanning all the data in the source table, passing it through a transformer (specific to the migration taking place) and writing it to the destination table. It does this using a &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-query-scan.html#bp-query-scan-parallel" rel="noopener noreferrer"&gt;parallel scan&lt;/a&gt; to distribute load evenly amongst a table’s partitions and maximize throughput, completing the migration in as short a time as possible.&lt;/p&gt;
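A parallel scan splits the table into disjoint segments (the real Scan API takes `Segment` and `TotalSegments` parameters) and fans workers out across them. A rough Python sketch against an in-memory stand-in for the table (the table contents and transformer are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a DynamoDB table. In a real parallel scan each worker
# calls Scan with Segment=i and TotalSegments=n, and DynamoDB returns
# a disjoint slice of the table's items for that segment.
SOURCE_ITEMS = [{"uuid": str(i), "status": "PENDING"} for i in range(1000)]

def scan_segment(segment, total_segments):
    """Fake Scan(Segment=segment, TotalSegments=total_segments)."""
    return [item for i, item in enumerate(SOURCE_ITEMS)
            if i % total_segments == segment]

def transform(item):
    """Migration-specific transformer (hypothetical example)."""
    return {**item, "status_v2": item["status"].lower()}

def migrate(total_segments=8):
    destination = []
    with ThreadPoolExecutor(max_workers=total_segments) as pool:
        futures = [pool.submit(scan_segment, seg, total_segments)
                   for seg in range(total_segments)]
        for future in futures:
            destination.extend(transform(item) for item in future.result())
    return destination

migrated = migrate()
```

Because each segment maps onto different partitions, the workers spread read load across the table instead of hammering one partition, which is what lets the migration saturate the table's provisioned throughput.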

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F816%2F1%2A22mFh3YTYWCEMsUs_lA9jw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F816%2F1%2A22mFh3YTYWCEMsUs_lA9jw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Table to table migrations then attach to the source table’s &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html" rel="noopener noreferrer"&gt;stream&lt;/a&gt; to keep the destination table in sync until you decide to switch over to using it. This allows the application to switch directly to using the new table without having to support both old and new tables during the migration.&lt;/p&gt;
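Stream records carry an event name and the item's new image; applying them in order keeps the destination current. A simplified sketch of the sync step (the record shape is abbreviated from the real DynamoDB stream record format):

```python
def apply_stream_record(destination, record):
    """Apply one (simplified) DynamoDB stream record to the destination
    table, modelled here as a dict keyed by the hash key."""
    key = record["Keys"]["uuid"]
    if record["eventName"] in ("INSERT", "MODIFY"):
        destination[key] = record["NewImage"]
    elif record["eventName"] == "REMOVE":
        destination.pop(key, None)

destination = {}
# Writes that landed on the source table after the scan started:
records = [
    {"eventName": "INSERT", "Keys": {"uuid": "a"}, "NewImage": {"uuid": "a", "v": 1}},
    {"eventName": "INSERT", "Keys": {"uuid": "b"}, "NewImage": {"uuid": "b", "v": 1}},
    {"eventName": "MODIFY", "Keys": {"uuid": "a"}, "NewImage": {"uuid": "a", "v": 2}},
    {"eventName": "REMOVE", "Keys": {"uuid": "a"}},
]
for record in records:
    apply_stream_record(destination, record)
# destination now mirrors the source: only item "b" remains
```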

&lt;h3&gt;
  
  
  Querying in DynamoDB: a tale of heartache and misery
&lt;/h3&gt;

&lt;p&gt;DynamoDB provides limited querying and sorting capability via &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html" rel="noopener noreferrer"&gt;local and global secondary indexes&lt;/a&gt;. Tables are restricted to five local secondary indexes, with each index sharing the same partition key as the table’s primary key, and a query operation can only be performed against one index at a time. This means that on a “user” table partitioned by email address, a query operation can only be performed in the context of the email address and one other value.&lt;/p&gt;

&lt;p&gt;Global secondary indexes remove the partition key requirement at the cost of paying for a second lot of throughput, and they only support eventually consistent reads. Both types of indexes are useful and sufficient for many use cases, and Pipelines continues to use them extensively, but they do not satisfy the more complex querying requirements of some applications. In Pipelines this need predominantly came from our REST API, which rather typically allows clients to filter and sort on multiple properties at the same time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F384%2F1%2AMb6KY3caZPfE1svjOtrjKA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F384%2F1%2AMb6KY3caZPfE1svjOtrjKA.jpeg"&gt;&lt;/a&gt;&lt;a href="https://www.scooterworks.com/Sidecar-10-Wheel-Rocket-Vespa-Large-Frame-Stella-P11587.aspx" rel="noopener noreferrer"&gt;Source&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Our solution
&lt;/h3&gt;

&lt;p&gt;Our first attempt at solving this problem, MultiQuery, was built into alternator. With this approach we queried multiple local secondary indexes (LSIs) and aggregated the results in memory, allowing us to filter and sort on up to five values (the maximum number of LSIs) in a single API request. While this worked at the time, it suffered increasingly severe performance degradation as our tables grew in size, and worse still with each additional value being filtered on. Before MultiQuery was replaced, the pipelines list page on our end-to-end test repository (one with a large number of pipelines) took up to tens of seconds to respond and consistently timed out when filtering on branches.&lt;/p&gt;
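Conceptually, the in-memory aggregation looked something like this sketch (attribute names hypothetical): each predicate is answered by one index query, and the candidate key sets are intersected afterwards, which is why cost grew with the size of every candidate set rather than the size of the final result:

```python
def multi_query(items, **predicates):
    """Answer each predicate with one (simulated) index query, then
    intersect the candidate key sets in memory."""
    candidate_sets = []
    for attr, value in predicates.items():
        # Stand-in for one Query against the index on `attr`; note it
        # fetches every matching key, however large that set is.
        candidate_sets.append({it["uuid"] for it in items if it[attr] == value})
    matching = set.intersection(*candidate_sets)
    return [it for it in items if it["uuid"] in matching]

pipelines = [
    {"uuid": "1", "branch": "master", "state": "FAILED"},
    {"uuid": "2", "branch": "master", "state": "SUCCESSFUL"},
    {"uuid": "3", "branch": "dev", "state": "FAILED"},
]
result = multi_query(pipelines, branch="master", state="FAILED")
```

On a repository with many pipelines, a filter like `branch="master"` can match most of the table, so every query pulls back a huge candidate set before the intersection discards almost all of it.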

&lt;p&gt;A common pattern for searching DynamoDB content is to index it in a search engine. Conveniently, AWS provides a &lt;a href="https://aws.amazon.com/blogs/aws/new-logstash-plugin-search-dynamodb-content-using-elasticsearch" rel="noopener noreferrer"&gt;logstash plugin&lt;/a&gt; for indexing DynamoDB tables in Elasticsearch, so we set about creating an indexing service using this plugin, and the results were encouraging. Query performance vastly improved as expected, but the logstash plugin left a lot to be desired, taking almost 11 hours to index 700,000 documents.&lt;/p&gt;

&lt;p&gt;Some analysis of the logstash plugin and the realization that we had already built what was essentially a high performance indexer in the migration service led us to replace the logstash plugin with a custom indexer implementation. Our indexer, largely based on the same scan/stream semantics of the migration service and utilizing Elasticsearch’s bulk indexing API, managed to blow through almost 7 million documents in 27 minutes.&lt;/p&gt;
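Elasticsearch's bulk API takes newline-delimited JSON with alternating action and document lines, which is what lets an indexer push large batches per request instead of one document at a time. A minimal sketch of building such a payload (the index name and document shape are illustrative):

```python
import json

def bulk_payload(index, docs):
    """Build an Elasticsearch _bulk request body: alternating action
    and document lines, newline-delimited, with a trailing newline."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_id": doc["uuid"]}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

docs = [
    {"uuid": "1", "state": "FAILED"},
    {"uuid": "2", "state": "SUCCESSFUL"},
]
body = bulk_payload("pipelines", docs)
```

Batching thousands of documents per `_bulk` request amortises HTTP and indexing overhead, which is a large part of the gap between 11 hours for 700,000 documents and 27 minutes for 7 million.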

&lt;h3&gt;
  
  
  The indexer sidecar
&lt;/h3&gt;

&lt;p&gt;The custom indexer has since been repackaged as a sidecar, allowing any service application to seamlessly index a DynamoDB table in Elasticsearch. Both the initial scan and ongoing streaming phases are made highly available and resumable by a lease / checkpointing mechanism (custom built for the scan, standard kinesis client for the stream).&lt;/p&gt;
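A rough sketch of the checkpointing idea for the scan phase (the real implementation persists leases and checkpoints durably; this in-memory version only shows the resume semantics):

```python
# Each scan segment records the last page it fully processed, so a
# restarted worker resumes from the checkpoint instead of rescanning.
checkpoints = {}

def scan_with_checkpoint(segment, pages):
    """Process a segment's pages from its checkpoint, advancing the
    checkpoint after each completed page."""
    start = checkpoints.get(segment, 0)
    processed = []
    for page_no in range(start, len(pages)):
        processed.extend(pages[page_no])
        checkpoints[segment] = page_no + 1  # persist after each page
    return processed

pages = [["a", "b"], ["c"], ["d", "e"]]
first = scan_with_checkpoint(0, pages)    # processes every page
checkpoints[0] = 1                        # simulate a crash after page 0
resumed = scan_with_checkpoint(0, pages)  # resumes from page 1
```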

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F448%2F1%2A87Gj1dBxfrsBC-Rj3yUAqA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F448%2F1%2A87Gj1dBxfrsBC-Rj3yUAqA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We currently utilize the excellent &lt;a href="https://bitbucket.org/atlassian/atlassian-elasticsearch-client" rel="noopener noreferrer"&gt;elasticsearch client&lt;/a&gt; built by the Bitbucket code search team to query the index and have started work on an internal library which adds RxJava and Hystrix in the same vein as alternator.&lt;/p&gt;

&lt;p&gt;Here is the repo with &lt;a href="https://bitbucket.org/atlassian/dynamodb-elasticsearch-indexer" rel="noopener noreferrer"&gt;the code and a readme&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Final thoughts
&lt;/h3&gt;

&lt;p&gt;If you haven’t used NoSQL before, it certainly requires a shift in mindset, but overall our experience with DynamoDB has been a positive one. The platform has proven extremely reliable over the past three years (I can’t remember a single major incident caused by it), with our biggest challenge coming from our querying requirements. Some might say that’s reason enough to have chosen a relational database in the first place, and I wouldn’t strongly disagree with them. But we’ve managed to overcome that issue with a solution that is, for the most part, abstracted away from day-to-day operations. On the plus side, we haven’t had to run an explain query in all that time or deal with poorly formed SQL, complex table joins or missing indexes, and we’re not in a hurry to go back.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by Sam Tannous. Sam is a senior software engineer at Atlassian and is part of the team that brought CI/CD Pipelines to Bitbucket cloud. With over 15 years of industry experience, he has co-authored three patents, maintains an open source project and has a track record of successfully delivering large-scale cloud-based applications to end users. Connect with him on&lt;/em&gt; &lt;a href="http://www.linkedin.com/in/samuel-tannous" rel="noopener noreferrer"&gt;&lt;em&gt;LinkedIn&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://bitbucket.org/blog/searching-dynamodb-indexer-sidecar-elasticsearch" rel="noopener noreferrer"&gt;&lt;em&gt;bitbucket.org&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on March 28, 2019. To contribute to the Bitbucket blog, &lt;a href="https://bitbucket.org/product/write" rel="noopener noreferrer"&gt;&lt;em&gt;apply here&lt;/em&gt;&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>elasticsearch</category>
      <category>nosql</category>
      <category>programming</category>
      <category>aws</category>
    </item>
    <item>
      <title>Beer, Bravado &amp; Bitbucket: Using data to drive CODE decisions</title>
      <dc:creator>Ash Moosa</dc:creator>
      <pubDate>Tue, 26 Feb 2019 21:47:28 +0000</pubDate>
      <link>https://dev.to/atlassian/beer-bravado--bitbucket-using-data-to-drive-code-decisions-10am</link>
      <guid>https://dev.to/atlassian/beer-bravado--bitbucket-using-data-to-drive-code-decisions-10am</guid>
      <description>

&lt;p&gt;Product teams and marketing teams continually use data to drive decisions. But what about us as software engineers? What can we do with metrics, besides just pulling them in for somebody else?&lt;/p&gt;

&lt;h3&gt;
  
  
  “You mentioned beer. I was led to believe there would be beer.”
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--acz_xR_V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A1czQ2b5BzonNzSK8Im9Q_A.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--acz_xR_V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A1czQ2b5BzonNzSK8Im9Q_A.jpeg" alt=""&gt;&lt;/a&gt;Source: &lt;a href="https://www.guinness.com/en-ie/our-beers/guinness-rye-pale-ale/"&gt;Guinness.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;People like Guinness… a lot. By 1914, their annual output was almost a billion pints.&lt;/p&gt;

&lt;p&gt;That’s going to need a lot of quality control. So clever Mr. Claude Guinness decided to &lt;a href="https://priceonomics.com/the-guinness-brewer-who-revolutionized-statistics"&gt;hire smart graduates&lt;/a&gt; to do this work, sort of like tech companies do now.&lt;/p&gt;

&lt;p&gt;See the guy in the photo? That’s Billy. Or William Sealy Gosset to his less-intimate friends. “&lt;a href="https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.22.4.199"&gt;An energetic — if slightly loony — 23 year-old scientist&lt;/a&gt;”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z2GMFK3f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AtqaIanim_FHZTUSfuZ4HVQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z2GMFK3f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AtqaIanim_FHZTUSfuZ4HVQ.jpeg" alt=""&gt;&lt;/a&gt;Source: &lt;a href="https://en.wikipedia.org/wiki/William_Sealy_Gosset"&gt;Wikipedia&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To help him maintain quality over a billion pints, our friend Billy invented a way to answer important questions, by taking small samples of beer and using some smart math. The methods invented by our clever Billy became the fundamental basis for how doctors decide if a new &lt;a href="https://www.sciencedirect.com/science/article/pii/S0140673603129480"&gt;drug will save you&lt;/a&gt;(or kill you), how &lt;a href="https://www.pnas.org/content/111/24/8788"&gt;Facebook decides how to manipulate you&lt;/a&gt;, and a zillion other &lt;a href="https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19980045313.pdf"&gt;out of this world&lt;/a&gt; uses.&lt;/p&gt;
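The "smart math" was Student's t-distribution, which Gosset published under the pseudonym "Student". A quick sketch of the two-sample t statistic it underpins, with made-up measurements (using Welch's unpooled form for simplicity):

```python
from statistics import mean, stdev

def t_statistic(a, b):
    """Welch's two-sample t statistic: the difference of means scaled
    by the combined standard error. Works on small samples, which was
    exactly Gosset's problem with batches of beer."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Hypothetical quality measurements from two brewing batches.
batch_a = [5.1, 5.3, 4.9, 5.2]
batch_b = [4.6, 4.8, 4.7, 4.5]
t = t_statistic(batch_a, batch_b)
```

A large |t| (compared against the t-distribution for the sample sizes) says the difference between batches is unlikely to be noise, even with only a handful of samples of each.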

&lt;h3&gt;
  
  
  “What does this have to do with code?”
&lt;/h3&gt;

&lt;p&gt;Bitbucket (and the rest of the company) has a big push on &lt;strong&gt;performance&lt;/strong&gt;. On the frontend, we’re making a number of changes to boost performance of the Pull Request page. In particular we want to speed up rendering the code diffs.&lt;/p&gt;

&lt;p&gt;Rendering the diffs uses a Chunks component, which renders a great many separate CodeLine components.&lt;/p&gt;

&lt;p&gt;I wondered: &lt;strong&gt;could we speed this up&lt;/strong&gt; by merging the CodeLine markup directly into Chunks?&lt;/p&gt;

&lt;p&gt;But that reduces modularity and possibly maintainability. Since there’s a cost, I want to know there’s a real benefit to rendering time. But how do I tell? All I have so far is an idea.&lt;/p&gt;

&lt;p&gt;Sure, I could load the Pull Request page a couple of times before and after the change. But that’s hardly a rigorous check, for many, many reasons. And it’s going to be hard to tell if the improvement is small (but real).&lt;/p&gt;
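To see why a couple of refreshes isn't enough, here's a small simulation (all numbers invented): a 10 ms improvement buried in ±50 ms of render-time noise is invisible in two runs, but recoverable from the mean of many:

```python
import random

random.seed(42)  # deterministic for the example

def render_time(improvement_ms=0):
    """Simulated render time: 500 ms baseline with +/-50 ms of noise."""
    return 500 - improvement_ms + random.uniform(-50, 50)

# Two refreshes: the noise dwarfs a 10 ms improvement, so the pair of
# numbers can easily point the wrong way.
before_pair = [render_time() for _ in range(2)]
after_pair = [render_time(improvement_ms=10) for _ in range(2)]

# Means over many runs recover the true ~10 ms improvement.
before_mean = sum(render_time() for _ in range(1000)) / 1000
after_mean = sum(render_time(improvement_ms=10) for _ in range(1000)) / 1000
```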

&lt;h3&gt;
  
  
  “Now tell me more.”
&lt;/h3&gt;

&lt;p&gt;I needed something better than refreshing my browser a couple times. So, I used &lt;a href="https://developers.google.com/web/tools/lighthouse"&gt;Lighthouse&lt;/a&gt;, an awesome tool for measuring frontend performance and collecting metrics. And it’s now available for the command line.&lt;/p&gt;

&lt;p&gt;I wrote a ‘custom audit’ that let me measure diff rendering times for the Pull Request page, and a batch tool for executing multiple runs. Hooray for &lt;strong&gt;reliable frontend metric gathering&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;In this instance, the audit measures our diff rendering time. This is shared for everyone’s enjoyment (everyone loves code right?) but bear in mind this was written as an internal dev tool. Production code would have things like nice error handling. Also note that this is for &lt;strong&gt;v3.2.1&lt;/strong&gt; of Lighthouse.&lt;/p&gt;

&lt;p&gt;First, the code you are measuring should mark and measure &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/User_Timing_API/Using_the_User_Timing_API"&gt;User Timing&lt;/a&gt; events for Lighthouse to pick up:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;performance.mark("DIFFSET.RENDER.START");

// ... time passes ...

performance.mark("DIFFSET.RENDER.END");
performance.measure("DIFFSET.RENDER.DURATION", "DIFFSET.RENDER.START", "DIFFSET.RENDER.END");
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next, the custom audit.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight hack"&gt;&lt;code&gt;&lt;span class="c1"&gt;// diff-rendering-only-audit.js&lt;/span&gt;

&lt;span class="s1"&gt;'use strict'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Audit&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'lighthouse'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Custom audits are provided to Lighthouse as classes&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DiffRenderingOnlyAudit&lt;/span&gt; &lt;span class="k"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;Audit&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

 &lt;span class="c1"&gt;// Tell Lighthouse about this audit&lt;/span&gt;
 &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="nx"&gt;get&lt;/span&gt; &lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'diff-rendering-only-audit'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'Diffs rendered'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="nx"&gt;failureTitle&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'Diff rendering slow'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'Time to render a diffset'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="nx"&gt;requiredArtifacts&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'traces'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
 &lt;span class="p"&gt;};&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="c1"&gt;// Lighthouse will call this static method&lt;/span&gt;
 &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="nx"&gt;audit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;artifacts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;getAuditResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;artifacts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="p"&gt;});&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Lighthouse produces a 'trace' of the page render.&lt;/span&gt;
&lt;span class="c1"&gt;// Process the trace to get the timing information.&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getAuditResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;artifacts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;resolveAuditResult&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;trace&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;artifacts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;traces&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;defaultPass&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

 &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;tabTrace&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;artifacts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;requestTraceOfTab&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;duration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;extractDuration&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tabTrace&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

 &lt;span class="c1"&gt;// Tell Lighthouse the result&lt;/span&gt;
 &lt;span class="nx"&gt;resolveAuditResult&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
 &lt;span class="nx"&gt;rawValue&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="nx"&gt;displayValue&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="nx"&gt;score&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="nx"&gt;scoreDisplayMode&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'manual'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Compute the duration of the specific User Timing measurement&lt;/span&gt;
&lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;extractDuration&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tabTrace&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

 &lt;span class="c1"&gt;// Find the User Timing measurement event&lt;/span&gt;
 &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;durationEvents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tabTrace&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;processEvents&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="s1"&gt;'DIFFSET.RENDER.DURATION'&lt;/span&gt;
 &lt;span class="p"&gt;);&lt;/span&gt;

 &lt;span class="c1"&gt;// Lighthouse records 'begin' and 'end' events for a measure&lt;/span&gt;
 &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;begin&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;durationEvents&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ph&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="s1"&gt;'b'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;end&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;durationEvents&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ph&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="s1"&gt;'e'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

 &lt;span class="c1"&gt;// Event duration in milliseconds&lt;/span&gt;
 &lt;span class="nx"&gt;duration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;end&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ts&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;begin&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
 &lt;span class="nx"&gt;duration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Math&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;ceil&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nb"&gt;end&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ts&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;begin&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'\&amp;gt;\&amp;gt;\&amp;gt; Diff Rendering Only duration (ms): '&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;DiffRenderingOnlyAudit&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Create a config file for Lighthouse that runs the above audit.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// example-config.js
module.exports = {
 extends: 'lighthouse:default',
 settings: {
 throttlingMethod: 'provided',
 onlyAudits: [
 'user-timings',
 'diff-rendering-only-audit',
 ],
 emulatedFormFactor: 'desktop',
 logLevel: 'info',
 },

 audits: [
 'diff-rendering-only-audit',
 ],

 categories: {
 pullRequestMetrics: {
 title: 'Pull Request Metrics',
 description: 'Metrics for pull request page',
 auditRefs: [
 { id: 'diff-rendering-only-audit', weight: 1 },
 ],
 },
 },
};
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Run the above audit once.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lighthouse \&amp;lt;your url\&amp;gt; --config-path=./config.js --emulated-form-factor=desktop --chrome-flags="--headless"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next, an example node script to batch run the Lighthouse audit.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight hack"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;commandLineArgs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'command-line-args'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;lighthouse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'lighthouse'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;chromeLauncher&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'chrome-launcher'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;log&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'lighthouse-logger'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sampleStandardDeviation&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'simple-statistics'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;lighthouseConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'./config.js'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Command line arguments&lt;/span&gt;
&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;argsDefintion&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
 &lt;span class="c1"&gt;// n: number of runs to execute &lt;/span&gt;
 &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'n'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Number&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;

 &lt;span class="c1"&gt;// url: url to test against&lt;/span&gt;
 &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'url'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;defaultOption&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;commandLineArgs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;argsDefintion&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'No url specified'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;lighthouseFlags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;logLevel&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'info'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nx"&gt;go&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;go&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;n&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;n&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
 &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;diffRenderTimes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;

 &lt;span class="c1"&gt;// Launch Chrome&lt;/span&gt;
 &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;chromeFlags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'--headless'&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
 &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;chrome&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;chromeLauncher&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;launch&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;chromeFlags&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
 &lt;span class="nx"&gt;lighthouseFlags&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;chrome&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

 &lt;span class="c1"&gt;// Run lighthouse n-times&lt;/span&gt;
 &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sb"&gt;`Start run ${i + 1}...`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="nx"&gt;diffRenderTimes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;runLighthouse&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="c1"&gt;// Compute statistics&lt;/span&gt;
 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'All render times:'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;diffRenderTimes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'\n'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'MEAN (ms):'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;diffRenderTimes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;toFixed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'SD (ms):'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sampleStandardDeviation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;diffRenderTimes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;toFixed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

 &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;chrome&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kill&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;runLighthouse&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;lighthouse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;lighthouseFlags&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;lighthouseConfig&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="nb"&gt;delete&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;artifacts&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lhr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;audits&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'diff-rendering-only-audit'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rawValue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Finally, run the batch script.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node batch.js \&amp;lt;your url\&amp;gt; --n=\&amp;lt;the number of executions\&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Running my new custom Lighthouse audit gave me these two sets of numbers (rendering times in milliseconds):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mxnsPGeV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AxhQ0ROVJAQBJiTAy88PtzQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mxnsPGeV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AxhQ0ROVJAQBJiTAy88PtzQ.png" alt=""&gt;&lt;/a&gt;Source (lighthouse image): &lt;a href="https://upload.wikimedia.org/wikipedia/commons/d/d8/AgulhasLighthouse.jpg"&gt;Wikimedia.org&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, the average time is lower by ~5%. Hooray!! All is well!&lt;/p&gt;

&lt;h3&gt;
  
  
  “But not so fast… the average isn’t a lot lower, and the rendering times are all a little different. How do you know this shows a real improvement?? And also, get back to the beer.”
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KjexEi8A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/774/1%2AYmFkXN4ucGsiqZGIiCTnhw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KjexEi8A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/774/1%2AYmFkXN4ucGsiqZGIiCTnhw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You got me! And well spotted! This is the core of the question! The rendering times &lt;em&gt;are&lt;/em&gt; all a bit different. I need to be confident the improved rendering time isn’t just a random quirk of this sample.&lt;/p&gt;

&lt;p&gt;Luckily our friend Billy the beer brewer gave us a tool called the &lt;strong&gt;&lt;em&gt;t-test&lt;/em&gt;&lt;/strong&gt;. This is how Billy worked out which supply of hops would give him the best beer!&lt;/p&gt;

&lt;p&gt;It lets me ask: Given the amount of ‘noise’ in my example rendering times, &lt;strong&gt;is this difference between the averages likely to be a &lt;em&gt;real&lt;/em&gt; difference&lt;/strong&gt; or just a random fluctuation? The phrasing you hear is: is it &lt;em&gt;significant?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Using Billy’s tool, I get these two values:&lt;/p&gt;

&lt;p&gt;t = 10.5457, &lt;strong&gt;p &amp;lt; .0001&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This tells me: &lt;strong&gt;the probability that my improved rendering times came from random chance alone is less than one in ten thousand.&lt;/strong&gt; I can conclude: the code change gives a speed improvement that averages only ~5%, but it’s highly likely to be a &lt;em&gt;real&lt;/em&gt; 5%**.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;** Caveat: I’m glossing over a LOT of details around good experimental design. This is just a highly simplified example! And FYI, results in the real, non-dev world (medicine, etc.) are never as neat as in this example. They show a lot more noise.&lt;/em&gt;&lt;/p&gt;
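&lt;p&gt;For the curious, here’s roughly what Billy’s tool computes under the hood. This is a minimal sketch of Welch’s two-sample t-statistic in plain JavaScript; the sample arrays are made up for illustration and are &lt;em&gt;not&lt;/em&gt; the numbers from my screenshot above.&lt;/p&gt;

```javascript
// Welch's t-statistic for two independent samples: a minimal sketch.
// The sample arrays below are hypothetical rendering times in milliseconds.

function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Unbiased sample variance (divide by n - 1).
function sampleVariance(xs) {
  const m = mean(xs);
  return xs.reduce((acc, x) => acc + (x - m) ** 2, 0) / (xs.length - 1);
}

// t = (mean1 - mean2) / sqrt(var1/n1 + var2/n2)
function welchT(a, b) {
  const se = Math.sqrt(
    sampleVariance(a) / a.length + sampleVariance(b) / b.length
  );
  return (mean(a) - mean(b)) / se;
}

const before = [600, 610, 590, 605, 595]; // hypothetical "old code" times
const after = [570, 575, 565, 580, 560];  // hypothetical "new code" times
console.log(welchT(before, after)); // 6 for these made-up samples
```

&lt;p&gt;The larger the t value (for a given sample size), the less likely the gap between the averages is random noise; the p-value is then read off the t-distribution.&lt;/p&gt;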

&lt;h3&gt;
  
  
  “What’s the takeaway?”
&lt;/h3&gt;

&lt;p&gt;This post is about opening developer eyes to the connection between statistics and coding. (It’s not meant to be a detailed how-to.)&lt;/p&gt;

&lt;p&gt;The core message is: use the power beer gave us to &lt;strong&gt;make data-driven &lt;em&gt;code&lt;/em&gt; decisions.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many small improvements in performance can add up to big improvements — as long as those small improvements are &lt;em&gt;real&lt;/em&gt; and not just random quirks. To make sure they’re real:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Collect metrics from your own code.&lt;/li&gt;
&lt;li&gt;Run significance tests to see if you’re making a real difference.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  “So, can I have a beer now?”
&lt;/h3&gt;

&lt;p&gt;Yes.&lt;/p&gt;

&lt;p&gt;This post was written by Christian Doan — “&lt;em&gt;I’m a Senior Developer on the Bitbucket frontend team. My current project is migrating legacy code to a modern SPA stack. I’m passionate about great user experiences and fast performance. Connect with me on&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/christianadoan/"&gt;&lt;em&gt;linkedin&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://bitbucket.org/blog/beer-bravado-bitbucket-using-data-to-drive-code-decisions"&gt;&lt;em&gt;bitbucket.org&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on February 26, 2019.&lt;/em&gt;&lt;/p&gt;





</description>
      <category>javascript</category>
      <category>softwaredevelopment</category>
      <category>frontenddev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Preventing Git Rebase Fights</title>
      <dc:creator>Ash Moosa</dc:creator>
      <pubDate>Thu, 11 Oct 2018 15:58:52 +0000</pubDate>
      <link>https://dev.to/atlassian/preventing-git-rebase-fights-4hd4</link>
      <guid>https://dev.to/atlassian/preventing-git-rebase-fights-4hd4</guid>
      <description>

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mPNeajhW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ACWQi11d0x2x15gGyowtA4Q.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mPNeajhW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ACWQi11d0x2x15gGyowtA4Q.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s a Git Rebase Fight?
&lt;/h3&gt;

&lt;p&gt;Have you ever experienced this situation?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You go to merge your PR (pull-request), but the PR says it must be rebased before you can merge.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZEUb2TFO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/667/0%2A81bGI8nqBr3mF3Gn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZEUb2TFO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/667/0%2A81bGI8nqBr3mF3Gn.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;You rebase it, which kicks off a new build. But the build must complete before you’re allowed to merge.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g2fS6RhJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/667/0%2AASYWGCcux35sILLW.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g2fS6RhJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/667/0%2AASYWGCcux35sILLW.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;The build completes! Yay! But while you were waiting, someone else managed to merge &lt;em&gt;their&lt;/em&gt; PR, updating &lt;em&gt;origin/master&lt;/em&gt; with their work. Uh oh!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LhNNxQ5H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/667/0%2Ai-nHPN2YrkcLSLzE.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LhNNxQ5H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/667/0%2Ai-nHPN2YrkcLSLzE.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;You go back to step 1 and hope for better luck this time…&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Bumping into this loop once or twice a month is not a big deal (especially if step 2 takes under 10 seconds). But sometimes the situation can become pathological.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution: Optimistic Build Status Propagation
&lt;/h3&gt;

&lt;p&gt;You can use $(git diff TARGET...SOURCE | git patch-id) to prevent these rebase fights. This is handy when you know the build is very likely to succeed (e.g., squashes, amends, clean rebases, clean sync-merges).&lt;/p&gt;

&lt;p&gt;The technique is called “Optimistic Build Status Propagation” because it uses the output of “git patch-id” as a heuristic to propagate build status to newer commit-ids without requiring the actual full build to finish. It works like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You push a new branch to your central git server.&lt;/li&gt;
&lt;li&gt;The push triggers a build on your CI (continuous integration) server.&lt;/li&gt;
&lt;li&gt;The build eventually succeeds. The branch’s tip commit is marked with a SUCCESS flag.&lt;/li&gt;
&lt;li&gt;You decide to rebase your branch.&lt;/li&gt;
&lt;li&gt;During the rebase your git server notices two things: 1. The commit before the rebase had a successful build filed against it, and &lt;strong&gt;2. The rebase was clean (no conflicts).&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The rebase triggers a build on your CI server. The CI server sends your git server an IN-PROGRESS notification.&lt;/li&gt;
&lt;li&gt;Your git server receives the IN-PROGRESS notification. Because the git server also knows the rebase was clean, and also knows the pre-rebase commit had a SUCCESS flag, your git server optimistically marks the new tip commit with SUCCESS instead of IN-PROGRESS.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In other words, the “SUCCESS” flag from before the rebase is &lt;em&gt;propagated&lt;/em&gt; to the commit created &lt;em&gt;after&lt;/em&gt; the rebase. That’s why it’s called “Optimistic Build Status Propagation”. It lets you merge immediately after the rebase, since there’s no need to wait for the build to complete.&lt;/p&gt;

&lt;p&gt;This optimistic window is temporary. Only the IN-PROGRESS flag is intercepted and replaced with a SUCCESS flag. Eventually the CI server will complete the rebased build and send a final SUCCESS or FAILURE notification. These are dutifully recorded of course, replacing any previous flags filed against the commit.&lt;/p&gt;
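&lt;p&gt;The decision rule in steps 5–7 can be sketched as a tiny state machine. Everything below is illustrative (the function and store names are hypothetical, not a real Bitbucket or CI API); the key idea is simply that an IN-PROGRESS notification for a patch-id that already built green gets recorded as SUCCESS.&lt;/p&gt;

```javascript
// Toy model of "Optimistic Build Status Propagation". All names are
// hypothetical; a real version would run inside a git server hook.

const statuses = new Map(); // commitId -> { patchId, status }

// `patchId` is what `git diff target...source | git patch-id --stable`
// prints for the commit's branch. A clean rebase/squash/amend keeps it
// identical; a conflict resolution changes it.
function onBuildNotification(commitId, patchId, status) {
  if (status === 'IN-PROGRESS') {
    // Did the same patch (same patch-id) already build successfully?
    const priorSuccess = [...statuses.values()].some(
      (s) => s.patchId === patchId && s.status === 'SUCCESS'
    );
    if (priorSuccess) {
      // Optimistically mark the new tip SUCCESS instead of IN-PROGRESS.
      statuses.set(commitId, { patchId, status: 'SUCCESS' });
      return;
    }
  }
  // Final SUCCESS/FAILURE notifications (and unmatched IN-PROGRESS) are
  // recorded as-is, replacing any optimistic flag on this commit.
  statuses.set(commitId, { patchId, status });
}

onBuildNotification('aaa111', 'patch-X', 'SUCCESS');     // original tip builds green
onBuildNotification('bbb222', 'patch-X', 'IN-PROGRESS'); // clean rebase, same patch-id
console.log(statuses.get('bbb222').status); // "SUCCESS" — propagated optimistically
```

&lt;p&gt;Note that the patch-id comparison is what makes the “was the rebase clean?” check cheap: a conflict resolution would have changed the patch-id, so no propagation would occur.&lt;/p&gt;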

&lt;p&gt;This technique is invaluable for shops running a fast-forward merge policy alongside a build-all-branches policy. If your builds are even just a little bit slow (e.g., 3 minutes or worse), your staff are probably waging an infinite rebase war against each other. Or they’ve found a better job somewhere else. Or they’ve disabled those merge policies.&lt;/p&gt;

&lt;p&gt;If you’re on Bitbucket Server there is at least one plugin for this: &lt;a href="https://marketplace.atlassian.com/apps/1214545/pr-booster-for-bitbucket-server?hosting=server&amp;amp;tab=overview"&gt;PR-Booster for Bitbucket Server&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Sm7NUA8U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/806/0%2Aff0HI2d7KO69tpAe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Sm7NUA8U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/806/0%2Aff0HI2d7KO69tpAe.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m not aware of any pre-baked solutions for this problem on Gitlab or Github.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9vP7BJTY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/668/0%2ApBVV7hEgCxQaEo88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9vP7BJTY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/668/0%2ApBVV7hEgCxQaEo88.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What Causes Git Rebase Fights?
&lt;/h3&gt;

&lt;p&gt;A fast-forward merge policy causes rebase fights.&lt;/p&gt;

&lt;p&gt;A fast-forward merge policy only lets PRs merge if they are ahead of &lt;em&gt;origin/master&lt;/em&gt;. In other words, PRs must be rebased before they can merge. The policy keeps git history neat, clean and linear by eliminating merge commits. But the policy can also cause rebase fights. I can think of two situations in particular where this happens:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;High contention for the merge right-of-way, combined with a repo so large that rebases are slow.&lt;/strong&gt; I suspect this situation is rare and confined to very large teams. Rebases are only slow with very large repos, and you’d need at least 50+ engineers targeting the same &lt;em&gt;origin/master&lt;/em&gt; before the contention would get high enough. Monorepos in particular may be vulnerable to this.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slow secondary commit validation processes (e.g., builds must succeed before merge, but builds are 3+ minutes).&lt;/strong&gt; I suspect this situation is much more common.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If your rebase fights are happening because of scenario 1 (very large repo + very large team), then you should probably forget about running a fast-forward policy. Sorry, but you need those merge commits. In exchange for a messier commit graph you get improved productivity. It’s a good tradeoff!&lt;/p&gt;

&lt;p&gt;If your rebase fights are happening because of scenario 2 (slow secondary processes), then “Optimistic Build Status Propagation” is available as a solid mitigation. Under scenario 2 you can have both a clean commit graph and a productive team!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The rest of this blog post is about how&lt;/strong&gt; &lt;strong&gt;$(git diff TARGET...SOURCE | git patch-id) works under the hood.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Triple-Dot-Diff and “git patch-id”
&lt;/h3&gt;

&lt;p&gt;I refer to “git diff A...B” as triple-dot-diff. When people complain about Git’s usability, the triple-dot operator is certainly one of Git’s blemishes. The operator’s behaviour is inconsistent across various commands (e.g., “git log A...B” does something quite different).&lt;/p&gt;

&lt;p&gt;The manual for “git diff” explains the triple-dot-diff like so:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“git diff A...B” is equivalent to “git diff $(git merge-base A B) B”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Visually, it looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JXY2i3RJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/568/0%2AxjCjjp-EAUJet2-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JXY2i3RJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/568/0%2AxjCjjp-EAUJet2-1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is because the merge-base of &lt;em&gt;master&lt;/em&gt; and &lt;em&gt;branch&lt;/em&gt; is commit cc603d1, the last commit they had in common before they diverged. And so “git diff &lt;em&gt;master&lt;/em&gt;...&lt;em&gt;branch&lt;/em&gt;” is equivalent to “git diff cc603d1 ee7b565”.&lt;/p&gt;

&lt;p&gt;Turns out &lt;strong&gt;&lt;em&gt;clean&lt;/em&gt;&lt;/strong&gt; rebases, squashes, merge-squashes, and sync-merges (and amends, of course) do not perturb this fundamental diff. The command “git diff &lt;em&gt;master&lt;/em&gt;...&lt;em&gt;branch&lt;/em&gt;” (with three dots) is stable even if &lt;em&gt;master&lt;/em&gt; advances or &lt;em&gt;branch&lt;/em&gt; is rebased. The line numbers might change, and hunks might be rearranged, but the fundamental diff itself does not change unless there’s a conflict resolution (or an &lt;a href="https://stackoverflow.com/questions/1461909/evil-merges-in-git"&gt;evil merge&lt;/a&gt;). Atlassian’s &lt;a href="https://marketplace.atlassian.com/apps/1211449/auto-unapprove-for-bitbucket-server?hosting=server&amp;amp;tab=overview"&gt;Auto Unapprove&lt;/a&gt; plugin explores this in detail in its &lt;a href="https://bitbucket.org/atlassian/stash-auto-unapprove-plugin/issues/15/base-unapprove-on-git-diff-targetsource"&gt;issue #15&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If we were writing “Optimistic Build Status Propagation” from scratch, generating canonicalized diffs would be a big headache. Fortunately, the “git patch-id” command already has this covered, with some extra help coming from its “--stable” option:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;--stable&lt;br&gt;&lt;br&gt;
Use a “stable” sum of hashes as the patch ID. With this option, reordering file diffs that make up a patch does not affect the ID.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here are some examples using &lt;em&gt;master&lt;/em&gt; and &lt;em&gt;branch&lt;/em&gt; from the diagram (clone from &lt;a href="https://bitbucket.org/gsylviedavies/triple-dot-diff/commits/all"&gt;here&lt;/a&gt; if you must!):&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git diff master...branch | git patch-id --stable
790e0c0693c61e28fa1b3eea204bafe3946f5cba
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If I sync-merge (I’m on &lt;em&gt;branch&lt;/em&gt;):&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git merge master -m 'merge'
git diff master...branch | git patch-id --stable
790e0c0693c61e28fa1b3eea204bafe3946f5cba
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If I retreat and rebase:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git reset --hard origin/branch
git rebase master
git diff master...branch | git patch-id --stable
790e0c0693c61e28fa1b3eea204bafe3946f5cba
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The patch-id doesn’t change! This makes the command (triple-dot-diff piped into patch-id) perfect for determining when rebases and other common branch operations have not changed the underlying work sitting on the source branch. Since the underlying patch has not changed, one can optimistically presume the build will probably have the same result.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Fast-forward merges are great, because they avoid pointless merges and keep the history clean. Requiring successful builds before merging is great because it prevents broken builds. But add these together and you might find yourself in an infinite rebase fight!&lt;/p&gt;

&lt;p&gt;Fortunately you can use $(git diff TARGET...SOURCE | git patch-id) to stop the fighting.&lt;/p&gt;

&lt;p&gt;If you’re on Bitbucket Server, install the &lt;a href="https://marketplace.atlassian.com/apps/1214545/pr-booster-for-bitbucket-server?hosting=server&amp;amp;tab=overview"&gt;PR-Booster&lt;/a&gt; add-on to deploy the fix instantly.&lt;/p&gt;

&lt;p&gt;Otherwise roll your own, and let me know when you do! Email me at julius at mergebase.com.&lt;/p&gt;

&lt;p&gt;Happy rebasing!&lt;/p&gt;

&lt;p&gt;(p.s. For those on Bitbucket Server, I use &lt;a href="https://marketplace.atlassian.com/apps/1217635/control-freak-for-bitbucket-server?hosting=server&amp;amp;tab=overview"&gt;Control Freak&lt;/a&gt; to enforce a fast-forward merge policy on git repositories I control.)&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is a guest post for Bitbucket written by Julius Musseau at &lt;a href="http://www.mergebase.com"&gt;&lt;em&gt;mergebase.com&lt;/em&gt;&lt;/a&gt;. To contribute to the Bitbucket blog, &lt;a href="https://bitbucket.org/product/write"&gt;&lt;em&gt;apply here&lt;/em&gt;&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;


</description>
      <category>versioncontrol</category>
      <category>git</category>
      <category>technology</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
