<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mattsi-Jansky</title>
    <description>The latest articles on DEV Community by Mattsi-Jansky (@mattsijansky).</description>
    <link>https://dev.to/mattsijansky</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F154167%2F326624d3-6d59-417e-a359-14a05a359594.jpeg</url>
      <title>DEV Community: Mattsi-Jansky</title>
      <link>https://dev.to/mattsijansky</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mattsijansky"/>
    <language>en</language>
    <item>
      <title>Testing Your Website for Visual Regressions With BackstopJS</title>
      <dc:creator>Mattsi-Jansky</dc:creator>
      <pubDate>Mon, 17 May 2021 15:01:08 +0000</pubDate>
      <link>https://dev.to/mattsijansky/testing-your-website-for-visual-regressions-with-backstopjs-bn2</link>
      <guid>https://dev.to/mattsijansky/testing-your-website-for-visual-regressions-with-backstopjs-bn2</guid>
      <description>&lt;p&gt;&lt;em&gt;First posted on &lt;a href="https://codurance.com/2020/01/16/backstopjs-tutorial/" rel="noopener noreferrer"&gt;the Codurance blog&lt;/a&gt;. Cover photo by Alexander Popov on &lt;a href="https://unsplash.com/photos/-1qb8SIBzKY" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Snapshot testing compares a "snapshot" of the output of a prior version of your software to output from the latest version, to check for unintended changes. When a difference is found you either approve it, by updating the expected output snapshot, or fix the cause of the difference.&lt;/p&gt;

&lt;p&gt;Visual regression testing is a form of snapshot testing for web front-ends. It goes beyond testing the markup or layout: it tests the rendered page as captured in an emulated browser. As such, these tests can "catch CSS Curve Balls", &lt;a href="https://garris.github.io/BackstopJS/" rel="noopener noreferrer"&gt;as BackstopJS puts it&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;BackstopJS is a framework for visual regression testing, written in JavaScript. It treats your web service as a black box, so your website doesn't need to be written in JavaScript to work with BackstopJS. One of its benefits is a comprehensive, helpful diff between your snapshots, embedded in an HTML report that it generates. The example below shows how the scrubber diff method lets you see both the test and reference snapshots simultaneously; you can move the red line to change where the boundary lies.&lt;/p&gt;

&lt;p&gt;This article will explain how to set up BackstopJS and some good practices I've picked up from using it in anger. You'll need to have some awareness of Docker Compose and Yarn or NPM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FMattsi-Jansky%2Fmatt.si%2Fmaster%2Fcontent%2Fblog%2F2020-01%2Fbackstopjs-tutorial%2Fbackstopjs-scrubber.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FMattsi-Jansky%2Fmatt.si%2Fmaster%2Fcontent%2Fblog%2F2020-01%2Fbackstopjs-tutorial%2Fbackstopjs-scrubber.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why
&lt;/h2&gt;

&lt;p&gt;Why use visual regression testing? I'll assume that you appreciate why testing as a general practice is necessary, so here are several scenarios that visual regression tests will catch and other testing techniques won't:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CSS regressions: Often we'll make a change to a CSS style to move that one button into the space we'd like it to be, but how do we know that it hasn't had an undesirable knock-on effect on some other part of the website that uses the same CSS class? Instead of checking every element that may match your CSS selector manually (which we will most often forget to do), run your visual regression tests.&lt;/li&gt;
&lt;li&gt;Responsiveness: Most often we're working on a 1080p screen, but many (perhaps most) of our users will be using their smartphones. Visual regression tests can test an assortment of different screen sizes, giving you confidence that your changes haven't broken the responsive page at other sizes and saving you time manually testing the page at different resolutions.&lt;/li&gt;
&lt;li&gt;Dependabot: It's a wonderful tool that saves you from manually keeping all of your dependencies up-to-date. In my current team &lt;a href="https://www.codurance.com/publications/2019/02/24/taming-dependabot" rel="noopener noreferrer"&gt;we use dependabot aggressively&lt;/a&gt; - we have extensive testing and auto-merge any Dependabot PRs that pass all our tests. In fact, in terms of PRs or commits it's the most active member of our team by far. However, you can't place your trust in Dependabot's auto-merge in the front-end if you aren't testing for visual regressions. Before we introduced visual regression testing we had instances where Dependabot would automatically update to a new version of a dependency that introduced a visual change (at one point even removing a prominent navigation link), and it would be automatically merged and deployed to production. We trust Dependabot to change our software's behaviour because we have extensive tests of that behaviour, and we trust it to change our software's visual appearance because we have visual regression tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How
&lt;/h2&gt;

&lt;p&gt;Setting up BackstopJS isn't difficult. But I'm going to make it more difficult, because we want to set it up in a way that's repeatable (i.e. it always produces the same result for the same version of the software) and automated. You don't want your visual regression tests to be "flaky", passing sometimes and not others. It's worth putting extra effort in to get this right, otherwise they may be less than useful. And you want them automated because then you can trust yourself and others not to forget to run the tests or update the snapshots, and you can use them as checks for Dependabot's auto-merge.&lt;/p&gt;

&lt;p&gt;Ultimately, your workflow should look like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You push some horribly misguided code, introducing an unintended visual change.&lt;/li&gt;
&lt;li&gt;Your CI runs the visual regression tests and they fail, turning your build red.&lt;/li&gt;
&lt;li&gt;You look at BackstopJS's test report from your CI, and spot the problem.&lt;/li&gt;
&lt;li&gt;You decide whether this change is acceptable, and either update the snapshots to this change or fix the code that caused this change.&lt;/li&gt;
&lt;/ol&gt;
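&lt;p&gt;&lt;em&gt;As a sketch of that workflow in CI: the following is an illustrative GitHub Actions configuration, not from this article. The workflow, job, and artifact names are hypothetical, and the syntax will differ on other CI systems.&lt;/em&gt;&lt;/p&gt;

```yaml
# Hypothetical CI job: run the visual regression tests and, on failure,
# publish BackstopJS's HTML report as a build artifact so you can inspect
# the diff (steps 2 and 3 of the workflow above).
name: visual-regression-tests
on: [push]
jobs:
  backstop:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run visual regression tests
        run: docker-compose run visual_regression_tests test
      - name: Upload BackstopJS report
        if: failure()
        uses: actions/upload-artifact@v2
        with:
          name: backstop-report
          path: test/visual/backstop_data/html_report
```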

&lt;h3&gt;
  
  
  Docker
&lt;/h3&gt;

&lt;p&gt;The first step to creating a repeatable visual regression test is to run the tests on the same platform every time. Otherwise, you're in for trouble. Small changes in things like font rendering between operating systems can prevent the reference snapshots generated on your local machine from matching the test snapshots generated on your CI server. And if your CI server has multiple test runners on different platforms you've got even more unpredictability on your hands. To get around issues like these we use Docker containers via Docker Compose. This guarantees the same platform for every test run. This approach also has the advantage that you don't need to install BackstopJS locally and end up with different versions of it on each developer's machine; instead you have one consistent version, and Dependabot can keep it up-to-date. The disadvantage is of course that it's slower.&lt;/p&gt;

&lt;p&gt;Add the following entry to your root &lt;code&gt;docker-compose.yml&lt;/code&gt; (&lt;a href="https://docs.docker.com/compose/gettingstarted/" rel="noopener noreferrer"&gt;create one&lt;/a&gt; if necessary):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;visual_regression_tests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backstopjs/backstopjs:4.4.2&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./test/visual:/src&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This describes a &lt;code&gt;visual_regression_tests&lt;/code&gt; Docker container using the official &lt;code&gt;backstopjs&lt;/code&gt; image, version &lt;code&gt;4.4.2&lt;/code&gt;. The version can be left out, but it's important that it be there for repeatability. You can use Dependabot to keep it up-to-date by creating a Dockerfile for it (until &lt;a href="https://github.com/dependabot/feedback/issues/82" rel="noopener noreferrer"&gt;Dependabot adds Docker Compose support&lt;/a&gt;), which is described below. Of course, you should copy the latest version number from &lt;a href="https://hub.docker.com/r/backstopjs/backstopjs/builds" rel="noopener noreferrer"&gt;the BackstopJS Docker image releases&lt;/a&gt; and use that; &lt;code&gt;4.4.2&lt;/code&gt; may be outdated at time of reading. If you're using a Docker container for your website as well you should add &lt;a href="https://docs.docker.com/compose/compose-file/#depends_on" rel="noopener noreferrer"&gt;a depends_on entry&lt;/a&gt; to that container.&lt;/p&gt;

&lt;p&gt;The last part is the key: the volume configuration &lt;code&gt;./test/visual:/src&lt;/code&gt;. This maps the local relative path &lt;code&gt;./test/visual&lt;/code&gt; to &lt;code&gt;/src&lt;/code&gt; in the container. You may change &lt;code&gt;./test/visual&lt;/code&gt; to any relative path you like, but &lt;code&gt;/src&lt;/code&gt; must stay as it is because that is where BackstopJS looks for its configuration inside the container.&lt;/p&gt;

&lt;h3&gt;
  
  
  BackstopJS
&lt;/h3&gt;

&lt;p&gt;Previously, I said that with Docker developers don't need to install a local instance of BackstopJS on their machines. This is true, but with one exception: You. That's because for our next step we're going to create the BackstopJS configuration, and you'll need to use Backstop to create a default instance of the configuration. So try this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yarn global add backstopjs
&lt;span class="nb"&gt;mkdir&lt;/span&gt; ./test/visual
&lt;span class="nb"&gt;cd&lt;/span&gt; ./test/visual
backstop init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First we install BackstopJS globally (NPM alternative: &lt;code&gt;npm install -g backstopjs&lt;/code&gt;) and create the folder where our container expects to find the configuration (change this as you like, but keep it consistent with the Docker Compose volume). Then we enter the folder and initialise a BackstopJS config there. This creates &lt;code&gt;backstop.json&lt;/code&gt; and a &lt;code&gt;backstop_data/engine_scripts&lt;/code&gt; folder. The engine scripts are sensible defaults that determine how the browser emulator is run; unless you're doing something unusual you shouldn't need to change most of them.&lt;/p&gt;

&lt;p&gt;Before going any further, create a &lt;code&gt;.gitignore&lt;/code&gt; in your BackstopJS folder with the following entries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;backstop_data/bitmaps_test
backstop_data/html_report
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures that the test snapshots and HTML reports generated by Backstop are ignored by Git. You don't want to commit these to version control, but you &lt;em&gt;do&lt;/em&gt; want to commit the other folder it generates: the reference snapshots to test against.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;backstop.json&lt;/code&gt; file is your main means of interacting with BackstopJS and to start with should look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"backstop_default"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"viewports"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"phone"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"width"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;320&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"height"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;480&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;…&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"onBeforeScript"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"puppet/onBefore.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"onReadyScript"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"puppet/onReady.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"scenarios"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"BackstopJS Homepage"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"cookiePath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"backstop_data/engine_scripts/cookies.json"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://garris.github.io/BackstopJS/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"referenceUrl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"readyEvent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"readySelector"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"delay"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"hideSelectors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"removeSelectors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"hoverSelector"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"clickSelector"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"postInteractionWait"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"selectors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"selectorExpansion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"expect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"misMatchThreshold"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"requireSameDimensions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"paths"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"bitmaps_reference"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"backstop_data/bitmaps_reference"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"bitmaps_test"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"backstop_data/bitmaps_test"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"engine_scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"backstop_data/engine_scripts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"html_report"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"backstop_data/html_report"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ci_report"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"backstop_data/ci_report"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;…&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first thing I'd advise changing is the &lt;code&gt;viewports&lt;/code&gt; property. This property determines the resolutions that the site will be tested at. The default is not very extensive, and in my current team we've settled on the following viewport configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nl"&gt;"viewports"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"small"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"width"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;640&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"height"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;480&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"width"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;814&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"height"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;768&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"large"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"width"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1066&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"height"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;814&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"xlarge"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"width"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"height"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1050&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"xxlarge"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"width"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1600&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"height"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1200&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next interesting property is &lt;code&gt;scenarios&lt;/code&gt;. A scenario defines a test, and you'll want one for each major section of your website. With a blog, for example, you may want to test the blog page and the blog list page, so you would have two scenarios.&lt;/p&gt;

&lt;p&gt;The real trick here, the one that will lead you to either jubilation or despair, is figuring out &lt;em&gt;when&lt;/em&gt; to take the snapshot. Browsers, JavaScript, web services and HTTP are all fickle beasts; pages may load slightly faster or slower each time you create a snapshot. For your visual regression tests to be repeatable, they need to create the snapshot only once the page has finished loading. If they don't, you'll see many test failures because a font hadn't loaded yet, or a pop-up hadn't appeared yet, or an HTTP request to an AJAX dependency hadn't completed yet, et cetera. As such, &lt;em&gt;a lot&lt;/em&gt; of the scenario configuration options are about deciding &lt;em&gt;when&lt;/em&gt; the page has finished loading. This is the real meat of the configuration, and each option is documented in &lt;a href="https://github.com/garris/BackstopJS#using-backstopjs" rel="noopener noreferrer"&gt;BackstopJS's readme&lt;/a&gt;, but a few key ones to highlight are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;cookiePath&lt;/code&gt;: Lets you load faked cookies into the browser emulator, which is useful for sending a token to an authenticated web service. Set it to a relative path to a JSON file; the expected format is shown in a sample file, &lt;code&gt;engine_scripts/cookies.json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;url&lt;/code&gt;: This is the full address of the web page being tested. If you're using a Docker container to host your site you may use the name of the container, like &lt;code&gt;http://website:8080/myPage&lt;/code&gt;. Otherwise, you may run it locally and use something like &lt;code&gt;http://localhost:8080/myPage&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;readyEvent&lt;/code&gt;: Waits for a particular string to appear in the console log before capturing, which you can use to signal that the page is fully loaded. This is useful for repeatability.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;readySelector&lt;/code&gt;: Similar to the above, this makes Backstop wait until a particular element (defined by a CSS selector) appears before capturing. &lt;strong&gt;I recommend using this setting, and setting it to something that won't appear on any of your error pages&lt;/strong&gt;. If your service doesn't work during a visual regression test, you may not know until you get the report and are staring at a diff between your reference and a 404 page. But if your &lt;code&gt;readySelector&lt;/code&gt; never appears you get a timeout error in the output telling you that the expected page hasn't loaded, so you get the feedback sooner.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;delay&lt;/code&gt;: &lt;strong&gt;Avoid this setting if you can&lt;/strong&gt;. It waits an arbitrary amount of time before assuming the page is ready to test. By default it is 0, meaning no delay. If you find yourself reaching for it, it's because you haven't found a reliable way to tell that the page has loaded; use it only as an absolute last resort.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;hideSelectors&lt;/code&gt;/&lt;code&gt;removeSelectors&lt;/code&gt;: If an element can't be relied upon to load in a timely fashion, or contains something random that changes on each load, you can hide or remove it with either of these properties.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scrollToSelector&lt;/code&gt;: BackstopJS will capture the entire document, not just the visible section (unless configured otherwise). However, you may want to trigger some event via scrolling. This setting makes Backstop scroll to a particular selector.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;selectors&lt;/code&gt;: By default BackstopJS will capture the entire document. But if you want to test a specific set of regions, you can use this to limit the elements used to generate the snapshot. It's the opposite of &lt;code&gt;hideSelectors&lt;/code&gt;/&lt;code&gt;removeSelectors&lt;/code&gt; (but they can be used together). It's especially useful when you want to break a complex page down into smaller parts; you'll get more specific feedback on individual components, making regressions easier to identify.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;misMatchThreshold&lt;/code&gt;: The degree to which two snapshots must differ before the scenario fails. This defaults to &lt;code&gt;0.1&lt;/code&gt; (that's 0.1%, not 10%) and I wouldn't increase it without good reason.&lt;/li&gt;
&lt;/ul&gt;
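&lt;p&gt;&lt;em&gt;Putting a few of these options together, a scenario for a hypothetical blog page might look like the sketch below. The label, URL and selectors are illustrative, not from this article.&lt;/em&gt;&lt;/p&gt;

```json
{
  "label": "Blog post page",
  "url": "http://website:8080/blog/my-first-post",
  "readySelector": ".article-body",
  "hideSelectors": [".comments-widget"],
  "selectors": ["header", ".article-body", "footer"],
  "misMatchThreshold": 0.1,
  "requireSameDimensions": true
}
```

Here `readySelector` delays the capture until the article body has rendered, the (hypothetical) comments widget is hidden because its content varies between runs, and `selectors` breaks the page into three separately-reported regions.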

&lt;h3&gt;
  
  
  Running the tests
&lt;/h3&gt;

&lt;p&gt;At this point you should be able to run your tests. From the root of the project run &lt;code&gt;docker-compose run visual_regression_tests reference&lt;/code&gt;; this generates your first reference snapshots. Then try &lt;code&gt;docker-compose run visual_regression_tests test&lt;/code&gt;; this captures new test snapshots and compares them against the references you last captured. I suggest recording these commands as scripts, so that every developer doesn't have to remember them. In Yarn/NPM we add scripts to &lt;code&gt;package.json&lt;/code&gt; to run these commands; otherwise we create a shell script inside a &lt;code&gt;./scripts&lt;/code&gt; folder.&lt;/p&gt;
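&lt;p&gt;&lt;em&gt;For example, the &lt;code&gt;package.json&lt;/code&gt; scripts might look like the following sketch. The script names are hypothetical; pick whatever naming your team prefers.&lt;/em&gt;&lt;/p&gt;

```json
{
  "scripts": {
    "test:visual": "docker-compose run visual_regression_tests test",
    "test:visual:reference": "docker-compose run visual_regression_tests reference",
    "test:visual:approve": "docker-compose run visual_regression_tests approve"
  }
}
```

The `approve` command promotes the most recent test snapshots to become the new references, which is how you accept an intentional visual change.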

&lt;p&gt;Don't worry if your tests don't pass the first time; I'll explain some ways to improve their consistency and reliability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dependencies
&lt;/h3&gt;

&lt;p&gt;Just in case I haven't said this enough times: &lt;strong&gt;Repeatability is key&lt;/strong&gt;. One of the obstacles to this repeatability is ensuring that your dependencies are consistent. If you depend on a web service or a database that sends you some data to display on the page, then that service needs to send the same data every time the visual regression tests are run. This means that you need the capability to fake your dependencies. If you depend on a database, then you may want to achieve this by creating a Docker container of your particular database dependency with some minimal fake data. If you're dependent on web services, then I'd recommend using &lt;a href="https://github.com/quii/mockingjay-server" rel="noopener noreferrer"&gt;Mockingjay Server&lt;/a&gt;. It's a Docker container around the mocking service Mockingjay. This makes for a painless and platform agnostic way to fake your dependencies with web services that respond with fake data. Just add something like the following to your &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;fake_my_service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quii/mockingjay-server:1.10.4&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./test/fakes:/fakes&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-config=/fakes/my_service.yaml -port=9099&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9099:9099"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this to work you need a directory, here specified as &lt;code&gt;./test/fakes&lt;/code&gt;, containing a YAML file that specifies the endpoints to fake &lt;a href="https://github.com/quii/mockingjay-server#Running-a-fake-server" rel="noopener noreferrer"&gt;following Mockingjay-Server's format&lt;/a&gt;. This directory may include multiple fakes for different services; we specify which fake file to use in the &lt;code&gt;command&lt;/code&gt; property. Then we just configure our web service to talk to this fake service when the tests are run. This way we know that our dependency will reliably, repeatably generate the same output, and that's a huge benefit to our testing.&lt;/p&gt;
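
&lt;p&gt;As a sketch of what such a fake might contain (the endpoint and body here are made up for illustration; check the linked Mockingjay-Server docs for the full schema), &lt;code&gt;./test/fakes/my_service.yaml&lt;/code&gt; could look like:&lt;/p&gt;

```yaml
# Hypothetical fake for a "my_service" dependency.
# Each entry pairs an expected request with a canned response.
- name: Get featured products
  request:
    uri: /products/featured
    method: GET
  response:
    code: 200
    body: '{"products": [{"id": 1, "name": "Example product"}]}'
    headers:
      content-type: application/json
```

&lt;p&gt;Because the response is fixed, the page under test renders identically on every run.&lt;/p&gt;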

&lt;h3&gt;
  
  
  Custom Fonts
&lt;/h3&gt;

&lt;p&gt;Downloading a non-default font from some server somewhere will take an unpredictable amount of time, so it harms our repeatability. Rather than reaching for that unreliable &lt;code&gt;delay&lt;/code&gt; setting, however, we can pre-install the fonts on the Docker image to get around this problem altogether. Simply create a &lt;code&gt;Dockerfile&lt;/code&gt; inside your BackstopJS directory with an entry like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; backstopjs/backstopjs:4.4.2&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; fonts-lato
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; fonts-font-awesome
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a very basic Dockerfile that extends the official BackstopJS image (remember to include the version!) and uses &lt;code&gt;apt-get&lt;/code&gt; to install the requisite fonts. This way the browser emulator won't need to download the fonts as they're already installed. You should be able to find the package name of any font you need by searching &lt;a href="https://www.debian.org/distrib/packages" rel="noopener noreferrer"&gt;Debian's package registry&lt;/a&gt;. Then you just need to change your docker-compose entry to build your BackstopJS directory, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;visual_regression_tests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image: build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./tests/visual&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./test/visual:/src&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Continuous Integration
&lt;/h3&gt;

&lt;p&gt;You'll want to add a step to your build to run the visual regression tests. If you created a script to run the tests earlier then you can simply plug it in here. Because we're using Docker you needn't install BackstopJS on your CI server, and it will play nicely with Docker-based CI systems like CircleCI. There is an important extra step though: you need to extract the HTML report that BackstopJS generates as a build artifact. Otherwise, you won't be able to see why your tests have failed.&lt;/p&gt;

&lt;p&gt;For Jenkins you can achieve this with the &lt;a href="https://wiki.jenkins.io/display/JENKINS/HTML+Publisher+Plugin" rel="noopener noreferrer"&gt;HTML Publisher Plugin&lt;/a&gt;. Though the &lt;a href="https://github.com/garris/BackstopJS/tree/master/examples/Jenkins" rel="noopener noreferrer"&gt;official Jenkins support guide&lt;/a&gt; involves setting up a Jenkins job in the traditional web interface, I'd advise against that and recommend &lt;a href="https://jenkins.io/blog/2017/02/10/declarative-html-publisher/" rel="noopener noreferrer"&gt;the declarative Jenkinsfile method&lt;/a&gt; instead. If you already have a declarative Jenkinsfile, just add something like the following to the &lt;code&gt;always&lt;/code&gt; post-step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight groovy"&gt;&lt;code&gt;&lt;span class="n"&gt;post&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;always&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;publishHTML&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;target:&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
      &lt;span class="nl"&gt;allowMissing:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
      &lt;span class="nl"&gt;alwaysLinkToLastBuild:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
      &lt;span class="nl"&gt;keepAll:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
      &lt;span class="nl"&gt;reportDir:&lt;/span&gt; &lt;span class="s1"&gt;'./test/visual/backstop_data'&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
      &lt;span class="nl"&gt;reportFiles:&lt;/span&gt; &lt;span class="s1"&gt;'html_report/index.html'&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
      &lt;span class="nl"&gt;reportName:&lt;/span&gt; &lt;span class="s1"&gt;'Visual Regression Tests Report'&lt;/span&gt;
    &lt;span class="o"&gt;])&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing Repeatability
&lt;/h2&gt;

&lt;p&gt;Lastly, to ensure repeatability you can actually &lt;em&gt;repeat&lt;/em&gt; the tests. When first setting up any tests involving browser emulation or browser automation I won't accept a passing test as correct unless it passes many times, so I can be confident that it isn't going to prove unreliable ("flaky") in the future. A simple Bash script like the following will suffice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="c"&gt;#ensures the script will halt if any of the test runs fail&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;0..19&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;yarn &lt;span class="nb"&gt;test&lt;/span&gt;:visual
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sure, it may take a while to run. Just leave it running in the background while you're doing something else.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There are a number of other approaches to running BackstopJS, but this is the best type of setup I've found. Hopefully I've equipped you with everything you'll need to get up and running with repeatable BackstopJS visual regression tests on your websites. &lt;a href="https://twitter.com/mattsijansky" rel="noopener noreferrer"&gt;Let me know&lt;/a&gt; how you get on.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>javascript</category>
      <category>frontend</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Open Rights Group Conference 2019 Review</title>
      <dc:creator>Mattsi-Jansky</dc:creator>
      <pubDate>Mon, 20 Jan 2020 08:11:41 +0000</pubDate>
      <link>https://dev.to/mattsijansky/open-rights-group-conference-2019-review-3ah9</link>
      <guid>https://dev.to/mattsijansky/open-rights-group-conference-2019-review-3ah9</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally posted on &lt;a href="https://matt.si/2019-10/orgcon/"&gt;matt.si&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In July I attended this year's Open Rights Group Conference, a one-day multi-track conference organised by the Open Rights Group (ORG). ORG is the UK's biggest advocacy group for digital rights and campaigns on issues like surveillance, net neutrality and censorship. Despite being a member for many years, this is only the second time I've attended an ORGCon, and it was honestly one of the best tech-related events I've been to all year. It's not exclusively for techies; on the contrary, I was in the minority, with many more journalists, activists and lawyers there. I'll dive into what made it great, describing each fascinating session I attended.&lt;/p&gt;

&lt;h2&gt;
  
  
  Session one- Keynote by Edward Snowden
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---tlkogHw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-snowden.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---tlkogHw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-snowden.jpg" alt=""&gt;&lt;/a&gt;&lt;br&gt;
Image from &lt;a href="https://twitter.com/greekemmy/status/1149971927848669184"&gt;@greekemmy&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;For the first session of the day American whistleblower Edward Snowden gave a keynote speech followed by Q&amp;amp;A, via videocall from Russia. No-one in the room missed the irony of relying on Google's Hangouts service to talk at a digital rights conference, and the audio was appropriately unreliable. In spite of that Snowden delivered an enlightening, inspiring speech. On discussing his whistleblowing, he made an interesting clarification: that his motivation was about democracy, not about surveillance. What disturbed him was not so much the Orwellian surveillance of hundreds of millions of innocent people as the fact that none of them had agreed to it; there had been no debate, no chance to object before the policy was implemented. Other highlights from the session:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;He very heavily criticised the Ghost Proposal, a GCHQ bid to encrypt all our communications with a key that only the government can decrypt&lt;sup id="fnref1"&gt;1&lt;/sup&gt;. Specifically he referred to how the NSA lost track of some of their most valuable digital assets, which were then used by malicious actors to cause huge amounts of damage&lt;sup id="fnref2"&gt;2&lt;/sup&gt;, and questioned whether the "rinky-dink UK government" (with tongue firmly in cheek, I'm sure) would be able to protect a set of keys that anyone could use to decrypt millions of people's communications.&lt;/li&gt;
&lt;li&gt;"The law does not protect society, society protects the law"; Snowden made the argument that laws are tools and only effective so long as conscientious citizens use them and uphold them. 

&lt;ul&gt;
&lt;li&gt;He further argued that technology is another tool that, like law, can and should be utilised to enforce human rights. One obvious example is encryption; it should surely be illegal to eavesdrop on one's communications without due process, but encrypting one's communications is another way to enforce that principle.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;He ended his speech on an inspiring note: “We will lose this year and the next one, but we will win- it’s a way of life, a system of beliefs that will not go away, what we did to resist governmental abuse of surveillance will matter to our children” (paraphrased to the best of my memory).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Session two- "How Can I Trust What I See Online?" panel from Teen AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pgHFiErI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-teenai.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pgHFiErI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-teenai.jpeg" alt=""&gt;&lt;/a&gt;&lt;br&gt;
Image from &lt;a href="https://medium.com/@AcornAspiration/truth-fiction-and-accountability-how-can-i-trust-what-i-see-online-orgcon-panel-2019-18a443b1b41c"&gt;Acorn Aspirations&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;This panel consisted of four teenagers from Teen AI who've been studying computers, security and AI as well as attending various tech events for young people. It was really inspiring to see young people so interested in technology; they were each well ahead of where I'd been at their age. Unfortunately I've forgotten exactly which panelists said what, having misplaced some notes. Nonetheless, my highlights were: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One panelist made the point that deepfakes may actually make us trust what we see less, thereby making us more skeptical. This may be a positive change in society, in contrast to the bleak forecasts that we usually associate with deepfake tech.&lt;/li&gt;
&lt;li&gt;One panelist, when asked whether young people need to be taught how to be safe online, responded "No, old people do". They went on to discuss how young people are that much more tech-savvy and more likely to understand what they're looking at, while older people may be more susceptible to misleading or malicious content.&lt;/li&gt;
&lt;li&gt;"If we don't understand technology we will be controlled by people who do"- a wonderful response from one of the panelists discussing their motivations for learning about technology.&lt;/li&gt;
&lt;li&gt;Another panelist said something that was very novel to me, when making the point that technology is not inherently good or bad. I've always thought of facial recognition as an inherently Orwellian and oppressive tool, but they cited a much more positive example: police in New Delhi are using it to locate missing children&lt;sup id="fnref3"&gt;3&lt;/sup&gt;. This is one of those wonderfully simple yet effective ideas: go around taking pictures of children at orphanages, then use facial recognition to compare them to the faces of missing children that their families have posted online. In just the first four days they identified 3,000 missing children.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Session three- Can Tech be Truly Ethical?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RoA7_9m7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-ethics-panel.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RoA7_9m7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-ethics-panel.jpg" alt=""&gt;&lt;/a&gt;&lt;br&gt;
Image from &lt;a href="https://twitter.com/DavidRaho/status/1149998408075272193"&gt;David Raho&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This was a fantastic session; it really began to open my eyes to how complex ethics in technology can get. We had Paul Dourish (University of California), Lilian Edwards (Newcastle University), Ann Light (University of Sussex) and Gina Neff (University of Oxford) discussing the topic, and each demonstrated obvious expertise in the subject. Among numerous enlightening moments were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Ethics washing"- a new term to me; it refers to the tendency of organisations to create an ethics board, often entirely internally with no public oversight or transparency, and declare themselves to now be ethical. The accusation is that such organisations are trying to escape true scrutiny and regulation by saying "We're ethical now, so there's no need to regulate us".&lt;/li&gt;
&lt;li&gt;Ethical behaviour can be difficult to enforce because of combinatorial complexity. This is an especially significant issue with Internet of Things devices, filling homes and interacting with one another. You may ensure your software acts ethically in as many ways as you can think of, but your software will very frequently find itself in situations that you hadn't imagined. When a thousand different devices are capable of interacting with one another, we can't predict every interaction ahead of time.

&lt;ul&gt;
&lt;li&gt;This may be a good use case for property-based testing.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;One person in the audience was a member of a worker-owned software business, which is the first time I've heard of such a thing. It sounds like a great development.&lt;/li&gt;
&lt;li&gt;"How do we stop a raging bull"- while we're clamouring to figure out what technology's role in society is and how it affects all of us, technology continues to develop rapidly and crosses new boundaries into new parts of our lives every day. It's a lot easier to start a fire than to put it out; how can ethicists and human rights campaigners keep from always being caught on the back foot?

&lt;ul&gt;
&lt;li&gt;It was suggested that we should look at business models, and the way different revenue sources can affect a business' motivations. Right now, huge swathes of the tech industry are profiting from automated surveillance of their users. What revenue sources could offer more ethical motivations?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;It's one rule for them, one rule for us: San Francisco has banned facial recognition within the city&lt;sup id="fnref4"&gt;4&lt;/sup&gt;. Yet many of the city's occupants are affluent technologists making their money by continually improving the very same problematic technologies that they're now exempt from, but that much of the rest of the world is increasingly subjected to.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.jstor.org/stable/20024652"&gt;Do Artifacts Have Politics&lt;/a&gt; was dropped in as a piece of recommended reading.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Session four- Facial Recognition is Now a Reality in the UK
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KJAaJuPb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-facial-recognition.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KJAaJuPb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-facial-recognition.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This panel was packed with talented and interesting people- a senior technology correspondent at The Financial Times, the director of Big Brother Watch, the Chair of the London Policing Ethics Panel, and Sian Berry, co-leader of the Green Party. The topic was facial recognition in the UK, and in particular how it's been deployed at Notting Hill Carnival in recent years. The Met Police started using the technology without notice or discussion, and studies have found that it's grossly ineffective&lt;sup id="fnref5"&gt;5&lt;/sup&gt;. My notes from the session:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Met Police didn't admit to using the technology until Big Brother Watch pushed the issue.&lt;/li&gt;
&lt;li&gt;There is no basis in law for the use of facial recognition. No-one knows whether it is something that needs to be consented to under UK law, or what limitations there are on it.&lt;/li&gt;
&lt;li&gt;There is an ongoing legal battle, brought by Big Brother Watch, to decide the legitimacy of facial recognition in UK law&lt;sup id="fnref6"&gt;6&lt;/sup&gt;.&lt;/li&gt;
&lt;li&gt;Suzanne Shale was a great balancing voice representing the Policing Ethics Panel, and the discussion was much less partisan or one-sided for having her there.&lt;/li&gt;
&lt;li&gt;Sian Berry assured the audience that progress was being made, and that the Met Police would be putting a memorandum in place on the use of facial recognition technology.&lt;/li&gt;
&lt;li&gt;Another issue was raised regarding regulation of private facial recognition, taking the cameras in self-checkout machines as a particular example. They're constantly facing outward from behind the screen and have a clear view of the consumer. They could be, and may currently be, used to build facial recognition databases and map them to our cards and purchasing histories. There's currently little if any regulation to prevent this.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Session five- The Encryption Dilemma
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K3VTrxMj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-ghost-proposal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K3VTrxMj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-ghost-proposal.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This was perhaps my favourite session; it was really remarkable and speaks to the integrity of the organisers and the whole community. The first thing to notice in the picture is that Dr. Ian Levy is missing a photograph. This is because he was added at the last possible minute. This session was supposed to be a one-on-one interview, in which Cori Crider and the audience would ask questions of Nate Cardozo, privacy policy manager at Facebook and former senior policy-maker at the Electronic Frontier Foundation (EFF), the biggest digital rights advocacy group in the United States. &lt;/p&gt;

&lt;p&gt;Today Nate mostly works with WhatsApp, which is setting new standards for private digital communications by popularising the end-to-end encryption model. This model is infuriating to the likes of GCHQ because it removes their ability to intercept communications. GCHQ's Ghost Proposal is supposed to be a middle ground where end-to-end encryption can still exist, but the state can sit in the middle and read everything. Nate is a big-time digital privacy advocate and a very outspoken critic of the Ghost Proposal. He even wrote a well-publicised article for the EFF in which he called the Ghost Proposal a "backdoor by another name"&lt;sup id="fnref7"&gt;7&lt;/sup&gt;. &lt;/p&gt;

&lt;p&gt;Enter Dr. Ian Levy, co-author of the Ghost Proposal and Technical Director of the National Cyber Security Centre (a part of GCHQ). I gather that he found out about this interview and argued that, as it was obviously going to centre on the Ghost Proposal, he ought to have an opportunity to defend it. So at the eleventh hour the planned session turned from an interview into a debate, and Ian was added to the session. I think it's really wonderful that we can have a nonpartisan debate like this. I really admire Ian for coming to the event knowing that the audience would largely disagree with him, and ORG, Cori and Nate for doing the right thing by giving Ian this chance and avoiding the conference becoming an echo chamber. Here are my highlights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ian went to great lengths to make the point that he wishes to genuinely engage with the digital rights community, to come to a common understanding and find some resolution for the encryption debate that satisfies everyone. Needless to say this is something that GCHQ and the like have an extremely poor track record of, so it's great to see someone making a change. "The fact I'm here should tell you something", Ian says, and it does.&lt;/li&gt;
&lt;li&gt;Ian insists that the Ghost Proposal is nothing more than a proposal and says that he wants to use it to trigger debate, to iterate on this idea and get public discussions moving.&lt;/li&gt;
&lt;li&gt;The end-to-end encryption model is coming to Facebook Messenger too.&lt;/li&gt;
&lt;li&gt;Questioned on WhatsApp security, Nate assures the audience and adds: "If I leave in a huff, stop using WhatsApp".&lt;/li&gt;
&lt;li&gt;Ian responded really well to a difficult question- "If we introduce a backdoor, won't terrorists just write their own encryption?". He responded that he'd love that. Cryptography, he argued, is &lt;em&gt;hard&lt;/em&gt;. Unless you're an expert cryptographer, when writing your own encrypted communication software you're almost certain to make a mistake that intelligence services can exploit. &lt;/li&gt;
&lt;li&gt;Ian, perhaps as part of trying to humanise his work and reach out to people, recommends the "Top Secret" exhibition at the Science Museum as a great way to learn the history of GCHQ.&lt;/li&gt;
&lt;li&gt;There was a great question from the crowd criticising the Ghost Proposal- "Are you leading the way by using it in all internal GCHQ messaging? Do the royals use it?". Going back to the point of "one rule for them, one rule for us"; GCHQ would obviously resist any attempt to weaken their own communication privacy, as would the more affluent members of society.&lt;/li&gt;
&lt;li&gt;The discussion also went briefly into Cloudflare's DNS over HTTPS. Ian and GCHQ are very much not fond of this, as it once again makes tracking people on the internet more difficult. Conversely, I must say I'm quite enamoured with the idea.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Session six- Jack Poulson on Censorship
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ELznkers--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-censorship.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ELznkers--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-censorship.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jack Poulson is known for quite publicly leaving Google, where he was a Senior Research Scientist, over their "Dragonfly" project to censor search results in China&lt;sup id="fnref8"&gt;8&lt;/sup&gt;. The project would have blocked results for searches like "human rights", "Nobel Prize" and "student protest"&lt;sup id="fnref9"&gt;9&lt;/sup&gt;. My notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Challenged on what he had expected his resignation to do, Jack pointed out the successes Google staff have had in changing their employer's policy&lt;sup id="fnref10"&gt;10&lt;/sup&gt;.&lt;/li&gt;
&lt;li&gt;Jack took a similar stance when he worked in academia, refusing to support closed-access journals. He was threatened with tenure removal if he didn't comply, and resigned two days later.&lt;/li&gt;
&lt;li&gt;Google first shut down Google.cn because China attacked them, compromised their source code and more in order to track human rights activists&lt;sup id="fnref11"&gt;11&lt;/sup&gt;.&lt;/li&gt;
&lt;li&gt;Jack suggests that we too often discuss ethics in a future tense, as if it were theoretical- "Oh, what if technology were to harm society…" but tech already is harming society. It's better, he argues, to refer to actual historical human rights abuses, such as when Cisco wrote software for the Chinese government knowing that it would be used to detain and torture dissidents&lt;sup id="fnref12"&gt;12&lt;/sup&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Art installations
&lt;/h2&gt;

&lt;p&gt;Lastly, there were a number of interesting art installations around the conference. I particularly liked these:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PcSE23B5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-zuck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PcSE23B5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-zuck.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This display demonstrated the amount of land that Mark Zuckerberg has purchased, at great expense, around his home to ensure privacy. The author asks; are we entering a world where only the rich can afford privacy?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pUTcRN-O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-alphabet.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pUTcRN-O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-alphabet.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A children's connect-the-dot game in which each dot is a subsidiary of Alphabet, Google's parent company. A great way to visually demonstrate the sheer scale technology corporations are reaching.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nNKOzioi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-informatics.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nNKOzioi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Mattsi-Jansky/matt.si/master/content/blog/2019-10/orgcon/orgcon19-informatics.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each talk in the biggest room was accompanied by an artist drawing a wonderful piece to summarise the topics covered in the talk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;ORGCon is great. I never imagined I'd learn so much about ethics in technology. There were some really unexpected and interesting sessions. I definitely recommend going next year, and perhaps you'll consider &lt;a href="https://www.openrightsgroup.org/join/"&gt;joining the Open Rights Group&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Footnotes
&lt;/h2&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;Alex Hern, The Guardian, &lt;a href="https://www.theguardian.com/uk-news/2019/may/30/apple-and-whatsapp-condemn-gchq-plans-to-eavesdrop-on-encrypted-chats"&gt;Apple and WhatsApp Condemn GCHQ Plans to Eavesdrop on Encrypted Chats&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;Nicole Perlroth, David E. Sanger, New York Times, &lt;a href="https://www.nytimes.com/2019/05/06/us/politics/china-hacking-cyber.html"&gt;How Chinese Spies Got the N.S.A.’s Hacking Tools, and Used Them for Attacks&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;Anthony Cuthbertson, The Independent, &lt;a href="https://www.independent.co.uk/life-style/gadgets-and-tech/news/india-police-missing-children-facial-recognition-tech-trace-find-reunite-a8320406.html"&gt;Indian Police Trace 3,000 Missing Children In Just Four Days Using Facial Recognition Technology&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;Dave Lee, BBC News, &lt;a href="https://www.bbc.co.uk/news/technology-48276660"&gt;San Francisco is first US city to ban facial recognition&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;Matt Burgess, Wired.co.uk, &lt;a href="https://www.wired.co.uk/article/met-police-london-facial-recognition-test"&gt;The Met Police's facial recognition tests are fatally flawed&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;Big Brother Watch, &lt;a href="https://bigbrotherwatch.org.uk/all-media/big-brother-watch-begins-landmark-legal-challenge-to-police-use-of-facial-recognition-surveillance/"&gt;Big Brother Watch Begins Landmark Legal Challenge to Police Use of Facial Recognition Surveillance&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn7"&gt;
&lt;p&gt;Nate Cardozo, EFF, &lt;a href="https://www.eff.org/deeplinks/2019/01/give-ghost-backdoor-another-name"&gt;Give Up the Ghost: A Backdoor by Another Name&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn8"&gt;
&lt;p&gt;Ryan Gallagher, The Intercept, &lt;a href="https://theintercept.com/2018/09/13/google-china-search-engine-employee-resigns/"&gt;Senior Google Scientist Resigns Over “Forfeiture of Our Values” in China&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn9"&gt;
&lt;p&gt;Anthony Cuthbertson, The Independent, &lt;a href="https://www.independent.co.uk/life-style/gadgets-and-tech/news/google-china-project-dragonfly-cancelled-protest-censored-a8808541.html"&gt;Google Employees Suspect It Is Still Working on Secret Chinese 'Dragonfly' Project&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn10"&gt;
&lt;p&gt;Daisuke Wakabayashi, Scott Shane, New York Times, &lt;a href="https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html"&gt;Google Will Not Renew Pentagon Contract That Upset Employees&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn11"&gt;
&lt;p&gt;Elinor Mills, CNet, &lt;a href="https://www.cnet.com/news/behind-the-china-attacks-on-google-faq/"&gt;Behind the China attacks on Google&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn12"&gt;
&lt;p&gt;Stephen Lawson, CIO, &lt;a href="https://www.cio.com/article/3021959/eff-says-cisco-shouldnt-get-off-the-hook-for-torture-in-china.html"&gt;EFF says Cisco shouldn't get off the hook for torture in China&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>digitalrights</category>
      <category>conference</category>
    </item>
  </channel>
</rss>
