<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dmitrii Morozov</title>
    <description>The latest articles on DEV Community by Dmitrii Morozov (@mtmorozov).</description>
    <link>https://dev.to/mtmorozov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1080439%2F1dff3173-c130-419b-bde6-338b76fde7bd.jpg</url>
      <title>DEV Community: Dmitrii Morozov</title>
      <link>https://dev.to/mtmorozov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mtmorozov"/>
    <language>en</language>
    <item>
      <title>Measurements with MetricKit</title>
      <dc:creator>Dmitrii Morozov</dc:creator>
      <pubDate>Sat, 27 Apr 2024 21:47:09 +0000</pubDate>
      <link>https://dev.to/mtmorozov/measurements-with-metrickit-4239</link>
      <guid>https://dev.to/mtmorozov/measurements-with-metrickit-4239</guid>
      <description>&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;Sometimes you need to measure something in an app: its launch time, or just the time spent between certain points of execution. One of the most convenient tools for this is the MetricKit framework. With its help, you can measure various performance and diagnostic metrics, monitor regressions, and identify problems in your app. This article describes the basic setup, the output data format, and possible options for processing it. It also covers custom measurements and event reporting with MetricKit.&lt;/p&gt;

&lt;h1&gt;
  
  
  Getting started
&lt;/h1&gt;

&lt;p&gt;Let’s start with a simple example: how to measure your app’s launch time?&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;First, we need to define an object responsible for handling info from the framework and implement &lt;code&gt;MXMetricManagerSubscriber&lt;/code&gt;. This protocol contains two optional methods:&lt;br&gt;
&lt;code&gt;optional func didReceive(_ payloads: [MXMetricPayload])&lt;/code&gt;&lt;br&gt;
&lt;code&gt;optional func didReceive(_ payloads: [MXDiagnosticPayload])&lt;/code&gt;&lt;br&gt;
In this article we will focus only on the first one, which provides metrics.&lt;br&gt;
According to &lt;a href="https://developer.apple.com/documentation/metrickit/mxmetricmanagersubscriber/didreceive(_:)-3zq5g" rel="noopener noreferrer"&gt;docs&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The system calls this method at most once per day. It’s safe to process the payload on a separate thread.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Note that opening the app every day does not mean the callback fires daily: the system only guarantees to call the method no more than once a day. In my experience, it is called on average once every 2-3 days.&lt;br&gt;
Let’s create a separate class responsible for metrics reporting:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


import MetricKit

final class MetricReporter: MXMetricManagerSubscriber {
    func didReceive(_ payloads: [MXMetricPayload]) {
        // TODO: Report metrics
    }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These methods are called in the background, so they don’t affect app performance. Later I will explain how to parse data from these metrics. We need this object to live for the whole app lifecycle, so let’s add it to AppDelegate.swift:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import MetricKit
import UIKit

@main
class AppDelegate: UIResponder, UIApplicationDelegate {
    private lazy var metricReporter = MetricReporter()

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -&amp;gt; Bool {
        MXMetricManager.shared.add(metricReporter)
        return true
    }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we are ready to look at a payload example.&lt;/p&gt;

&lt;h2&gt;
  
  
  Payload overview
&lt;/h2&gt;

&lt;p&gt;To get a sample payload we can use Xcode’s &lt;code&gt;Debug -&amp;gt; Simulate MetricKit Payloads&lt;/code&gt;, but it is important to use a real device: on a simulator this option is disabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fargbbokpv1br9px9y4rt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fargbbokpv1br9px9y4rt.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every &lt;code&gt;MXMetricPayload&lt;/code&gt; contains different instances of &lt;code&gt;MXMetric&lt;/code&gt; subclasses such as &lt;code&gt;MXAppLaunchMetric&lt;/code&gt;, &lt;code&gt;MXAppExitMetric&lt;/code&gt;, &lt;code&gt;MXMemoryMetric&lt;/code&gt; and others. All of them in turn contain different sets of the following classes and structures:&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Measurement&lt;/code&gt; structure represents a value together with its unit. This is how it looks in the example of peak memory usage:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"peakMemoryUsage" : "200000 kB"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
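&lt;p&gt;As a quick illustration (my own snippet, not part of the payload above), &lt;code&gt;Measurement&lt;/code&gt; is the standard Foundation type, so unit conversions are explicit:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import Foundation

// Measurement carries a value plus its unit; converting is explicit
let peak = Measurement(value: 200_000, unit: UnitInformationStorage.kilobytes)
let peakInMB = peak.converted(to: .megabytes) // 200 MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;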

&lt;p&gt;The &lt;code&gt;MXHistogram&lt;/code&gt; class represents the number of times the measured value falls into a specific range of possible values. This is how it looks in JSON form in the example of the app’s launch time:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"histogrammedTimeToFirstDrawKey" : {
      "histogramNumBuckets" : 3,
      "histogramValue" : {
        "0" : {
          "bucketEnd" : "1010 ms",
          "bucketCount" : 50,
          "bucketStart" : "1000 ms"
        },
        "1" : {
          "bucketEnd" : "2010 ms",
          "bucketCount" : 60,
          "bucketStart" : "2000 ms"
        },
        "2" : {
          "bucketEnd" : "3010 ms",
          "bucketCount" : 30,
          "bucketStart" : "3000 ms"
        }
      }
    }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;MXAverage&lt;/code&gt; class represents an average value, including the sample count and standard deviation. This is how it looks in the example of average pixel luminance:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"averagePixelLuminance" : {
    "averageValue" : "50 apl",
    "standardDeviation" : 0,
    "sampleCount" : 500
  }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h1&gt;
  
  
  Processing
&lt;/h1&gt;

&lt;p&gt;There are two different approaches to processing metrics data: preprocess the data on the device before sending it, or send the data as is. Both have advantages and disadvantages; let’s take a look at the details.&lt;/p&gt;
&lt;h2&gt;
  
  
  Processing on device
&lt;/h2&gt;

&lt;p&gt;The first approach is to process data on the device and send the resulting values to your analytics solution. In general, you need to transform metric-specific data structures into simpler ones, for example extracting an average value from an &lt;code&gt;MXHistogram&lt;/code&gt;. A great example of this approach can be found &lt;a href="https://www.avanderlee.com/swift/metrickit-launch-time/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Just a few important things to remember:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remember to check the &lt;code&gt;includesMultipleApplicationVersions&lt;/code&gt; field to filter out reports containing data from different app versions&lt;/li&gt;
&lt;li&gt;Remember to filter out abnormal values with &lt;code&gt;isNaN&lt;/code&gt;, &lt;code&gt;isNormal&lt;/code&gt; or &lt;code&gt;isFinite&lt;/code&gt; checks. In the previously mentioned example the author used &lt;code&gt;isNaN&lt;/code&gt;, but you can also utilise &lt;code&gt;isFinite&lt;/code&gt; or &lt;code&gt;isNormal&lt;/code&gt;; just remember that zero is not a normal number&lt;/li&gt;
&lt;/ul&gt;
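&lt;p&gt;As a sketch of what such on-device processing might look like (the helper name and the bucket-midpoint heuristic are my own, based on the public &lt;code&gt;MXHistogram&lt;/code&gt; API):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import MetricKit

// Extracts a weighted-average launch time (in ms) from the payload's
// time-to-first-draw histogram; returns nil if there is no data
func averageTimeToFirstDraw(from payload: MXMetricPayload) -&amp;gt; Double? {
    guard let histogram = payload.applicationLaunchMetrics?.histogrammedTimeToFirstDraw else {
        return nil
    }
    var weightedSum = 0.0
    var totalCount = 0
    for case let bucket as MXHistogramBucket&amp;lt;UnitDuration&amp;gt; in histogram.bucketEnumerator {
        // Use the bucket midpoint as a representative value for the bucket
        let start = bucket.bucketStart.converted(to: .milliseconds).value
        let end = bucket.bucketEnd.converted(to: .milliseconds).value
        weightedSum += (start + end) / 2 * Double(bucket.bucketCount)
        totalCount += bucket.bucketCount
    }
    guard totalCount &amp;gt; 0 else { return nil }
    let average = weightedSum / Double(totalCount)
    return average.isFinite ? average : nil // filter out abnormal values
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;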

&lt;p&gt;In this case, you select on the device the exact data you want to send for analysis. Unfortunately, if you make a mistake at this step it will be quite hard to catch, since you don’t have access to the raw data.&lt;/p&gt;
&lt;h2&gt;
  
  
  Processing off device
&lt;/h2&gt;

&lt;p&gt;The second approach is to offload processing from mobile devices and send the data as is. In &lt;a href="https://nshipster.com/metrickit/" rel="noopener noreferrer"&gt;this&lt;/a&gt; great example the author uses a web service to process metric payloads. The main advantages of this method are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;All raw data is available, so if you later discover any issues with your processing you can fix them without consequences&lt;/li&gt;
&lt;li&gt;At any point in time you can access data from the past&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The main disadvantage here is the higher complexity.&lt;/p&gt;
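&lt;p&gt;A minimal sketch of this approach: forward the raw JSON to a backend. The &lt;code&gt;collectorURL&lt;/code&gt; endpoint here is a placeholder for your own service; only &lt;code&gt;jsonRepresentation()&lt;/code&gt; comes from MetricKit itself:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import Foundation
import MetricKit

final class RawMetricReporter: MXMetricManagerSubscriber {
    // Placeholder endpoint: replace with your own collector service
    private let collectorURL = URL(string: "https://example.com/metrics")!

    func didReceive(_ payloads: [MXMetricPayload]) {
        for payload in payloads {
            var request = URLRequest(url: collectorURL)
            request.httpMethod = "POST"
            request.setValue("application/json", forHTTPHeaderField: "Content-Type")
            request.httpBody = payload.jsonRepresentation() // raw JSON as Data
            URLSession.shared.dataTask(with: request).resume()
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;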
&lt;h1&gt;
  
  
  Custom metrics
&lt;/h1&gt;

&lt;p&gt;Apart from predefined measurements, MetricKit also supports custom measurements and event tracking with &lt;code&gt;mxSignpost&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Custom measurements
&lt;/h2&gt;

&lt;p&gt;Here is how you can do it, using a heavy operation as an example:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

func apply() {
    // create log handler
    let handle = MXMetricManager.makeLogHandle(category: "ApplyCategory")
    mxSignpost(.begin, log: handle, name: "ApplyTrace")
    // critical code section begins
    // ...
    // critical code section ends
    // end measuring
    mxSignpost(.end, log: handle, name: "ApplyTrace")
}

func cancel() {
    let handle = MXMetricManager.makeLogHandle(category: "CancelCategory")
    mxSignpost(.begin, log: handle, name: "CancelTrace")
    // ...
    mxSignpost(.end, log: handle, name: "CancelTrace")
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Though it looks pretty simple there are several important things to highlight:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is important to specify the same name in the paired &lt;code&gt;mxSignpost&lt;/code&gt; calls, otherwise you will not get results in the report&lt;/li&gt;
&lt;li&gt;There is special &lt;a href="https://developer.apple.com/documentation/metrickit/mxsignpostmetric" rel="noopener noreferrer"&gt;note&lt;/a&gt; about this API usage:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;The system limits the number of custom signpost metrics saved to the log in order to reduce on-device memory overhead. Limit the use of custom metrics to critical sections of code.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As a result, you get something similar to the following piece in the resulting payloads:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "signpostMetrics" : [
    {
      "signpostIntervalData" : {
        "histogrammedSignpostDurations" : {
          "histogramNumBuckets" : 2,
          "histogramValue" : {
            "0" : {
              "bucketCount" : 1,
              "bucketStart" : "0 ms",
              "bucketEnd" : "99 ms"
            },
            "1" : {
              "bucketCount" : 1,
              "bucketStart" : "100 ms",
              "bucketEnd" : "199 ms"
            }
          }
        },
        "signpostCumulativeCPUTime" : "262 ms",
        "signpostAverageMemory" : "64433 kB",
        "signpostCumulativeLogicalWrites" : "748 kB"
      },
      "signpostCategory" : "ApplyCategory",
      "signpostName" : "ApplyTrace",
      "totalSignpostCount" : 2
    },
    {
      "signpostIntervalData" : {
        "histogrammedSignpostDurations" : {
          "histogramNumBuckets" : 1,
          "histogramValue" : {
            "0" : {
              "bucketCount" : 81,
              "bucketStart" : "0 ms",
              "bucketEnd" : "99 ms"
            }
          }
        },
        "signpostCumulativeCPUTime" : "295 ms",
        "signpostAverageMemory" : "211037 kB",
        "signpostCumulativeLogicalWrites" : "168 kB"
      },
      "signpostCategory" : "CancelCategory",
      "signpostName" : "CancelTrace",
      "totalSignpostCount" : 81
    }
  ]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see in the attached payload, the histogram data allows you to collect information about the execution time of specific code sections.&lt;/p&gt;

&lt;h2&gt;
  
  
  Events tracking
&lt;/h2&gt;

&lt;p&gt;Apart from &lt;code&gt;begin&lt;/code&gt; and &lt;code&gt;end&lt;/code&gt;, you can use the &lt;code&gt;event&lt;/code&gt; &lt;code&gt;OSSignpostType&lt;/code&gt;; in this case the payload contains the number of times the event occurred. Here is an example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

let handle = MXMetricManager.makeLogHandle(category: "TestViewController")

override func viewDidLoad() {
    super.viewDidLoad()
    mxSignpost(.event, log: handle, name: "viewDidLoad")
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    mxSignpost(.event, log: handle, name: "viewWillAppear")
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this code we can get the following payload part:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"signpostMetrics" : [
    {
      "signpostCategory" : "TestViewController",
      "totalSignpostCount" : 5,
      "signpostName" : "viewDidLoad"
    },
    {
      "signpostCategory" : "TestViewController",
      "totalSignpostCount" : 5,
      "signpostName" : "viewWillAppear"
    }
  ]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, this can be utilised as an event-collecting tool to build various types of event funnels.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;MetricKit provides a wide range of reporting tools to monitor:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Predefined metrics like application launch time, network usage, memory usage, etc.&lt;/li&gt;
&lt;li&gt;Custom metrics recorded with &lt;code&gt;mxSignpost&lt;/code&gt; for critical code sections&lt;/li&gt;
&lt;li&gt;Events tracking with &lt;code&gt;mxSignpost&lt;/code&gt; for critical events&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These options can be utilised in various scenarios such as performance-related improvements or event funnels.&lt;/p&gt;

</description>
      <category>swift</category>
      <category>ios</category>
      <category>metrics</category>
    </item>
    <item>
      <title>Discovering UI Performance testing with XCTest. Navigation performance.</title>
      <dc:creator>Dmitrii Morozov</dc:creator>
      <pubDate>Sun, 21 May 2023 19:19:28 +0000</pubDate>
      <link>https://dev.to/mtmorozov/discovering-ui-performance-testing-with-xctest-navigation-performance-32c</link>
      <guid>https://dev.to/mtmorozov/discovering-ui-performance-testing-with-xctest-navigation-performance-32c</guid>
      <description>&lt;p&gt;One of the most important components of app performance is the performance of navigation transitions between different screens. Applying new changes to a screen without proper testing can significantly degrade this aspect of an app’s performance. In this article, I will show how to build performance tests to address this problem. This article has some references to my previous &lt;a href="https://dev.to/mtmorozov/discovering-ui-performance-testing-with-xctest-scrolling-performance-pon"&gt;article&lt;/a&gt; about scrolling performance and I recommend to read it first.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;I prepared a demo &lt;a href="https://github.com/mtmorozov/Plants"&gt;project&lt;/a&gt;. This is a simple app that shows you a list of plants; you can select a plant and open its details screen. For all measurements I used a MacBook M2 Pro with Ventura 13.0 and an iPhone 13 Pro with iOS 16.0; the Xcode version is 14.2.&lt;/p&gt;

&lt;p&gt;Here is the test I used. It tests navigation performance when opening the details screen from the main screen:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;import&lt;/span&gt; &lt;span class="kt"&gt;XCTest&lt;/span&gt;

&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="kt"&gt;NavigationPerformanceTests&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;XCTestCase&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;XCUIApplication&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;testNavigationTransition&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;launch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;measureOptions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;XCTMeasureOptions&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;measureOptions&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;invocationOptions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;manuallyStop&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

        &lt;span class="nf"&gt;measure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;XCTOSSignpostMetric&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;navigationTransitionMetric&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nv"&gt;options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;measureOptions&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;staticTexts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"Monstera"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tap&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;navigationBars&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;buttons&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;element&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;boundBy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tap&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="nf"&gt;stopMeasuring&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;As I found out in the &lt;a href="https://dev.to/mtmorozov/discovering-ui-performance-testing-with-xctest-scrolling-performance-pon"&gt;previous article&lt;/a&gt;, a simulator offers limited metrics compared with a real device. To understand whether this method is applicable, I will test it both on a simulator and on a real device.&lt;/p&gt;
&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;First, we need to set up baselines for our test; I described the process in detail &lt;a href="https://dev.to/mtmorozov/discovering-ui-performance-testing-with-xctest-scrolling-performance-pon"&gt;here&lt;/a&gt;. After the baseline setup we are ready to go. To simulate a delay I used this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="cm"&gt;/* x is number of microseconds, for example usleep(10000)
blocks current thread for 10000 microseconds
or 0.01 second and usleep(1000000) blocks for 1 second */&lt;/span&gt;

&lt;span class="nf"&gt;usleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In contrast to measuring scrolling performance, I did not notice any difference between launches, so there is only one try for each delay value. The following hitch-related metrics are not relevant in this case, and during testing their values are always zero:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hitch Time Ratio (NavigationTransition): 0.000 ms per s&lt;/li&gt;
&lt;li&gt;Hitches Total Duration (NavigationTransition): 0.000 ms&lt;/li&gt;
&lt;li&gt;Number of Hitches (NavigationTransition): 0.000 hitches&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I ignored these values during testing. For the delay values, I tried to find the pivot point at which tests start to fail. The following table demonstrates the test results:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;Results show the following: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Duration is the most sensitive metric when measuring navigation performance.&lt;/li&gt;
&lt;li&gt;Simulator results are very similar to real device results.&lt;/li&gt;
&lt;li&gt;Tests start to fail both on a simulator and on a real device with a delay of 50 ms or more.&lt;/li&gt;
&lt;li&gt;Frame Count and Frame Rate are not predictable in the described case and are hardly usable.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The described method is an efficient tool to detect navigation performance issues in the early development stages. It has some constraints, such as requiring the same device model for baseline recording and testing, but even with a simulator it can detect delays that are not noticeable to a human. In my opinion, this can be a great way of controlling the performance of crucial flows in your app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Useful links
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://developer.apple.com/documentation/xcode/writing-and-running-performance-tests"&gt;Writing and running performance tests&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ios</category>
      <category>swift</category>
      <category>mobile</category>
    </item>
    <item>
      <title>Discovering UI Performance testing with XCTest. Scrolling performance.</title>
      <dc:creator>Dmitrii Morozov</dc:creator>
      <pubDate>Wed, 10 May 2023 22:42:19 +0000</pubDate>
      <link>https://dev.to/mtmorozov/discovering-ui-performance-testing-with-xctest-scrolling-performance-pon</link>
      <guid>https://dev.to/mtmorozov/discovering-ui-performance-testing-with-xctest-scrolling-performance-pon</guid>
      <description>&lt;p&gt;Mobile app performance is a critical aspect of the user experience. Poor performance can lead to frustration, negative reviews, and ultimately, loss of users. During app development it is crucial to detect performance issues as early as possible. In 2020 Apple &lt;a href="https://developer.apple.com/videos/play/wwdc2020/10077/" rel="noopener noreferrer"&gt;introduced&lt;/a&gt; new tools to build Performance tests for UI interactions and currently this is one of &lt;a href="https://developer.apple.com/documentation/metrickit/improving_your_app_s_performance" rel="noopener noreferrer"&gt;recommended&lt;/a&gt; methods of improving your apps performance. In this article I tested these tools and described my experience with them. This article is focused on scrolling performance specifically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;I prepared a demo &lt;a href="https://github.com/mtmorozov/Plants" rel="noopener noreferrer"&gt;project&lt;/a&gt;. This is a simple app that shows you a list of plants; you can select a plant and open its details screen:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqzpitsda6ps33gi6ub9.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqzpitsda6ps33gi6ub9.gif" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
For all measurements I used a MacBook M2 Pro with Ventura 13.0 and an iPhone 13 Pro with iOS 16.0; the Xcode version is 14.2.&lt;br&gt;
Here is the test I used (it is based on content from &lt;a href="https://developer.apple.com/videos/play/wwdc2020/10077/" rel="noopener noreferrer"&gt;here&lt;/a&gt;). It tests scrolling performance on the main screen of the app.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;

&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="kt"&gt;ScrollingPerformanceTests&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;XCTestCase&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;XCUIApplication&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;testScrollAnimationPerformance&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;launch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;collection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collectionViews&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;firstMatch&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;measureOptions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;XCTMeasureOptions&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;measureOptions&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;invocationOptions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;manuallyStop&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

        &lt;span class="nf"&gt;measure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;XCTOSSignpostMetric&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;scrollingAndDecelerationMetric&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nv"&gt;options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;measureOptions&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;collection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;swipeUp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;velocity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fast&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;stopMeasuring&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="n"&gt;collection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;swipeDown&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;velocity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fast&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The test runner performs 5 iterations of the measure block (there is also an additional first iteration during which no measuring happens) and shows average values. The number of iterations can be changed via the XCTMeasureOptions object. Also, for the test scheme I disabled all diagnostic features and switched to the release configuration, as mentioned &lt;a href="https://developer.apple.com/videos/play/wwdc2020/10077/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
It is time to start testing; let’s launch it on a simulator.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5y7so3uv02gjkismg77.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5y7so3uv02gjkismg77.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Unfortunately, the simulator supports only the Duration metric. &lt;strong&gt;You have to use a real device to get all available metrics.&lt;/strong&gt; Let’s try it with a real device.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbf1pvynq134pdybekqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbf1pvynq134pdybekqq.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
There are five new metrics: Frame Count, Frame Rate, Number of Hitches, Hitches Total Duration, and Hitch Time Ratio. While frame count and frame rate are more or less obvious, three metrics mention the term “hitch”. According to &lt;a href="https://developer.apple.com/videos/play/tech-talks/10855" rel="noopener noreferrer"&gt;this&lt;/a&gt; talk, a hitch is any time a frame appears on screen later than expected. Thus the hitch time ratio is the total hitch time in an interval divided by its duration.&lt;/p&gt;
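&lt;p&gt;To make the definition concrete, here is the arithmetic with illustrative numbers (not taken from my test runs):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
// Hitch time ratio = total hitch time in an interval / interval duration,
// reported as milliseconds of hitch per second
let hitchesTotalDuration = 12.5 // ms of late frames (illustrative)
let intervalDuration = 2.5      // s of measured scrolling (illustrative)
let hitchTimeRatio = hitchesTotalDuration / intervalDuration // 5 ms per s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;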
&lt;h2&gt;
  
  
  Baseline setup
&lt;/h2&gt;

&lt;p&gt;Now, for our performance tests to work, we need to set up baselines. To set a baseline, tap on the performance measurement status icon (the gray icon with a dot) and then tap the Set Baseline button. On that screen you can also see the performance of the test across different iterations:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18daxlli9actg4zpup1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18daxlli9actg4zpup1x.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
After you set a baseline, you will see a new folder with a .baseline suffix inside the xcshareddata folder (which lives inside the .xcodeproj package). This folder contains two files: Info.plist and a file with a generated name and a .plist suffix. The first file stores information about the device used to record the baseline. It is used when comparing test results: &lt;strong&gt;you can’t compare test results against a baseline recorded on a different device model&lt;/strong&gt;. In our case the device model is iPhone14,2 (iPhone 13 Pro); for the simulator, the device model is the model of the host Mac. The second file stores the test class names and the baseline values.&lt;br&gt;
During baseline recording I encountered two issues:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In most cases the test fails because the maximum standard deviation is exceeded during one of the iterations for the hitch-related metrics (Number of Hitches, Hitches Total Duration and Hitches Time Ratio).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The test sometimes fails because the maximum allowed deviation of the average value is exceeded (this also applies only to the above-mentioned metrics).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These issues show that the performance tools currently don’t handle the hitch-related metrics perfectly, so we have to search for a workaround to make them run stably. For the first case, I increased the maximum standard deviation to 400% to effectively disable that check and focus only on average values. For the second case, I did 5 runs to see whether the current baseline was stable, and whenever the test failed I accepted the measured values as the new baseline; after that, the test became stable and stayed green without any changes in the code.&lt;/p&gt;
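&lt;p&gt;For context, a scrolling performance test that produces these hitch metrics looks roughly like this. This is a minimal sketch based on the &lt;code&gt;XCTOSSignpostMetric&lt;/code&gt; API; the collection view query and the swipe gesture are illustrative, not the exact test from this project. It needs a real device or simulator to run:&lt;/p&gt;

```swift
import XCTest

final class ScrollingPerformanceTests: XCTestCase {
    func testScrollingAnimationPerformance() throws {
        let app = XCUIApplication()
        app.launch()

        // Illustrative query; adjust to match your app's UI hierarchy.
        let collectionView = app.collectionViews.firstMatch

        // scrollDecelerationMetric reports frame rate, frame count and
        // the three hitch metrics discussed above for each iteration.
        measure(metrics: [XCTOSSignpostMetric.scrollDecelerationMetric]) {
            collectionView.swipeUp(velocity: .fast)
        }
    }
}
```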
&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;For testing purposes, I put the following line into &lt;code&gt;cellForItemAt&lt;/code&gt; in the collection view data source:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;

&lt;span class="cm"&gt;/* x is number of microseconds, for example usleep(10000)
blocks current thread for 10000 microseconds
or 0.01 second and usleep(1000000) blocks for 1 second */&lt;/span&gt;

&lt;span class="nf"&gt;usleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;For the test delays, I tried to find the tipping points at which the test starts to fail. Here is the table of results:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;

&lt;p&gt;This table uses the following abbreviations: HTR — Hitch Time Ratio, HTD — Hitch Total Duration, NoH — Number of Hitches, FR — Frame Rate, FC — Frame Count.&lt;br&gt;&lt;br&gt;
From the table you can see that the test starts to fail at a 25 ms delay, and all runs fail at a 75 ms delay, which I believe is a good result. Also, as the table shows, the hitch-related metrics are more sensitive than the others: frame rate and frame count failed only at 100 ms, when hitches are already noticeable visually.&lt;/p&gt;
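&lt;p&gt;These thresholds line up with the display’s frame budget. At 60 fps each frame has roughly 16.7 ms to render (about 8.3 ms on 120 Hz ProMotion displays), so any per-cell delay above the budget is guaranteed to produce hitches; here is a back-of-the-envelope check on the 25 ms result:&lt;/p&gt;

```swift
// Per-frame render budget in milliseconds for a given refresh rate.
func frameBudgetMs(fps: Double) -> Double {
    1000.0 / fps
}

let budget60 = frameBudgetMs(fps: 60)    // ≈ 16.67 ms
let budget120 = frameBudgetMs(fps: 120)  // ≈ 8.33 ms
// A 25 ms delay per cell exceeds the 60 fps budget, which matches
// the point where the hitch metrics start to fail in the table.
```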

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;To sum up, these tests are quite sensitive and easy to build. Unfortunately, the mandatory use of real devices imposes restrictions: to automate the process you need a device farm. Occasional failures caused by exceeding the maximum standard deviation without any code changes also hurt efficiency. It is also worth mentioning that baseline recording can be complicated when running on CI.&lt;br&gt;
In my opinion, this tool is powerful and easy to use, and it is great for keeping the scrolling performance of your app under control. It can also be automated as part of your release process along with your other unit and UI tests. But there is still a lot to improve, first of all in terms of stability, and I hope to see such improvements in the next version of Xcode.&lt;/p&gt;

&lt;p&gt;Useful links:&lt;br&gt;
&lt;a href="https://developer.apple.com/videos/play/wwdc2020/10077/" rel="noopener noreferrer"&gt;Eliminate animation hitches with XCTest&lt;/a&gt;&lt;br&gt;
&lt;a href="https://developer.apple.com/videos/play/wwdc2021/10181" rel="noopener noreferrer"&gt;Ultimate application performance survival guide&lt;/a&gt;&lt;br&gt;
&lt;a href="https://developer.apple.com/videos/play/tech-talks/10855/" rel="noopener noreferrer"&gt;Explore UI animation hitches and the render loop&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ios</category>
      <category>swift</category>
      <category>mobile</category>
    </item>
  </channel>
</rss>
