<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Michael Roks</title>
    <description>The latest articles on DEV Community by Michael Roks (@uwuthless).</description>
    <link>https://dev.to/uwuthless</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1025455%2F18ec670c-fd08-48aa-b4d4-fa1f6dd81e5b.JPG</url>
      <title>DEV Community: Michael Roks</title>
      <link>https://dev.to/uwuthless</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/uwuthless"/>
    <language>en</language>
    <item>
      <title>Big-O notation VS. Amortized Complexity Analysis</title>
      <dc:creator>Michael Roks</dc:creator>
      <pubDate>Sun, 03 Mar 2024 16:53:39 +0000</pubDate>
      <link>https://dev.to/uwuthless/big-o-notation-vs-amortized-complexity-analysis-4a9a</link>
      <guid>https://dev.to/uwuthless/big-o-notation-vs-amortized-complexity-analysis-4a9a</guid>
      <description>&lt;p&gt;Hi everybody, my name is Michael Roks, I'm a backend software engineer at Yandex with 5+ YOE, and an hobbyist competitive programmer!&lt;/p&gt;

&lt;p&gt;Today we are going to talk about different ways to evaluate the complexity of your algorithms - specifically, Big-O asymptotic complexity vs. amortized complexity. Understanding this helps you get a better grasp of how the common data structures work inside, and it's a plus to mention it in an interview :)&lt;/p&gt;

&lt;p&gt;Speaking in simple terms, &lt;code&gt;Big-O&lt;/code&gt; analysis gives the worst possible complexity of an algorithm - its time and/or space complexity - while &lt;code&gt;Amortized&lt;/code&gt; analysis gives the average amount of work per run, if we run K times.&lt;/p&gt;

&lt;p&gt;For example, adding an element to a dynamic array is &lt;strong&gt;O(n)&lt;/strong&gt; time complexity (the symbol &lt;code&gt;O&lt;/code&gt; denotes the worst-case-scenario asymptotic analysis notation), but &lt;strong&gt;Amortized(1)&lt;/strong&gt; (there is no dedicated symbol for amortized analysis, so I'm just putting the word &lt;code&gt;Amortized&lt;/code&gt; in front of the complexity itself).&lt;/p&gt;

&lt;p&gt;Why are the complexities like this?&lt;/p&gt;

&lt;p&gt;It's because of how the dynamic array handles adding and removing elements. Quick reminder - a dynamic array, in its simplest form, has the following methods:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add to the end&lt;/li&gt;
&lt;li&gt;Remove from end&lt;/li&gt;
&lt;li&gt;Get by any index&lt;/li&gt;
&lt;/ol&gt;
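&lt;p&gt;To make this concrete, here is a minimal sketch of such a dynamic array in Python. This is my own illustration, not any particular library's implementation - the class and method names are made up for the example:&lt;/p&gt;

```python
class DynamicArray:
    """A minimal dynamic array sketch: a fixed-size "regular" array
    plus a lastIndex pointer, as described above. Illustrative only."""

    def __init__(self, capacity=3):
        self._store = [None] * capacity  # the underlying "regular" array
        self._last_index = 0             # next free slot in the store

    def add(self, value):
        """Add to the end; expand first if the store is full."""
        if self._last_index == len(self._store):
            self._expand()
        self._store[self._last_index] = value
        self._last_index += 1

    def remove_last(self):
        """Remove from the end and return the removed element."""
        if self._last_index == 0:
            raise IndexError("array is empty")
        self._last_index -= 1
        value = self._store[self._last_index]
        self._store[self._last_index] = None
        return value

    def get(self, index):
        """Get by any index in O(1)."""
        if not 0 <= index < self._last_index:
            raise IndexError("index out of range")
        return self._store[index]

    def _expand(self):
        """Double the store and copy every existing element over: O(n)."""
        new_store = [None] * (2 * len(self._store))
        for i in range(self._last_index):
            new_store[i] = self._store[i]
        self._store = new_store
```

&lt;p&gt;Note that &lt;code&gt;add&lt;/code&gt; usually just writes one slot, but occasionally pays the full O(n) cost of &lt;code&gt;_expand&lt;/code&gt; - which is exactly the behavior analyzed below.&lt;/p&gt;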

&lt;p&gt;The dynamic array also holds the underlying "regular" array &amp;amp; a &lt;code&gt;lastIndex&lt;/code&gt; pointer into that "regular" array, pointing at the next free slot (one past the last item).&lt;br&gt;
When you add to the end of the array, there are 2 ways it can go:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You have spare space at the end of your "regular" array: you just write the provided element at &lt;code&gt;lastIndex&lt;/code&gt; and do &lt;code&gt;++lastIndex&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;You ran out of spare space in the "regular" array, and now have to &lt;code&gt;expand&lt;/code&gt; it, and then append the provided element to the expanded array.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A visual example!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1g3i672czl5zord10bg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1g3i672czl5zord10bg.png" alt="Using regular array in the Dynamic array, at some point we run out of available regular array and have to expand"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In steps 2, 3 &amp;amp; 4 we successfully added elements at &lt;code&gt;lastIndex&lt;/code&gt; and moved it one to the right. But then we ran out of space &amp;amp; had to &lt;code&gt;Expand&lt;/code&gt; the underlying regular array (create a new, larger one &amp;amp; copy all the existing elements into it).&lt;/p&gt;

&lt;p&gt;As you can tell, the &lt;code&gt;Expand&lt;/code&gt; operation is O(N) time complexity. And since an &lt;code&gt;Expand&lt;/code&gt; is eventually triggered by a single &lt;code&gt;add&lt;/code&gt; operation, the &lt;code&gt;add&lt;/code&gt; operation is also O(N) - remember, O is the &lt;code&gt;worst-case-scenario&lt;/code&gt; notation. So adding to the dynamic array is an &lt;strong&gt;O(N)&lt;/strong&gt; operation.&lt;/p&gt;

&lt;p&gt;But next, let's look at how many expansions we have to do over K runs of &lt;code&gt;add&lt;/code&gt;. In our example we started at size 3, and expand each time we fill up the dynamic array. Another visual example!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2151anjww7woue2cpjd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2151anjww7woue2cpjd.png" alt="Expanding dynamic array multiple times, it takes 2*K time on average"&gt;&lt;/a&gt;&lt;br&gt;
   Here, we have to: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fill the initial array of size 3 (3 add operations), then expand (3 copy operations)&lt;/li&gt;
&lt;li&gt;Finish filling the array of size 6 (another 3 add operations), then expand (6 copy operations)&lt;/li&gt;
&lt;li&gt;Finish filling the array of size 12 (another 6 add operations), then expand (12 copy operations)&lt;/li&gt;
&lt;li&gt;Finish filling the array of size 24 (another 12 add operations), then expand (24 copy operations)&lt;/li&gt;
&lt;li&gt;Finish filling the array of size 48 (another 24 add operations), then expand (48 copy operations)&lt;/li&gt;
&lt;li&gt;And so on…&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As you can see, continuing this series to infinity, the total number of copy operations caused by K &lt;code&gt;.add&lt;/code&gt; operations is ~2*K. So on average, adding an item to the end of a dynamic array costs 3 operations - 1 for the add itself &amp;amp; 2 toward future expansions (K &lt;code&gt;.add&lt;/code&gt; operations require ~3*K total work).&lt;/p&gt;

&lt;p&gt;And this is exactly what amortized analysis is - the average number of operations per run over K runs. And this proves that adding to the end of a dynamic array is Amortized(1) time complexity. Thank you for your time &amp;amp; attention! Plz like &amp;amp; share if you thought this was a good read :D&lt;/p&gt;

</description>
      <category>leetcode</category>
      <category>algorithms</category>
      <category>datastructures</category>
      <category>interview</category>
    </item>
  </channel>
</rss>
