<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Safia Abdalla</title>
    <description>The latest articles on DEV Community by Safia Abdalla (@captainsafia).</description>
    <link>https://dev.to/captainsafia</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F37984%2F46b8bf0f-9fb1-4e3b-bd68-3590ea5737b4.jpeg</url>
      <title>DEV Community: Safia Abdalla</title>
      <link>https://dev.to/captainsafia</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/captainsafia"/>
    <language>en</language>
    <item>
      <title>You and Me Learn All About HTTP with Safia Abdalla</title>
      <dc:creator>Safia Abdalla</dc:creator>
      <pubDate>Thu, 23 Jul 2020 11:54:13 +0000</pubDate>
      <link>https://dev.to/captainsafia/you-and-me-learn-all-about-http-with-safia-abdalla-3nd0</link>
      <guid>https://dev.to/captainsafia/you-and-me-learn-all-about-http-with-safia-abdalla-3nd0</guid>
      <description>&lt;p&gt;I'm an open source maintainer on the nteract project, a writer, and a software engineer at Microsoft working open source web technologies. I'm passionate about bringing people together to build great things and using storytelling to share knowledge. &lt;/p&gt;

&lt;p&gt;This talk covers the history, fundamentals, and applications of HTTP (HyperText Transfer Protocol). The talk follows this rough breakdown:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Introduce the original problem that HTTP was intended to solve when the Internet was created and why it was the right solution at the time.&lt;/li&gt;
&lt;li&gt;Describe the fundamental process of the original version of HTTP.&lt;/li&gt;
&lt;li&gt;Explore the shortcomings in HTTP that were discovered as the Internet grew and how HTTP evolved to address them, including:
&lt;ul&gt;
&lt;li&gt;Supporting transport of different document types&lt;/li&gt;
&lt;li&gt;Improving the performance/reliability of the protocol&lt;/li&gt;
&lt;li&gt;Improving security via HTTPS&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://drive.google.com/file/d/1TsnL7RcMPn4prE6yit_oIDhZRa3ZaODI/view?usp=sharing"&gt;Here is a download link to the talk slides (PDF)&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This talk will be presented as part of &lt;a href="https://codelandconf.com"&gt;CodeLand:Distributed&lt;/a&gt; on &lt;strong&gt;July 23&lt;/strong&gt;.  After the talk is streamed as part of the conference, it will be added to this post as a recorded video.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>codeland</category>
      <category>http</category>
      <category>https</category>
    </item>
    <item>
      <title>Sorting that's smooth like butter</title>
      <dc:creator>Safia Abdalla</dc:creator>
      <pubDate>Thu, 26 Mar 2020 19:25:48 +0000</pubDate>
      <link>https://dev.to/captainsafia/sorting-that-s-smooth-like-butter-360f</link>
      <guid>https://dev.to/captainsafia/sorting-that-s-smooth-like-butter-360f</guid>
      <description>&lt;p&gt;Continuing on with the trend of exploring esoterically named sorting algorithms, I bring to you a blog post on....smoothsort! I'm gonna be real with you. I was sold on the name the moment I learned about this algorithm. &lt;/p&gt;

&lt;p&gt;Smoothsort is a sorting algorithm invented by Edsger Dijkstra. That name might sound familiar because Dijkstra is a notable computer scientist, known for his work on a variety of algorithms, including his famous eponymous shortest-path algorithm.&lt;/p&gt;

&lt;p&gt;There isn't a lot of content out there about smoothsort. So hopefully, this blog post will help you (and let's be real, me) get an understanding of the algorithm. Before we learn about smoothsort, we have to learn about...&lt;/p&gt;

&lt;h2&gt;
  Heapsort
&lt;/h2&gt;

&lt;p&gt;Heapsort is a sorting algorithm that works on a special data structure known as a heap, particularly a binary heap. A binary heap is a tree structure that meets the following constraints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each node in the tree has at most two children.&lt;/li&gt;
&lt;li&gt;The value of each node is greater than or equal to the value of its children.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is an example of a binary heap that meets these constraints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrfraw9ahpd9ub7ez02f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrfraw9ahpd9ub7ez02f.png" alt="Photo of binary heap"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, heapsort starts by generating a binary heap for some input array. To build the sorted array, heapsort picks the top-most node in the heap and adds it to the beginning of the sorted array. Then the heap is rebalanced and the next value is taken from the top, and so on until heapsort has run through all the elements in the list.&lt;/p&gt;

&lt;p&gt;So most of the grunt work in heapsort is actually happening in the heap, specifically when it comes to building and rebalancing the heap as we extract values out of it.&lt;/p&gt;
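&lt;p&gt;To make the procedure concrete, here's a minimal heapsort sketch in Python. It leans on the standard library's &lt;code&gt;heapq&lt;/code&gt;, which is a min-heap (smallest value at the top) rather than the max-heap pictured above, so the sorted output comes out in ascending order.&lt;/p&gt;

```python
import heapq

def heapsort(values):
    # Build a heap out of the input array...
    heap = list(values)
    heapq.heapify(heap)
    # ...then repeatedly pop the top of the heap; heapq rebalances
    # after every pop, so each pop yields the next-smallest value.
    return [heapq.heappop(heap) for _ in range(len(heap))]
```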

&lt;h3&gt;
  The Leonardo Heap
&lt;/h3&gt;

&lt;p&gt;Smoothsort is similar to heapsort, with one key distinction: instead of using a binary heap, smoothsort uses a Leonardo heap.&lt;/p&gt;

&lt;p&gt;Before we dive into Leonardo heaps, we have to talk about Leonardo numbers. It feels like there's a lot of "before we...we have to..." in this blog post, but that's because smoothsort is quite a layered implementation.&lt;/p&gt;

&lt;p&gt;Leonardo numbers are numbers that satisfy the following sequence.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;L(0) = 1&lt;/li&gt;
&lt;li&gt;L(1) = 1&lt;/li&gt;
&lt;li&gt;L(n) = L(n-1) + L(n-2) + 1&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this recurrence in mind, the first few Leonardo numbers are 1, 1, 3, 5, 9, 15, 25, 41, and so on.&lt;/p&gt;
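&lt;p&gt;As a quick sanity check, the recurrence is easy to turn into a few lines of Python (the function name here is my own):&lt;/p&gt;

```python
def leonardo_numbers(count):
    # Iteratively apply L(n) = L(n-1) + L(n-2) + 1,
    # starting from L(0) = L(1) = 1.
    numbers = []
    a, b = 1, 1
    for _ in range(count):
        numbers.append(a)
        a, b = b, a + b + 1
    return numbers

print(leonardo_numbers(8))  # [1, 1, 3, 5, 9, 15, 25, 41]
```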

&lt;p&gt;Leonardo heaps are built from one or more binary trees. Where does the Leonardo number come in? Well, each of the binary trees in the Leonardo heap must have a number of nodes that is a Leonardo number. For example, here is a Leonardo heap that consists of three binary trees, one with 9 nodes, another with 3, and a third with 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.keithschwarz.com%2Fsmoothsort%2Fimages%2Fleonardo-heap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.keithschwarz.com%2Fsmoothsort%2Fimages%2Fleonardo-heap.png" alt="Image of Leonardo Heap courtesy of keithschwarz.com"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see from this image, the root node in each tree is the largest value.&lt;/p&gt;

&lt;p&gt;When adding and removing values from a Leonardo heap, we have to maintain the properties of the heap: specifically, that it has to consist of a set of binary trees whose sizes are Leonardo numbers.&lt;/p&gt;
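&lt;p&gt;One handy property behind this: any positive integer can be written as a sum of distinct Leonardo numbers, which is what guarantees a heap of any size can be carved into Leonardo-sized trees. Here's a greedy sketch of that decomposition (my own illustration, not smoothsort's actual bookkeeping):&lt;/p&gt;

```python
def leonardo_decomposition(n):
    # Build up the Leonardo numbers that fit under n...
    leo = [1, 1]
    while n >= leo[-1] + leo[-2] + 1:
        leo.append(leo[-1] + leo[-2] + 1)
    # ...then greedily subtract the largest one that still fits.
    parts = []
    for size in reversed(leo):
        if n >= size:
            parts.append(size)
            n -= size
    return parts
```

For example, the 13-node heap pictured above decomposes as 9 + 3 + 1.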

&lt;p&gt;As we remove values from the heap, the root node will shift to be the next largest value in the heap. So if we wanted to get a list sorted in descending order, we would remove values from the heap until all values had been removed.&lt;/p&gt;

&lt;h3&gt;
  Why Leonardo heaps?
&lt;/h3&gt;

&lt;p&gt;The first thought I had when researching this algorithm was: why Leonardo heaps in particular? The short answer is adaptivity. A Leonardo heap can be built and torn down cheaply when the input is already (or nearly) ordered, which is what lets smoothsort run in O(n) time on sorted input while keeping heapsort's O(n log n) worst case. &lt;/p&gt;

&lt;h2&gt;
  The implementation
&lt;/h2&gt;

&lt;p&gt;With all this in mind, let's take a stab at implementing smoothsort for ourselves. We're going to cheat a little bit in this blog post and assume that we already have an implementation of a Leonardo heap that exposes the following interface.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;LeonardoHeap&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;input_array&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;input_array&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;input_array&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;heap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_heap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_array&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Returns the rightmost root node in the Leonardo heap
    which matches with the largest value.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;pop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;largest_value&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Side note: It might be worthwhile to do a blog post on this structure in and of itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;smoothsort&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_array&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;length&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_array&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;input_arary&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;sort_with_heap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_array&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;sort_with_heap&lt;/code&gt; function will take the input array, populate a Leonardo heap with its values, then pop values out of the heap to get a list sorted in descending order.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;sort_with_heap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_array&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;heap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LeonardoHeap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_array&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;length&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_array&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;length&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;heap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pop&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
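&lt;p&gt;Since &lt;code&gt;pop&lt;/code&gt; always returns the largest remaining root, &lt;code&gt;result&lt;/code&gt; comes out in descending order (reverse it if you want ascending). We can sanity-check the shape of the algorithm by swapping in any max-heap for the Leonardo heap; here's a sketch using a negated &lt;code&gt;heapq&lt;/code&gt; as a stand-in:&lt;/p&gt;

```python
import heapq

class MaxHeap:
    # Stand-in for the hypothetical LeonardoHeap: any heap whose
    # pop() returns the largest remaining value will do here.
    def __init__(self, input_array):
        # heapq is a min-heap, so store negated values.
        self.heap = [-value for value in input_array]
        heapq.heapify(self.heap)

    def pop(self):
        return -heapq.heappop(self.heap)

def sort_with_heap(input_array, heap_class=MaxHeap):
    heap = heap_class(input_array)
    return [heap.pop() for _ in range(len(input_array))]

print(sort_with_heap([3, 1, 4, 1, 5]))  # [5, 4, 3, 1, 1]
```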



&lt;p&gt;So that's that on that. Hopefully, you found this blog post a smooth read. Heh. Pun intended...&lt;/p&gt;

&lt;p&gt;Alright, I'll see myself out. Thanks for reading folks!&lt;/p&gt;

</description>
      <category>computerscience</category>
      <category>algos</category>
    </item>
    <item>
      <title>I really couldn't think of a punny title for this post</title>
      <dc:creator>Safia Abdalla</dc:creator>
      <pubDate>Wed, 26 Feb 2020 05:24:01 +0000</pubDate>
      <link>https://dev.to/captainsafia/i-really-couldn-t-think-of-a-punny-title-for-this-post-4a38</link>
      <guid>https://dev.to/captainsafia/i-really-couldn-t-think-of-a-punny-title-for-this-post-4a38</guid>
      <description>&lt;p&gt;Heads up! This blog post is a reading log I'm writing. I'm spending 30 minutes a day reading &lt;a href="https://github.com/dotnet/coreclr/blob/master/Documentation/botr/README.md"&gt;The Book of the Runtime&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  Reading Log 3: The Book of the Runtime
&lt;/h1&gt;

&lt;p&gt;I left the last reading log starting to cover some content on how the allocator component of the garbage collector in the CLR worked. Continuing from there, the book clarifies that when objects are allocated they are categorized into large or small objects depending on their size. The distinction is important because the size of an object determines how difficult it is to garbage collect.&lt;/p&gt;

&lt;p&gt;As I recall, this type of categorization is pretty common across a few GC implementations, so the CLR is not unique here. Correct me if I completely imagined things in my programming language design courses.&lt;/p&gt;

&lt;p&gt;After this, the book introduced two new terms: the allocation context and the allocation quantum. Oh boy! Did the word quantum just get dropped on us? I'm gonna be real with y'all. I got a little bit lost in this lingo.&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Allocation contexts are smaller regions of a given heap segment that are each dedicated for use by a given thread. On a single-processor (meaning 1 logical processor) machine, a single context is used, which is the generation 0 allocation context.&lt;/li&gt;
&lt;li&gt;The Allocation quantum is the size of memory that the allocator allocates each time it needs more memory, in order to perform object allocations within an allocation context. The allocation is typically 8k and the average size of managed objects are around 35 bytes, enabling a single allocation quantum to be used for many object allocations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;OK, so let's break down the first one. Allocation contexts are a per-thread concept. In a multi-threaded process, each thread has its own allocation context. On single-processor machines, only one allocation context is used. This part seemed incomplete to me. It seems like the documentation is trying to draw our attention to a special point here, but I might be missing it. A single-processor machine cannot have multiple threads running at the same time, so there's no need to manage separate allocation contexts for memory used by each thread. This seemed obvious to me, but maybe there is more to it?&lt;/p&gt;

&lt;p&gt;In any case, the "allocation quantum" bit is where things get interesting. First of all, this phrase made me laugh out loud because it sounds rather silly if you say it a few times fast.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;is the size of memory that the allocator allocates each time it needs more memory,&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I think I was trying too hard to read more into the allocation quantum than what it is. Ultimately, I would rephrase the description as follows. Let me know if I'm missing something here.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The allocation quantum is a unit that represents the amount of memory that is allocated whenever the allocator requests more memory to allocate new objects. This unit is typically 8 KB.&lt;/p&gt;
&lt;/blockquote&gt;
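&lt;p&gt;To put rough numbers on that (using the 8 KB quantum and ~35-byte average object size from the quote earlier):&lt;/p&gt;

```python
# Back-of-the-envelope: how many average-sized managed objects
# fit in one allocation quantum?
quantum_bytes = 8 * 1024
average_object_bytes = 35
objects_per_quantum = quantum_bytes // average_object_bytes
print(objects_per_quantum)  # 234
```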

&lt;p&gt;Speaking of 8 KB, that's a great segue into the next section of the book. It makes the point that the allocation quantum is small enough that a large object probably won't fit in it. As a result, large objects are allocated directly onto the GC heap and not into an allocation context.&lt;/p&gt;

&lt;p&gt;I'm unconvinced as to the benefits of this, but the book goes on to clarify that a lot of the benefits of the allocation context really come into play when dealing with small objects. I'll avoid retelling the list of benefits here since I actually found the list in &lt;a href="https://github.com/dotnet/coreclr/blob/master/Documentation/botr/garbage-collection.md#design-of-allocator"&gt;the book&lt;/a&gt; to be rather easy reading. Long story short: the architecture helps keep memory close and tidy, which makes clean-ups easier.&lt;/p&gt;

&lt;p&gt;The next portion of the book dives into the allocator's counterpart: the collector. The book lists the design goals of the collector, the first two of which are contradictory (that might be the wrong word, maybe mutually  exclusive, no....that's not it, hmmmm....):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The garbage collection process has to happen frequently enough that there is not a lot of unused but allocated memory on the heap.&lt;/li&gt;
&lt;li&gt;The garbage collection process has to happen infrequently enough to not use too much CPU.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At odds! That's the word that I was looking for earlier. Wow...journaling this stuff out is a vocabulary challenge. So we need to find a balance where we are running a GC just enough. This is a pretty standard goal for GC implementations. The book lists another goal for the GC, if a GC cycle does happen it should remove as much memory as possible. No point taking up CPU cycles to remove a small bit of memory!&lt;/p&gt;

&lt;p&gt;With these goals in mind, I've reached the end of today's 30 minutes. I'll be picking it up tomorrow with a dive into the logical and physical representations of the managed heap.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>csharp</category>
      <category>clr</category>
    </item>
    <item>
      <title>This blog post has a lot of trash talking</title>
      <dc:creator>Safia Abdalla</dc:creator>
      <pubDate>Tue, 25 Feb 2020 05:40:58 +0000</pubDate>
      <link>https://dev.to/captainsafia/this-blog-post-has-a-lot-of-trash-talking-5852</link>
      <guid>https://dev.to/captainsafia/this-blog-post-has-a-lot-of-trash-talking-5852</guid>
      <description>&lt;p&gt;Heads up! This blog post is a reading log I'm writing. I'm spending 30 minutes a day reading &lt;a href="https://github.com/dotnet/coreclr/blob/master/Documentation/botr/README.md"&gt;The Book of the Runtime&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  Reading Log 2: The Book of the Runtime
&lt;/h1&gt;

&lt;p&gt;It's been a while since I did my "daily" reading. Life gets in the way, you know? But I'm back reading the BotR. Today's reading log starts off at the &lt;a href="https://github.com/dotnet/coreclr/blob/master/Documentation/botr/intro-to-clr.md#fundamental-features-of-the-clr"&gt;Fundamental Features of the CLR&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This section listed the set of features that are part of the CLR. One thing that I really liked about the categorization scheme is the way it listed features on a spectrum of "affects the design of everything else" to "kind of its own thing." I felt like this was a pretty sensible model for listing off these features. This matches the way that I sometimes think about things and it is nice to see words put into this perspective.&lt;/p&gt;

&lt;p&gt;One of the fundamental features is the garbage collector. Garbage collectors are runtime constructs that reclaim (or collect) memory that is no longer needed by a process (garbage). Garbage collectors are wonderful because they mean people have to spend less time thinking about trivial things like keeping track of pointers and more time thinking about how to build software that helps people.&lt;/p&gt;

&lt;p&gt;The book goes on to state that the garbage collector requires that the runtime have a reference to every piece of allocated memory that a program is using. Anyone who has had to herd cats knows how tough this can be.&lt;/p&gt;

&lt;p&gt;But there's more to it than that. Technically, you only need to know the references when you are about to garbage collect. Since garbage collection is not a continuous process (AFAIK), that information only matters at collection time. Just when you think this will simplify things, BotR crushes our dreams by reminding us of another fundamental design principle of the CLR.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The CLR supports multiple concurrent threads of execution within a single process.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This complicates things because it means that it is not feasible to predict when a garbage collection will happen (AFAIK) as any thread can require garbage collection at any time. So, bummer, we actually do need to maintain a list of references at all times.&lt;/p&gt;

&lt;p&gt;Fooled again! The GC in the CLR does not maintain references at all times, just &lt;em&gt;most&lt;/em&gt; times, as the book states. I'm gonna be honest, the phrase "almost all the time" makes me a little uneasy. This seems....weird? Unpredictable? What's the reasoning behind it?&lt;/p&gt;

&lt;p&gt;At this point, the book invites us to read the  &lt;a href="https://github.com/dotnet/coreclr/blob/master/Documentation/botr/garbage-collection.md"&gt;garbage collection design doc&lt;/a&gt;. When one sees a link, one must click it! So away we go into a new tab...&lt;/p&gt;

&lt;p&gt;This chapter lists some recommended readings. I've jotted them down for further exploration. Onwards!&lt;/p&gt;

&lt;p&gt;Garbage collectors have two components: an allocator and a collector. All of this is quite familiar from my programming language courses at university. But this is only the first sentence of the chapter, so I won't get too cocky just yet!&lt;/p&gt;

&lt;p&gt;The book states that the allocator gets called by the Execution Engine. Note that the E's are intentionally capitalized. We're in the big leagues now. I have no idea what exactly the Execution Engine is, but I'm going to use context clues and assume it is the component responsible for the execution lifecycle of code.&lt;/p&gt;

&lt;p&gt;This component passes information about allocated memory to the allocator. Specifically, it lets the allocator know how many bytes were allocated, the "thread allocation context", and whether or not the object is finalizable.&lt;/p&gt;

&lt;p&gt;The first piece of information makes sense to me. Knowing the number of bytes allocated is useful for the collection process. &lt;/p&gt;

&lt;p&gt;The second is also generally clear, although me being me, I want to know exactly what the heck a thread allocation context looks like. I'm vaguely familiar with it from my time spent debugging C# code.&lt;/p&gt;

&lt;p&gt;The last detail is interesting. I'm familiar with finalizable objects but didn't know why it would be important to pass a flag about the object's state. Thankfully, the universe is kind and there is a &lt;a href="https://stackoverflow.com/a/41351147"&gt;StackOverflow post&lt;/a&gt; for this. It made some references to implementation details of the garbage collector that I am not yet familiar with -- so I'll set it aside for now and parse it more completely once I've finished reading the garbage collector chapter...&lt;/p&gt;

&lt;p&gt;...which will happen in the next reading log because my 30 minutes are up. Time flies when you are having fun!&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>csharp</category>
      <category>clr</category>
    </item>
    <item>
      <title>Lord of the...runtimes?</title>
      <dc:creator>Safia Abdalla</dc:creator>
      <pubDate>Thu, 20 Feb 2020 04:16:03 +0000</pubDate>
      <link>https://dev.to/captainsafia/lord-of-the-runtimes-2o5</link>
      <guid>https://dev.to/captainsafia/lord-of-the-runtimes-2o5</guid>
      <description>&lt;p&gt;Heads up! This blog post is a reading log I'm writing. I'm spending 30 minutes a day reading &lt;a href="https://github.com/dotnet/coreclr/blob/master/Documentation/botr/README.md"&gt;The Book of the Runtime&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  Reading Log 1: The Book of the Runtime
&lt;/h1&gt;

&lt;p&gt;So I've been reading the CLR via C# book, but fellow nerd David Fowler recommended The Book of the Runtime as more up my alley. So here we are.&lt;/p&gt;

&lt;p&gt;First off, I love the origin story of the Book of the Runtime. It started off as internal docs for developers on the .NET team at Microsoft. When .NET was open-sourced (yes, .NET is open source!) the documentation was open-sourced as well. I love how organic this evolution was. I find that the best kinds of docs are the ones that come out in that way.&lt;/p&gt;

&lt;p&gt;The moment I read the first paragraph of the text, I knew it was right up my alley. It started by providing a conceptual definition of what a runtime was: "a program and all the dependencies (both resource and execution wise) that it needed to run."&lt;/p&gt;

&lt;p&gt;The book covers how weak the standardization of programs tends to be. For example, when you compile a program, the best you can do is compile it to the most common standard possible: the architecture and operating system (32-bit or 64-bit Windows or Linux, for example). No standards exist for important components of a program like garbage collection and access to a standard library.&lt;/p&gt;

&lt;p&gt;The Common Language Runtime (CLR) serves to solve this problem by providing a standard for this unstandardized landscape. I learned something I didn't know here. The CLR's spec is standardized by ECMA (much the same way that the JavaScript specification is). Nifty!&lt;/p&gt;

&lt;p&gt;The specification outlines everything that you need to know in order to run a program, from dependencies to compilation to deployment. The BotR then reiterated some of the key features of the Common Language Runtime that I learned about in the CLR via C# book. The usual suspects are there:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an intermediate language, which removes the requirement for programs to target a particular architecture&lt;/li&gt;
&lt;li&gt;metadata tables that store information about the types and definitions within a set of programs&lt;/li&gt;
&lt;li&gt;the assembly format, which provides a portable format for the executable file&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The book also stressed the benefits of the CLR's support for multiple languages. Because the CLR provides a common platform that can be targeted by any language, any features that are available in the CLR become available to all the languages it supports.&lt;/p&gt;

&lt;p&gt;I drew an analogy between this and the Jupyter messaging specification, a JSON-based messaging specification for managing interactive REPL sessions between arbitrary runtimes (kernels) and clients. I love picking up common threads like this in my learning.&lt;/p&gt;

&lt;p&gt;The book then went on to clarify the CLR's focus on simplicity. And enter my favorite quote, ever:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For example, fundamentally only simple things can be easy, so adding user visible complexity to the runtime should always be viewed with suspicion.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ugh! I love this, particularly the phrase "user visible complexity." It underscores the fact that some complexity is OK as long as it doesn't bleed out to the user. This is a great note to keep for app and API developers alike.&lt;/p&gt;

&lt;p&gt;The book goes on to highlight some things the CLR does to improve ease of use, and calls out one particular scenario: keeping object/method names consistent across a codebase. Even when this might seem impractical (renaming every method in a library), consistency and ease of use take priority.&lt;/p&gt;

&lt;p&gt;I've arrived at the end of the 30 minutes and I already like this text more. It's got the right balance of conceptual details vs. nitty-gritty and I find the narrative voice to be friendly and approachable.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>clr</category>
      <category>csharp</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>Red assembly, blue assembly, strong assembly, weak assembly</title>
      <dc:creator>Safia Abdalla</dc:creator>
      <pubDate>Sun, 16 Feb 2020 20:12:16 +0000</pubDate>
      <link>https://dev.to/captainsafia/red-assembly-blue-assembly-strong-assembly-wek-assembly-2b14</link>
      <guid>https://dev.to/captainsafia/red-assembly-blue-assembly-strong-assembly-wek-assembly-2b14</guid>
      <description>&lt;p&gt;Heads up! This blog post is actually a reading log. I'm spending 30 minutes every day (more or less) reading &lt;a href="https://www.amazon.com/CLR-via-4th-Developer-Reference/dp/0735667454"&gt;C# via CLR by Jeffrey Richter&lt;/a&gt; and taking notes as I read it. This is tracked as a series on DevTo so you can read all parts of the series in order.&lt;/p&gt;

&lt;h1&gt;
  CLR via C# Reading Log 5: Chapters 2-3
&lt;/h1&gt;

&lt;p&gt;In the last reading log, I had just finished up reading the portion of Chapter 2 that discussed assemblies. As it turns out, the content on assemblies continues. After covering them from a conceptual perspective, the book walks through how assemblies can be loaded into a project in VS, how to edit an assembly's metadata, and more.&lt;/p&gt;

&lt;p&gt;I'll be honest with you, reader, I skimmed through that ish faster than a speeding bullet. I'm just not into books that cover tooling and how to configure things. I much prefer to learn that stuff on an as-needed basis. It lasts much longer in my brain that way.&lt;/p&gt;

&lt;p&gt;So I skimmed through pages of text and different pieces of metadata and what they mean until I landed upon an interesting looking section: "Simple Application Deployment (Privately Deployed Assemblies)." Deployment? Practical and intriguing!&lt;/p&gt;

&lt;p&gt;I found a few tidbits in this section rather interesting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When you uninstall an application downloaded from the Windows Store, the assemblies associated with that application aren't removed until all users on the machine have uninstalled the application. This makes sense -- but I wonder what the historical motivation for this was?&lt;/li&gt;
&lt;li&gt;Applications don't have to be installed from the Windows Store; they can come from a variety of installers. For instance, the nteract desktop app ships an EXE and an MSI file for installation on Windows machines.&lt;/li&gt;
&lt;li&gt;Installing an application and getting it to show up in the user's desktop and context menus are not the same thing. Boy, do I know this! I spent some time researching adding support for a "Create new notebook" context menu item in nteract and it, unfortunately, requires some extra work. To be fair, a similar amount of extra work is required to bring the same functionality to other operating systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I skimmed through some more technical details and arrived at Chapter 3, which promises to cover the topic of shared assemblies and strongly named assemblies.&lt;/p&gt;

&lt;p&gt;Shared assemblies, aka globally deployed assemblies, are assemblies that can be used by multiple applications or other assemblies. Shared assemblies are useful for deploying reusable code, such as SDKs and frameworks. Indeed, the .NET framework is a globally deployed assembly.&lt;/p&gt;

&lt;p&gt;Sidenote: It feels really weird to use the term "deployed" in the context of assets that run on a single-user machine. To be fair, an assembly can very well run on a cloud machine, but I've always thought the term deployment should be reserved for...I dunno...web apps? I guess it's time to stretch the imagination.&lt;/p&gt;

&lt;p&gt;The book continues by discussing the file versioning conundrum and managing incompatibilities between different versions of assemblies and applications that depend on them. The book makes an astute point in this regard. A lot of applications exploit bugs in the frameworks and libraries they depend on. Having worked on the nteract core SDK, I know this to be true. Everyone's definition of what counts as a bug is different.&lt;/p&gt;

&lt;p&gt;This reference is an allusion to the next portion of the book, which promises to discuss how the CLR handles versioning to avoid the variety of problems that occur as a result of version incompatibilities.&lt;/p&gt;

&lt;p&gt;So, there are two different kinds of assemblies that the CLR can process: strongly-named assemblies and weakly-named assemblies. Jeffrey clarifies, though, that "weakly-named assembly" is a term of his own making that doesn't appear in the .NET Framework documentation. To me, that's an indicator that something serious is about to happen! lol&lt;/p&gt;

&lt;p&gt;The book clarifies that weakly-named and strongly-named assemblies are structurally the same. However, strongly-named assemblies are signed with a public/private key pair owned by the publisher while weakly-named assemblies are not. The CLR applies a different set of policies to strongly-named assemblies than it does to weakly-named ones. It makes sense to me. Code assets that have been signed deserve different treatment at run time from those that have not.&lt;/p&gt;

&lt;p&gt;Strongly-named assemblies are the only assemblies that can be deployed globally, meaning they can be shared by other assemblies and applications on the machine. Once again, this makes sense. You want a certain layer of rigor given to assemblies that affect other components.&lt;/p&gt;
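As a tiny aside on what "signed with a public/private key pair" buys you: strongly-named assemblies are identified partly by an 8-byte public key token, which (as far as I understand it) .NET derives from the SHA-1 hash of the publisher's public key. Here's a toy Python sketch of just that derivation; the input blob below is made up, not a real key, and the real signing pipeline does much more than this.

```python
import hashlib

def public_key_token(public_key_blob: bytes) -> str:
    """Derive a .NET-style public key token: the last 8 bytes of the
    SHA-1 hash of the public key blob, in reverse byte order."""
    digest = hashlib.sha1(public_key_blob).digest()
    return digest[-8:][::-1].hex()

# Any bytes work for this sketch; a real blob would come from a key pair.
print(public_key_token(b"not a real public key blob"))
```

The token is what you see in fully qualified assembly references; the actual strong-name signature over the assembly's contents is a separate (private-key) step.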

&lt;p&gt;Today's 30 minutes are up -- but the book is about to dive into some interesting content around the approaches that .NET takes for managing global assemblies. Stay tuned for the next reading log! &lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>csharp</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>Hell is other people's DLLs</title>
      <dc:creator>Safia Abdalla</dc:creator>
      <pubDate>Sat, 15 Feb 2020 21:24:23 +0000</pubDate>
      <link>https://dev.to/captainsafia/hell-is-other-people-s-dlls-4hpi</link>
      <guid>https://dev.to/captainsafia/hell-is-other-people-s-dlls-4hpi</guid>
      <description>&lt;p&gt;Heads up! This blog post is actually a reading log. I'm spending 30 minutes every day (more or less) reading &lt;a href="https://www.amazon.com/CLR-via-4th-Developer-Reference/dp/0735667454"&gt;CLR via C# by Jeffrey Richter&lt;/a&gt; and taking notes as I read it. This is tracked as a series on DevTo so you can read all parts of the series in order.&lt;/p&gt;

&lt;h1&gt;
  
  
  CLR via C# Reading Log 4: Chapter 2
&lt;/h1&gt;

&lt;p&gt;We've finally arrived at Chapter 2 of the CLR via C# book. The last chapter made quite a few references to material that would be covered in Chapter 2, so I'm excited to see if it will live up to the hype.&lt;/p&gt;

&lt;p&gt;The first few paragraphs of the chapter mentioned that it would be covering how to build and deploy assemblies and I was concerned because, frankly, I don't really care about that stuff (but maybe I should). My interest recovered a little bit in the following paragraphs where the author covered some of the pitfalls of the previous library and app models in Windows and highlighted the chaos that ensued as a result ("DLL hell", we've all been there).&lt;/p&gt;

&lt;p&gt;The discussion about the difficulty of dependency management and ensuring backward-compatibility across various components of a system really resonated with me. I'm a contributor on &lt;a href="https://github.com/nteract/nteract"&gt;nteract&lt;/a&gt;, a project that ships a core SDK with various components and interdependencies and I've had to deal with my own sort of DLL hell. I guess some problems are just universal.&lt;/p&gt;

&lt;p&gt;Anyways, back to the book, it summarizes a list of annoying things about installing software on Windows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Too many gosh darn DLLs and too many gosh darn incompatibilities between them&lt;/li&gt;
&lt;li&gt;Every installed application must touch every part of your OS from file system to registries to options to ensure that it is properly loved&lt;/li&gt;
&lt;li&gt;Security does not exist! You wanna install software? Deal with the consequences, it's a scary world out there.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hahaha! .NET Framework is presented as the solution to the problems listed above. The book provides a brief summary of some of the ways it addresses these problems (code access security for resolving security issues) but doesn't dive deeply into them just yet. So I guess I'll circle back to this topic when the book covers it more fully.&lt;/p&gt;

&lt;p&gt;The next pages of the book dived back into territory I do not enjoy, commands you type in a terminal to get a computer to do things. &lt;em&gt;groan&lt;/em&gt; Gimme some nice juicy concepts to learn!&lt;/p&gt;

&lt;p&gt;After I read (more like skimmed) this portion of the book, I got to a section that I had been looking for: a deeper dive into the metadata associated with a managed module. This content was referenced in the previous chapter so I'm glad I got what I was promised.&lt;/p&gt;

&lt;p&gt;Metadata is stored in a binary blob that represents three different tables: definition tables, reference tables, and manifest tables.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Definition tables, as the name might suggest, keep track of everything that is defined in the source code. This includes things like properties on a class, methods, and types.&lt;/li&gt;
&lt;li&gt;Reference tables maintain a record of everything your code references like other assemblies, modules, and external types.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The book didn't explain what manifest tables were but &lt;a href="https://docs.microsoft.com/en-us/dotnet/standard/assembly/manifest"&gt;some quick Googling&lt;/a&gt; reveals that manifest tables are used to store metadata information like the version of the module or a special security signature.&lt;/p&gt;
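The way I picture those three groups of tables is something like the Python dictionary below. This is purely my own mental model -- real metadata is a packed binary format, though the table names (TypeDef, MemberRef, and so on) do come from the actual metadata spec:

```python
# Hypothetical shape of the metadata for a tiny "Hello" module.
metadata = {
    "definition_tables": {            # everything *defined* in this module
        "TypeDef": ["Program"],
        "MethodDef": ["Program.Main"],
    },
    "reference_tables": {             # everything this module *uses*
        "AssemblyRef": ["mscorlib"],
        "MemberRef": ["System.Console.WriteLine"],
    },
    "manifest_tables": {              # assembly-level facts
        "Assembly": {"name": "Hello", "version": "1.0.0.0"},
    },
}

# Resolving a call means consulting the reference tables:
assert "System.Console.WriteLine" in metadata["reference_tables"]["MemberRef"]
```

Squint at this and the earlier points make sense: type-checking reads the definition tables, dependency loading reads the reference tables, and versioning/signing info lives in the manifest.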

&lt;p&gt;The book also showcased how to use a command-line tool (&lt;code&gt;ILDasm.exe&lt;/code&gt;) to view the contents of the metadata of a module. This is one thing I've appreciated about the references to tooling in this book: these formats (with the right tool) are relatively easy to disassemble and inspect. Obviously not as easy to explore as artifacts from some other programming languages, but still easier than I expected.&lt;/p&gt;

&lt;p&gt;In the exploration of metadata tables, the book highlighted an important takeaway. For smaller files, the metadata tables might take up more space than the source code in the resulting managed module. You need more space to define the key data types within the file than for the actual code you wrote. Metadata tables really pay off for larger programs where types are re-used frequently. In that scenario, the source code is larger than the metadata tables.&lt;/p&gt;

&lt;p&gt;The next section of the book is titled "Combining Modules Into Assemblies". I'll admit I had almost forgotten that individual modules can be combined into a single dependency. There's a decent amount of nesting in these structures but I'm starting to get a mental model of how each module is structured and how that structure falls under an assembly. Perhaps I'll draw up a picture and include it in my next blog post...&lt;/p&gt;

&lt;p&gt;The first sentence of this section reminded me why I was so confused. An assembly is a collection of one or more managed modules. So a single managed module can also be an assembly. Multiple managed modules can also be an assembly. I've also heard that a grouping of school children is an assembly.&lt;/p&gt;

&lt;p&gt;The importance of assemblies is clarified in a forthcoming discussion on how the CLR processes assemblies. Processing always starts with the manifest table, which includes references to the files within an assembly. These files can be within the current assembly or in a different assembly altogether.&lt;/p&gt;

&lt;p&gt;And as it turns out, not just any managed module can be an assembly. To be secured, versioned, and deployed, a managed module must be encapsulated into an assembly that contains the necessary metadata.&lt;/p&gt;

&lt;p&gt;I'm starting to get a handle on how all these components connect, but unfortunately, it's the end of today's 30 minutes of reading -- so I'll continue this exploration in the next reading log!&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>csharp</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>That feel when you find a StackOverflow post for the exact question you had</title>
      <dc:creator>Safia Abdalla</dc:creator>
      <pubDate>Fri, 14 Feb 2020 05:57:16 +0000</pubDate>
      <link>https://dev.to/captainsafia/that-feel-when-you-find-a-stackoverflow-post-for-the-exact-question-you-had-5hjj</link>
      <guid>https://dev.to/captainsafia/that-feel-when-you-find-a-stackoverflow-post-for-the-exact-question-you-had-5hjj</guid>
      <description>&lt;p&gt;Heads up! This blog post is actually a reading log. I'm spending 30 minutes every day (more or less) reading &lt;a href="https://www.amazon.com/CLR-via-4th-Developer-Reference/dp/0735667454"&gt;CLR via C# by Jeffrey Richter&lt;/a&gt; and taking notes as I read it. This is tracked as a series on DevTo so you can read all parts of the series in order.&lt;/p&gt;

&lt;h1&gt;
  
  
  CLR via C# Reading Log 3: Chapter 1
&lt;/h1&gt;

&lt;p&gt;And now we arrive at the section titled "IL and Verification."  The section started off by asserting that IL is a stack-based language, meaning that instructions are pushed onto and popped off a stack. The next sentence in this chapter immediately befuddled me.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Because IL offers no instructions to manipulate registers, it is easy for people to create new languages and compilers targeting CLR.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I didn't really understand this portion. I mean, I know what a register is (a tiny thing that stores data, lol) and I understand what it means to not have instructions to manipulate registers (no way to put things in registers or retrieve them), but the leap to that making it easy for people to create new languages and compilers confused me.&lt;/p&gt;

&lt;p&gt;I wanted to research this further -- but here I arrived at the most annoying position anyone can be in. I didn't know how to ask the question that I wanted to ask. I ended up getting lucky by Googling "no register manipulation CLR" and finding &lt;a href="https://softwareengineering.stackexchange.com/questions/213464/what-does-because-il-offers-no-instructions-to-manipulate-registers-it-is-easy/213465"&gt;this StackOverflow post&lt;/a&gt; where someone had asked the exact same question I had. I am not alone, woot!&lt;/p&gt;

&lt;p&gt;My luck gets even better. Someone had posted an answer. They clarified that having register support would make the CLR more CPU specific. This makes sense because each CPU has a different set of registers and semantics for using them. By abstracting away these details, the CLR is more adaptable to new languages and compilers.&lt;/p&gt;
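To make the register-free idea concrete, here's a toy stack machine I sketched in Python. It's loosely inspired by the real IL opcodes ldstr and call, but the semantics are heavily simplified. The point to notice is that nothing in it ever names a CPU register, so nothing about it is tied to a particular CPU:

```python
def run(program, methods):
    """Evaluate a tiny stack-based instruction list, IL-style: operands
    are pushed onto a stack and popped off by calls."""
    stack = []
    for op, arg in program:
        if op == "ldstr":       # push a string constant onto the stack
            stack.append(arg)
        elif op == "call":      # pop one argument and invoke a method by name
            methods[arg](stack.pop())
        else:
            raise ValueError(f"unknown instruction: {op}")
    return stack

# Roughly what two WriteLine calls look like in this mini-IL.
out = []
run([("ldstr", "Hello!"), ("call", "WriteLine"),
     ("ldstr", "Goodbye!"), ("call", "WriteLine")],
    {"WriteLine": out.append})
# out == ["Hello!", "Goodbye!"]
```

A register-based version of this would have to say *which* register holds the string, and different CPUs have different registers -- which is exactly the coupling the CLR avoids.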

&lt;p&gt;The book goes on to discuss another benefit of the CLR that is mentioned in the heading title: verification. Verification examines IL code and ensures that it is safe. The book outlined that verification checks for things like the number of parameters passed to a method, that return types are used correctly, and more. The examples listed all seemed like type-checking to me so I wondered why this process wasn't just called type-checking.&lt;/p&gt;

&lt;p&gt;This section of the book continued discussing the difference between safe code (doesn't access memory at a particular address) and unsafe code (accesses and manipulates memory at the address) and different tools for disassembling and generating native code from assemblies. I tend to get a little bored with books like this when they wade into discussions of tooling.&lt;/p&gt;

&lt;p&gt;I'll boldly admit that I skipped through the next few pages, which covered more tools and concepts related to them, and found myself in the section titled "The Common Language Specification." Now we're back to territory I am comfortable with!&lt;/p&gt;

&lt;p&gt;The book once again made a reference to COM --- I really should Google what this is --- and posed the following analogy: COM (whatever that is, lol) allows different languages to communicate with each other. That's a pretty general concept I'm comfortable with (but I still want to get into the nitty gritty of COM). The CLR facilitates the same goal by allowing different languages to share types and object references.&lt;/p&gt;

&lt;p&gt;There's a problem with this though. Not all programming languages share the same approach to types. Whether it is how large integers are represented or whether methods support a variable number of parameters, each programming language has a different level of support for language features. The Common Language Specification outlines the minimum set of features that compilers targeting the CLR must support so that types and objects can be unified appropriately.&lt;/p&gt;

&lt;p&gt;And that's it for the 30 minutes. In the next reading log, I'll be diving into the oft-promised Chapter 2...&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>csharp</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>3 things you won't believe happen when you execute a print statement</title>
      <dc:creator>Safia Abdalla</dc:creator>
      <pubDate>Thu, 13 Feb 2020 05:14:03 +0000</pubDate>
      <link>https://dev.to/captainsafia/3-things-you-won-t-believe-happen-when-you-execute-a-print-statement-2h7a</link>
      <guid>https://dev.to/captainsafia/3-things-you-won-t-believe-happen-when-you-execute-a-print-statement-2h7a</guid>
      <description>&lt;p&gt;Heads up! This blog post is actually a reading log. I'm spending 30 minutes every day (more or less) reading &lt;a href="https://www.amazon.com/CLR-via-4th-Developer-Reference/dp/0735667454"&gt;CLR via C# by Jeffrey Richter&lt;/a&gt; and taking notes as I read it. This is tracked as a series on DevTo so you can read all parts of the series in order.&lt;/p&gt;

&lt;h1&gt;
  
  
  CLR via C# Reading Log 2: Chapter 1
&lt;/h1&gt;

&lt;p&gt;Continuing on from the last reading log, the "Loading the Common Language Runtime" segment of the book began by covering how assemblies are loaded into the CLR.&lt;/p&gt;

&lt;p&gt;One of the first key points highlighted in this section, in my opinion, is how distributable assemblies can be. Since they contain IL and metadata rather than CPU-specific machine code, they can run on both 32-bit and 64-bit versions of Windows. The book went on to discuss the different configuration and support options that exist for building assemblies that only execute on either 32-bit or 64-bit systems. To be honest, I zoned out a little bit during this section of the book. I don't find it particularly interesting.&lt;/p&gt;

&lt;p&gt;After droning (sorry Jeff!) through a few pages on different bit-systems (note to self: I know there's a word for the bit-iness of a system, what is it though? It's on the tip of my tongue!), I finally made it to the interesting content: how assembly code is executed.&lt;/p&gt;

&lt;p&gt;The book presented the example of an assembly that contained two print statements, like so.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WriteLine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Hello!"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WriteLine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Goodbye!"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;Main&lt;/code&gt; method is the entry point for the CLR. It starts by detecting the types that are referenced in the &lt;code&gt;Main&lt;/code&gt; method. In this case, the only type referenced is the &lt;code&gt;Console&lt;/code&gt; type. The CLR creates a table with an entry for each method referenced on this type. In this case, the only method referenced is &lt;code&gt;WriteLine&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Object&lt;/th&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Reference&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Console&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;WriteLine&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;To-be-compiled&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When the CLR encounters a &lt;code&gt;WriteLine&lt;/code&gt; invocation in the code, it looks for the assembly associated with the &lt;code&gt;Console&lt;/code&gt; object. Since the assembly is a managed module, it will contain a metadata table of all its types and members. The CLR will look for the &lt;code&gt;WriteLine&lt;/code&gt; method in this table. The book states that the IL code for the &lt;code&gt;WriteLine&lt;/code&gt; method can be retrieved from the metadata table as well -- although it doesn't go into details about how exactly this happens. I suppose I'll find out soon enough.&lt;/p&gt;

&lt;p&gt;Once it has retrieved the IL code from the metadata table, it compiles this IL code down to machine code in a memory block it sets aside for it. Then it updates the reference in the table above to point to this memory block. This is all to say, the compilation happens just-in-time as the invocations are processed.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Object&lt;/th&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Reference&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Console&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;WriteLine&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A block of memory with machine code for the &lt;code&gt;WriteLine&lt;/code&gt; method&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Also, even though there are two &lt;code&gt;WriteLine&lt;/code&gt; invocations in the code above, the code for the method will only be compiled once. The second time the method is invoked, we can directly use the reference stored in the table.&lt;/p&gt;
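Here's a toy Python model of that compile-once behavior (entirely my own sketch, not how the CLR is actually implemented): each table entry starts out as "to-be-compiled" IL, and the first invocation swaps in the compiled result so later calls skip the compiler.

```python
class MethodTable:
    """Toy model of the table described above: compile a method's IL the
    first time it is invoked, then reuse the cached compiled form."""

    def __init__(self, il_bodies, compiler):
        self.il = il_bodies        # method name -> "IL" (a placeholder here)
        self.compiler = compiler   # turns IL into a callable
        self.compiled = {}         # method name -> compiled callable
        self.compile_count = 0     # how many times the "JIT" actually ran

    def invoke(self, name, arg):
        if name not in self.compiled:      # first call only: compile and cache
            self.compiled[name] = self.compiler(self.il[name])
            self.compile_count += 1
        return self.compiled[name](arg)

# A pretend compiler that turns the IL placeholder into a callable.
table = MethodTable({"WriteLine": "il placeholder"}, lambda il: (lambda arg: arg))
table.invoke("WriteLine", "Hello!")
table.invoke("WriteLine", "Goodbye!")
print(table.compile_count)  # 1 -- compiled once, invoked twice
```

That `compile_count` of 1 is the whole trick: the JIT cost is paid once per method, not once per call.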

&lt;p&gt;The book goes on to discuss the performance advantages of JIT compilation and also presents a realistic perspective on the merits of JIT compilation. In particular, more time is probably spent executing the code for the JIT-ed method than actually compiling it just-in-time.&lt;/p&gt;

&lt;p&gt;The book also touched on some of the merits of managed code (code executed by the CLR) and unmanaged code (code not executed by the CLR).  In particular, managed code run by the CLR can be translated to machine instructions optimized for the machine the code is running on. That means the CLR can take advantage of those esoteric (don't pretend they're not!) machine language-level quirks for optimizing things like numerical division.&lt;/p&gt;

&lt;p&gt;And that's it for today's reading log. The 30 minutes sure did go by fast this time! Next time, I'll be taking a look at the section titled "IL and Verification." Vague....and intriguing...&lt;/p&gt;

</description>
      <category>csharp</category>
      <category>dotnet</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>The fun and games begin when the runtime walks in</title>
      <dc:creator>Safia Abdalla</dc:creator>
      <pubDate>Wed, 12 Feb 2020 05:11:30 +0000</pubDate>
      <link>https://dev.to/captainsafia/the-fun-and-games-begin-when-the-runtime-walks-in-17ga</link>
      <guid>https://dev.to/captainsafia/the-fun-and-games-begin-when-the-runtime-walks-in-17ga</guid>
      <description>&lt;p&gt;Heads up! This blog post is actually a reading log. I'm spending 30 minutes every day (more or less) reading &lt;a href="https://www.amazon.com/CLR-via-4th-Developer-Reference/dp/0735667454"&gt;CLR via C# by Jeffrey Richter&lt;/a&gt; and taking notes as I read it. This is tracked as a series on DevTo so you can read all parts of the series in order.&lt;/p&gt;

&lt;h1&gt;
  
  
  CLR via C# Reading Log 1: Chapter 1
&lt;/h1&gt;

&lt;p&gt;The common language runtime handles memory management, thread synchronization, and other fundamental tasks of programming language implementation. However, it is not particular about which programming language is used. Different programming languages can compile to target the Common Language Runtime (CLR).&lt;/p&gt;

&lt;p&gt;Some of the more popular languages that target the CLR are C# and F#, but IronPython and IronRuby target it as well. The CLR gives you the flexibility to pick the right programming language for a particular task while still using the same fundamental services exposed by the CLR (memory management, etc.).&lt;/p&gt;

&lt;p&gt;The process for interfacing an arbitrary programming language with the Common Language Runtime looks something like this:&lt;/p&gt;

&lt;p&gt;Programming language file --&amp;gt; Compiler ---&amp;gt; Managed Module --&amp;gt;  CLR&lt;/p&gt;

&lt;p&gt;The compiler is responsible for compiling the source code to a managed module -- which is an executable format that the CLR can interpret.&lt;/p&gt;

&lt;p&gt;Sidenote: At this point, the book mentions that the CLR takes advantage of the Data Execution Prevention and Address Space Layout Randomization features in Windows. I've heard these terms before and know generally what they are (strategies that exist at the memory management layer for preventing exploits that store malicious code in memory), but don't know the specifics. To-do to read more about this.&lt;/p&gt;

&lt;p&gt;Back to the main plot, the "managed module" mentioned above actually consists of four things:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CLR Header and PE32 header&lt;/strong&gt;&lt;br&gt;
The book discusses these in more detail. In short, they are headers that contain metadata about the compiled file. Things like the time it was compiled, whether it can run on 32-bit or 64-bit systems, and more. I'm gonna assume that it's not important to know every property that goes in these headers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Metadata&lt;/strong&gt;&lt;br&gt;
It's essentially a table that stores the types and attributes defined in the source code that was compiled by the compiler. It includes types and attributes defined in your codebase and in dependencies that you use.&lt;/p&gt;

&lt;p&gt;The book referenced that metadata is a superset of older technologies like "COM's Type Libraries and Interface Definition Language files". TBH, I dunno what these are beyond hearing them referenced in technical conversations I've overheard.&lt;/p&gt;

&lt;p&gt;Their history aside, metadata tables are useful because they enable many helpful features, like type-checking and supporting garbage collection (the metadata table can be used to resolve which types reference others to determine what data needs to be retained in memory).&lt;/p&gt;

&lt;p&gt;The book ends its description of metadata by saying that Chapter 2 is entirely dedicated to metadata tables so I guess I'm in for a treat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IL Code&lt;/strong&gt;&lt;br&gt;
Intermediate language (IL) code is the code that the CLR takes and compiles down to machine code. So the compilation flow for languages targeting the CLR can be summarized as follows.&lt;/p&gt;

&lt;p&gt;Programming language (Python, C#, etc) ---&amp;gt; Intermediate language code ---&amp;gt; Machine code&lt;/p&gt;

&lt;p&gt;After discussing the managed module architecture, the book introduces a new concept. IL code is not what the CLR works with.  How foolish was I to assume that things could be so simple!&lt;/p&gt;

&lt;p&gt;The CLR directly works with assemblies. Assemblies are explicitly described as "an abstract concept that can be difficult to grasp." Well, I don't like the sound of that at all! Thankfully, the book doesn't dive off the deep end here and provides a gentle summary. That's probably why it has so many great reviews!&lt;/p&gt;

&lt;p&gt;In brief, it describes assemblies as groupings of one or more managed modules. It also slips in another interesting statement: assemblies are "the smallest unit of reuse, security, and versioning." Sounds intriguing...The book goes on to state that assemblies (along with metadata tables) will be covered in Chapter 2. I guess this is how you keep readers turning pages!&lt;/p&gt;

&lt;p&gt;I'm at the end of the 30 minutes now so I'll see you next time when I start reading through the heading titled "Loading the Common Language Runtime."&lt;/p&gt;

</description>
      <category>csharp</category>
      <category>dotnet</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>The most important sorting algorithm you need to know</title>
      <dc:creator>Safia Abdalla</dc:creator>
      <pubDate>Wed, 05 Feb 2020 20:37:07 +0000</pubDate>
      <link>https://dev.to/captainsafia/the-most-important-sorting-algorithm-you-need-to-know-38e0</link>
      <guid>https://dev.to/captainsafia/the-most-important-sorting-algorithm-you-need-to-know-38e0</guid>
      <description>&lt;h1&gt;
  
  
  Timsort
&lt;/h1&gt;

&lt;p&gt;Timsort is the most popular sorting algorithm that you've never heard of. If you've spent any time studying sorting algorithms in an academic context, you're probably familiar with the usual suspects: merge sort, quick sort, insertion sort, and so on. Timsort is unique, though. If you've used the native sort methods in Python or NodeJS, you've interfaced with Timsort. Let's take a look at what Timsort is...&lt;/p&gt;

&lt;h2&gt;
  
  
  The what
&lt;/h2&gt;

&lt;p&gt;Timsort is a &lt;strong&gt;hybrid sorting algorithm.&lt;/strong&gt; Hybrid algorithms are algorithms that use two or more sub-algorithms that solve the same problem, such as sorting. A hybrid algorithm will use one of the two sub-algorithms depending on the input data or at different points in the course of the algorithm's execution. Hybrid algorithms are great because they can allow you to combine the best of both worlds when it comes to picking an ideal solution for a problem.&lt;/p&gt;

&lt;p&gt;Timsort uses two sub-algorithms under the hood, insertion sort and merge sort. Insertion sort is a sorting algorithm that sorts an unsorted list by iterating through each item in the list one-by-one and placing it in the correct position.&lt;/p&gt;

&lt;p&gt;Merge sort is a divide-and-conquer sorting algorithm that sorts a list by repeatedly dividing the list into smaller lists, sorting those lists, and then merging back the sorted lists together.&lt;/p&gt;

&lt;p&gt;Merge sort and insertion sort each have their strengths and weaknesses, and Timsort plays to both. It starts off with merge sort's strategy: the input list is repeatedly divided into smaller halves.&lt;/p&gt;

&lt;p&gt;Eventually, once one of those halves is small enough, Timsort will use insertion sort to sort it. Then, Timsort will merge the sorted sublists back together as merge sort would. However, Timsort's merge strategy is a little different from the traditional one. It implements a galloping approach. Typically, when merging two sorted lists together, merge sort will look at the items in the input lists one by one to determine which one should be added to the resulting list first. When one list keeps "winning" these comparisons, galloping kicks in: Timsort skips ahead through that list in exponentially growing steps and falls back to one-by-one comparison once the streak ends.&lt;/p&gt;
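Here's a greatly simplified Python sketch of that structure -- no run detection, no galloping, no minrun tuning, just "insertion-sort the small chunks, then merge them":

```python
def tim_sketch(xs, min_run=32):
    """A drastically simplified take on Timsort's shape. Real Timsort
    detects natural runs, tunes min_run, and gallops while merging."""

    def insertion_sort(a):
        # Shift each item left until it lands in its sorted position.
        for i in range(1, len(a)):
            value, j = a[i], i - 1
            while j >= 0 and a[j] > value:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = value
        return a

    def merge(a, b):
        # Standard one-by-one merge of two sorted lists (no galloping here).
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i])
                i += 1
            else:
                out.append(b[j])
                j += 1
        return out + a[i:] + b[j:]

    # Chop the input into chunks and insertion-sort each one.
    runs = [insertion_sort(xs[i:i + min_run]) for i in range(0, len(xs), min_run)]
    # Merge the sorted chunks pairwise until one remains.
    while len(runs) > 1:
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0] if runs else []
```

Even this stripped-down version shows the division of labor: insertion sort handles the small stuff, merge sort handles the big stuff.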

&lt;h2&gt;
  
  
  The how
&lt;/h2&gt;

&lt;p&gt;Timsort is famously implemented as the default sorting algorithm in the Python programming language. Those who are brave of heart can take a look at the implementation for Timsort in CPython &lt;a href="https://github.com/python/cpython/blob/dd754caf144009f0569dda5053465ba2accb7b4d/Objects/listobject.c"&gt;on GitHub&lt;/a&gt;. There is a lot of sort-related code in this file, but most of it provides support for the fundamental requirements of Timsort, like the implementation of a merge sort algorithm.&lt;/p&gt;

&lt;h2&gt;
  
  
  The why
&lt;/h2&gt;

&lt;p&gt;Timsort's popularity has extended beyond the Python programming language. It's the default sorting implementation in Java, JavaScript and Node (via the V8 JavaScript engine), and Octave. Its popularity stems from the fact that it's particularly tuned for the types of lists that one might come across in real-world scenarios. Timsort is highly performant on data that is already partially sorted because it looks for "runs" in the input list. "Runs" are segments of the list, having a minimum of two items, that are in strictly descending or ascending order.&lt;/p&gt;

&lt;p&gt;Essentially, Timsort looks for these already-sorted runs and merges them together to avoid extra work when sorting through the entire list.&lt;/p&gt;

&lt;p&gt;Timsort falls back to insertion sort for short lists because insertion sort on a small number of elements tends to perform better than merge sort. It does not have the same overhead that merge sort has when it comes to managing the recursive calls and merging the lists back together.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So there you have it. That's the end of the first edition of Algorithm Archaeology covering Timsort. For those who are fans of cliff notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Timsort is a hybrid algorithm, meaning it uses two different sub-algorithms depending on the situation.&lt;/li&gt;
&lt;li&gt;Timsort uses merge sort to sort the list unless the length of the current list being sorted is less than a particular number N. In Python, N is 64.&lt;/li&gt;
&lt;li&gt;Timsort is the default sorting algorithm in Python, Java, and NodeJS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For those curious to learn more, I recommend reading Tim Peters' &lt;a href="https://github.com/python/cpython/blob/dd754caf144009f0569dda5053465ba2accb7b4d/Objects/listsort.txt"&gt;original notes on the algorithm&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Stay tuned for more of these posts! I've got some fun stuff in the works. ;)&lt;/p&gt;

</description>
      <category>timsort</category>
      <category>computerscience</category>
      <category>python</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Prime numbers, debriefed</title>
      <dc:creator>Safia Abdalla</dc:creator>
      <pubDate>Wed, 29 Jan 2020 20:23:30 +0000</pubDate>
      <link>https://dev.to/captainsafia/prime-numbers-debriefed-25fn</link>
      <guid>https://dev.to/captainsafia/prime-numbers-debriefed-25fn</guid>
      <description>&lt;p&gt;Prime numbers are commonly used in the field of computer science and mathematics. They are one of those numbers that are so delightfully ubiquitous and interesting that I figured it's worth really digging into in a blog post.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are prime numbers?
&lt;/h3&gt;

&lt;p&gt;Prime numbers are natural numbers greater than 1 that have only two divisors: 1 and the number itself. Examples of prime numbers include 2, 3, 5, 7, and 11. There are infinitely many primes. As of writing this blog post (January 2020), the largest known prime number is 2&lt;sup&gt;82,589,933&lt;/sup&gt; − 1.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where are prime numbers used?
&lt;/h3&gt;

&lt;p&gt;Prime numbers are primarily used in cryptography. For example, RSA encryption, a popular encryption algorithm, uses prime numbers in its implementation. To generate a public key, the RSA algorithm starts with two large random prime numbers, p and q, and uses those to derive a third number that is used to generate the public key.&lt;/p&gt;

&lt;p&gt;The RSA encryption algorithm uses two large prime numbers because larger primes produce larger products when multiplied. To figure out which two prime numbers were multiplied to produce a particular number, you have to compute its prime factorization, which can take quite a long time for large numbers.&lt;/p&gt;

&lt;p&gt;Although the prime numbers in RSA encryption are used to generate a public key that is shared publicly, the prime numbers themselves are not shared, so no one can "spoof" a particular public key. The fact that it is hard to compute prime factorizations for large numbers ensures that it is infeasible for someone to reverse engineer the prime factors used to generate the public key.&lt;/p&gt;
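&lt;p&gt;Here's a toy sketch of RSA key generation and a round trip through encryption. The tiny primes and the public exponent 17 are purely illustrative (this is the classic textbook example; real keys use primes hundreds of digits long), and the three-argument &lt;code&gt;pow&lt;/code&gt; with a negative exponent for the modular inverse requires Python 3.8+.&lt;/p&gt;

```python
from math import gcd

# Toy RSA with tiny primes; values are illustrative, not secure.
p, q = 61, 53                 # the two secret primes
n = p * q                     # modulus, part of the public key
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent: modular inverse of e

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (n, e)
decrypted = pow(ciphertext, d, n)  # decrypt with the private key (n, d)
assert decrypted == message
```

&lt;p&gt;Recovering d from the public pair (n, e) requires factoring n back into p and q, which is exactly the hard problem described above.&lt;/p&gt;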

&lt;p&gt;In one of &lt;a href="https://dev.to/captainsafia/random-numbers-revealed-4968"&gt;my previous blog posts&lt;/a&gt;, I discussed random numbers and how they are computed. As it turns out, prime numbers are useful when generating random numbers as well. The &lt;a href="https://en.wikipedia.org/wiki/Lehmer_random_number_generator"&gt;Lehmer random number generator&lt;/a&gt; algorithm generates a random number as a function of a large prime number and a fixed integer.&lt;/p&gt;
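&lt;p&gt;A Lehmer generator fits in a few lines. The sketch below uses the well-known "MINSTD" constants: the modulus is the Mersenne prime 2&lt;sup&gt;31&lt;/sup&gt; − 1 and the multiplier is 16807; the function name is my own.&lt;/p&gt;

```python
# Lehmer (multiplicative congruential) generator:
#   next_state = (a * state) mod m, with m prime.
# MINSTD constants: m = 2^31 - 1 (a Mersenne prime), a = 16807.
M = 2**31 - 1
A = 16807

def lehmer(seed):
    """Yield an endless stream of pseudo-random states from a nonzero seed."""
    state = seed
    while True:
        state = (A * state) % M
        yield state

gen = lehmer(1)
print(next(gen))  # 16807 for seed 1
```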

&lt;p&gt;Outside of cryptography and random number generation, prime numbers show up in lots of interesting places in nature. One of the most popular occurrences of prime numbers in nature is the &lt;a href="https://www.newyorker.com/tech/annals-of-technology/the-cicadas-love-affair-with-prime-numbers"&gt;emergence of the cicadas&lt;/a&gt;, which happens at prime-number intervals to avoid colliding with periods of time when there is a high density of predators.&lt;/p&gt;

&lt;h3&gt;
  
  
  How can you tell if a number is prime?
&lt;/h3&gt;

&lt;p&gt;That's the million dollar question!&lt;/p&gt;

&lt;p&gt;It's rather difficult to determine whether a number is prime or not. There's a word for describing the property of a number being prime or not: primality.&lt;/p&gt;

&lt;p&gt;There are several different strategies for determining the primality of a number. The key distinctions between one primality algorithm and another are performance and accuracy.&lt;/p&gt;

&lt;p&gt;The slowest algorithm for determining primality uses trial division. Given a number &lt;em&gt;n&lt;/em&gt;, trial division will evaluate numbers from 2 to the square root of &lt;em&gt;n&lt;/em&gt; and determine if any of them evenly divides &lt;em&gt;n&lt;/em&gt;. If none do, then &lt;em&gt;n&lt;/em&gt; is prime. If some number in that range does divide evenly into &lt;em&gt;n&lt;/em&gt;, then &lt;em&gt;n&lt;/em&gt; is composite.&lt;/p&gt;

&lt;p&gt;Why do we only evaluate the range from 2 to the square root of &lt;em&gt;n&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;Well, on the lower end, knowing that a number is divisible by 1 is not useful for determining whether it is composite or prime, since every number, prime or composite, is divisible by 1.&lt;/p&gt;

&lt;p&gt;On the upper end, we leverage the fact that any composite number &lt;em&gt;n&lt;/em&gt; can be written as a product of two smaller factors, &lt;em&gt;a&lt;/em&gt; × &lt;em&gt;b&lt;/em&gt;. Both factors cannot be greater than the square root of &lt;em&gt;n&lt;/em&gt;: if they were, their product would be greater than &lt;em&gt;n&lt;/em&gt;. So every composite number has at least one factor (and therefore at least one prime factor) that is no bigger than the square root of &lt;em&gt;n&lt;/em&gt;, which means that checking divisors up to the square root of &lt;em&gt;n&lt;/em&gt; is enough.&lt;/p&gt;
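&lt;p&gt;Trial division with the square-root cutoff can be sketched in a few lines of Python (the function name is my own):&lt;/p&gt;

```python
from math import isqrt

def is_prime(n):
    """Trial division: test divisors from 2 up to the square root of n."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:   # found a divisor, so n is composite
            return False
    return True

print([x for x in range(2, 20) if is_prime(x)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```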

&lt;p&gt;In addition to the trial division strategy for evaluating primality, there exists another class of strategies for determining whether or not a number is prime, known as probabilistic primality tests.&lt;/p&gt;

&lt;p&gt;Generally, these tests will check that a given number (let's call it &lt;em&gt;p&lt;/em&gt;) is prime by selecting a random number (let's call it &lt;em&gt;a&lt;/em&gt;) and applying some test between &lt;em&gt;p&lt;/em&gt; and &lt;em&gt;a&lt;/em&gt;. Depending on the result of the test, the algorithm will consider &lt;em&gt;p&lt;/em&gt; either prime or composite. Most probabilistic algorithms will run multiple iterations with different values of &lt;em&gt;a&lt;/em&gt; until it is sensible to declare that &lt;em&gt;p&lt;/em&gt; is probably prime.&lt;/p&gt;
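&lt;p&gt;One simple test of this kind, shown below as an illustration, is the Fermat primality test (production code typically uses the stronger Miller-Rabin test). It relies on Fermat's little theorem: for any prime &lt;em&gt;p&lt;/em&gt; and any &lt;em&gt;a&lt;/em&gt; not divisible by &lt;em&gt;p&lt;/em&gt;, &lt;em&gt;a&lt;/em&gt;&lt;sup&gt;&lt;em&gt;p&lt;/em&gt;−1&lt;/sup&gt; ≡ 1 (mod &lt;em&gt;p&lt;/em&gt;).&lt;/p&gt;

```python
import random

def fermat_test(p, rounds=20):
    """Fermat primality test: pick random witnesses a and check
    a^(p-1) ≡ 1 (mod p). A single failure proves p composite;
    repeated successes make p *probably* prime."""
    if p < 4:
        return p in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, p - 1)
        if pow(a, p - 1, p) != 1:
            return False      # a is a witness: p is definitely composite
    return True               # p is probably prime
```

&lt;p&gt;Note the asymmetry: a "composite" verdict is certain, while a "prime" verdict is only probable, which is why the test is repeated with many different witnesses.&lt;/p&gt;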

&lt;p&gt;So that's that on prime numbers. Similar to random numbers, one of the most interesting things about the intersection between prime numbers and computing is how algorithms, like those in cryptography, are able to exploit the properties of prime numbers to their benefit. Neat stuff!&lt;/p&gt;

</description>
      <category>primenumbers</category>
      <category>blogpost</category>
    </item>
  </channel>
</rss>
