<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: naens</title>
    <description>The latest articles on DEV Community by naens (@naens).</description>
    <link>https://dev.to/naens</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F139057%2F12016bad-13e6-48b0-bbe6-34f989a293db.png</url>
      <title>DEV Community: naens</title>
      <link>https://dev.to/naens</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/naens"/>
    <language>en</language>
    <item>
      <title>Generalized Fibonacci Memory Allocator</title>
      <dc:creator>naens</dc:creator>
      <pubDate>Tue, 30 Apr 2019 05:23:11 +0000</pubDate>
      <link>https://dev.to/naens/generalized-fibonacci-memory-allocator-2fja</link>
      <guid>https://dev.to/naens/generalized-fibonacci-memory-allocator-2fja</guid>
      <description>&lt;p&gt;Description of the work on the memory allocator &lt;a href="https://github.com/naens/mem_alloc"&gt;https://github.com/naens/mem_alloc&lt;/a&gt; &lt;/p&gt;

&lt;h1&gt;
  
  
  Table of Contents
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt; Introduction
&lt;/li&gt;
&lt;li&gt; Memory Allocation
&lt;/li&gt;
&lt;li&gt; Generalized Fibonacci Allocation
&lt;/li&gt;
&lt;li&gt; Implementation Principles

&lt;ol&gt;
&lt;li&gt; Blocks
&lt;/li&gt;
&lt;li&gt; Array of Free Lists
&lt;/li&gt;
&lt;li&gt; Buddies
&lt;/li&gt;
&lt;li&gt; The Dynamic Array
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt; Implementation Details

&lt;ol&gt;
&lt;li&gt; Data Structures
&lt;/li&gt;
&lt;li&gt; Allocation from the Operating System
&lt;/li&gt;
&lt;li&gt; The first Items in the Array
&lt;/li&gt;
&lt;li&gt; The Buddy System
&lt;/li&gt;
&lt;li&gt; Allocating and Freeing
&lt;/li&gt;
&lt;li&gt; Array Initialization
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt; Portability and Testing

&lt;ol&gt;
&lt;li&gt; OS tested
&lt;/li&gt;
&lt;li&gt; Makefiles
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt; Conclusion and Goodbye
&lt;/li&gt;
&lt;/ol&gt;


&lt;p&gt;&lt;a id="org6644e5f"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Hello.  In this post I will describe a memory allocator that I have&lt;br&gt;
written.  The main purpose of the memory allocator was to learn how it&lt;br&gt;
works, and also to have a way to allocate and deallocate memory in the&lt;br&gt;
assembly programs that I write for CCP/M-86.  The goal was to write it&lt;br&gt;
in assembly, but in order to be sure that I could end up with something&lt;br&gt;
that works, I decided to first make a test version in C and convince&lt;br&gt;
myself that I understand how it works.&lt;/p&gt;

&lt;p&gt;&lt;a id="org460254b"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Memory Allocation
&lt;/h1&gt;

&lt;p&gt;Now I will describe why I need a memory allocator.  It is needed to be&lt;br&gt;
able to get some part of memory in order to store a dynamic data&lt;br&gt;
structure, such as a string.  If the user can input a string, we cannot&lt;br&gt;
know its size in advance, so we read it into a large buffer.  But once&lt;br&gt;
we know the size, it is better to store only what is needed.  So if,&lt;br&gt;
for example, we have many strings of different lengths, it is better&lt;br&gt;
to allocate the memory depending on the length of each string.&lt;/p&gt;

&lt;p&gt;There are, of course, many other reasons why a memory allocator might&lt;br&gt;
be needed.  Many data structures, like trees and lists, are often&lt;br&gt;
implemented using dynamic memory allocation.&lt;/p&gt;

&lt;p&gt;Another use is to store temporary, intermediate data.  For&lt;br&gt;
example, we read a file and generate a list of strings from it.  Then&lt;br&gt;
we can filter or concatenate them and finally insert the results into&lt;br&gt;
a hash table or write them to a file.  During all these operations,&lt;br&gt;
we might need to generate some intermediate data structures.  So, in&lt;br&gt;
order to be able to reuse the memory that had been used by these&lt;br&gt;
structures, it is useful to free memory once it is not used&lt;br&gt;
anymore.&lt;/p&gt;

&lt;p&gt;This is exactly what memory allocators do: you can allocate an amount&lt;br&gt;
of memory space you need, and once you don't need it anymore, you can&lt;br&gt;
return it back to the allocator, so it can be used for other&lt;br&gt;
allocations.&lt;/p&gt;

&lt;p&gt;&lt;a id="orge4a9451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Generalized Fibonacci Allocation
&lt;/h1&gt;

&lt;p&gt;After reading about several different types of memory allocators, I&lt;br&gt;
decided that an allocator which uses a generalized Fibonacci&lt;br&gt;
sequence would be alright for my purpose: it does not waste a lot of&lt;br&gt;
memory (on average approximately 82.2% of the memory is used) and does&lt;br&gt;
not look too complicated to code (just some additions and subtractions&lt;br&gt;
of numbers and memory addresses).&lt;/p&gt;

&lt;p&gt;So what is a generalized Fibonacci sequence?  The usual Fibonacci sequence, which is very common in books about programming, is the sequence described by the formula &lt;code&gt;a(n) = a(n-1) + a(n-2)&lt;/code&gt;.  A generalized Fibonacci sequence is one whose recurrence reaches further back than 2 items.  In my allocator I use the sequence generated by &lt;code&gt;a(n) = a(n-1) + a(n-4)&lt;/code&gt;.  The main advantage is that the difference between consecutive numbers is not so big, which makes it use the memory space more efficiently.  Another common memory allocator uses the formula &lt;code&gt;a(n) = 2 * a(n-1)&lt;/code&gt;, which in some way is also a generalized Fibonacci sequence.&lt;/p&gt;
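&lt;p&gt;To make this concrete, here is a small sketch, separate from the allocator itself (the function name is mine, for illustration only), that generates the sequence I use:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* Generate the generalized Fibonacci sequence a(n) = a(n-1) + a(n-4),
 * seeded with the item sizes 3, 4, 5, 7 that the 64-bit version of
 * the allocator uses.  n must be at least 4. */
void gen_fib(unsigned *seq, unsigned n)
{
    unsigned i;
    seq[0] = 3; seq[1] = 4; seq[2] = 5; seq[3] = 7;
    for (i = 4; i != n; i++)
        seq[i] = seq[i - 1] + seq[i - 4];
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The first eleven values are 3, 4, 5, 7, 10, 14, 19, 26, 36, 50 and 69: exactly the sizes that will appear in the table further down.&lt;/p&gt;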

&lt;p&gt;&lt;a id="org90f2cf9"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Implementation Principles
&lt;/h1&gt;

&lt;p&gt;&lt;a id="orga4cf155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Blocks
&lt;/h2&gt;

&lt;p&gt;My allocator specifies the size of the items it allocates in blocks of&lt;br&gt;
8 bytes.  The main reason is to be able to store three additional flag&lt;br&gt;
bits in the size field of the item.  This is important for space&lt;br&gt;
efficiency: otherwise I would lose some bytes on each allocation.  I&lt;br&gt;
also wanted the returned area to be aligned, because we cannot know&lt;br&gt;
what the user will do with it: if the user decides to store a&lt;br&gt;
structure or a multi-byte variable there, it had better be aligned.&lt;br&gt;
So if the OS returns aligned memory, my allocator will also return&lt;br&gt;
aligned memory.  Otherwise it does not matter.&lt;/p&gt;

&lt;p&gt;&lt;a id="org84186c8"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Array of Free Lists
&lt;/h2&gt;

&lt;p&gt;The memory allocator uses an array of free lists.  A free list is a&lt;br&gt;
linked list of items of the same size, which is a number from the&lt;br&gt;
generalized Fibonacci sequence multiplied by the size of the block.&lt;br&gt;
This size is not just the area usable by the user: it also includes&lt;br&gt;
the header, which contains the total size of the item.&lt;/p&gt;

&lt;p&gt;In the array the free lists are stored in increasing order of size.&lt;br&gt;
The structures contained in the array are called &lt;em&gt;cells&lt;/em&gt; in my&lt;br&gt;
implementation.  They contain a field which specifies the size of the&lt;br&gt;
corresponding free list, and a field which is a pointer to the head of&lt;br&gt;
the free list.&lt;/p&gt;

&lt;p&gt;&lt;a id="org12be711"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Buddies
&lt;/h2&gt;

&lt;p&gt;This is a buddy memory allocator, which works nicely with a generalized&lt;br&gt;
Fibonacci sequence: we can create a larger item by simply combining an&lt;br&gt;
item with an adjacent item of a smaller size.&lt;/p&gt;

&lt;p&gt;So this is how the buddy system works: we first allocate some memory&lt;br&gt;
from the Operating System and put it into the array in an appropriate&lt;br&gt;
free list.  When the user wants some memory, we can split the memory&lt;br&gt;
available into buddies, where each buddy has a size that is a number of&lt;br&gt;
the sequence multiplied by the number of bytes in a block.  After&lt;br&gt;
splitting, we can insert the buddy which is not used into its&lt;br&gt;
free list, since it also has an appropriate size.&lt;/p&gt;

&lt;p&gt;When freeing, the opposite happens: we want to return the&lt;br&gt;
item to its free list, but if its buddy is not in use, we can first&lt;br&gt;
merge the two and insert the merged item into the&lt;br&gt;
corresponding free list.  The generalized Fibonacci sequence&lt;br&gt;
guarantees that whenever we split or merge items, we get something&lt;br&gt;
that has a corresponding free list.  In order to know how to merge and&lt;br&gt;
how to find the buddy, we use flags, which will be explained later.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgf6b61d3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Dynamic Array
&lt;/h2&gt;

&lt;p&gt;The array that contains the free lists has to be able to grow if more&lt;br&gt;
items need to be allocated.  That's why I use a dynamic array.  Its&lt;br&gt;
capacity is increased when more items need to be added.  In this&lt;br&gt;
situation another area big enough to hold the array is allocated and a&lt;br&gt;
new array is created in this area, which is a copy of the old array,&lt;br&gt;
but with greater capacity.  The old array is freed.  The good thing&lt;br&gt;
about the array is that it only uses the allocator itself: it does not&lt;br&gt;
need special Operating System calls.&lt;/p&gt;

&lt;p&gt;&lt;a id="org9cbdf3c"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Implementation Details
&lt;/h1&gt;

&lt;p&gt;In this section I will describe in more detail how everything works.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgace6bdd"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Structures
&lt;/h2&gt;

&lt;h3&gt;
  
  
  item
&lt;/h3&gt;

&lt;p&gt;The item is a structure which is the node of the free list, a doubly&lt;br&gt;
linked list.  The item has two modes of existence: free and used.&lt;br&gt;
When it's free, it is in the free list and contains pointers to the&lt;br&gt;
previous and the next items.  When it's used it contains the area&lt;br&gt;
instead of the pointers.&lt;/p&gt;

&lt;p&gt;But whatever the mode is, there is always a header.  It contains one&lt;br&gt;
single field of the size of a pointer.  This field contains the size&lt;br&gt;
of the item in blocks and 3 flags: the &lt;code&gt;lr_bit&lt;/code&gt;, the &lt;code&gt;inh_bit&lt;/code&gt;, and the &lt;code&gt;in_use&lt;/code&gt;&lt;br&gt;
bit.  The &lt;code&gt;in_use&lt;/code&gt; bit tells us whether the item is used or not.  The&lt;br&gt;
&lt;code&gt;lr_bit&lt;/code&gt; and the &lt;code&gt;inh_bit&lt;/code&gt; are needed in order to know how to merge&lt;br&gt;
buddies: an item can be the left buddy or the right buddy, so we&lt;br&gt;
might need to go to the right or to the left in order to find its&lt;br&gt;
buddy.&lt;/p&gt;

&lt;p&gt;The item is not a C structure.  It's an area of type &lt;code&gt;void*&lt;/code&gt;, for&lt;br&gt;
which accessor functions are used.  So here are some examples of&lt;br&gt;
accessor functions:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uintptr_t
item_get_size(void *item)
{
    uintptr_t size_field = ((uintptr_t*)item)[0];
    return size_field &amp;gt;&amp;gt; 3;
}

void
item_set_size(void *item, uintptr_t size)
{
    uintptr_t size_field = ((uintptr_t*)item)[0];
    uint8_t flags = size_field &amp;amp; 7;
    uintptr_t new_size_field = flags | (size &amp;lt;&amp;lt; 3);
    ((uintptr_t*)item)[0] = new_size_field;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Here are functions for getting and setting the size of the item.  As&lt;br&gt;
you can see, it's located at the very beginning of the item, at index&lt;br&gt;
0, and the size occupies the &lt;code&gt;sizeof(uintptr_t) * 8 - 3&lt;/code&gt; high-order bits of the&lt;br&gt;
field.&lt;/p&gt;

&lt;p&gt;Here's another example:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;boolean
item_get_inh_bit(void *item)
{
    return (((uintptr_t*)item)[0] &amp;amp; 1) != 0;
}

void
item_set_inh_bit(void *item, boolean inh_bit)
{
    uintptr_t size_field = ((uintptr_t*)item)[0] &amp;amp; (~(uintptr_t)1);
    uint8_t tmp = inh_bit ? 1 : 0;
    uintptr_t new_size_field = size_field | tmp;
    ((uintptr_t*)item)[0] = new_size_field;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now we need to set one single bit.  It's the lowest-order bit in the&lt;br&gt;
field, so we use the mask 1 here.&lt;/p&gt;

&lt;p&gt;Another important thing is the area.  The area is the part of the&lt;br&gt;
item which is returned to the user.  This is how we get it:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void*
item_get_area(void *item)
{
    return &amp;amp;((void**)item)[1];
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;When the user returns the area to us, using the &lt;code&gt;mem_free&lt;/code&gt; function,&lt;br&gt;
we need to get the address of the item:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void*
item_from_area(void *area)
{
    return &amp;amp;((void**)area)[-1];
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;We assume that the user didn't cheat and didn't write beyond the area,&lt;br&gt;
which means that we trust the user, and our data is still the way we&lt;br&gt;
left it.  This is the notion of cooperation, which comes up very often&lt;br&gt;
in the field of programming.&lt;/p&gt;

&lt;h3&gt;
  
  
  The cell structure
&lt;/h3&gt;

&lt;p&gt;Now let's talk about the next structure, the one that holds a free&lt;br&gt;
list: the cell.  Here is its definition:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;struct cell {
    uintptr_t size;
    void *items;
};
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;It contains the size and a pointer to the first item of the free&lt;br&gt;
list.  As you can see, there is a size field in the cell structure, as&lt;br&gt;
well as a size in the header of each item.  They must match whenever the&lt;br&gt;
items are in the free list.  This duplication is needed because the&lt;br&gt;
list might be empty, in which case we still need a way to know its size.&lt;br&gt;
Also, when items are not in the free list, that is, when they are in use,&lt;br&gt;
we need their size in order to know where to return them.&lt;/p&gt;

&lt;h3&gt;
  
  
  The array
&lt;/h3&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;struct array {
    struct cell *data;
    unsigned int size;
    unsigned int capacity;
};
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Actually, the array is also a structure, and the raw array is one of its&lt;br&gt;
fields: &lt;code&gt;data&lt;/code&gt;.  Nothing complicated here either.  As I already&lt;br&gt;
explained, there is a &lt;code&gt;capacity&lt;/code&gt;, which tells how many elements can be&lt;br&gt;
stored in the array, and the &lt;code&gt;size&lt;/code&gt;, which tells its current size.&lt;/p&gt;

&lt;h3&gt;
  
  
  memlist
&lt;/h3&gt;

&lt;p&gt;When we get some chunks of memory from the Operating System, we&lt;br&gt;
organize them into a linked list in order to be able to free them when&lt;br&gt;
needed.  So, every time, we use the first pointer-sized bytes of the&lt;br&gt;
memory chunk we receive to store the address of the next chunk.  This&lt;br&gt;
way, if the Operating System requires us to free all the allocated&lt;br&gt;
memory before exiting the application, we free it by walking this&lt;br&gt;
linked list.&lt;/p&gt;

&lt;p&gt;Here are the lines of code that do it:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;while (mem_list != NULL)
{
    void *tmp = mem_list;
    mem_list = *((void**)mem_list);
    free(tmp);
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
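&lt;p&gt;And this, roughly, is how a new chunk is linked in when it arrives from the Operating System (a sketch; the function name is mine, not the repository's):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* Prepend a chunk received from the OS to the memory list.  The first
 * pointer-sized bytes of the chunk store the address of the next chunk. */
void *mem_list_push(void *mem_list, void *chunk)
{
    *((void**)chunk) = mem_list;   /* link to the previous head */
    return chunk;                  /* the new chunk becomes the head */
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;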

&lt;p&gt;&lt;a id="org99bde96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Allocation from the Operating System
&lt;/h2&gt;

&lt;p&gt;We need a source for the memory in order to give it to the user.  For&lt;br&gt;
this we use the memory allocator of the Operating System.  But we&lt;br&gt;
don't know how good it is, which means, we should not rely on it too&lt;br&gt;
much.  We should only use it when we need some memory, and only for&lt;br&gt;
large chunks of memory.&lt;/p&gt;

&lt;p&gt;For this reason I decided to impose the following rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  There is a minimum amount that we should ask from the OS, which is
64 times the size of the pointer.&lt;/li&gt;
&lt;li&gt;  When we ask, we ask for a larger amount than the previous time.&lt;/li&gt;
&lt;li&gt;  We do not return memory to the Operating System until the very end,
when the application exits and we free everything at once.&lt;/li&gt;
&lt;/ul&gt;
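&lt;p&gt;These rules can be sketched in a few lines of C (the function name and the doubling rule are illustrative; the repository may grow its requests differently):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;stddef.h&amp;gt;

/* Decide how much to request from the OS, given the size of the
 * previous request and the amount currently needed. */
size_t next_request_size(size_t last_request, size_t needed)
{
    size_t min_request = 64 * sizeof(void*); /* never ask for less */
    size_t size = needed;
    if (size &amp;lt; min_request)
        size = min_request;
    if (size &amp;lt;= last_request)    /* always ask for more than last time; */
        size = 2 * last_request;   /* doubling is just one possible rule  */
    return size;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;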

&lt;p&gt;This way we do not assume that the memory allocator of the Operating&lt;br&gt;
System is good or efficient.  It's our job to make the memory&lt;br&gt;
allocation work.  The memory allocator of the Operating System can be&lt;br&gt;
extremely simple.  It does not even have to be an allocator at all,&lt;br&gt;
just a pointer into a large region of memory, since we don't need any&lt;br&gt;
complex functionality from it.  It can very well be the Transient&lt;br&gt;
Program Area (TPA) of CP/M-80.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgc17ec59"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The first Items in the Array
&lt;/h2&gt;

&lt;p&gt;The items in the first four free lists of the array cannot be split&lt;br&gt;
because there is nothing smaller.  This means that, in case we need to&lt;br&gt;
allocate a lot of small items, it's better to have these unsplittable&lt;br&gt;
items as small as possible in order not to waste space.&lt;/p&gt;

&lt;p&gt;So let's calculate the minimal size, which will be the size of items&lt;br&gt;
of the first free list in the array.&lt;/p&gt;

&lt;p&gt;For 64-bit we have blocks of 8 bytes and pointers also of 8 bytes.  An&lt;br&gt;
item has to contain a header, a next pointer and a previous pointer.&lt;br&gt;
Together it makes 24 bytes, which can be stored in 3 blocks.  Thus&lt;br&gt;
the first size will be 3.&lt;/p&gt;

&lt;p&gt;For 32-bit systems, blocks are still 8 bytes and pointers are&lt;br&gt;
4 bytes.  Three pointer-sized fields need 12 bytes, which we can&lt;br&gt;
put in 2 blocks (16 bytes).  That is, the smallest size on a 32-bit&lt;br&gt;
Operating System will be 2.&lt;/p&gt;

&lt;p&gt;For a 16-bit Operating System it's similar.  Pointers are 2 bytes and&lt;br&gt;
blocks are 8 bytes.  I use 2 bytes for pointers because I don't want&lt;br&gt;
to make things complicated by using segments.  So we can put 6 bytes&lt;br&gt;
(3 fields) into a single block, which makes the smallest size for&lt;br&gt;
items 1.&lt;/p&gt;
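&lt;p&gt;The three calculations above all follow the same pattern, which can be written down once (a sketch, not code from the repository):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;stddef.h&amp;gt;

#define BLOCK_SIZE 8  /* bytes per block, on every system */

/* Smallest item size in blocks for a given pointer size: a free item
 * must hold a header, a next pointer and a previous pointer, that is,
 * 3 pointer-sized fields, rounded up to whole blocks. */
unsigned min_item_blocks(size_t ptr_size)
{
    return (unsigned)((3 * ptr_size + BLOCK_SIZE - 1) / BLOCK_SIZE);
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;It gives 3 for 8-byte pointers, 2 for 4-byte pointers and 1 for 2-byte pointers, as calculated above.&lt;/p&gt;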

&lt;p&gt;&lt;a id="org4ad8258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Buddy System
&lt;/h2&gt;

&lt;p&gt;When we allocate memory, we get a chunk of memory that we might have&lt;br&gt;
to split into buddies, and only one of the buddies will be returned to&lt;br&gt;
the user.&lt;/p&gt;

&lt;p&gt;When we free memory, we insert an item into the array and it's&lt;br&gt;
possible that we might have to merge it recursively with buddies if&lt;br&gt;
they are not in use.&lt;/p&gt;

&lt;p&gt;So the main problem is to find the buddy of an item.  For this we use&lt;br&gt;
two flags: the &lt;code&gt;lr_bit&lt;/code&gt; and the &lt;code&gt;inh_bit&lt;/code&gt;.  The &lt;code&gt;lr_bit&lt;/code&gt; tells us whether&lt;br&gt;
an item is a left buddy or a right buddy.  The &lt;code&gt;inh_bit&lt;/code&gt; is used to&lt;br&gt;
restore the &lt;code&gt;lr_bit&lt;/code&gt; and the &lt;code&gt;inh_bit&lt;/code&gt; of the parent, so that when&lt;br&gt;
we merge, we know whether the merged item is itself a left or a right buddy.&lt;/p&gt;

&lt;p&gt;When splitting an item, we set the &lt;code&gt;inh_bit&lt;/code&gt; of the left child to the&lt;br&gt;
&lt;code&gt;lr_bit&lt;/code&gt; of the parent and the &lt;code&gt;inh_bit&lt;/code&gt; of the right child to the&lt;br&gt;
&lt;code&gt;inh_bit&lt;/code&gt; of the parent.  This allows us not to lose the information&lt;br&gt;
when splitting: when we merge we just get this information back from&lt;br&gt;
where we stored it.&lt;/p&gt;
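&lt;p&gt;In the style of the accessor functions shown earlier, the flag propagation on a split can be sketched like this (the bit positions and function names here are illustrative assumptions; the real layout is in the repository):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;stdint.h&amp;gt;

#define INH_BIT 1u  /* inheritance bit, as in the accessors above */
#define LR_BIT  2u  /* left/right bit; this position is assumed   */

static int get_flag(void *item, uintptr_t mask)
{
    return (((uintptr_t*)item)[0] &amp;amp; mask) != 0;
}

static void set_flag(void *item, uintptr_t mask, int value)
{
    uintptr_t field = ((uintptr_t*)item)[0] &amp;amp; ~mask;
    ((uintptr_t*)item)[0] = value ? (field | mask) : field;
}

/* When an item is split, the left child inherits the parent's lr_bit
 * and the right child inherits the parent's inh_bit, so that a later
 * merge can restore both flags of the parent. */
void split_set_flags(void *parent, void *left, void *right)
{
    int parent_lr  = get_flag(parent, LR_BIT);   /* read the parent's */
    int parent_inh = get_flag(parent, INH_BIT);  /* flags first       */
    set_flag(left,  INH_BIT, parent_lr);
    set_flag(right, INH_BIT, parent_inh);
    set_flag(left,  LR_BIT, 0);   /* the left child is a left buddy   */
    set_flag(right, LR_BIT, 1);   /* the right child is a right buddy */
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;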

&lt;p&gt;Here is an example:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;*--+--[/|/]
   |
   +--[L|95]--+--[L|69]--+--[L|50]
              |          |
              |          +--[R|19]--+--[R|14]
              |                     |
              |                     +--[L|5]
              |
              |
              +--[L|26]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In this picture we see some examples that illustrate how the&lt;br&gt;
inheritance and the &lt;code&gt;lr_bit&lt;/code&gt; is restored from children.  The&lt;br&gt;
inheritance bit is shown by the letter L or R.&lt;/p&gt;

&lt;p&gt;Let's look at the node 69.  It's a right child with a left inheritance&lt;br&gt;
bit.  When it's split, the &lt;em&gt;right&lt;/em&gt; property goes to the left child as&lt;br&gt;
its inheritance bit and the &lt;em&gt;left&lt;/em&gt; property goes to the right child as&lt;br&gt;
its inheritance bit.  When the children are merged back, both the&lt;br&gt;
&lt;code&gt;lr_bit&lt;/code&gt; and the &lt;code&gt;inh_bit&lt;/code&gt; can be restored.&lt;/p&gt;

&lt;p&gt;The same thing happens when we split the node with size 19.  It's a&lt;br&gt;
left child and this property is kept by the left child as its&lt;br&gt;
left-right bit.  The node 14 is the right child, and it's keeping the&lt;br&gt;
inheritance bit.&lt;/p&gt;

&lt;p&gt;The root node in the picture does not actually exist.  It's there in&lt;br&gt;
order to show the connection to the &lt;em&gt;fake right&lt;/em&gt; node, which does not&lt;br&gt;
contain any area.  Its &lt;code&gt;in_use&lt;/code&gt; bit is set to &lt;em&gt;true&lt;/em&gt; in order to stop&lt;br&gt;
the recursion when the children of its left buddy are merging.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgf16c4f2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Allocating and Freeing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Allocation
&lt;/h3&gt;

&lt;p&gt;Now this is how allocation works.  The first step is to determine the&lt;br&gt;
number of blocks needed.  We are given a number of bytes and we need&lt;br&gt;
the number of blocks.  Also we shouldn't forget about the header, whose&lt;br&gt;
size is also included in the size of the item.&lt;/p&gt;
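&lt;p&gt;This first step boils down to one line of arithmetic (a sketch; the constant 8 is the block size described earlier, and the header is one pointer-sized field):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;stddef.h&amp;gt;

/* Number of blocks needed to serve a request of `bytes` bytes: add
 * the pointer-sized header, then round up to whole 8-byte blocks. */
size_t blocks_needed(size_t bytes)
{
    return (bytes + sizeof(void*) + 7) / 8;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;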

&lt;p&gt;Once we have the minimal number of blocks, we look for a suitable item&lt;br&gt;
in the array.  Perhaps we find it, perhaps we don't.  If we don't find&lt;br&gt;
it, we need to allocate more memory from the Operating System.&lt;/p&gt;

&lt;p&gt;Then, after we have our item, which can come either from the array or&lt;br&gt;
from the OS, we need to split it as much as possible in order to take&lt;br&gt;
as little memory as possible for our needs.&lt;/p&gt;

&lt;p&gt;And the last thing is to set the &lt;code&gt;in_use&lt;/code&gt; flag and to calculate the&lt;br&gt;
address of the area (which is simply the address of the item plus&lt;br&gt;
the size of the header).  It's important that the user does not access&lt;br&gt;
the header!  Then we return the area.&lt;/p&gt;

&lt;h3&gt;
  
  
  Freeing
&lt;/h3&gt;

&lt;p&gt;Freeing is more or less the opposite of the allocation: we calculate&lt;br&gt;
the address of the item, set the &lt;code&gt;in_use&lt;/code&gt; bit to false, insert the&lt;br&gt;
item into the array and merge the item with free buddies as much as&lt;br&gt;
possible.  It's important to guarantee that whenever we give an item&lt;br&gt;
to the user, we have a place in the array where to put it when it's&lt;br&gt;
freed.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgeab01ef"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Array Initialization
&lt;/h2&gt;

&lt;p&gt;The array is a dynamic array: when it reaches its maximal capacity, it&lt;br&gt;
is extended by allocating a bigger array and copying the old data into&lt;br&gt;
the new array, after which the area occupied by the old array is&lt;br&gt;
freed.&lt;/p&gt;

&lt;p&gt;One of the important things about the array is that our allocator&lt;br&gt;
itself is used to extend the array.  For this reason we have to be sure&lt;br&gt;
that the array contains a free list which is able to "accept" our array&lt;br&gt;
when we need to allocate a new one.  After extending the array, we also&lt;br&gt;
initialize the cells of the new array in order to be able to insert&lt;br&gt;
it into its free list.  But it's not really a problem: if the old&lt;br&gt;
array had a cell big enough for the old array, a twice bigger&lt;br&gt;
array is even more likely to have a cell with a big enough size,&lt;br&gt;
because the size of the array grows linearly, whereas the sizes of the&lt;br&gt;
free lists have a Fibonacci-like growth, which is exponential.&lt;/p&gt;
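&lt;p&gt;The growth step itself can be sketched as follows (here &lt;code&gt;malloc&lt;/code&gt; and &lt;code&gt;free&lt;/code&gt; stand in for the allocator's own allocation functions so that the sketch is self-contained; the structures are the ones defined above):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;stdint.h&amp;gt;
#include &amp;lt;stdlib.h&amp;gt;
#include &amp;lt;string.h&amp;gt;

/* Stand-ins for the allocator's own allocation functions. */
#define mem_alloc malloc
#define mem_free  free

struct cell {
    uintptr_t size;
    void *items;
};

struct array {
    struct cell *data;
    unsigned int size;
    unsigned int capacity;
};

/* Double the capacity: allocate a bigger area, copy the cells over,
 * free the old storage.  Returns 0 on failure. */
int array_grow(struct array *a)
{
    unsigned int new_capacity = a-&amp;gt;capacity * 2;
    struct cell *new_data = mem_alloc(new_capacity * sizeof(struct cell));
    if (new_data == NULL)
        return 0;
    memcpy(new_data, a-&amp;gt;data, a-&amp;gt;size * sizeof(struct cell));
    mem_free(a-&amp;gt;data);
    a-&amp;gt;data = new_data;
    a-&amp;gt;capacity = new_capacity;
    return 1;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;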

&lt;p&gt;Now I will show how I calculated the &lt;em&gt;defines&lt;/em&gt; for the initialization&lt;br&gt;
of the array.  It's the same for 64, 32 and 16 bits, so I'll&lt;br&gt;
only show the 64-bit case.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;


&lt;colgroup&gt;
&lt;col class="org-right"&gt;

&lt;col class="org-right"&gt;

&lt;col class="org-right"&gt;

&lt;col class="org-right"&gt;

&lt;col class="org-right"&gt;

&lt;col class="org-left"&gt;
&lt;/colgroup&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th class="org-right"&gt;index&lt;/th&gt;
&lt;th class="org-right"&gt;flsz&lt;/th&gt;
&lt;th class="org-right"&gt;capacity&lt;/th&gt;
&lt;th class="org-right"&gt;array-bytes&lt;/th&gt;
&lt;th class="org-right"&gt;area-bytes&lt;/th&gt;
&lt;th class="org-left"&gt;store-itself&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td class="org-right"&gt;0&lt;/td&gt;
&lt;td class="org-right"&gt;3&lt;/td&gt;
&lt;td class="org-right"&gt;1&lt;/td&gt;
&lt;td class="org-right"&gt;16&lt;/td&gt;
&lt;td class="org-right"&gt;16&lt;/td&gt;
&lt;td class="org-left"&gt;true&lt;/td&gt;
&lt;/tr&gt;


&lt;tr&gt;
&lt;td class="org-right"&gt;1&lt;/td&gt;
&lt;td class="org-right"&gt;4&lt;/td&gt;
&lt;td class="org-right"&gt;2&lt;/td&gt;
&lt;td class="org-right"&gt;32&lt;/td&gt;
&lt;td class="org-right"&gt;24&lt;/td&gt;
&lt;td class="org-left"&gt;false&lt;/td&gt;
&lt;/tr&gt;


&lt;tr&gt;
&lt;td class="org-right"&gt;2&lt;/td&gt;
&lt;td class="org-right"&gt;5&lt;/td&gt;
&lt;td class="org-right"&gt;4&lt;/td&gt;
&lt;td class="org-right"&gt;64&lt;/td&gt;
&lt;td class="org-right"&gt;32&lt;/td&gt;
&lt;td class="org-left"&gt;false&lt;/td&gt;
&lt;/tr&gt;


&lt;tr&gt;
&lt;td class="org-right"&gt;3&lt;/td&gt;
&lt;td class="org-right"&gt;7&lt;/td&gt;
&lt;td class="org-right"&gt;4&lt;/td&gt;
&lt;td class="org-right"&gt;64&lt;/td&gt;
&lt;td class="org-right"&gt;48&lt;/td&gt;
&lt;td class="org-left"&gt;false&lt;/td&gt;
&lt;/tr&gt;


&lt;tr&gt;
&lt;td class="org-right"&gt;4&lt;/td&gt;
&lt;td class="org-right"&gt;10&lt;/td&gt;
&lt;td class="org-right"&gt;8&lt;/td&gt;
&lt;td class="org-right"&gt;128&lt;/td&gt;
&lt;td class="org-right"&gt;72&lt;/td&gt;
&lt;td class="org-left"&gt;false&lt;/td&gt;
&lt;/tr&gt;


&lt;tr&gt;
&lt;td class="org-right"&gt;5&lt;/td&gt;
&lt;td class="org-right"&gt;14&lt;/td&gt;
&lt;td class="org-right"&gt;8&lt;/td&gt;
&lt;td class="org-right"&gt;128&lt;/td&gt;
&lt;td class="org-right"&gt;104&lt;/td&gt;
&lt;td class="org-left"&gt;false&lt;/td&gt;
&lt;/tr&gt;


&lt;tr&gt;
&lt;td class="org-right"&gt;6&lt;/td&gt;
&lt;td class="org-right"&gt;19&lt;/td&gt;
&lt;td class="org-right"&gt;8&lt;/td&gt;
&lt;td class="org-right"&gt;128&lt;/td&gt;
&lt;td class="org-right"&gt;144&lt;/td&gt;
&lt;td class="org-left"&gt;true&lt;/td&gt;
&lt;/tr&gt;


&lt;tr&gt;
&lt;td class="org-right"&gt;7&lt;/td&gt;
&lt;td class="org-right"&gt;26&lt;/td&gt;
&lt;td class="org-right"&gt;8&lt;/td&gt;
&lt;td class="org-right"&gt;128&lt;/td&gt;
&lt;td class="org-right"&gt;200&lt;/td&gt;
&lt;td class="org-left"&gt;true&lt;/td&gt;
&lt;/tr&gt;


&lt;tr&gt;
&lt;td class="org-right"&gt;8&lt;/td&gt;
&lt;td class="org-right"&gt;36&lt;/td&gt;
&lt;td class="org-right"&gt;16&lt;/td&gt;
&lt;td class="org-right"&gt;256&lt;/td&gt;
&lt;td class="org-right"&gt;280&lt;/td&gt;
&lt;td class="org-left"&gt;true&lt;/td&gt;
&lt;/tr&gt;


&lt;tr&gt;
&lt;td class="org-right"&gt;9&lt;/td&gt;
&lt;td class="org-right"&gt;50&lt;/td&gt;
&lt;td class="org-right"&gt;16&lt;/td&gt;
&lt;td class="org-right"&gt;256&lt;/td&gt;
&lt;td class="org-right"&gt;392&lt;/td&gt;
&lt;td class="org-left"&gt;true&lt;/td&gt;
&lt;/tr&gt;


&lt;tr&gt;
&lt;td class="org-right"&gt;10&lt;/td&gt;
&lt;td class="org-right"&gt;69&lt;/td&gt;
&lt;td class="org-right"&gt;16&lt;/td&gt;
&lt;td class="org-right"&gt;256&lt;/td&gt;
&lt;td class="org-right"&gt;544&lt;/td&gt;
&lt;td class="org-left"&gt;true&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As I have already described, the first size in the array will be 3.&lt;br&gt;
The column name &lt;em&gt;flsz&lt;/em&gt; stands for &lt;em&gt;free list item size in blocks&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;capacity&lt;/em&gt; column says how many cells the array can contain.  And the&lt;br&gt;
&lt;em&gt;array-bytes&lt;/em&gt; column is the same thing in bytes.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;area-bytes&lt;/em&gt; column is how many bytes an item of the free list at&lt;br&gt;
the given index can contain.  When this value is greater than or equal&lt;br&gt;
to the &lt;em&gt;array-bytes&lt;/em&gt; value, the column &lt;em&gt;store-itself&lt;/em&gt; indicates &lt;em&gt;true&lt;/em&gt;.&lt;br&gt;
Otherwise it's &lt;em&gt;false&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;So, from index 6 on, the array can consistently store itself.  And&lt;br&gt;
we can be sure of that because the growth of the size of the free&lt;br&gt;
lists is greater than the growth of the capacity.&lt;/p&gt;

&lt;p&gt;But I have decided to avoid asking the Operating System for small&lt;br&gt;
amounts of memory.  For 64 bits the minimum request is 512 bytes, which&lt;br&gt;
corresponds to the row with &lt;em&gt;index&lt;/em&gt; 10.  As we see from this table,&lt;br&gt;
allocating 544 bytes for the initial array is completely possible, and&lt;br&gt;
that's what I do.  Here is the code from the file &lt;code&gt;mem.c&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* 64-bit OS */
#if defined(__x86_64__)
#define MIN_SIZE 3
#define SIZE_1 4
#define SIZE_2 5
#define SIZE_3 7
#define DATA_INIT_BLOCKS 69
#define ARRAY_INIT_SIZE 11
#define ARRAY_INIT_CAPACITY 16
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Just for info, this is what I do for 32-bit and 16-bit systems:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* 32-bit OS */
#elif defined(__386__) || defined(__i386__) || defined(__DJGPP__)
#define MIN_SIZE 2
#define SIZE_1 3
#define SIZE_2 4
#define SIZE_3 5
#define DATA_INIT_BLOCKS 36
#define ARRAY_INIT_SIZE 10
#define ARRAY_INIT_CAPACITY 16

/* 16-bit OS */
#elif defined(__I86__) || defined(__86__)
#define MIN_SIZE 1
#define SIZE_1 2
#define SIZE_2 3
#define SIZE_3 4
#define DATA_INIT_BLOCKS 19
#define ARRAY_INIT_SIZE 9
#define ARRAY_INIT_CAPACITY 16

#else
#error Unsupported Operating System, sorry.
#endif
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;a id="orgd476250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Portability and Testing
&lt;/h1&gt;

&lt;p&gt;In order not to be completely dependent on only one Operating System,&lt;br&gt;
in order to know that the defines in &lt;code&gt;mem.c&lt;/code&gt;&lt;br&gt;
actually serve their purpose, and because I intend to rewrite this&lt;br&gt;
program in assembly for a 16-bit OS, I decided to compile and to test&lt;br&gt;
this allocator on different Operating Systems.&lt;/p&gt;

&lt;p&gt;Luckily, Linux can compile both 64-bit and 32-bit binaries and run them&lt;br&gt;
through &lt;em&gt;Valgrind&lt;/em&gt;.  This is very important because it allows me to be&lt;br&gt;
reasonably sure that at least the 64-bit and 32-bit versions work&lt;br&gt;
correctly.&lt;/p&gt;

&lt;p&gt;I have also filled the allocated areas with generated content and a&lt;br&gt;
checksum, in order to be sure that the data does not get corrupted.  This&lt;br&gt;
way, even if I cannot use &lt;em&gt;Valgrind&lt;/em&gt; on a 16-bit OS, the fact that the&lt;br&gt;
checksum test does not report an error is already a good sign.&lt;/p&gt;
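
&lt;p&gt;The idea can be sketched like this (a Python sketch of the testing&lt;br&gt;
strategy only; the real test code in the repository is C, and the helper&lt;br&gt;
names here are made up):&lt;/p&gt;

```python
# Sketch of the corruption check: fill each allocated block with
# reproducible pseudo-random bytes, remember a checksum, and verify it
# later.  Helper names are hypothetical; the real tests are in C.
import random
import zlib

def fill_block(buf, seed):
    """Fill buf with bytes generated from seed; return a CRC32 checksum."""
    rng = random.Random(seed)
    for i in range(len(buf)):
        buf[i] = rng.randrange(256)
    return zlib.crc32(bytes(buf))

def verify_block(buf, checksum):
    """True if the block still holds exactly what fill_block wrote."""
    return zlib.crc32(bytes(buf)) == checksum

block = bytearray(64)
crc = fill_block(block, seed=42)
assert verify_block(block, crc)       # untouched block passes
block[10] ^= 0xFF                     # simulate corruption by the allocator
assert not verify_block(block, crc)   # ...and the check detects it
```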

&lt;p&gt;&lt;a id="org9dd4744"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  OS tested
&lt;/h2&gt;

&lt;p&gt;So these are the Operating Systems on which I have tested my code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  64-bit Linux with GCC&lt;/li&gt;
&lt;li&gt;  32-bit Linux with GCC&lt;/li&gt;
&lt;li&gt;  32-bit Linux with OpenWatcom&lt;/li&gt;
&lt;li&gt;  32-bit Hurd with GCC&lt;/li&gt;
&lt;li&gt;  32-bit ArcaOS with OpenWatcom&lt;/li&gt;
&lt;li&gt;  32-bit DOS with OpenWatcom&lt;/li&gt;
&lt;li&gt;  16-bit DOS with OpenWatcom&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a id="org8ac1b69"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Makefiles
&lt;/h2&gt;

&lt;p&gt;I have used two compilers to build this project: GCC and&lt;br&gt;
OpenWatcom.  The OpenWatcom compiler uses an older version of C (C89),&lt;br&gt;
where variable declarations have to come first in a block, before any&lt;br&gt;
statements.  So, in order to get the code to compile, I had to modify&lt;br&gt;
many things in many places.&lt;/p&gt;

&lt;p&gt;In the end, I decided to have two makefiles, one per compiler,&lt;br&gt;
because the two are very different and I wanted to preserve some&lt;br&gt;
consistency within each file.  The first file is GNUmakefile: GNU make&lt;br&gt;
looks for that name first, so it does not read makefile instead.&lt;br&gt;
Unfortunately, OpenWatcom's make does not have a makefile name reserved&lt;br&gt;
for itself, so I had to use "makefile" for it.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgc63942a"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion and Goodbye
&lt;/h1&gt;

&lt;p&gt;So, this was my little project where I tried to learn a little bit&lt;br&gt;
about memory allocation and to implement a generalized Fibonacci&lt;br&gt;
memory allocator.  Unfortunately, it's not enough to be considered a&lt;br&gt;
real memory allocator: a real allocator has to be able to work with&lt;br&gt;
multiple threads, which was not the main purpose of this project.  For&lt;br&gt;
this reason I could not compare it to real memory allocators, nor find a&lt;br&gt;
test suite that would tell how bad or how good it is.  But for my&lt;br&gt;
purposes, namely to become more familiar with memory allocation and&lt;br&gt;
to be ready to write a memory allocator for a 16-bit single-threaded&lt;br&gt;
environment, it's exactly what is needed.&lt;/p&gt;

&lt;p&gt;See you next time, bye.&lt;/p&gt;

</description>
      <category>memory</category>
      <category>c</category>
      <category>fibonacci</category>
    </item>
    <item>
      <title>The PowerScheme Interpreter</title>
      <dc:creator>naens</dc:creator>
      <pubDate>Sun, 24 Mar 2019 08:53:27 +0000</pubDate>
      <link>https://dev.to/naens/the-powerscheme-interpreter-46k9</link>
      <guid>https://dev.to/naens/the-powerscheme-interpreter-46k9</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F3pvnc6aag3xj6v17ey7l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F3pvnc6aag3xj6v17ey7l.png" title="PowerScheme" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of Contents
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt; Introduction

&lt;ol&gt;
&lt;li&gt; Hello
&lt;/li&gt;
&lt;li&gt; Lisp in Small Pieces
&lt;/li&gt;
&lt;li&gt; Repetition
&lt;/li&gt;
&lt;li&gt; Different Languages
&lt;/li&gt;
&lt;li&gt; About PowerShell
&lt;/li&gt;
&lt;li&gt; Other Scheme Projects
&lt;/li&gt;
&lt;li&gt; About Lisp
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Lisp and Scheme

&lt;ol&gt;
&lt;li&gt; Intro
&lt;/li&gt;
&lt;li&gt; Short History of Lisp
&lt;/li&gt;
&lt;li&gt; Short History of Scheme
&lt;/li&gt;
&lt;li&gt; Scheme as a Language to write Interpreters for
&lt;/li&gt;
&lt;li&gt; Scheme Properties and Features

&lt;ol&gt;
&lt;li&gt; Interpreted / Compiled
&lt;/li&gt;
&lt;li&gt; Syntax: S-Expressions, Case-sensitiveness, Parentheses
&lt;/li&gt;
&lt;li&gt; Types: data and functions
&lt;/li&gt;
&lt;li&gt; Scope: lexical scope
&lt;/li&gt;
&lt;li&gt; Evaluation: eager evaluation
&lt;/li&gt;
&lt;li&gt; Garbage collection
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; More about Scheme
&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;li&gt; About PowerShell

&lt;ol&gt;
&lt;li&gt; Introduction
&lt;/li&gt;
&lt;li&gt; History of PowerShell

&lt;ol&gt;
&lt;li&gt; Before PowerShell: Windows
&lt;/li&gt;
&lt;li&gt; Before PowerShell: UNIX
&lt;/li&gt;
&lt;li&gt; The Purpose of PowerShell
&lt;/li&gt;
&lt;li&gt; The Timeline of PowerShell
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Characteristics of PowerShell

&lt;ol&gt;
&lt;li&gt; Influences of other shells and programming languages
&lt;/li&gt;
&lt;li&gt; Types
&lt;/li&gt;
&lt;li&gt; Commands, Functions and Methods
&lt;/li&gt;
&lt;li&gt; Scope: dynamic scope
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Criticism
&lt;/li&gt;

&lt;li&gt; Impressions
&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;li&gt; Features

&lt;ol&gt;
&lt;li&gt; Case-Insensitivity of Symbols and Variable names
&lt;/li&gt;
&lt;li&gt; Homoiconicity
&lt;/li&gt;
&lt;li&gt; Type System
&lt;/li&gt;
&lt;li&gt; Lexical Scope and Dynamic Scope

&lt;ol&gt;
&lt;li&gt; Lexical Scope
&lt;/li&gt;
&lt;li&gt; Dynamic Scope
&lt;/li&gt;
&lt;li&gt; Conclusion about Lexical and Dynamic Scope
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; First-class Functions and Closures

&lt;ol&gt;
&lt;li&gt; What is being First-class Citizen?
&lt;/li&gt;
&lt;li&gt; Higher-order Functions
&lt;/li&gt;
&lt;li&gt; Anonymous Functions
&lt;/li&gt;
&lt;li&gt; Closures
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Tail-call Optimization
&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;li&gt; Functionality

&lt;ol&gt;
&lt;li&gt; PowerScheme is a Scheme interpreter

&lt;ol&gt;
&lt;li&gt; Interactive mode
&lt;/li&gt;
&lt;li&gt; Non-interactive mode
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Lexical and Dynamic Scope in PowerScheme
&lt;/li&gt;

&lt;li&gt; Exception Handling
&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;li&gt; Implementation Details

&lt;ol&gt;
&lt;li&gt; The Structure of the Code
&lt;/li&gt;
&lt;li&gt; The Entry Point
&lt;/li&gt;
&lt;li&gt; The Tokenizer
&lt;/li&gt;
&lt;li&gt; The Bag
&lt;/li&gt;
&lt;li&gt; The Parser
&lt;/li&gt;
&lt;li&gt; The Evaluator

&lt;ol&gt;
&lt;li&gt; Types
&lt;/li&gt;
&lt;li&gt; Functions
&lt;/li&gt;
&lt;li&gt; Local Functions and Thunks
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; The Environment

&lt;ol&gt;
&lt;li&gt; Lexical and Dynamic Environments
&lt;/li&gt;
&lt;li&gt; Differences between Lexical and Dynamic Scope
&lt;/li&gt;
&lt;li&gt; Tail Call Optimization
&lt;/li&gt;
&lt;li&gt; Environment Representation
&lt;/li&gt;
&lt;li&gt; Branching Stacks
&lt;/li&gt;
&lt;li&gt; Define-Defines
&lt;/li&gt;
&lt;li&gt; Global and Local Arrays
&lt;/li&gt;
&lt;li&gt; Declare and Update for Lexical and Dynamic variables
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Conclusion about the implementation details
&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;li&gt; What Else PowerScheme can do

&lt;ol&gt;
&lt;li&gt; A REPL
&lt;/li&gt;
&lt;li&gt; Y-Combinator
&lt;/li&gt;
&lt;li&gt; Function Objects
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; What is missing?

&lt;ol&gt;
&lt;li&gt; Vectors
&lt;/li&gt;
&lt;li&gt; Let*
&lt;/li&gt;
&lt;li&gt; Caar and so on
&lt;/li&gt;
&lt;li&gt; Set-car! and Set-cdr!
&lt;/li&gt;
&lt;li&gt; Continuations
&lt;/li&gt;
&lt;li&gt; Exceptions
&lt;/li&gt;
&lt;li&gt; Macros
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Conclusion
&lt;/li&gt;

&lt;li&gt; Bibliography
&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;&lt;a id="orgd7579bd"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;&lt;a id="org15d8262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Hello
&lt;/h2&gt;

&lt;p&gt;Hello.  In this post I would like to describe the Scheme interpreter I&lt;br&gt;
wrote using the PowerShell programming language.  I am still a&lt;br&gt;
beginner when it comes to Lisp and to programming in general, so I&lt;br&gt;
decided to write a simple interpreter.&lt;/p&gt;

&lt;p&gt;This program is open source and everyone is free to do anything with&lt;br&gt;
it.  So here is the link:&lt;br&gt;
&lt;a href="https://github.com/naens/scheme-lisp/tree/master/chap02pwsh" rel="noopener noreferrer"&gt;https://github.com/naens/scheme-lisp/tree/master/chap02pwsh&lt;/a&gt;.  The&lt;br&gt;
license is GNU GPL v3.0.  Of course, nobody will want to buy it or use&lt;br&gt;
it for critical applications, so it's here just for fun.&lt;/p&gt;

&lt;p&gt;&lt;a id="org48adc8a"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Lisp in Small Pieces
&lt;/h2&gt;

&lt;p&gt;I am following the book Lisp in Small Pieces &lt;sup id="520a1805e2957b91051e499a55918b81"&gt;queinnec_lisp&lt;/sup&gt; (often abbreviated as&lt;br&gt;
LiSP), written by Christian Queinnec.  It shows how Scheme and Lisp in&lt;br&gt;
general work and how to write different kinds of Scheme interpreters&lt;br&gt;
and compilers.&lt;/p&gt;

&lt;p&gt;&lt;a id="org9f99bea"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Repetition
&lt;/h2&gt;

&lt;p&gt;In the book, on page xiv of my edition, the author says that "Good&lt;br&gt;
teaching demands a certain amount of repetition".  So, in order to avoid&lt;br&gt;
cheating, I have to implement Scheme as many times as the author does.&lt;/p&gt;

&lt;p&gt;&lt;a id="org046c284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Different Languages
&lt;/h2&gt;

&lt;p&gt;Because I would like to avoid cheating as much as possible, simply&lt;br&gt;
rewriting the interpreter several times is obviously not enough.  With&lt;br&gt;
time it will all start to seem the same and I will remember the&lt;br&gt;
variables, the files, how things are implemented, and so on.  In order&lt;br&gt;
to avoid this problem, I decided to make it so that each time it would&lt;br&gt;
feel as though I were implementing the interpreter or the compiler for&lt;br&gt;
the first time.  Is there a way to do this?  Of course.  The solution&lt;br&gt;
is to implement each interpreter and compiler in a different language.&lt;br&gt;
That's why the interpreter I wrote for the first chapter was in&lt;br&gt;
Kotlin.  I had never written anything in Kotlin, let alone a&lt;br&gt;
Scheme interpreter!  At the same time, it was a good opportunity to&lt;br&gt;
learn a new programming language and a new development environment (I&lt;br&gt;
used IntelliJ IDEA, which was not totally new to me, because I had&lt;br&gt;
used it with Java previously, but with Kotlin it was definitely new).&lt;/p&gt;

&lt;p&gt;I have just finished the second chapter of the book, which was about&lt;br&gt;
different environments.  It talked mainly about the function&lt;br&gt;
environment and the dynamic environment.&lt;/p&gt;

&lt;p&gt;It's interesting how much more complicated everything gets simply by&lt;br&gt;
adding a separate namespace for functions, when the only advantage&lt;br&gt;
is a gain in performance.&lt;/p&gt;

&lt;p&gt;&lt;a id="orge3c788a"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  About PowerShell
&lt;/h2&gt;

&lt;p&gt;So, this second chapter I decided to do in PowerShell.  It's&lt;br&gt;
interesting that it tries to fill the gap between shells and&lt;br&gt;
programming languages.  The result, I would say, is closer&lt;br&gt;
to the latter.  It also attempts to solve problems present in&lt;br&gt;
the UNIX shells, like the difficulties with string literals and the&lt;br&gt;
text-only pipe, which makes programs interdependent.  It also allows&lt;br&gt;
the use of arithmetic expressions, which was a problem in many earlier&lt;br&gt;
shells.&lt;/p&gt;

&lt;p&gt;Because PowerShell became available on Linux and is now stable, I&lt;br&gt;
decided it was a good opportunity to learn something completely new&lt;br&gt;
to me, with nothing quite like it among the other mainstream&lt;br&gt;
languages.&lt;/p&gt;

&lt;p&gt;In the end, it seems PowerShell is a good programming language, and&lt;br&gt;
it has a lot of features that make writing a Scheme interpreter&lt;br&gt;
in it really easy.  It has all the libraries and data structures&lt;br&gt;
needed, it doesn't expect the programmer to free all the memory used,&lt;br&gt;
and, in my opinion, it's actually fast.  I didn't compare it with&lt;br&gt;
other programming languages, but for an interpreter interpreting&lt;br&gt;
an interpreter the speed is amazing.&lt;/p&gt;

&lt;p&gt;&lt;a id="orga1b3de3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Scheme Projects
&lt;/h2&gt;

&lt;p&gt;After I finish this project, which I decided to call PowerScheme for&lt;br&gt;
obvious reasons, I will continue to write interpreters in other&lt;br&gt;
languages, and I know that it's perfectly possible to write them all&lt;br&gt;
without using the same language twice.  I think I'll try to avoid&lt;br&gt;
languages that need a lot of work to be mastered; I'm thinking of&lt;br&gt;
languages like APL, Malbolge, Haskell, and similar.  But&lt;br&gt;
it's almost certain that I will use Prolog, assembly, and PL/M at some&lt;br&gt;
point.&lt;/p&gt;

&lt;p&gt;&lt;a id="org1e6c91b"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  About Lisp
&lt;/h2&gt;

&lt;p&gt;So this program is an implementation of Scheme, a programming language&lt;br&gt;
of the Lisp family.  The first implementations of Lisp were&lt;br&gt;
interpreters; later, compilers were written as well.  Lisp has a&lt;br&gt;
very rich history, mainly because it has always been a language in&lt;br&gt;
which people could experiment.  The language is said to be&lt;br&gt;
homoiconic, which means that the internal representation of data&lt;br&gt;
matches its lexical representation.  This makes it easy to manipulate&lt;br&gt;
the language as data.  For this reason, a lot of things, like code&lt;br&gt;
generators and macros, can be easily written in Lisp.&lt;/p&gt;

&lt;p&gt;And because of this richness, only a small part of everything that&lt;br&gt;
ever existed in the history of Lisp has to be implemented.  On the&lt;br&gt;
other hand, you can decide to implement anything you want.  For&lt;br&gt;
example, if you find some obscure old feature that existed somewhere&lt;br&gt;
in a programming language (it doesn't even have to be Lisp), you can&lt;br&gt;
implement it.&lt;/p&gt;

&lt;p&gt;In my case, such a feature was dynamic variables.  Following the book, I&lt;br&gt;
implemented both lexical and dynamic environments, but I decided not to&lt;br&gt;
make it exactly like in the book: the user has the choice to&lt;br&gt;
declare variables as either lexical or dynamic.  My implementation, in my&lt;br&gt;
opinion, is closer to Common Lisp, but I will say more about it later,&lt;br&gt;
in the section on the functionality.&lt;/p&gt;

&lt;p&gt;There are other features that I did not implement.  For example,&lt;br&gt;
vectors: they are not really needed in Lisp.  They only exist to&lt;br&gt;
make certain operations faster, and everything they do can be done&lt;br&gt;
with lists.&lt;/p&gt;

&lt;p&gt;Another thing that I didn't implement is continuations.  I thought they&lt;br&gt;
were important to implement, but when I had implemented the whole&lt;br&gt;
language and thought about how I would add continuations, I understood&lt;br&gt;
that a lot would have to be rewritten.  So that means, first, no&lt;br&gt;
continuations this time; second, if you want continuations in your&lt;br&gt;
language, you should think ahead about how to implement them, preferably&lt;br&gt;
before even starting to write code.  There will be a small section about&lt;br&gt;
continuations, but I will describe them in more detail when I&lt;br&gt;
write about an interpreter where they are actually implemented.&lt;/p&gt;

&lt;p&gt;By the way, I didn't write a post like this about the first&lt;br&gt;
interpreter, the one written in Kotlin, because it's really&lt;br&gt;
simple and basic.  Also, I didn't yet have the idea of writing a&lt;br&gt;
post about each interpreter I write for this book, and going back in&lt;br&gt;
time and trying to write a text as though I had just written it doesn't&lt;br&gt;
seem right either.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgb67babf"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Lisp and Scheme
&lt;/h1&gt;

&lt;p&gt;&lt;a id="org871d40d"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;The language that this interpreter, PowerScheme, interprets is&lt;br&gt;
Scheme.  Scheme is a programming language born from Lisp.  It&lt;br&gt;
can be said that Scheme is a dialect of Lisp, as it continues the&lt;br&gt;
tradition of Lisp and has many traits in common with other languages&lt;br&gt;
of the Lisp family; but at the same time, Scheme stands a little apart&lt;br&gt;
from other Lisps.  The creators of Scheme were not really preoccupied&lt;br&gt;
with whether existing Lisp programs could be adapted to Scheme.  They&lt;br&gt;
also decided to drop a lot of features that had been introduced at some&lt;br&gt;
point but were not really useful in practice.  Indeed, Scheme&lt;br&gt;
is often considered a very compact language with a short&lt;br&gt;
specification.  So instead of making something complex but useful for&lt;br&gt;
practical tasks, they made a language that looks better from the&lt;br&gt;
theoretical point of view and is better suited to exploring theoretical&lt;br&gt;
concepts.&lt;/p&gt;

&lt;p&gt;Scheme is considered a functional programming language, although&lt;br&gt;
the notion of a functional programming language has evolved over&lt;br&gt;
time.  At first it simply meant that the language can manipulate&lt;br&gt;
functions.  It seems this notion acquired its modern meaning around the&lt;br&gt;
time Scheme was released in 1975, so Scheme is one of the first functional&lt;br&gt;
programming languages in the modern sense of the term.  Today, being&lt;br&gt;
functional means focusing on immutability and avoiding side effects,&lt;br&gt;
jumps, and loops.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgeec54ca"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Short History of Lisp
&lt;/h2&gt;

&lt;p&gt;So, Scheme is a descendant of the Lisp family of programming&lt;br&gt;
languages.  It inherited several of its features.  For example, the&lt;br&gt;
syntax, the names and the meanings of many of its keywords come from&lt;br&gt;
Lisp.&lt;/p&gt;

&lt;p&gt;Lisp is one of the first programming languages ever created.  At&lt;br&gt;
least, it is the first successful programming language that was not an&lt;br&gt;
abstraction of the existing hardware: it was based on theory, namely&lt;br&gt;
logic and the lambda calculus.&lt;/p&gt;

&lt;p&gt;Lisp appeared in 1958.  The only other language from that era that&lt;br&gt;
still survives is FORTRAN.  Because of its origins in logic, Lisp helped&lt;br&gt;
to define several programming concepts that were first introduced&lt;br&gt;
in Lisp.  These include, for example, closures, macros, anonymous&lt;br&gt;
functions, and the manipulation of functions as data in general.&lt;/p&gt;

&lt;p&gt;Over time, many of the ideas that were already in Lisp were&lt;br&gt;
incorporated into other families of languages.  It's interesting that&lt;br&gt;
many languages are still based on ideas that were already there in&lt;br&gt;
the 1960s.  For example, there was a comparison of the Go programming&lt;br&gt;
language with Algol 68 &lt;sup id="216a89084fe47ba313861b0a64a9dbd7"&gt;given_re:_2009&lt;/sup&gt;, and it seems Go doesn't&lt;br&gt;
bring anything new.  So it can be said that the evolution of&lt;br&gt;
programming languages since the beginning of the 1970s is just a&lt;br&gt;
re-combination of ideas that existed before then, into which&lt;br&gt;
more and more ideas from Lisp have been incorporated.&lt;/p&gt;

&lt;p&gt;&lt;a id="orga8b253c"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Short History of Scheme
&lt;/h2&gt;

&lt;p&gt;Scheme appeared in 1975 as an attempt to combine everything good that&lt;br&gt;
existed in Lisp with the lexical scope that originated in Algol, while&lt;br&gt;
eliminating, at the same time, everything that was not needed.&lt;/p&gt;

&lt;p&gt;Scheme had a lot of success because it could be used to teach&lt;br&gt;
programming concepts.  For many years, in many universities, it was used&lt;br&gt;
as the language for introductory programming courses.  This is&lt;br&gt;
because of its advantages as a language where the machine is much less&lt;br&gt;
visible than in other languages (after all, many languages of the Algol&lt;br&gt;
family, like C or Pascal, are just abstractions of the machine&lt;br&gt;
language).  So it helped students focus on algorithms and theoretical&lt;br&gt;
concepts without the need to explain the details of how pointers or&lt;br&gt;
two's complement integers work.&lt;/p&gt;

&lt;p&gt;Today there exist several implementations of Scheme.  The most&lt;br&gt;
important are Guile, which is used for the GNU project, Racket, one of&lt;br&gt;
the most popular implementations of Scheme, and Chicken.&lt;/p&gt;

&lt;p&gt;Scheme today has a standard, defined in the so-called&lt;br&gt;
&lt;em&gt;Revised&lt;sup&gt;n&lt;/sup&gt; Reports on the Algorithmic Language Scheme&lt;/em&gt; (RnRS), so the&lt;br&gt;
implementations that follow it are mostly compatible with each other.&lt;/p&gt;

&lt;p&gt;&lt;a id="org8d256bd"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Scheme as a Language to write Interpreters for
&lt;/h2&gt;

&lt;p&gt;Writing a Scheme interpreter is a popular exercise.  There exist many&lt;br&gt;
implementations in many languages.  This tradition began in the first&lt;br&gt;
years of Lisp, when the author of the first specification of Lisp, John&lt;br&gt;
McCarthy, realized that it would not be difficult to write an&lt;br&gt;
interpreter for Lisp in Lisp &lt;sup id="3beab3c5c207c3c1ab618cbd0a12a0a8"&gt;graham_roots_2002&lt;/sup&gt;.  Later, Lisp became&lt;br&gt;
more and more complex and feature-rich, so what could easily be&lt;br&gt;
re-implemented became less and less similar to the actual Lisp.&lt;/p&gt;

&lt;p&gt;One of the goals of Scheme was to return to this simplicity, and that's&lt;br&gt;
the reason why Scheme became such a popular programming language to write&lt;br&gt;
interpreters for.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgd5e6e91"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Scheme Properties and Features
&lt;/h2&gt;

&lt;p&gt;&lt;a id="orgc7f0c08"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Interpreted / Compiled
&lt;/h3&gt;

&lt;p&gt;When Lisp appeared, it was at first an interpreted language.&lt;br&gt;
But then people began to write compilers for Lisp.  Because Lisp can&lt;br&gt;
generate new code during execution and then execute it, code can&lt;br&gt;
also be interpreted at run time.  So some parts of a program&lt;br&gt;
are compiled, while other parts are interpreted.&lt;/p&gt;

&lt;p&gt;Scheme, too, is more often compiled than purely interpreted.  Very&lt;br&gt;
often, Lisps are not compiled directly to machine language but to&lt;br&gt;
a specific byte-code.  Racket and Guile use byte-codes, and Chicken&lt;br&gt;
compiles to C, so it can generate native machine-code executables.&lt;/p&gt;

&lt;p&gt;&lt;a id="org0129870"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Syntax: S-Expressions, Case-sensitiveness, Parentheses
&lt;/h3&gt;

&lt;p&gt;The syntax of Scheme is based on s-expressions, like that of most other&lt;br&gt;
Lisps.  It continues the tradition of the &lt;code&gt;car&lt;/code&gt; and &lt;code&gt;cdr&lt;/code&gt; accessors for&lt;br&gt;
a &lt;code&gt;cons&lt;/code&gt; and has many keywords in common with other Lisps, such as&lt;br&gt;
&lt;code&gt;lambda&lt;/code&gt;, &lt;code&gt;let&lt;/code&gt;, and &lt;code&gt;cond&lt;/code&gt;.&lt;/p&gt;
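
&lt;p&gt;For readers who have not seen a Lisp before: a cons is just a pair,&lt;br&gt;
and car and cdr read its two halves; lists are chains of such pairs.&lt;br&gt;
A minimal Python sketch of the idea (not Scheme itself):&lt;/p&gt;

```python
# A cons cell is just a pair; car/cdr read its halves.  Lists are chains
# of cons cells ending in an empty-list marker (None here).
def cons(a, d):
    return (a, d)

def car(pair):
    return pair[0]

def cdr(pair):
    return pair[1]

# (1 2 3) is built as (cons 1 (cons 2 (cons 3 '())))
lst = cons(1, cons(2, cons(3, None)))
assert car(lst) == 1
assert car(cdr(lst)) == 2
assert cdr(cdr(cdr(lst))) is None
```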

&lt;p&gt;Scheme can be case-sensitive or not.  I prefer to make it&lt;br&gt;
case-insensitive because I have used Common Lisp a lot and I think &lt;code&gt;IF&lt;/code&gt; and&lt;br&gt;
&lt;code&gt;LAMBDA&lt;/code&gt; should not be interpreted differently from &lt;code&gt;if&lt;/code&gt; and &lt;code&gt;lambda&lt;/code&gt;.&lt;br&gt;
Scheme, too, was at first case-insensitive, but later a standard&lt;br&gt;
(R6RS, published in 2007) specified that the language&lt;br&gt;
is to be case-sensitive.  In many implementations it is an option that&lt;br&gt;
can be enabled or disabled.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgf29bc3f"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Types: data and functions
&lt;/h3&gt;

&lt;p&gt;Scheme is a dynamically, strongly typed language, which means that&lt;br&gt;
even though types are not specified statically for everything,&lt;br&gt;
they are present during execution, and a value cannot suddenly be&lt;br&gt;
interpreted as something else at run time (as opposed to C, where&lt;br&gt;
it's possible and totally normal to read a string as an integer, read&lt;br&gt;
a floating point number as a pointer, and so on).&lt;/p&gt;

&lt;p&gt;An interesting feature of Scheme (and of other Lisps) is that it&lt;br&gt;
instantiates functions together with their definition environment.  So the&lt;br&gt;
same function expression evaluated twice can give two different functions,&lt;br&gt;
depending on the environment of evaluation.  That makes it possible for&lt;br&gt;
functions to hold data and act as objects.&lt;/p&gt;
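
&lt;p&gt;This is the closure mechanism.  A small Python sketch of the same idea&lt;br&gt;
(Python closures behave similarly enough for the illustration; the&lt;br&gt;
counter example is made up, not taken from PowerScheme):&lt;/p&gt;

```python
# Each call to make_counter evaluates the inner function definition in a
# fresh environment, so the two resulting functions hold separate state.
def make_counter():
    count = 0
    def counter():
        nonlocal count
        count += 1
        return count
    return counter

c1 = make_counter()
c2 = make_counter()
assert c1() == 1
assert c1() == 2
assert c2() == 1   # c2 captured its own environment: the function acts as an object
```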

&lt;p&gt;&lt;a id="orgd2a1f59"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Scope: lexical scope
&lt;/h3&gt;

&lt;p&gt;Scheme, like most programming languages today, uses lexical scope.&lt;br&gt;
Lisp code is usually more readable in a lexically scoped&lt;br&gt;
Lisp, because it is easier to see which instance of a variable refers&lt;br&gt;
to which binding.  The reason it is this way for Lisp is that&lt;br&gt;
functional languages are traditionally read declaratively: it should be&lt;br&gt;
possible to say what a program does without simulating an execution.&lt;br&gt;
This is the principle of declarative programming.  Dynamic scope goes&lt;br&gt;
against it by its very definition: a reference resolves to whatever&lt;br&gt;
binding happens to be in effect during that particular execution.&lt;br&gt;
So, for a Lisp that positions itself closer to the functional style,&lt;br&gt;
lexical scope is much more natural.&lt;/p&gt;
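
&lt;p&gt;The difference can be shown in a few lines of Python, which is also&lt;br&gt;
lexically scoped (the names below are made up for the illustration):&lt;br&gt;
a function sees the binding from where it was written, not from its&lt;br&gt;
caller:&lt;/p&gt;

```python
# Under lexical scope, show() refers to the x visible where show() is
# written (the module level), not to any x in its caller.
x = "outer"

def show():
    return x

def caller():
    x = "caller-local"   # a new, unrelated local binding
    return show()

assert caller() == "outer"   # dynamic scope would have given "caller-local"
```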

&lt;p&gt;&lt;a id="org0dff23a"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Evaluation: eager evaluation
&lt;/h3&gt;

&lt;p&gt;Scheme uses eager evaluation.  That means that before a function is&lt;br&gt;
called, all of its arguments have to be evaluated.  Some programming&lt;br&gt;
languages do otherwise; that is called lazy evaluation.&lt;br&gt;
Normally, for purely functional languages, lazy evaluation and eager&lt;br&gt;
evaluation should return the same value, except in the cases where&lt;br&gt;
evaluation is impossible.  In those cases, lazy evaluation has a higher&lt;br&gt;
chance of returning a value, because some things are not evaluated if&lt;br&gt;
they don't have to be.  But as lazy evaluation is more difficult to&lt;br&gt;
implement and is generally slower, it is not often used in programming&lt;br&gt;
languages.&lt;/p&gt;

&lt;p&gt;Like many other languages, Scheme has some constructs that behave in a&lt;br&gt;
way similar to lazy evaluation.  These are the special forms &lt;code&gt;if&lt;/code&gt;, &lt;code&gt;and&lt;/code&gt;,&lt;br&gt;
&lt;code&gt;or&lt;/code&gt;, &lt;code&gt;case&lt;/code&gt;, and &lt;code&gt;cond&lt;/code&gt;.  They can stop evaluation under&lt;br&gt;
specific conditions and return a value before evaluating all of their&lt;br&gt;
arguments.&lt;/p&gt;
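
&lt;p&gt;The contrast is easy to demonstrate (a Python sketch with made-up&lt;br&gt;
helper names; Python is also eager, and its or short-circuits much like&lt;br&gt;
Scheme's):&lt;/p&gt;

```python
# Function arguments are evaluated eagerly, before the call; a
# short-circuiting form can skip evaluation of its later operands.
calls = []

def probe(name, value):
    calls.append(name)   # record that this operand was evaluated
    return value

def my_or(a, b):         # an ordinary function: both arguments evaluated
    return a or b

my_or(probe("a", True), probe("b", False))
assert calls == ["a", "b"]             # eager: both operands evaluated

calls.clear()
probe("a", True) or probe("b", False)  # the built-in `or` short-circuits
assert calls == ["a"]                  # the second operand never ran
```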

&lt;p&gt;&lt;a id="org8264fdd"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Garbage collection
&lt;/h3&gt;

&lt;p&gt;Like most languages of the Lisp family, Scheme typically uses a&lt;br&gt;
garbage collector.  The reason is that it creates a lot of &lt;code&gt;cons&lt;/code&gt;&lt;br&gt;
structures and leaves them in memory even when they are not used&lt;br&gt;
anymore.  In order not to run out of available memory, and not to use&lt;br&gt;
much more than needed, Scheme uses a garbage collector that returns&lt;br&gt;
these unused structures to the pool of memory available for later use.&lt;/p&gt;

&lt;p&gt;&lt;a id="org2b3b521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  More about Scheme
&lt;/h2&gt;

&lt;p&gt;In the end, after writing this interpreter, I see why a lot of people&lt;br&gt;
prefer Scheme over Common Lisp.  Common Lisp being a Lisp-2 is a&lt;br&gt;
problem.  It has some ugly characteristics that come from the fact&lt;br&gt;
that the first position of a list gets a different evaluator&lt;br&gt;
during evaluation, the function evaluator.  So it has to have a&lt;br&gt;
&lt;code&gt;function&lt;/code&gt; wrapper in order to pass functions around as data.  It's much less&lt;br&gt;
clean.  I think it's unfortunate that they decided to make that a&lt;br&gt;
standard instead of making it like Scheme, which is much cleaner.  But&lt;br&gt;
Scheme is still strong.  There are still a lot of people every year&lt;br&gt;
learning it and writing interpreters for it, even though it is used less&lt;br&gt;
than before in education.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgfdd0515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  About PowerShell
&lt;/h1&gt;

&lt;p&gt;&lt;a id="orge63c38f"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As the programming language for this project, I decided to use PowerShell.&lt;br&gt;
It is a modern shell written for Windows, which has some interesting&lt;br&gt;
features.  Many people say that it's a nice programming language, so I&lt;br&gt;
decided to use it once in order to find out more about it.&lt;/p&gt;

&lt;p&gt;Now it works on Windows and Linux, which is my main Operating System,&lt;br&gt;
and is easy enough to install.  When the source code was released it&lt;br&gt;
was not very stable yet (I tried to install it on Arch Linux, but it&lt;br&gt;
didn't work).  Now the installation is easy and it works without&lt;br&gt;
needing to install additional packages, plugins, libraries and&lt;br&gt;
so on.&lt;/p&gt;

&lt;p&gt;The main domain of use of PowerShell is the administration of&lt;br&gt;
Windows machines, and several of its qualities reflect that.  For&lt;br&gt;
example, the syntax is similar to both the UNIX shell and C#, in order&lt;br&gt;
to be familiar to both UNIX and .NET developers.  It can interact with&lt;br&gt;
the components and the APIs of the Operating System, which makes it a&lt;br&gt;
useful tool.&lt;/p&gt;

&lt;p&gt;&lt;a id="org52dafc1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  History of PowerShell
&lt;/h2&gt;

&lt;p&gt;&lt;a id="orgde2c794"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Before PowerShell: Windows
&lt;/h3&gt;

&lt;p&gt;The need for something like PowerShell was there from the early years&lt;br&gt;
of Windows, when it became sophisticated enough for daily use, so many&lt;br&gt;
administrative tasks needed to be automated.&lt;/p&gt;

&lt;p&gt;Before PowerShell, another command line shell existed: &lt;code&gt;cmd.exe&lt;/code&gt;.  But it&lt;br&gt;
had its own problems.  First of all, it was not written with Windows&lt;br&gt;
in mind, because it's nothing more than an adaptation of the&lt;br&gt;
&lt;code&gt;COMMAND.COM&lt;/code&gt; shell that existed on MS-DOS.  So, of course, it could&lt;br&gt;
accomplish certain things, but its features remained at the MS-DOS&lt;br&gt;
level, and nothing had been done to make it especially useful with&lt;br&gt;
Windows.&lt;/p&gt;

&lt;p&gt;Here are some examples of the problems of &lt;code&gt;cmd.exe&lt;/code&gt;.  When its&lt;br&gt;
predecessor, &lt;code&gt;COMMAND.COM&lt;/code&gt;, appeared, it was already outdated: it used&lt;br&gt;
&lt;code&gt;GOTO&lt;/code&gt; for everything and didn't have functions.  &lt;code&gt;cmd.exe&lt;/code&gt; tried to&lt;br&gt;
improve the situation, but it still remained limited.&lt;/p&gt;

&lt;p&gt;One explanation of why &lt;code&gt;cmd.exe&lt;/code&gt; is so uncomfortable for programming&lt;br&gt;
is that it was at first primarily meant to execute &lt;code&gt;.BAT&lt;/code&gt; files, which&lt;br&gt;
were supposed to be lists of commands, not real programs.  Additional&lt;br&gt;
features then had to be introduced while maintaining backward&lt;br&gt;
compatibility.  With time, this made it impossible to improve, so it&lt;br&gt;
remained almost unchanged from the MS-DOS era.&lt;/p&gt;

&lt;p&gt;&lt;a id="org587a8f9"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Before PowerShell: UNIX
&lt;/h3&gt;

&lt;p&gt;In the UNIX universe the situation was somewhat better.  Since the&lt;br&gt;
introduction of the Bourne shell in 1977, UNIX had had a powerful&lt;br&gt;
scripting environment, with loops, conditional expressions, functions,&lt;br&gt;
and variables.&lt;/p&gt;

&lt;p&gt;By combining the functionality of the shell with the multitasking&lt;br&gt;
capabilities of the Operating System, and by using external programs,&lt;br&gt;
it was possible to write relatively complex programs fairly easily.&lt;/p&gt;

&lt;p&gt;By the time Windows appeared, the UNIX shell had evolved, becoming&lt;br&gt;
easier to use and gaining features.  The Korn Shell appeared, which&lt;br&gt;
was a big improvement in usability over the original Bourne Shell.&lt;/p&gt;

&lt;p&gt;Unfortunately it had its own problems, even on its native Operating&lt;br&gt;
System.  It could only manipulate text, and the common way to do&lt;br&gt;
things was to extract a specific substring by filtering the standard&lt;br&gt;
output of an external program.  This made the scripts and the programs&lt;br&gt;
interdependent: programs could not easily be modified without breaking&lt;br&gt;
the scripts, so a program could not be improved, and a field could not&lt;br&gt;
easily be added to or removed from its output.  I think this is part&lt;br&gt;
of what led to the POSIX specification, so that script writers could&lt;br&gt;
be sure that the output of a given program would not change in the&lt;br&gt;
future.&lt;/p&gt;

&lt;p&gt;On the Windows platform the problems were even bigger.  It was not&lt;br&gt;
usual to have GUI programs for administration tasks&lt;br&gt;
&lt;sup id="220428e48f1fa84778351fd1ac9437e5"&gt;jones_learn_2017&lt;/sup&gt;.  Administration was usually done using libraries and&lt;br&gt;
APIs.  There were attempts to use UNIX shells; for example, it is&lt;br&gt;
known that there was a Korn Shell port, but in the end this was not&lt;br&gt;
what could solve the problems.&lt;/p&gt;

&lt;p&gt;&lt;a id="org3e12247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Purpose of PowerShell
&lt;/h3&gt;

&lt;p&gt;So it was decided that Windows needed a new scripting environment,&lt;br&gt;
different both from &lt;code&gt;cmd.exe&lt;/code&gt; and from UNIX shells.&lt;/p&gt;

&lt;p&gt;Having learned from the experience of various other shells, the&lt;br&gt;
goal for the new shell was to give Windows some real scripting&lt;br&gt;
capabilities.  The new scripting language was thus supposed to act as&lt;br&gt;
an interface to Windows functions and to the data types of the&lt;br&gt;
libraries and the API.&lt;/p&gt;

&lt;p&gt;The way PowerShell was made reflects its purpose: it can access&lt;br&gt;
Operating System components, and it supports objects, which also makes&lt;br&gt;
it easier to work with the OS, since .NET is built around C#, an&lt;br&gt;
object-oriented language.&lt;/p&gt;

&lt;p&gt;Another way it tried to improve was by moving away from being&lt;br&gt;
text-based.  Of course, being a shell, a lot of things are still text:&lt;br&gt;
command-line arguments, standard input, and standard output, for&lt;br&gt;
example.  But as much as possible it works with objects.  One&lt;br&gt;
interesting example is the pipeline: by passing objects from one&lt;br&gt;
command to another through the pipe, the need to parse the input from&lt;br&gt;
the pipeline is avoided.&lt;/p&gt;
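&lt;p&gt;As an illustration, here is a small pipeline sketch using the&lt;br&gt;
standard &lt;code&gt;Get-Process&lt;/code&gt;, &lt;code&gt;Where-Object&lt;/code&gt;, and &lt;code&gt;Select-Object&lt;/code&gt; cmdlets&lt;br&gt;
(the CPU threshold is an arbitrary value): the filter reads the &lt;code&gt;CPU&lt;/code&gt;&lt;br&gt;
property of each object directly, with no text parsing.&lt;/p&gt;

```powershell
# Objects, not text, flow through the pipe: Where-Object inspects
# the CPU property of each process object directly.
Get-Process |
    Where-Object { $_.CPU -gt 100 } |
    Select-Object Name, Id, CPU
```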

&lt;p&gt;&lt;a id="org1710db9"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Timeline of PowerShell
&lt;/h3&gt;

&lt;p&gt;The history of PowerShell began in 2002 with the so-called Monad&lt;br&gt;
Manifesto &lt;sup id="456b887553631891bf21f8255f316cf5"&gt;the_devops_collective_monad_2016&lt;/sup&gt;, a text describing the need for a new shell for the Windows&lt;br&gt;
Operating System, what its features should be, and what mistakes&lt;br&gt;
should be avoided.&lt;/p&gt;

&lt;p&gt;The first release, PowerShell 1.0, came out in the Windows Vista&lt;br&gt;
era, when it was still an optional install.  It became a part of the&lt;br&gt;
OS in 2009, when it was included in Windows 7.&lt;/p&gt;

&lt;p&gt;In 2016 it was open-sourced, and in 2018 it became possible to use&lt;br&gt;
PowerShell on Linux.  It is still not clear why they made it open&lt;br&gt;
source: possibly they wanted to attract new users, or to show that&lt;br&gt;
their software is written in such a way that it can easily be ported;&lt;br&gt;
or perhaps, in case they ever want to switch Windows to a Linux&lt;br&gt;
kernel, they can be sure that enough people have tried it and that&lt;br&gt;
there will be no big problems with it on a different kernel.&lt;/p&gt;

&lt;p&gt;For me it's a great opportunity to try something I would not have&lt;br&gt;
tried otherwise: I work on Linux, and it's not worth installing&lt;br&gt;
Windows in a VM or on a real machine only in order to try a&lt;br&gt;
programming language.  I like it when programming language designers&lt;br&gt;
try new or different approaches, so I wanted to try PowerShell from&lt;br&gt;
the moment I heard it would work on Linux.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgfed5724"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Characteristics of PowerShell
&lt;/h2&gt;

&lt;p&gt;&lt;a id="orga2bc134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Influences of other shells and programming languages
&lt;/h3&gt;

&lt;p&gt;PowerShell was influenced by several programming languages that&lt;br&gt;
came before it.  The first influence is the UNIX shells, namely the&lt;br&gt;
Bourne shell and the Korn shell.&lt;/p&gt;

&lt;p&gt;It borrows from them the syntax for variables, which have to be&lt;br&gt;
prefixed with a &lt;code&gt;$&lt;/code&gt; sign.  To make it more consistent, the prefix is&lt;br&gt;
required every time a variable is used, as opposed to &lt;em&gt;zsh&lt;/em&gt;, for&lt;br&gt;
example, which on the contrary tries to let the user avoid typing &lt;code&gt;$&lt;/code&gt;&lt;br&gt;
whenever possible.&lt;/p&gt;

&lt;p&gt;Another influence was Perl.  Perl was conceived as a programming&lt;br&gt;
language based on the syntax of the UNIX shell, but its goal was to be&lt;br&gt;
usable as a real programming language, and so it was made easier to&lt;br&gt;
use.  Its main features were regular-expression matching and&lt;br&gt;
substitution.  It can, for example, use a boolean match test in an&lt;br&gt;
&lt;code&gt;if&lt;/code&gt;, which is very valuable.  This is something PowerShell can also&lt;br&gt;
do, while still remaining in the domain of shells.&lt;/p&gt;
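&lt;p&gt;A minimal sketch of such a test in PowerShell (the sample string&lt;br&gt;
is invented; on success, &lt;code&gt;-match&lt;/code&gt; fills the automatic &lt;code&gt;$Matches&lt;/code&gt;&lt;br&gt;
variable with the capture groups):&lt;/p&gt;

```powershell
$line = "error 404: not found"
# -match is a boolean test usable directly in an if, as in Perl.
if ($line -match 'error (\d+)') {
    Write-Output "error code: $($Matches[1])"   # prints "error code: 404"
}
```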

&lt;p&gt;The third influence was C#.  The authors knew that a lot of Windows&lt;br&gt;
users work with C#, and so they borrowed several syntactic elements from C#&lt;br&gt;
(for example, classes and arrays).&lt;/p&gt;

&lt;p&gt;&lt;a id="org103f2e0"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Types
&lt;/h3&gt;

&lt;p&gt;Regarding types, I think PowerShell is more on the strict side&lt;br&gt;
compared to other shells, which is not a bad thing.  The main rule is&lt;br&gt;
that when a conversion is made, no data should be lost (except when&lt;br&gt;
floating point is involved).&lt;/p&gt;

&lt;p&gt;This makes programming easier.  When the data type is checked and&lt;br&gt;
something has the wrong type, no conversion is forced, and an error is&lt;br&gt;
reported.  Another advantage is that it becomes easier to have an&lt;br&gt;
array that contains objects or arrays, or an object that contains an&lt;br&gt;
array.  In some languages, like the UNIX shell, it becomes really&lt;br&gt;
complicated if you want to do something similar.&lt;/p&gt;

&lt;p&gt;Another interesting type-related feature is the support for&lt;br&gt;
objects.  Even though PowerShell is not a fully object-oriented&lt;br&gt;
language (its authors call it an &lt;em&gt;object-based&lt;/em&gt; language), having&lt;br&gt;
objects really helps make programs more structured.  Objects are like&lt;br&gt;
structs that can have functions, called methods, attached to them.&lt;/p&gt;

&lt;p&gt;The support for objects is an important feature of PowerShell.&lt;br&gt;
First, it allows interaction with the Operating System: a good part of&lt;br&gt;
Windows has an object-oriented interface, so having objects means that&lt;br&gt;
the translation between PowerShell and the Operating System is easy.&lt;br&gt;
Second, combining objects with the pipeline makes it easy to pass data&lt;br&gt;
from one command to another.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgbab64fd"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Commands, Functions and Methods
&lt;/h3&gt;

&lt;p&gt;PowerShell has two kinds of things that can be "called" or&lt;br&gt;
"invoked": commands and methods.  Commands are standalone, and methods&lt;br&gt;
are part of a class and are invoked on an object.&lt;/p&gt;

&lt;p&gt;There are four kinds of commands.  The first is called a &lt;em&gt;cmdlet&lt;/em&gt;,&lt;br&gt;
pronounced "command-let".  It is a function that is compiled into a&lt;br&gt;
.NET class and made built-in.  It is the fastest kind of command&lt;br&gt;
because it is compiled.  I think it's an interesting feature: very few&lt;br&gt;
shells allow the user to extend the set of built-in commands.&lt;/p&gt;

&lt;p&gt;Cmdlets follow a special naming convention, &lt;code&gt;&amp;lt;Verb&amp;gt;-&amp;lt;Name&amp;gt;&lt;/code&gt;: a verb&lt;br&gt;
followed by a noun, both starting with a capital letter and separated&lt;br&gt;
by a dash.&lt;/p&gt;
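&lt;p&gt;Some standard cmdlet names that follow this convention:&lt;/p&gt;

```powershell
Get-Process          # list running processes
Get-ChildItem        # list the contents of a directory (like ls)
Set-Location C:\     # change the current directory (like cd)
Get-Verb             # list the approved verbs for cmdlet names
```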

&lt;p&gt;The second kind of command is the function: user-defined functions&lt;br&gt;
that appear in PowerShell scripts.&lt;/p&gt;

&lt;p&gt;The third kind of command is the script command, where a whole&lt;br&gt;
script is treated as a function.  It's similar to bash: you can call a&lt;br&gt;
file with arguments, and the file will receive the arguments given by&lt;br&gt;
the user, as if the whole script were one big function.&lt;/p&gt;

&lt;p&gt;The fourth kind of command is the native command.  These are files&lt;br&gt;
completely unrelated to PowerShell that happen to be executable on the&lt;br&gt;
Operating System, so PowerShell can ask the Operating System to&lt;br&gt;
execute them.  It's also a concept that UNIX and Linux users know: it&lt;br&gt;
corresponds to calling an external binary, like &lt;code&gt;cat&lt;/code&gt; or &lt;code&gt;mknod&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a id="org544a0a6"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Scope: dynamic scope
&lt;/h3&gt;

&lt;p&gt;From the point of view of scope, PowerShell continues the tradition&lt;br&gt;
of dynamically scoped shells.  The authors say that they considered&lt;br&gt;
different strategies, but chose to make it dynamically scoped anyway.&lt;br&gt;
There must be a reason why languages that deal with something related&lt;br&gt;
to systems so often end up being dynamically scoped.&lt;/p&gt;

&lt;p&gt;Here are some other examples of dynamically scoped languages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;UNIX shells.&lt;/p&gt;

&lt;p&gt;UNIX shells (sh, bash, ksh, zsh) are dynamically scoped, which&lt;br&gt;
keeps things consistent between calling external programs and calling&lt;br&gt;
functions.  When an external process is created, it receives a copy&lt;br&gt;
(or a copy-on-write view) of the environment.  When it finishes, what&lt;br&gt;
it did has no influence on the process that called it.  This is the&lt;br&gt;
principle of dynamic scope.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Emacs Lisp.&lt;/p&gt;

&lt;p&gt;Emacs Lisp is not dynamically scoped because its authors were lazy&lt;br&gt;
or didn't know how to implement lexical scope.  It is a decision that&lt;br&gt;
makes sense in the context of system-oriented programming, and as&lt;br&gt;
Emacs is a little bit like an emulator of an Operating System, dynamic&lt;br&gt;
scoping makes sense for it.  The advantage of dynamic scoping is that&lt;br&gt;
it is possible to set up the environment before calling a function,&lt;br&gt;
and to undo the changes after exiting from the current function.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Common Lisp.&lt;/p&gt;

&lt;p&gt;Common Lisp allows the creation of both lexically scoped and&lt;br&gt;
dynamically scoped variables.  It's a very interesting example,&lt;br&gt;
because one of the authors of the Common Lisp specification was the&lt;br&gt;
person who made the first fully lexically scoped Lisp, Scheme.  That&lt;br&gt;
means it has been acknowledged that lexical scope is not always the&lt;br&gt;
best, nor a solution for everything.  Besides that, implementing&lt;br&gt;
dynamic scoping in a compiled language is not easy, because variables&lt;br&gt;
have to be looked up by name.  In dynamic scope we don't really know&lt;br&gt;
what is behind a variable name; it is only known during execution.&lt;br&gt;
That's the reason why it's called dynamic scope.&lt;/p&gt;

&lt;p&gt;I give these examples here because all of them, in my opinion,&lt;br&gt;
apply to PowerShell as well.  It also calls external programs, and it&lt;br&gt;
is used for Operating System work.  Dynamic scope seems to be the&lt;br&gt;
right choice here, in order to avoid passing too many parameters to&lt;br&gt;
functions.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
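&lt;p&gt;The UNIX-shell behaviour from the first example above is easy to&lt;br&gt;
demonstrate: a child process gets its own copy of the environment, so&lt;br&gt;
its changes never propagate back to the caller.&lt;/p&gt;

```shell
X=outer
# The ( ... ) subshell runs as a child with a copy of the variables;
# its assignment is invisible to the parent.
( X=inner; echo "in child:  X=$X" )
echo "in parent: X=$X"
```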

&lt;p&gt;&lt;a id="org82cdc8b"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Criticism
&lt;/h2&gt;

&lt;p&gt;There are some things about PowerShell that not everybody likes.&lt;br&gt;
As is often the case, programming language designers have to make&lt;br&gt;
choices; for example, it's impossible to represent strings and&lt;br&gt;
variables the same way.  Some languages, like C or Lisp, prefer to&lt;br&gt;
write variables without special markers and strings within quotes.&lt;br&gt;
Other languages, primarily scripting languages, do it the other way&lt;br&gt;
around, so they have to mark variables differently, for example by&lt;br&gt;
prefixing them with a &lt;code&gt;$&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;One of the things that look bizarre to someone who sees PowerShell&lt;br&gt;
for the first time is that the operators for number comparison and&lt;br&gt;
logical operations are not symbols but words prefixed by a &lt;code&gt;-&lt;/code&gt;, like&lt;br&gt;
&lt;code&gt;-gt&lt;/code&gt;, &lt;code&gt;-and&lt;/code&gt;, and so on.  The reason for this is that the usual signs&lt;br&gt;
conflicted with other operations, such as &lt;code&gt;&amp;gt;&lt;/code&gt;, which represents&lt;br&gt;
output redirection, and the PowerShell authors wanted to keep them.&lt;/p&gt;
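&lt;p&gt;A short sketch of these operators in use (the numbers are&lt;br&gt;
arbitrary):&lt;/p&gt;

```powershell
$n = 10
# -gt and -le are the numeric comparisons; -and is the logical AND.
if ($n -gt 5 -and $n -le 100) {
    Write-Output "$n is between 6 and 100"
}
# String comparison uses the same style, case-insensitive by default:
"PowerShell" -eq "powershell"   # True
```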

&lt;p&gt;There is an inconsistency between the call syntax of commands and&lt;br&gt;
that of methods.  Commands are called without parentheses, with&lt;br&gt;
arguments separated by spaces.  Methods, on the other hand, must have&lt;br&gt;
parentheses, with the arguments inside separated by commas.  For some&lt;br&gt;
reason the authors could not preserve consistency across all cases.&lt;/p&gt;
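&lt;p&gt;Side by side, the two syntaxes look like this:&lt;/p&gt;

```powershell
# A command: no parentheses, arguments separated by spaces.
Get-ChildItem -Path . -Recurse

# A method: parentheses, arguments separated by commas.
"PowerShell".Substring(0, 5)    # returns "Power"
```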

&lt;p&gt;&lt;a id="org50171bd"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Impressions
&lt;/h2&gt;

&lt;p&gt;Overall, I think it's a decent environment for simple tasks,&lt;br&gt;
exactly what Operating System administration requires.  I would even&lt;br&gt;
say it's too powerful: do administrators really need to create complex&lt;br&gt;
data structures with modules, classes, and so on?  But for programming&lt;br&gt;
it's fine for testing simple algorithms, because the typing is strong&lt;br&gt;
enough that it's easy to build hierarchical structures without&lt;br&gt;
worrying that objects will be lost or merged somehow.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgba92615"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Features
&lt;/h1&gt;

&lt;p&gt;&lt;a id="org9ac70f4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Case-Insensitivity of Symbols and Variable names
&lt;/h2&gt;

&lt;p&gt;My implementation of Scheme is case-insensitive.  I know that this&lt;br&gt;
is not common in modern programming languages, and that even Scheme&lt;br&gt;
switched to being case-sensitive, but I decided to make mine&lt;br&gt;
case-insensitive anyway.&lt;/p&gt;

&lt;p&gt;When you write in Common Lisp, the interpreter displays variables&lt;br&gt;
and functions in the REPL in uppercase, even though the code is in&lt;br&gt;
lowercase.  It's a tradition, and it remained this way.  That's why&lt;br&gt;
people familiar with Common Lisp very often see keywords like &lt;code&gt;LET&lt;/code&gt;,&lt;br&gt;
&lt;code&gt;IF&lt;/code&gt;, &lt;code&gt;LAMBDA&lt;/code&gt;, and &lt;code&gt;COND&lt;/code&gt; in uppercase.&lt;/p&gt;

&lt;p&gt;When I switched to Racket, an implementation of Scheme, this was&lt;br&gt;
not really a problem, because I always typed these keywords in&lt;br&gt;
lowercase anyway.&lt;/p&gt;

&lt;p&gt;But what I really don't want is for words like &lt;code&gt;LAMBDA&lt;/code&gt; and &lt;code&gt;IF&lt;/code&gt; to&lt;br&gt;
be recognized as something different from &lt;code&gt;lambda&lt;/code&gt; and &lt;code&gt;if&lt;/code&gt;.  The&lt;br&gt;
programmers who used Common Lisp, at least CLISP and SBCL, have been&lt;br&gt;
trained to think of them as the same thing, so it should remain this&lt;br&gt;
way.  It's a habit.&lt;/p&gt;

&lt;p&gt;I think it's not natural to change the meaning of a word based on&lt;br&gt;
its case.  Why would anyone give two variables the same name with the&lt;br&gt;
only difference being the case?  If you want them to be different,&lt;br&gt;
give them different names.  Nowadays it is recognized that longer&lt;br&gt;
names make programs easier to read: by looking at the textual&lt;br&gt;
representation of two variables, you should be able to tell the&lt;br&gt;
difference between them.  If the only difference is the case, then you&lt;br&gt;
have to remember why they're different.  It's either a convention&lt;br&gt;
(like constants being uppercase) or just arbitrary (why not give this&lt;br&gt;
variable an uppercase name?).  Either way it's avoidable with a better&lt;br&gt;
naming convention.  That's why I see case-sensitivity as a step&lt;br&gt;
backwards.&lt;/p&gt;

&lt;p&gt;Perhaps they wanted to support different languages?  That's also a&lt;br&gt;
bad idea: characters may look the same while having different Unicode&lt;br&gt;
code points.  In any case, it's hard to find a justification for&lt;br&gt;
case-sensitive symbols and variable names in Scheme.  That's why my&lt;br&gt;
implementation is case-insensitive, as was the case for earlier&lt;br&gt;
versions of Scheme.&lt;/p&gt;

&lt;p&gt;&lt;a id="org4b3361f"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Homoiconicity
&lt;/h2&gt;

&lt;p&gt;This implementation of Scheme, as is the case for most Lisps and&lt;br&gt;
all Schemes, is homoiconic.  This means that there is a correspondence&lt;br&gt;
between the source code and the representation of the program in&lt;br&gt;
memory &lt;sup id="5d71417e23001a0ba429f599b4580b67"&gt;wikic2_homoiconic&lt;/sup&gt;.  That is, the source code is a form of&lt;br&gt;
AST, the structure that the interpreter will use to execute the&lt;br&gt;
program.&lt;/p&gt;

&lt;p&gt;This makes it very easy to print the program on screen, which is&lt;br&gt;
impossible in many other languages: when a language is compiled, the&lt;br&gt;
structure is lost, and with interpreters it might in many cases be&lt;br&gt;
possible to restore it, but with some loss.&lt;/p&gt;

&lt;p&gt;A good example in Lisp is that you can generate functions by&lt;br&gt;
building lists and then evaluating them.  If the lists are correctly&lt;br&gt;
built, the resulting functions will work correctly.&lt;/p&gt;
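&lt;p&gt;A minimal sketch (using the two-argument &lt;code&gt;eval&lt;/code&gt; of R5RS; some&lt;br&gt;
implementations name the environment argument differently):&lt;/p&gt;

```scheme
;; Build the code of a function as an ordinary list...
(define add1-code (list 'lambda '(x) (list '+ 'x 1)))

;; ...then turn that list into a callable function with eval.
(define add1 (eval add1-code (interaction-environment)))

(display (add1 41))   ; displays 42
```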

&lt;p&gt;Lisp is not the only language that is homoiconic.  Prolog, for&lt;br&gt;
example, and Tcl are also homoiconic.&lt;/p&gt;

&lt;p&gt;Besides generating source and functions, homoiconicity is handy for&lt;br&gt;
macros, which can be considered a slightly more structured way to use&lt;br&gt;
&lt;code&gt;eval&lt;/code&gt; on a list.  Syntactically, macros are similar to functions; the&lt;br&gt;
main difference is that the arguments of a macro are not evaluated&lt;br&gt;
before the call.  It's up to the macro to decide how to evaluate them&lt;br&gt;
and whether they have to be evaluated at all.  It can, for example,&lt;br&gt;
explore their structure.  That's how it is possible to use macros to&lt;br&gt;
write a different &lt;code&gt;define&lt;/code&gt;, one that would do something differently.&lt;/p&gt;

&lt;p&gt;In my version of Scheme, macros are not implemented.  Not because&lt;br&gt;
they are difficult to implement (actually they are easier to implement&lt;br&gt;
than functions, because you don't have to evaluate the arguments), but&lt;br&gt;
mainly because their behavior is complex and they are more of a Common&lt;br&gt;
Lisp feature than a Scheme one.  And since the syntax of my&lt;br&gt;
interpreter tries to stay as close as possible to Scheme, it's&lt;br&gt;
difficult to test whether the behavior is correct.  Perhaps I'll&lt;br&gt;
implement them in a later interpreter.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgd447b2f"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Type System
&lt;/h2&gt;

&lt;p&gt;My implementation of Scheme, as is the case for other Scheme&lt;br&gt;
implementations and for most Lisps in general, is dynamically and&lt;br&gt;
strongly typed.&lt;/p&gt;

&lt;p&gt;Dynamic typing means that types are checked at runtime, which gives&lt;br&gt;
more flexibility.  A variable can contain values of different types at&lt;br&gt;
different times, and a function can be called with different types and&lt;br&gt;
still work, which makes Lisp very well suited for polymorphism.  A&lt;br&gt;
function can be passed, for example, a list of values of any type&lt;br&gt;
together with a function that can compare them, and that's enough to&lt;br&gt;
implement a sorting algorithm that works with lists containing any&lt;br&gt;
types.&lt;/p&gt;
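&lt;p&gt;Here is a sketch of such a polymorphic sort, an insertion sort&lt;br&gt;
where &lt;code&gt;lt?&lt;/code&gt; is any two-argument comparison, so the same code sorts&lt;br&gt;
numbers, strings, or anything else &lt;code&gt;lt?&lt;/code&gt; understands:&lt;/p&gt;

```scheme
;; Insert x into the already-sorted list l, using lt? to compare.
(define (insert x l lt?)
  (cond ((null? l) (list x))
        ((lt? x (car l)) (cons x l))
        (#t (cons (car l) (insert x (cdr l) lt?)))))

;; Sort any list, given a comparison function.
(define (isort l lt?)
  (if (null? l)
      '()
      (insert (car l) (isort (cdr l) lt?) lt?)))

(isort '(3 1 2) &lt;)              ; =&gt; (1 2 3)
(isort '("b" "c" "a") string&lt;?) ; =&gt; ("a" "b" "c")
```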

&lt;p&gt;Compared to Java, this type system is very easy to use and very&lt;br&gt;
straightforward, whereas Java requires a very complex interaction&lt;br&gt;
between classes, types, interfaces, and methods in order to implement&lt;br&gt;
similar functionality.  And as my implementation shows, it is also&lt;br&gt;
really easy to implement.&lt;/p&gt;

&lt;p&gt;Another advantage is that variables don't need to be declared with&lt;br&gt;
a type: the type is assigned and maintained dynamically.  The&lt;br&gt;
variables themselves, on the other hand, need to be declared&lt;br&gt;
explicitly, which makes Lisp somewhat stricter than several other&lt;br&gt;
languages.&lt;/p&gt;

&lt;p&gt;Some implementations of Lisp, like Common Lisp, require an explicit&lt;br&gt;
declaration when a declared variable is intentionally not used.  I&lt;br&gt;
think it's an interesting idea; it is also the case in Rust and in&lt;br&gt;
Prolog.  But as a tradeoff between several tendencies in the&lt;br&gt;
specification of Common Lisp, it is not very consistent: there is&lt;br&gt;
still a way to have optional function parameters, with a default value&lt;br&gt;
specified by the user for when the function is called without these&lt;br&gt;
parameters.&lt;/p&gt;

&lt;p&gt;Lisp is not only a dynamically typed language; it is a dynamically&lt;br&gt;
typed language with strong typing.  This means that the type of a&lt;br&gt;
value is checked every time it is used, and different operations&lt;br&gt;
require different types.  For example, arithmetic operations require&lt;br&gt;
numbers and cannot work with anything else, whereas &lt;code&gt;car&lt;/code&gt; and &lt;code&gt;cdr&lt;/code&gt;&lt;br&gt;
only work with conses and produce an error if you try to use them on&lt;br&gt;
other types of values.  This is the opposite of what we see in&lt;br&gt;
JavaScript, where it now seems to be recognized that this was a design&lt;br&gt;
mistake.  It is not a bad thing when the language itself helps the&lt;br&gt;
developer to write stricter code, as in Prolog, Rust, and Common Lisp,&lt;br&gt;
where unused variables are flagged.&lt;/p&gt;

&lt;p&gt;Sometimes Lisp is described as weakly typed, and the reason for&lt;br&gt;
this is that the notion of strong and weak typing is not precisely&lt;br&gt;
defined.  In 1974, for example, Liskov and Zilles considered a&lt;br&gt;
language strongly typed if a function only accepts the authorized&lt;br&gt;
types.  Today Lisp is called strongly typed because, even if it&lt;br&gt;
happens only at runtime, types are always checked and unauthorized&lt;br&gt;
types are rejected.&lt;/p&gt;

&lt;p&gt;Another thing that makes the type system of Lisp not that strong is&lt;br&gt;
the fact that several types are built out of conses.  Conses are&lt;br&gt;
primitive building blocks that allow any data structure to be built:&lt;br&gt;
a list, a tree, or whatever the programmer intends it to be.  But to&lt;br&gt;
the compiler or interpreter it all looks the same: just a cons with a&lt;br&gt;
&lt;code&gt;car&lt;/code&gt; value and a &lt;code&gt;cdr&lt;/code&gt; value, each of which can hold any possible&lt;br&gt;
value.&lt;/p&gt;
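&lt;p&gt;A small sketch of this ambiguity: both values below are just&lt;br&gt;
conses, and "list" versus "tree" exists only in the programmer's&lt;br&gt;
head.&lt;/p&gt;

```scheme
(define lst  (cons 1 (cons 2 '())))         ; the list (1 2)
(define tree (cons (cons 1 2) (cons 3 4)))  ; a tiny "tree"

(pair? lst)    ; =&gt; #t
(pair? tree)   ; =&gt; #t  -- the same type; no way to tell them apart
```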

&lt;p&gt;The situation is a bit better now.  In modern implementations of&lt;br&gt;
Lisp the user is allowed to define types.  They are usually objects or&lt;br&gt;
records, and their type can be checked at runtime, so it's possible to&lt;br&gt;
embed complex data structures like custom lists or trees into these&lt;br&gt;
types; not every complex data structure has to be a &lt;code&gt;cons&lt;/code&gt; anymore.&lt;/p&gt;

&lt;p&gt;In my implementation, in order to keep everything simple and&lt;br&gt;
stupid, there is no such thing as a record or an object.  The only way&lt;br&gt;
to link data structures is by using conses.  So my Scheme is not very&lt;br&gt;
strongly typed.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgac85337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Lexical Scope and Dynamic Scope
&lt;/h2&gt;

&lt;p&gt;Many programming languages have the concept of a variable.  A&lt;br&gt;
variable is usually something that represents a value and is referred&lt;br&gt;
to by name.  That means that one of the duties of a programming&lt;br&gt;
language is the management of names.  Variables are not the only named&lt;br&gt;
entities in a program: modules, classes, types, structures, and&lt;br&gt;
subroutines have names too, and languages are free to add more kinds&lt;br&gt;
of entities if they need to.  But here the focus is specifically on&lt;br&gt;
the names of variables and functions.&lt;/p&gt;

&lt;p&gt;For this purpose there exists the notion of scope.  A scope is the&lt;br&gt;
region in which a certain name refers to a certain entity.  When the&lt;br&gt;
scope of a variable ends, we say it goes out of scope, which means&lt;br&gt;
that if its name is used again, it either refers to a different&lt;br&gt;
variable or its usage is illegal.&lt;/p&gt;

&lt;p&gt;One way or another, names need to be resolved.  Typically the&lt;br&gt;
search starts from the current point in the program, where the name to&lt;br&gt;
be resolved occurs.  There are two main strategies for doing this:&lt;br&gt;
lexical scoping and dynamic scoping.&lt;/p&gt;

&lt;p&gt;What they have in common is that both search for the name by&lt;br&gt;
looking outward through the extent of the scope.  If the variable is&lt;br&gt;
not defined in the current block, the search has to go further, and&lt;br&gt;
this is where they differ.  Lexical scope looks for the name based on&lt;br&gt;
the structure of the source code, or of the abstract syntax tree,&lt;br&gt;
which is structurally equivalent (as it is a representation of the&lt;br&gt;
source code).  Dynamic scope, on the other hand, uses the execution&lt;br&gt;
stack to find the value: it first looks among the names of the caller,&lt;br&gt;
then the caller of the caller, and so on until the value is found.&lt;/p&gt;

&lt;p&gt;Both strategies have interesting properties, and both have advantages&lt;br&gt;
and disadvantages.  So here I will describe them in more detail.&lt;/p&gt;

&lt;p&gt;&lt;a id="org2057e43"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Lexical Scope
&lt;/h3&gt;

&lt;p&gt;Lexical scope, as its name implies, uses the lexical structure of&lt;br&gt;
the program, which means that the way variables are resolved does not&lt;br&gt;
depend on the contents of the stack.  It can be determined at compile&lt;br&gt;
time: to find the value of this variable, such and such operations&lt;br&gt;
need to be performed.&lt;/p&gt;

&lt;p&gt;Because resolution can happen at compile time, this is typically&lt;br&gt;
when the code to access the variable is generated.  This makes the&lt;br&gt;
code faster, because the information about the names of the&lt;br&gt;
identifiers can disappear; it is no longer needed, since what matters&lt;br&gt;
is the path to access the value.&lt;/p&gt;

&lt;p&gt;At the same time, lexical scope brings some implementation&lt;br&gt;
difficulties.  It is a rather hard problem to generate compiled code&lt;br&gt;
for a language that supports first-class functions.  Indeed, when a&lt;br&gt;
pointer to a function is created, one can say that it's a function&lt;br&gt;
object: it contains not only the code that needs to be executed, but&lt;br&gt;
also the variables associated with this piece of code: not only local&lt;br&gt;
variables, which are created each time the function is executed, but&lt;br&gt;
also non-local variables that reference the enclosing environment.  It&lt;br&gt;
is necessary to be able to keep references to them, to read them, and&lt;br&gt;
to modify them.&lt;/p&gt;

&lt;p&gt;There can be several function objects, and even several copies of&lt;br&gt;
the same object, alive in the program at the same time.  In this case&lt;br&gt;
it becomes impossible to know statically how long these environments&lt;br&gt;
are needed.  For this reason, languages with such functions typically&lt;br&gt;
require garbage collection.&lt;/p&gt;

&lt;p&gt;From some point of view, everything is garbage-collected.  For&lt;br&gt;
example, when a function returns, its local variables and parameters&lt;br&gt;
are reclaimed by popping them from the stack.  Only in the case of&lt;br&gt;
function objects does this simple strategy stop working, and a more&lt;br&gt;
sophisticated garbage collector is needed.&lt;/p&gt;

&lt;p&gt;One interesting property of lexical scope is that it makes code&lt;br&gt;
easier to read.  A variable referenced by name is either in the&lt;br&gt;
current block, in the enclosing block, or in a block above that.  The&lt;br&gt;
fact that this is independent of the execution state of the program&lt;br&gt;
makes it possible to read the code almost as if it were declarative.&lt;/p&gt;

&lt;p&gt;For example, here is the classical function &lt;code&gt;map&lt;/code&gt; as it can be&lt;br&gt;
implemented in Scheme.  It takes a list &lt;code&gt;l&lt;/code&gt; and a function &lt;code&gt;f&lt;/code&gt; and&lt;br&gt;
produces a new list by applying &lt;code&gt;f&lt;/code&gt; to every element of &lt;code&gt;l&lt;/code&gt;.  The&lt;br&gt;
head of the list is obtained with &lt;code&gt;(car l)&lt;/code&gt;, the rest of the list&lt;br&gt;
with &lt;code&gt;(cdr l)&lt;/code&gt;, and the function &lt;code&gt;cons&lt;/code&gt; combines an element with a&lt;br&gt;
list, keeping the elements in the same order.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (map l f)
  (cond ((null? l) '())
        (else (cons (f (car l)) (map (cdr l) f)))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As we can see, the code is very concise and we see clearly which&lt;br&gt;
variables are used and where they are defined.  No undeclared variable&lt;br&gt;
can come in from the outside.  This code is self-contained.&lt;/p&gt;

&lt;p&gt;Now let's see where lexical scope really shines: we'll use this &lt;code&gt;map&lt;/code&gt;&lt;br&gt;
function with a closure:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (map-mult l n)
 (writeln (map l (lambda (x) (* x n)))))

(map-mult '(1 2 3 4) 10)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We see clearly what each variable refers to; there is no need to&lt;br&gt;
search further.  Even if we don't know what &lt;code&gt;map&lt;/code&gt; does, we still see&lt;br&gt;
what &lt;code&gt;n&lt;/code&gt; is and what &lt;code&gt;x&lt;/code&gt; is.&lt;/p&gt;

&lt;p&gt;Lexical scope also preserves referential transparency.  This means&lt;br&gt;
that an expression can be substituted with an equivalent expression&lt;br&gt;
without changing the behavior of the program, at least as long as the&lt;br&gt;
functions involved have no side effects.  In particular, a function&lt;br&gt;
call can be substituted with the value it returns without modifying&lt;br&gt;
the behavior of the program.  This also enables memoization: the value&lt;br&gt;
a function returns for given arguments is remembered, and instead of&lt;br&gt;
calling the function again, the remembered value is returned.&lt;/p&gt;
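
&lt;p&gt;As a small illustration, memoization can be built in Scheme with a&lt;br&gt;
closure over a cache.  This is only a sketch: the name &lt;code&gt;memoize&lt;/code&gt; and&lt;br&gt;
the use of an association list are my own choices, not part of any&lt;br&gt;
standard.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (memoize f)
  (let ((cache '()))                 ; association list of (argument . result)
    (lambda (x)
      (let ((entry (assv x cache)))
        (if entry
            (cdr entry)              ; reuse the remembered result
            (let ((result (f x)))
              (set! cache (cons (cons x result) cache))
              result))))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This works only when &lt;code&gt;f&lt;/code&gt; is referentially transparent: if &lt;code&gt;f&lt;/code&gt; had&lt;br&gt;
side effects, returning the cached value would change the behavior of&lt;br&gt;
the program.&lt;/p&gt;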

&lt;p&gt;So here is an example in order to illustrate what referential&lt;br&gt;
transparency means.  Let's look at an example of a power function,&lt;br&gt;
&lt;code&gt;pow&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(defun pow (n e)
  (cond ((= e 0) 1)
        ((= e 1) n)
        (t (* (pow n (- e 1))
              n))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here the power function is defined by multiplying the number &lt;code&gt;n&lt;/code&gt;&lt;br&gt;
&lt;code&gt;e&lt;/code&gt; times.  But if we define the power function differently, as&lt;br&gt;
&lt;code&gt;n^e = ((n^2)^d)*n^r&lt;/code&gt;, where &lt;code&gt;e = 2*d + r&lt;/code&gt; and &lt;code&gt;r&lt;/code&gt; is either 0 or 1,&lt;br&gt;
it is possible to write another function that uses fewer&lt;br&gt;
multiplications:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(defun pow2 (n e)
  (cond ((= e 0) 1)
        ((= e 1) n)
        (t (let ((r (mod e 2))
                 (d (/ e 2)))
             (* (pow2 (* n n) d)
                (pow2 n r))))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here we use the fact that &lt;code&gt;*&lt;/code&gt; is referentially transparent: we can&lt;br&gt;
achieve the same result with a different definition that calls this&lt;br&gt;
function fewer times.  If &lt;code&gt;*&lt;/code&gt; were referentially opaque, this&lt;br&gt;
probably wouldn't work, or at least it would be difficult to be sure&lt;br&gt;
that the function works correctly.&lt;/p&gt;

&lt;p&gt;By the way, this code is written in Emacs Lisp, which is dynamically&lt;br&gt;
scoped.  This example shows that, if we are careful, it's possible to&lt;br&gt;
write referentially transparent code even in dynamically scoped&lt;br&gt;
languages.  Here, of course, it's not that difficult, because &lt;code&gt;*&lt;/code&gt; is&lt;br&gt;
a built-in function.  In more complicated cases dynamic scope can&lt;br&gt;
become a more serious problem.&lt;/p&gt;

&lt;p&gt;Lexical scope also facilitates code optimization: referentially&lt;br&gt;
transparent code can be analyzed and modified by the compiler more&lt;br&gt;
easily, with a guarantee that the intended behavior of the program is&lt;br&gt;
preserved.&lt;/p&gt;

&lt;p&gt;So, these properties make it easier to reason about the code:&lt;br&gt;
the code can be subdivided into parts because we know exactly what&lt;br&gt;
each name refers to, and referential transparency gives the&lt;br&gt;
possibility to read code as a set of distinct expressions with a&lt;br&gt;
precise meaning.&lt;/p&gt;

&lt;p&gt;&lt;a id="orge6f6b1c"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Dynamic Scope
&lt;/h3&gt;

&lt;p&gt;The other important scoping strategy is dynamic scope.  Its main&lt;br&gt;
characteristic, which distinguishes it from lexical scope, is the way&lt;br&gt;
names are searched.  When a dynamically scoped name needs to be&lt;br&gt;
resolved, the search starts from the block of code where the name&lt;br&gt;
occurs.  At this point it is no different from lexical scope, and if&lt;br&gt;
code is written carefully enough, it is in many cases possible to&lt;br&gt;
write code that is portable between lexically and dynamically scoped&lt;br&gt;
versions of the same language.&lt;/p&gt;

&lt;p&gt;The difference from lexical scope starts when the name is not found&lt;br&gt;
in the current block.  Whereas lexical scope searches outward, in the&lt;br&gt;
lexical environment, dynamic scope looks at the most recently bound&lt;br&gt;
names that are still in scope.&lt;/p&gt;

&lt;p&gt;Another important difference is that the scope ends with the block&lt;br&gt;
that defined the variable or gave it a value.  That means that a value&lt;br&gt;
is valid as long as the block has not finished executing.  Called&lt;br&gt;
functions may temporarily replace the value with their own for as long&lt;br&gt;
as they execute.  As soon as the block finishes executing, the value&lt;br&gt;
of the variable is lost and the previous value, if it exists, is&lt;br&gt;
restored.&lt;/p&gt;

&lt;p&gt;One interesting property of dynamic scope is that a name reference can&lt;br&gt;
refer to variables that are lexically defined in different places.&lt;br&gt;
The function called cannot know which variables it references.  It&lt;br&gt;
depends on the execution state of the program.&lt;/p&gt;

&lt;p&gt;Dynamic scope has a big disadvantage.  The code that relies heavily on&lt;br&gt;
it is difficult to read, to modify, and to refactor.  The reason is&lt;br&gt;
that when dynamically scoped variables are referenced, several&lt;br&gt;
portions of code become interdependent and if one wants to isolate&lt;br&gt;
some code into a module or a function, there is no way to know which&lt;br&gt;
other functions use these variables.&lt;/p&gt;

&lt;p&gt;Also, there is no referential transparency, which makes it difficult&lt;br&gt;
to subdivide the code into pieces and to reason about each part&lt;br&gt;
locally.  As a result, the code can become unmaintainable.&lt;/p&gt;

&lt;p&gt;In the past, many more languages used dynamic scope than now.  The&lt;br&gt;
first Lisp, for example, used dynamic scope.  Only after the first&lt;br&gt;
implementations did it become apparent that the behavior of dynamic&lt;br&gt;
scope is not what one expects while reading the source code.&lt;br&gt;
Moreover, lambda calculus, which had a lot of influence on Lisp, uses&lt;br&gt;
lexical scoping: variables can be bound to variables outside the&lt;br&gt;
current function.&lt;/p&gt;

&lt;p&gt;So here is an example that compares lexical and dynamic scopes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(declare a 1)
(defun fn (b)
  (list a b))
(let ((a 2))
  (fn 3))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This code evaluates to &lt;code&gt;(2 3)&lt;/code&gt;.  As you can see, when &lt;code&gt;fn&lt;/code&gt; is called,&lt;br&gt;
the dynamic binding of &lt;code&gt;a&lt;/code&gt; to 2 shadows the value 1.&lt;/p&gt;

&lt;p&gt;Let's look at a similar example in Scheme:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define a 1)
(define (fn b)
  (list a b))
(let ((a 2))
  (fn 3))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The result of evaluating this code is &lt;code&gt;(1 3)&lt;/code&gt;: we use the first &lt;code&gt;a&lt;/code&gt;,&lt;br&gt;
which is captured by &lt;code&gt;fn&lt;/code&gt;.  The other &lt;code&gt;a&lt;/code&gt;, the one in the &lt;code&gt;let&lt;/code&gt;, is a&lt;br&gt;
local variable and, from a lexical point of view, has no relation to&lt;br&gt;
the function.&lt;/p&gt;

&lt;p&gt;So why can dynamic scope cause a problem?  In the first example we&lt;br&gt;
have no idea what &lt;code&gt;a&lt;/code&gt; is.  It can really be anything: a number, a&lt;br&gt;
function, a pair…  This is why it is so hard to refactor dynamically&lt;br&gt;
scoped code: we need to inspect and test every place where &lt;code&gt;a&lt;/code&gt;&lt;br&gt;
appears, and only then can we decide what to do with the function&lt;br&gt;
&lt;code&gt;fn&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Dynamic scope also makes closures impossible: when the name of a&lt;br&gt;
variable is looked up, the enclosing blocks are simply ignored.&lt;br&gt;
Higher-order functions suffer from similar problems.&lt;/p&gt;

&lt;p&gt;On the other hand, while not very suited for big projects, and despite&lt;br&gt;
the difficulties of refactoring, dynamic scope has its place for some&lt;br&gt;
special uses.  For example, it's useful for temporarily setting some&lt;br&gt;
global parameters that one or several functions should access.  When&lt;br&gt;
the current block ends, all the bindings are undone, but until then&lt;br&gt;
these parameters are accessible to the functions called.  One example&lt;br&gt;
of such use is when you have to write dates into a file in a specific&lt;br&gt;
format, but once you have finished writing, you don't need this format&lt;br&gt;
anymore.  As long as the function that writes to the file is alive,&lt;br&gt;
all the functions that need to write a date, under the dynamic scope&lt;br&gt;
of that function, will know which format to use.&lt;/p&gt;
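
&lt;p&gt;In Scheme, this pattern is supported by parameters.  Here is a&lt;br&gt;
sketch, assuming R7RS &lt;code&gt;make-parameter&lt;/code&gt; and &lt;code&gt;parameterize&lt;/code&gt;; the&lt;br&gt;
&lt;code&gt;date-format&lt;/code&gt; parameter is invented for the example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define date-format (make-parameter "YYYY-MM-DD"))

(define (write-date)
  ;; every function in the dynamic extent sees the current format
  (display (date-format))
  (newline))

(parameterize ((date-format "DD/MM/YYYY"))
  (write-date))                      ; prints DD/MM/YYYY
(write-date)                         ; prints YYYY-MM-DD again
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When the &lt;code&gt;parameterize&lt;/code&gt; block ends, the previous value is restored&lt;br&gt;
automatically, exactly like a dynamically scoped binding.&lt;/p&gt;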

&lt;p&gt;Another important use of dynamic scope is backtracking.  When testing&lt;br&gt;
different possibilities, some settings can be set and the outcome can&lt;br&gt;
be tested.  On failure, other settings can be set and the function&lt;br&gt;
that calculates the outcome can be called once again, and so on.&lt;/p&gt;

&lt;p&gt;Dynamic scope is also often used for exception handling: in many&lt;br&gt;
languages, the handlers defined in a block of code are dynamically&lt;br&gt;
scoped.  These handlers are typically defined as blocks of code, often&lt;br&gt;
called &lt;code&gt;catch&lt;/code&gt; blocks.  In Java, for example, there is a &lt;code&gt;try&lt;/code&gt; block,&lt;br&gt;
which contains calls that might throw exceptions, and there are&lt;br&gt;
&lt;code&gt;catch&lt;/code&gt; blocks that describe the steps to take when an exception&lt;br&gt;
occurs.  The code inside a catch block is called the exception handler&lt;br&gt;
and can be viewed as a function that is executed when the exception&lt;br&gt;
assigned to it is thrown.  The code that actually throws the exception&lt;br&gt;
can be far from the &lt;code&gt;try&lt;/code&gt; and &lt;code&gt;catch&lt;/code&gt; blocks, but nevertheless, to&lt;br&gt;
find the handler that needs to be executed, the rules of dynamic scope&lt;br&gt;
are followed.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgc977ee1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion about Lexical and Dynamic Scope
&lt;/h3&gt;

&lt;p&gt;So in the end we have seen what scope is and how it works.  We have&lt;br&gt;
also described the two main strategies for resolving names that are&lt;br&gt;
not in the current block.  Even though it might seem that lexical&lt;br&gt;
scope has won and is much more appropriate for programming, dynamic&lt;br&gt;
scope is still very important, and many languages do not even attempt&lt;br&gt;
to get rid of it; several languages even support both.  Here are some&lt;br&gt;
examples of languages that use both:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Common Lisp is usually lexically scoped, but supports dynamically
scoped variables, which are called special variables in Common
Lisp.&lt;/li&gt;
&lt;li&gt;  Java and many other languages with exception handlers: the
exception handlers are typically associated with exceptions using
dynamic scope.&lt;/li&gt;
&lt;li&gt;  C macros: when macros contain free variables, these are evaluated
in dynamic scope.  Macros work by substituting their names in the
code, so it's not truly dynamic scope in the sense that the name is
not looked up in the current execution stack.  But when these free
variables are in scope where the macro is used, they behave as
though they were dynamically scoped.&lt;/li&gt;
&lt;li&gt;  Scheme, even though it is lexically scoped almost everywhere, uses
dynamic scope in some input and output routines, for example in
&lt;code&gt;with-output-to-file&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  Perl, a language that was dynamically scoped when it was created,
later added lexical scope, so now it's up to the programmer to
decide which scoping strategy to use.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is also interesting to point out that lexical and dynamic scope&lt;br&gt;
are not the only possible scoping rules, nor are they sufficient to&lt;br&gt;
characterize the way variables are resolved by a language.  A good&lt;br&gt;
example is the difference between labels in many assembly languages,&lt;br&gt;
which do not need forward declarations and can be used before being&lt;br&gt;
declared, and variables in C, which need to be declared before being&lt;br&gt;
used.  In that sense, each variable in C introduces its own scope.&lt;/p&gt;

&lt;p&gt;There can also be limitations to lexical scope: for example, in Java,&lt;br&gt;
object closures can capture only &lt;code&gt;final&lt;/code&gt; variables, which means that&lt;br&gt;
other variables are in a somewhat more restricted scope.  More such&lt;br&gt;
examples are described by Eric Tanter in &lt;em&gt;Beyond Static and Dynamic&lt;br&gt;
Scope&lt;/em&gt; &lt;sup id="9361e0bd08c4091cedfc72c12e00ee47"&gt;tanter_beyond&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgdc316aa"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  First-class Functions and Closures
&lt;/h2&gt;

&lt;p&gt;&lt;a id="org175acd2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is being First-class Citizen?
&lt;/h3&gt;

&lt;p&gt;Another important feature of my implementation, which also usually&lt;br&gt;
exists in implementations of Lisp, is first-class functions.  It&lt;br&gt;
means that functions, along with other types like integers,&lt;br&gt;
characters, or booleans, are first-class citizens in the&lt;br&gt;
programming language.&lt;/p&gt;

&lt;p&gt;Concretely, this means functions have several important properties.&lt;br&gt;
A function can be passed as an argument to another function.  For&lt;br&gt;
this to work, the name of a function should not be usable only for&lt;br&gt;
calling it; it should also denote the function as a value that a&lt;br&gt;
variable can hold.  Alternatively, there can be other ways to assign&lt;br&gt;
a function value to a variable, for example using lambda functions.&lt;/p&gt;

&lt;p&gt;Another important property of first-class citizens is that they can&lt;br&gt;
be returned from a function as a return value.  This is certainly the&lt;br&gt;
case in Scheme and Common Lisp: everything that can be manipulated as&lt;br&gt;
a named variable can be returned.  This is because Lisp has a dynamic&lt;br&gt;
type system and does not need to know the type of the value it&lt;br&gt;
returns.  So if there is a way to make a variable reference something,&lt;br&gt;
it must also be possible to return it.&lt;/p&gt;

&lt;p&gt;The third property necessary for being a first-class citizen is that&lt;br&gt;
it can be assigned.  Here also, as Lisp is a rather simple language,&lt;br&gt;
if you have a value, you are free to assign it to anything you want.&lt;br&gt;
Any expression can be evaluated and its result assigned to a&lt;br&gt;
variable.&lt;/p&gt;
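
&lt;p&gt;For instance, in Scheme a function value can be stored and copied&lt;br&gt;
like any other value (the names here are arbitrary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define square (lambda (x) (* x x)))   ; assign a function to a variable
(define op square)                     ; copy it like any other value
(op 7)                                 ; evaluates to 49
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;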

&lt;p&gt;The last property for being considered a first-class citizen is that&lt;br&gt;
it must be possible to tell whether two functions are equal.  This is&lt;br&gt;
rather difficult, first because function equality can have several&lt;br&gt;
meanings: whether they produce the same output for the same input,&lt;br&gt;
whether their internal structure is the same, or whether they have&lt;br&gt;
the same address in memory.&lt;/p&gt;

&lt;p&gt;Execution time can also be taken into account.  Equality is also&lt;br&gt;
difficult because testing it would be rather costly (except for&lt;br&gt;
comparing memory addresses, which is not a good way).  In general, it&lt;br&gt;
is impossible to tell whether two functions give the same output only&lt;br&gt;
by examining their source code.  I think in practice function equality&lt;br&gt;
is not often used.  Languages that do not provide a dynamic way to&lt;br&gt;
build functions, like C, provide a more or less good approximation of&lt;br&gt;
equality by just comparing memory addresses.  In Java, there is the&lt;br&gt;
comparison of the types of objects: very often functions are put&lt;br&gt;
inside a class, which makes such objects the equivalent of functions.&lt;br&gt;
Good examples of such classes are Comparators and Listeners.&lt;/p&gt;

&lt;p&gt;&lt;a id="org2bd6d36"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Higher-order Functions
&lt;/h3&gt;

&lt;p&gt;An interesting feature of Lisp and many functional languages is the&lt;br&gt;
support of higher-order functions.  A higher-order function is a&lt;br&gt;
function that accepts other functions as parameters and/or returns a&lt;br&gt;
function as a return value.  The two cases are slightly different and&lt;br&gt;
are used differently.  When a function is passed, it is to be executed&lt;br&gt;
by the accepting function, so this is supported even by languages like&lt;br&gt;
C, which cannot capture the environment with closures.  When it comes&lt;br&gt;
to returning functions, on the other hand, C is very limited: it can&lt;br&gt;
only return the memory address of an existing function; it cannot&lt;br&gt;
create a new function, which is possible in functional programming&lt;br&gt;
languages.  So it can be said that C supports only half of the&lt;br&gt;
higher-order function feature.&lt;/p&gt;

&lt;p&gt;The use of higher-order functions gives a lot of expressiveness and&lt;br&gt;
flexibility to the language.  It makes it possible to abstract actions&lt;br&gt;
and concepts.  Higher-order functions can be used, for example, to&lt;br&gt;
sort a list differently depending on the comparison criteria: by&lt;br&gt;
passing different functions to the sorting function, it's possible to&lt;br&gt;
make it sort differently.&lt;/p&gt;
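
&lt;p&gt;As a sketch, assuming a &lt;code&gt;sort&lt;/code&gt; procedure that takes a list and a&lt;br&gt;
comparison predicate (the exact name and argument order vary between&lt;br&gt;
Scheme implementations):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(sort '(3 1 2) &amp;lt;)                        ; ascending: (1 2 3)
(sort '(3 1 2) (lambda (a b) (&amp;lt; b a)))   ; descending: (3 2 1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;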

&lt;p&gt;Here are some examples of common usage of higher-order functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  The &lt;em&gt;map&lt;/em&gt; function is used to create a new list by applying a
function to every element of an existing list.&lt;/li&gt;
&lt;li&gt;  The &lt;em&gt;filter&lt;/em&gt; function creates a new list from an existing list,
keeping only the elements for which a user-supplied predicate
returns true.&lt;/li&gt;
&lt;li&gt;  The &lt;em&gt;fold&lt;/em&gt; function processes a whole list and creates a single
value from it.  This can be a sum, average, maximum, minimum,
count, or anything else the user desires.&lt;/li&gt;
&lt;li&gt;  Another example is performing an action the way the user wants.
For instance, an equality function can be passed to a function that
checks whether an element is in a list.  Not all notions of
equality are the same, and sometimes it's very useful for the user
to be able to tell exactly when two values are considered
equal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here is an example of code of a higher-order function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (fold-if list fold-fun filter-fun base-val)
  (cond ((null? list) base-val)
        ((filter-fun (car list))
         (fold-fun (car list)
                   (fold-if (cdr list) fold-fun filter-fun base-val)))
        (else
         (fold-if (cdr list) fold-fun filter-fun base-val))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The function in this example, &lt;code&gt;fold-if&lt;/code&gt;, takes as arguments a list,&lt;br&gt;
a fold function, a filter function and a base value.  It folds only&lt;br&gt;
the values that return true when passed to the filter function.&lt;/p&gt;

&lt;p&gt;Here is an example of this function being used:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(fold-if '(1 2 3 4 5 6 7 8 9 10)
         *
         (lambda (x) (= (modulo x 3) 0))
         1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Given a list of the numbers 1 to 10, this applies the multiplication&lt;br&gt;
function, starting from 1, to all members that are divisible by 3.&lt;br&gt;
So it's basically the same as computing &lt;code&gt;(* 3 6 9)&lt;/code&gt;, which returns&lt;br&gt;
162.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgbb7c919"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Anonymous Functions
&lt;/h3&gt;

&lt;p&gt;Another important feature of Lisp, and of Scheme, is the support of&lt;br&gt;
anonymous functions, also known as lambda functions.&lt;/p&gt;

&lt;p&gt;These are just functions, but without a name.  Calling them &lt;em&gt;lambda&lt;/em&gt;&lt;br&gt;
functions comes from lambda calculus, a formal system that abstracts&lt;br&gt;
functions.  The only objects in lambda calculus are functions, the&lt;br&gt;
only operation allowed is function application, and functions have no&lt;br&gt;
names.&lt;/p&gt;

&lt;p&gt;The way functions are used in lambda calculus gives an example of how&lt;br&gt;
anonymous functions can be used in programming languages: by passing&lt;br&gt;
them as parameters.&lt;/p&gt;

&lt;p&gt;Indeed, if the language does not support first-class functions,&lt;br&gt;
anonymous functions can seem to be a completely useless concept: what&lt;br&gt;
is the use of creating something you have no way to refer to?&lt;/p&gt;

&lt;p&gt;Anyway, the concept of lambda functions was born before the first&lt;br&gt;
programming language was made, and it is also the reason why Lisp, the&lt;br&gt;
first language to use anonymous functions, used the lambda keyword to&lt;br&gt;
refer to them.&lt;/p&gt;

&lt;p&gt;Even if anonymous functions are not supported, it's still possible to&lt;br&gt;
get their full functionality by creating normal named functions and&lt;br&gt;
passing them.  But here lies another problem: the language has to&lt;br&gt;
support the definition of local functions, because lambda functions&lt;br&gt;
are typically used in a local context and can reference local&lt;br&gt;
variables (which are, by the way, non-local to the lambda function).&lt;br&gt;
So simply passing a top-level function will not provide the same&lt;br&gt;
functionality.&lt;/p&gt;
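
&lt;p&gt;For example, using the &lt;code&gt;map&lt;/code&gt; function defined earlier, the&lt;br&gt;
&lt;code&gt;map-mult&lt;/code&gt; example can be rewritten with a local named function&lt;br&gt;
instead of a lambda, provided the language supports local&lt;br&gt;
definitions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (map-mult2 l n)
  (define (mult x) (* x n))   ; a local named function closing over n
  (map l mult))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;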

&lt;p&gt;Another way anonymous functions are often used, is to return a&lt;br&gt;
function as a function's return value.  This way it's possible to&lt;br&gt;
write a function that acts as a function factory, which creates&lt;br&gt;
different functions based on arguments passed to it.  This is&lt;br&gt;
something not possible to do with just function pointers, because they&lt;br&gt;
don't capture the environment.&lt;/p&gt;

&lt;p&gt;So, here is a simple example of usage of an anonymous function in Scheme:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (fn f c)
  (+ (f 5) c))

(fn (lambda (x) (* x 100)) 3)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The code creates an anonymous function that simply multiplies its&lt;br&gt;
argument by 100.  The &lt;code&gt;fn&lt;/code&gt; function calls it with the value 5 and&lt;br&gt;
adds the second argument.  So the result is 503 = (100 * 5) + 3.&lt;/p&gt;

&lt;p&gt;&lt;a id="org65f9726"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Closures
&lt;/h3&gt;

&lt;p&gt;Another important concept is the closure.  A closure arises when an&lt;br&gt;
inner function has non-local free variables that are bound by the&lt;br&gt;
enclosing functions or blocks.  It's not necessarily the closest&lt;br&gt;
enclosing block; it can also be a block several levels higher.  The&lt;br&gt;
word &lt;em&gt;closure&lt;/em&gt; comes from the fact that the free variables are&lt;br&gt;
&lt;em&gt;closed&lt;/em&gt; over by the surrounding block.&lt;/p&gt;

&lt;p&gt;Here is an example of a closure in Pascal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;program test01;

function fn(c: integer): integer;
    function infn(x: integer): integer;
    begin
        infn := x * 100 + c
    end;
begin
    fn := infn(5)
end;

begin
    writeln(fn(3))
end.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This code uses an internal function &lt;code&gt;infn&lt;/code&gt; that can access an external&lt;br&gt;
variable &lt;code&gt;c&lt;/code&gt;, which is a parameter of the &lt;code&gt;fn&lt;/code&gt; function.  It works&lt;br&gt;
because the code does not need to create function objects.  It only&lt;br&gt;
needs to access the frame of the enclosing function on the stack.&lt;/p&gt;

&lt;p&gt;Closures are typically associated with first-class functions, because&lt;br&gt;
they are often used to return a function, which is a recurring pattern&lt;br&gt;
in functional programming.  But they can also be used in languages&lt;br&gt;
without first-class functions.  For example, in C, the global&lt;br&gt;
environment can be seen as a closure over functions, because functions&lt;br&gt;
can use global variables defined in it.  There is a similar concept in&lt;br&gt;
Pascal, which supports nested functions, that is, functions inside&lt;br&gt;
functions.  A function on an inner level can use the variables of the&lt;br&gt;
function on the outer level, and thus the outer level creates a&lt;br&gt;
closure.  However, inner functions cannot be returned, because when&lt;br&gt;
returning such functions, their environment has to be somehow&lt;br&gt;
preserved, which is not easy to implement in a language without&lt;br&gt;
garbage collection.&lt;/p&gt;

&lt;p&gt;Closures are often used for callbacks.  That is, when the user&lt;br&gt;
specifies a function to be called in certain circumstances: when an&lt;br&gt;
exception, an interrupt, or an event occurs.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;funarg&lt;/em&gt; problem is associated with closures.  When a function is&lt;br&gt;
returned as a value from another function, it can refer to non-local&lt;br&gt;
values.  The problem is that all the functions created in this manner&lt;br&gt;
must be able to share values when they refer to the same value, for&lt;br&gt;
example when there are different closures over the same environment,&lt;br&gt;
or when the user makes a copy of a closure.  In this case, in order to&lt;br&gt;
know when to free the environments no longer in use, the language (or&lt;br&gt;
its runtime) needs a garbage collector.&lt;/p&gt;
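
&lt;p&gt;A minimal sketch of two closures sharing one environment (the&lt;br&gt;
&lt;code&gt;make-counter&lt;/code&gt; name is invented for the example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (make-counter)
  (let ((count 0))
    ;; both closures are closed over the same binding of count
    (cons (lambda () (set! count (+ count 1)) count)   ; increment
          (lambda () count))))                         ; read

(let ((c (make-counter)))
  ((car c))          ; count becomes 1
  ((car c))          ; count becomes 2
  ((cdr c)))         ; evaluates to 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The shared environment must live as long as either closure is&lt;br&gt;
reachable, which is exactly why a garbage collector is needed.&lt;/p&gt;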

&lt;p&gt;Here is an example of a higher order function that returns a function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (make-multiplier n)
  (lambda (x)
    (* x n)))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This function saves within the returned closure the parameter passed,&lt;br&gt;
&lt;code&gt;n&lt;/code&gt;.  With different parameters it creates different functions.  It&lt;br&gt;
can be used in the following way:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(let ((mul5 (make-multiplier 5))
      (mul2 (make-multiplier 2)))
  (writeln (+ (mul5 6) (mul2 4))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We create two functions: &lt;code&gt;mul5&lt;/code&gt; and &lt;code&gt;mul2&lt;/code&gt;.  &lt;code&gt;mul5&lt;/code&gt; returns the&lt;br&gt;
argument multiplied by 5 and &lt;code&gt;mul2&lt;/code&gt; multiplies by 2.&lt;/p&gt;

&lt;p&gt;So when this program is executed, it prints 38, which is (5 * 6) +&lt;br&gt;
(2 * 4).&lt;/p&gt;

&lt;p&gt;&lt;a id="org2d1a4ad"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tail-call Optimization
&lt;/h2&gt;

&lt;p&gt;A Scheme compiler or interpreter cannot use the name Scheme if it&lt;br&gt;
does not perform tail call optimization; that is how it's written in&lt;br&gt;
the specification.  Therefore my interpreter, even if it's really&lt;br&gt;
minimalist and implements a very small subset of the Scheme&lt;br&gt;
programming language, implements some rudimentary kind of tail call&lt;br&gt;
optimization in order to be called Scheme.&lt;/p&gt;

&lt;p&gt;What does this term mean?  When a function calls another function, it&lt;br&gt;
does so by putting some values on the top of the stack: at least the&lt;br&gt;
return address, and often the function parameters as well.&lt;/p&gt;

&lt;p&gt;When a function, just before returning, calls another function and&lt;br&gt;
thus uses its result as the result of the current function, using more&lt;br&gt;
stack space can be avoided.  The goal is to make it appear as though&lt;br&gt;
the caller called one function, but another function returned.  This&lt;br&gt;
is usually done very easily, if the calling conventions are not too&lt;br&gt;
complicated or badly designed.  What needs to be done is to make the&lt;br&gt;
last function called believe that it must return to the place where&lt;br&gt;
the current function must return.  So, instead of pushing a new return&lt;br&gt;
address onto the stack, the current function jumps directly to the&lt;br&gt;
called function.  The details of what exactly must be done depend on&lt;br&gt;
the calling convention, of course, such as the way the parameters are&lt;br&gt;
passed, but the idea is always the same: perform the return as though&lt;br&gt;
the caller had called not the current function, but the last&lt;br&gt;
function.&lt;/p&gt;

&lt;p&gt;This idea does not depend on the existence of a stack.  If we some&lt;br&gt;
day find that the stack is outdated and a much better way exists to&lt;br&gt;
implement function calls, the idea of tail call optimization might&lt;br&gt;
still apply.&lt;/p&gt;

&lt;p&gt;&lt;a id="orga69bd0c"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Functionality
&lt;/h1&gt;

&lt;p&gt;So now I am going to say a few words about the functionality of&lt;br&gt;
PowerScheme: what it is, what it's designed to do, and a few of the&lt;br&gt;
properties that make it different from usual Scheme implementations.&lt;br&gt;
Sometimes I included a feature that I thought would be interesting;&lt;br&gt;
sometimes I wanted to include something but then realized it would be&lt;br&gt;
too complicated to code or not practical to use.&lt;/p&gt;

&lt;p&gt;&lt;a id="org67ecbab"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  PowerScheme is a Scheme interpreter
&lt;/h2&gt;

&lt;p&gt;One of the most obvious things is that, as a beginner, I don't start&lt;br&gt;
with the difficult parts.  I try to keep everything manageable, and&lt;br&gt;
when, in the future, I know more about how things work, I will add&lt;br&gt;
more advanced techniques and features.  For this reason my Scheme&lt;br&gt;
implementation is an interpreter.  Interpreters are typically easier&lt;br&gt;
to write than compilers, because compilers require knowledge of the&lt;br&gt;
target architecture, optimization techniques, and so on.&lt;/p&gt;

&lt;p&gt;My interpreter can run in two different modes.  The mode is determined&lt;br&gt;
when the program starts and cannot be switched at runtime.&lt;/p&gt;

&lt;p&gt;&lt;a id="org0be5949"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Interactive mode
&lt;/h3&gt;

&lt;p&gt;The first mode is the interactive mode.  PowerScheme starts in this&lt;br&gt;
mode when the interpreter is invoked without arguments.  It then works&lt;br&gt;
like a REPL: the program waits for the user to input lines of code (it&lt;br&gt;
doesn't react to individual characters), and then checks whether the&lt;br&gt;
input can be interpreted.  If the input contains a complete&lt;br&gt;
s-expression or a single value, it interprets it and prints the result&lt;br&gt;
value on the screen.  In some cases, when an error condition occurs&lt;br&gt;
during the execution of the program, the interpreter prints an error&lt;br&gt;
message instead of a return value.  This is done using PowerShell's&lt;br&gt;
support for exceptions.&lt;/p&gt;

&lt;p&gt;So here is how it's started.  From the directory containing the source&lt;br&gt;
code of the interpreter, simply invoke PowerShell by typing &lt;code&gt;pwsh&lt;br&gt;
-File PwScheme.ps1&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pwsh -File PwScheme.ps1
PowerScheme&amp;gt;(writeln "Hello, World!")
"Hello, World!"
PowerScheme&amp;gt;_
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now the interpreter can be used in interactive mode.  In order to exit&lt;br&gt;
there is a specialized built-in function &lt;code&gt;exit&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PowerScheme&amp;gt;(exit)
$ _
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a id="orgfe02a07"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Non-interactive mode
&lt;/h3&gt;

&lt;p&gt;The second mode is the non-interactive mode.  To enter this mode the&lt;br&gt;
program must be started with an argument, which is interpreted as a&lt;br&gt;
file name.  The referenced file must contain a program that&lt;br&gt;
PowerScheme can interpret.  This program is then read and executed.&lt;br&gt;
When PowerScheme finishes executing the program, it simply exits.&lt;/p&gt;

&lt;p&gt;And here is an example running in non-interactive mode, reading the&lt;br&gt;
Scheme source code from a file named &lt;code&gt;y-combinator.scm&lt;/code&gt; in the&lt;br&gt;
&lt;code&gt;tests&lt;/code&gt; directory.  As you can see, after executing the program and&lt;br&gt;
printing what the program told it to print, it exits immediately.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pwsh -File PwScheme.ps1 tests/y-combinator.scm
6
3628800
$ _
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a id="orgff49e34"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Lexical and Dynamic Scope in PowerScheme
&lt;/h2&gt;

&lt;p&gt;Another decision I had to make was regarding the scoping rules.&lt;br&gt;
There are two main ways variables are looked up: lexical scope and&lt;br&gt;
dynamic scope.  My previous interpreter, which I wrote in Kotlin, was&lt;br&gt;
exclusively lexically scoped.  I wrote this one as an exercise for&lt;br&gt;
chapter 2 of &lt;em&gt;Lisp in Small Pieces&lt;/em&gt;, which discusses the&lt;br&gt;
implementation of dynamic scope (a topic not discussed much further in&lt;br&gt;
the book), so I decided to implement both dynamic and lexical&lt;br&gt;
scope.&lt;/p&gt;

&lt;p&gt;Scheme by definition must be lexically scoped.  When Scheme was&lt;br&gt;
created, lexical scope was considered a foreign feature that came from&lt;br&gt;
Algol.  In any case, it would not have been a good thing to leave&lt;br&gt;
lexical scope out completely.  That's why I decided to do both.  The&lt;br&gt;
book shows implementations that use specialized keywords for declaring&lt;br&gt;
and accessing dynamically scoped variables.  Now I think that's a good&lt;br&gt;
idea.  It would be weird to want to modify a variable and then find&lt;br&gt;
out that it's actually dynamically scoped.  For this reason, I think,&lt;br&gt;
there is a convention in Common Lisp of surrounding the name of a&lt;br&gt;
dynamic variable with asterisks to distinguish it from the rest.&lt;/p&gt;

&lt;p&gt;I did it more or less the way it's done in Common Lisp: dynamic&lt;br&gt;
variables are accessed the same way lexical variables are.  For the&lt;br&gt;
declaration, both of variables and of functions, I use the keyword&lt;br&gt;
&lt;code&gt;dynamic&lt;/code&gt;.  This can lead to problems if the user tries to use the&lt;br&gt;
same name for a lexical and a dynamic variable.  A better solution&lt;br&gt;
would be to systematically check that they don't overlap, or to impose&lt;br&gt;
a distinction in the name, such as allowing only names between&lt;br&gt;
asterisks to be legal dynamic names.  That could still cause&lt;br&gt;
confusion, though, because in Common Lisp such variables are all in&lt;br&gt;
global scope, while in my implementation they can be defined at any&lt;br&gt;
level.&lt;/p&gt;

&lt;p&gt;Here is a little program I have written in order to test how my&lt;br&gt;
implementation of the dynamic scope works:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(dynamic y 1)
(define (f x)
 (writeln (list 'f y))
 (set! y (* x 10))
 (writeln (list 'f y))
 (g 7)
 (writeln (list 'f y)))
(define (g a)
 (writeln (list 'g y))
 (set! y a)
 (writeln (list 'g y)))

(writeln (list 'top y))
(set! y 3)
(writeln (list 'top y))
(f 5)
(writeln (list 'top y))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It prints the following, showing that even if functions modify the&lt;br&gt;
variable &lt;code&gt;y&lt;/code&gt;, they do it only in their scope and the previous value is&lt;br&gt;
restored when they return:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(TOP 1)
(TOP 3)
(F 3)
(F 50)
(G 50)
(G 7)
(F 50)
(TOP 3)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
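
&lt;p&gt;One way to obtain the save/restore behavior shown above (a sketch in&lt;br&gt;
Python, not my actual PowerShell code) is to keep the dynamic bindings&lt;br&gt;
on a stack: each function call pushes a copy of the current bindings,&lt;br&gt;
&lt;code&gt;set!&lt;/code&gt; mutates the copy, and returning pops it, restoring the&lt;br&gt;
previous values.&lt;/p&gt;

```python
# A sketch (illustrative Python, not the PowerScheme implementation) of
# dynamic variables with per-call save/restore: each call pushes a copy
# of the dynamic bindings, set! mutates the copy, returning pops it.

dynamic_env = [{"y": 1}]  # stack of dynamic frames; the top is current

def dget(name):
    return dynamic_env[-1][name]

def dset(name, value):
    dynamic_env[-1][name] = value

def call(fn, *args):
    # Entering a function saves the dynamic bindings...
    dynamic_env.append(dict(dynamic_env[-1]))
    try:
        return fn(*args)
    finally:
        # ...and returning restores them.
        dynamic_env.pop()

def g(a):
    dset("y", a)

def f(x):
    dset("y", x * 10)
    call(g, 7)
    return dget("y")   # still x * 10: g's set! was undone on return

dset("y", 3)
print(call(f, 5))  # 50
print(dget("y"))   # 3: f's set! was undone too
```

&lt;p&gt;Copying the whole frame on every call is wasteful; a real&lt;br&gt;
implementation would save and restore only the bindings a call&lt;br&gt;
actually touches, but the observable behavior matches the output&lt;br&gt;
above.&lt;/p&gt;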

&lt;p&gt;&lt;a id="orga600ca2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Exception Handling
&lt;/h2&gt;

&lt;p&gt;Another use of dynamic scope is handling exceptions.  I added to&lt;br&gt;
PowerScheme a very rudimentary exception handling feature in order to&lt;br&gt;
test how it relates to dynamic scope.  Let's say that each error&lt;br&gt;
condition has a dynamically scoped function associated with it.  Such&lt;br&gt;
functions can easily be defined by the user, for example this way:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(dynamic (*zero-crasher*) (writeln '*zero-crasher*))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The syntax of &lt;code&gt;dynamic&lt;/code&gt; is like that of &lt;code&gt;define&lt;/code&gt;, but the declarations&lt;br&gt;
are made in the dynamic environment, thus creating dynamic variables&lt;br&gt;
and functions.&lt;/p&gt;

&lt;p&gt;Let's say it is associated with the condition when something tries to&lt;br&gt;
divide by zero.  It is used this way:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (div a b) (if (= b 0) (crash (*zero-crasher*)) (/ a b)))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When we encounter a &lt;code&gt;b&lt;/code&gt; which is zero we crash the program by&lt;br&gt;
executing the &lt;em&gt;crasher&lt;/em&gt;.  The special form &lt;code&gt;crash&lt;/code&gt; evaluates its&lt;br&gt;
argument (here it executes the function that I call the &lt;em&gt;crasher&lt;/em&gt;) and&lt;br&gt;
crashes the program.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(div 6 0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This, for example, prints &lt;code&gt;*ZERO-CRASHER*&lt;/code&gt; and crashes.&lt;/p&gt;

&lt;p&gt;This error handler works globally until it's overwritten:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(let ((a 5)
      (b 0))
 (set! *zero-crasher* (lambda () (writeln 'test:from-let)))
 (div a b))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we can specify the handler we want.  So here, in this particular&lt;br&gt;
dynamic scope (this &lt;code&gt;let&lt;/code&gt;), we have our own error handler, our own&lt;br&gt;
&lt;em&gt;crasher!!&lt;/em&gt;&lt;/p&gt;
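
&lt;p&gt;The mechanism can be sketched in a few lines of Python (names&lt;br&gt;
invented for illustration; this is not the PowerScheme code): the&lt;br&gt;
crasher lives in a dynamically scoped slot, so whichever binding is&lt;br&gt;
innermost at crash time wins.&lt;/p&gt;

```python
# A sketch of dynamically scoped error handlers (illustrative Python;
# names like `crashers` are invented, this is not the PowerScheme code).

class Crash(Exception):
    pass

crashers = [lambda: "*zero-crasher*"]  # stack of dynamic handler bindings

def crash():
    # Run the innermost crasher, then abort.
    raise Crash(crashers[-1]())

def div(a, b):
    if b == 0:
        crash()
    return a // b

# Override the handler for an inner dynamic extent, like the `let` above:
crashers.append(lambda: "test:from-let")
try:
    div(5, 0)
except Crash as e:
    print(e)         # test:from-let
finally:
    crashers.pop()   # leaving the extent restores the previous handler
```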

&lt;p&gt;Of course its usage is limited and it is not powerful enough for many&lt;br&gt;
use cases.  I think it would be possible to implement it so that the&lt;br&gt;
program can continue execution.  But even if I implemented that, there&lt;br&gt;
would be a problem: this kind of exception handler seems to have&lt;br&gt;
become an anti-pattern.  When stack-unwinding exception handlers are&lt;br&gt;
used, the code becomes error-prone and difficult to reason about.&lt;br&gt;
This is the reason why I didn't implement it completely, just a&lt;br&gt;
minimal subset in order to see dynamic scope in action.  There is also&lt;br&gt;
another problem, even if it can somehow be solved: there needs to be a&lt;br&gt;
way to jump from the handler through the stack to the place where the&lt;br&gt;
handler was defined.  That is actually difficult to solve without&lt;br&gt;
rewriting a lot of things.  A possible strategy would be to return a&lt;br&gt;
special value, which would then have to be checked.  Scheme values are&lt;br&gt;
objects in my implementation anyway, so an integer value, for example&lt;br&gt;
0, could signify that an exception has been thrown and needs to be&lt;br&gt;
caught.  This situation is similar to the situation with&lt;br&gt;
continuations: it is something that has to be specified in the&lt;br&gt;
requirements from the start.  And as it is a bug-prone feature even&lt;br&gt;
when done right, I didn't implement it.  On the other hand, in a&lt;br&gt;
Scheme with continuations it is not difficult to implement and can be&lt;br&gt;
an interesting toy to play with.&lt;/p&gt;

&lt;p&gt;&lt;a id="org1d9e685"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Implementation Details
&lt;/h1&gt;

&lt;p&gt;In this part I will describe some of the implementation decisions I&lt;br&gt;
made for this interpreter.  I will not describe every detail of the&lt;br&gt;
implementation, but I will focus on the parts where I had to make a&lt;br&gt;
choice, or where I had to find a solution to a specific problem.&lt;/p&gt;

&lt;p&gt;So, I will start from the very beginning, when the program is first&lt;br&gt;
invoked: how the program mode of execution is chosen, and so on.&lt;/p&gt;

&lt;p&gt;Then I'll describe how the input is read.  First it goes through the&lt;br&gt;
tokenizer, which leaves only the essential elements of the program,&lt;br&gt;
then there is the parser, which attempts to find a structure in the&lt;br&gt;
program, then this structure is evaluated.&lt;/p&gt;

&lt;p&gt;The evaluation is also a rather complex part of the interpreter, and&lt;br&gt;
its various parts will also need to be described.  The various&lt;br&gt;
implementation decisions I made, such as the way function calls are&lt;br&gt;
made and how the environment works, will be given some attention in&lt;br&gt;
this section.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgfcb3853"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Structure of the Code
&lt;/h2&gt;

&lt;p&gt;So a good way to describe the structure of the code is to show in what&lt;br&gt;
files it is subdivided.  This is a rather simple program and it only&lt;br&gt;
has seven source files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;PwScheme.ps1&lt;/code&gt; is the entry point of the program.  It runs in
REPL mode or, if an argument is given, reads the code from a
file.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;Tokenizer.ps1&lt;/code&gt;: produces tokens from the input.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;Bag.ps1&lt;/code&gt;.  Before giving tokens to the parser, it's better for it
to have complete expressions, so that further steps can be executed
without delay.  So the &lt;code&gt;Bag&lt;/code&gt; provides an iterator that only yields
things that can be interpreted as Lisp objects: either primitive
values or pairs/lists.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;Parser.ps1&lt;/code&gt;.  Creates the structural representation of the code.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;Evaluator.ps1&lt;/code&gt;: evaluates the parsed code, turning it into the
running program.  As it is complex enough by itself, the previous
steps are needed so that it can do its work without worrying about
steps not directly related to it.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;Environment.ps1&lt;/code&gt; represents the environment of execution.  There
are two environments, lexical and dynamic, and they don't work the
same way, but I put them in one program unit nevertheless because
their functionality is somewhat similar.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;System.ps1&lt;/code&gt;.  It's a helper unit that is needed for
interaction with the OS.  It also contains some predefined
functions, like multiplication or modulo, which could be expressed
in the language, but it's much more convenient to use the functions
provided by PowerShell.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a id="org51d76d0"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Entry Point
&lt;/h2&gt;

&lt;p&gt;The file &lt;code&gt;PwScheme.ps1&lt;/code&gt; contains the entry point.  Its job is to&lt;br&gt;
start everything.  If you look at the code, you will see that it is&lt;br&gt;
divided into two parts.  The first part corresponds to the case in&lt;br&gt;
which it's invoked with a single argument, in which case it reads a&lt;br&gt;
file.  The second part is for the interactive mode, where its job is&lt;br&gt;
to show a prompt and execute the commands as the user enters them.&lt;/p&gt;

&lt;p&gt;As you can see, the &lt;code&gt;Bag&lt;/code&gt; is only needed in the second case,&lt;br&gt;
because there it needs to know whether there is enough input to&lt;br&gt;
evaluate.  When reading from a file, this problem does not occur: the&lt;br&gt;
program can just read until the end of the file, and an unfinished or&lt;br&gt;
illegal expression cannot be completed while the program is&lt;br&gt;
running.&lt;/p&gt;

&lt;p&gt;Another interesting point is how the exit function sends an exception&lt;br&gt;
to the REPL in order to tell it that it wants the program to&lt;br&gt;
terminate.  So, yes, I use an &lt;code&gt;ExitException&lt;/code&gt;.  I don't know how&lt;br&gt;
the PowerShell people look at it, but in some coding communities using&lt;br&gt;
an exception as a &lt;code&gt;goto&lt;/code&gt; is not welcome.  But there is no other&lt;br&gt;
&lt;code&gt;goto&lt;/code&gt; mechanism in PowerShell, and this is one of the cases where&lt;br&gt;
doing otherwise would make the code less efficient and more difficult&lt;br&gt;
to read: I would need to return a special value, and every function&lt;br&gt;
down the stack would have to check whether I want to exit or not…&lt;/p&gt;

&lt;p&gt;&lt;a id="orge6d6f85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tokenizer
&lt;/h2&gt;

&lt;p&gt;One of the things that is very simple and generic in my interpreter&lt;br&gt;
is the tokenizer.  It defines a &lt;code&gt;Token&lt;/code&gt; class for representing&lt;br&gt;
tokens.  It reads the input, removes the comments, interprets&lt;br&gt;
primitive values (like characters, strings, and numbers), and doesn't&lt;br&gt;
attempt to create any kind of structure.  Instead, when it encounters&lt;br&gt;
a special character, like a parenthesis, it creates a token for that&lt;br&gt;
character.&lt;/p&gt;
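
&lt;p&gt;The whole job can be sketched in a few lines (Python for&lt;br&gt;
illustration; the real tokenizer is PowerShell and somewhat more&lt;br&gt;
involved):&lt;/p&gt;

```python
# A minimal sketch of a tokenizer of this kind (illustrative Python,
# invented names): it drops comments, recognizes literals, and emits
# parentheses as individual tokens without building any structure.

import re

TOKEN_RE = re.compile(r'''
    ;[^\n]*               # comment, filtered out below
  | "(?:\\.|[^"\\])*"     # string literal (may contain ; or parens)
  | [()']                 # structural characters, one token each
  | [^\s()";']+           # number, symbol, or other atom
''', re.VERBOSE)

def get_tokens(text):
    return [t for t in TOKEN_RE.findall(text) if not t.startswith(";")]

print(get_tokens('(+ 1 (f "a;b")) ; comment'))
# ['(', '+', '1', '(', 'f', '"a;b"', ')', ')']
```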

&lt;p&gt;&lt;a id="org3e4a907"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bag
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;Bag&lt;/code&gt; is also a very simple unit.  All it does is make sure&lt;br&gt;
that when the &lt;code&gt;Parser&lt;/code&gt; tries to read the input, it gets something&lt;br&gt;
with a balanced number of parentheses.  This is the reason why it&lt;br&gt;
comes after the tokenizer: it needs to know whether a parenthesis is&lt;br&gt;
inside a literal (a string or a character) or whether it's an actual&lt;br&gt;
parenthesis that gives structure to the code.&lt;/p&gt;
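
&lt;p&gt;In essence it is just a balance counter (a Python sketch with&lt;br&gt;
invented names; the real &lt;code&gt;Bag&lt;/code&gt; is an iterator over tokens):&lt;/p&gt;

```python
# A sketch of what the Bag does (illustrative Python, invented names):
# it buffers tokens until the parentheses balance, and only then
# releases a complete expression to the parser.

def complete_expressions(tokens):
    buffer, depth = [], 0
    for token in tokens:
        buffer.append(token)
        if token == "(":
            depth += 1
        elif token == ")":
            depth -= 1
        if depth == 0:          # a balanced chunk: hand it over
            yield buffer
            buffer = []
    # an unfinished expression stays in the buffer, waiting for input

chunks = list(complete_expressions(["(", "f", "1", ")", "x", "(", "g", ")"]))
print(chunks)  # [['(', 'f', '1', ')'], ['x'], ['(', 'g', ')']]
```

&lt;p&gt;This only works because the tokenizer has already separated out the&lt;br&gt;
parentheses that appear inside string or character literals.&lt;/p&gt;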

&lt;p&gt;&lt;a id="org14929dd"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Parser
&lt;/h2&gt;

&lt;p&gt;Then comes the &lt;code&gt;Parser&lt;/code&gt;.  Even though many people say that parsing&lt;br&gt;
in general is still an unsolved problem, for some simple tasks, simple&lt;br&gt;
and effective parsers can be written.  This is certainly the case for&lt;br&gt;
Lisp.  I suspect it looks the way it does today partly to make the&lt;br&gt;
parsing problem easier.  Much of the development of parsing techniques&lt;br&gt;
came after Lisp already looked the way it looks now.&lt;/p&gt;

&lt;p&gt;So my &lt;code&gt;Parser&lt;/code&gt; only has one kind of non-terminal to parse: the&lt;br&gt;
s-expression.  That is, when it encounters an open parenthesis, it&lt;br&gt;
knows that it represents the start of an s-expression; when it&lt;br&gt;
encounters a closing parenthesis, that's the end of an s-expression.&lt;br&gt;
And there is nothing more to it.  Half of the &lt;code&gt;Parser.ps1&lt;/code&gt; file is&lt;br&gt;
actually taken up by the definition of the class &lt;code&gt;Exp&lt;/code&gt;, which&lt;br&gt;
represents an expression in the AST.  Basically everything is an&lt;br&gt;
&lt;code&gt;Exp&lt;/code&gt;: a literal, a pair, a function…&lt;/p&gt;
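
&lt;p&gt;A parser with a single non-terminal fits in a few lines (a Python&lt;br&gt;
sketch, not the actual &lt;code&gt;Parser.ps1&lt;/code&gt;, which builds &lt;code&gt;Exp&lt;/code&gt;&lt;br&gt;
objects instead of plain lists):&lt;/p&gt;

```python
# A sketch of a parser with a single non-terminal (illustrative Python,
# invented names): an open parenthesis starts an s-expression, a close
# ends it, and everything else is a leaf.

def parse(tokens):
    token = tokens.pop(0)
    if token == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)  # drop the ")"
        return lst
    return token       # a literal or a symbol

print(parse(["(", "+", "1", "(", "*", "2", "3", ")", ")"]))
# ['+', '1', ['*', '2', '3']]
```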

&lt;p&gt;&lt;a id="orgbe5f85a"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Evaluator
&lt;/h2&gt;

&lt;p&gt;Then it's the job of the evaluator to read the structure that the&lt;br&gt;
parser has produced and to execute (or evaluate, in the language of&lt;br&gt;
Lisp programmers) the expressions.  What is interesting is that usually&lt;br&gt;
in programming languages, there are statements and there are&lt;br&gt;
expressions.  People say that the statements are executed and&lt;br&gt;
expressions are evaluated.  But in Lisp, statements and expressions&lt;br&gt;
are the same thing.  Expressions are statements and statements are&lt;br&gt;
expressions.  If you think about it, it kind of makes sense, because&lt;br&gt;
statements and expressions are both tasks given to the computer to&lt;br&gt;
perform.  So they are closely related.  Why people decided to separate&lt;br&gt;
the two is difficult to understand.  Perhaps&lt;br&gt;
it has something to do with the history of how programming languages&lt;br&gt;
were created, like at first everything had to be a statement with only&lt;br&gt;
a few arithmetic expressions here and there or something similar.&lt;/p&gt;

&lt;p&gt;&lt;a id="org9c2efb6"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Types
&lt;/h3&gt;

&lt;p&gt;There are three kinds of types in my interpreter: primitive types&lt;br&gt;
(number, symbol, string, character, and boolean), pairs (they are&lt;br&gt;
represented by conses), and functions, of which there are three types&lt;br&gt;
(user-defined, built-in, and thunks).&lt;/p&gt;

&lt;p&gt;&lt;a id="orge0db9fd"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Functions
&lt;/h3&gt;

&lt;p&gt;Functions are an important topic in programming languages, and&lt;br&gt;
particularly in Lisp, which is sometimes even called a functional&lt;br&gt;
programming language.  There are many topics related to functions, a&lt;br&gt;
lot of different ways to implement them, and also a lot of things that&lt;br&gt;
are difficult to implement.  So here I will present several of the&lt;br&gt;
problems I encountered and the solutions I found while implementing&lt;br&gt;
functions in my PowerScheme interpreter.&lt;/p&gt;

&lt;p&gt;First, there are in my interpreter three different kinds of&lt;br&gt;
functions: user-defined, built-in, and thunks.  Now, after having made&lt;br&gt;
some choices and looked around, I see that it's common for a language&lt;br&gt;
to have functions of different kinds.  For example, C has functions,&lt;br&gt;
macros, and some keywords that look like functions but are not, like&lt;br&gt;
&lt;code&gt;sizeof&lt;/code&gt;.  Bash has user-defined functions, built-in functions,&lt;br&gt;
and external commands.  It's more or less the same situation in&lt;br&gt;
PowerShell.  And I see this pattern almost everywhere.&lt;/p&gt;

&lt;p&gt;So, some functions are defined by the user.  These functions are&lt;br&gt;
kept in the form of AST nodes.  They also save the environment in&lt;br&gt;
which they were defined so as not to lose access to non-local&lt;br&gt;
variables.&lt;/p&gt;

&lt;p&gt;Now I will show how they are created.  There is a specialized&lt;br&gt;
function whose job is to create functions.  It's called&lt;br&gt;
&lt;code&gt;Make-Function&lt;/code&gt;.  It is called from the &lt;code&gt;Evaluate&lt;/code&gt; function whenever&lt;br&gt;
it encounters a &lt;code&gt;lambda&lt;/code&gt; or a &lt;code&gt;define&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It's interesting that simply by inspecting this one function, you can&lt;br&gt;
tell a lot about how my interpreter works.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function Make-Function($name, $env, $paramsExp, $body) {
    $function = New-Object Fun
    $function.defEnv = $env.Duplicate($name)
    $function.isThunk = $false
    $function.params = @()
    $paramsCons = $paramsExp
    while ($paramsCons.type -eq "Cons") {
        $function.params += $paramsCons.car.value
        $paramsCons = $paramsCons.cdr
    }
    if ($paramsCons.type -eq "Symbol" -and $paramsCons.value -ne "NIL") {
        $function.dotParam = $paramsCons.value
    }
    $function.body = $body
    return New-Object Exp -ArgumentList ([ExpType]::Function), $function
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It takes as parameters the name of the function, which is here only&lt;br&gt;
for debugging purposes; the environment in which it was defined; the&lt;br&gt;
expression which holds the parameters, which in Lisp is typically a&lt;br&gt;
list of identifiers; and the body, which is executed when the function&lt;br&gt;
is called.&lt;/p&gt;

&lt;p&gt;You see that a function is in fact an object.  Even though the&lt;br&gt;
authors of PowerShell say that it is not an object-oriented language&lt;br&gt;
(they call it object-&lt;em&gt;based&lt;/em&gt; &lt;sup id="bc24fab90f95f459a9c20cc3d8e7a767"&gt;payette_windows_2007&lt;/sup&gt;), the existence of&lt;br&gt;
objects is rather convenient and makes it easier to write code.&lt;/p&gt;

&lt;p&gt;Then the weird thing is that we duplicate the environment.  Don't&lt;br&gt;
worry, it's not a deep copy of the environment.  It is needed because&lt;br&gt;
the environment itself is an object and if we don't make a copy, its&lt;br&gt;
state will change (for example the scope level), and it's not what is&lt;br&gt;
supposed to happen.  Moreover, the &lt;code&gt;Duplicate&lt;/code&gt; function makes it&lt;br&gt;
possible for the environment to have branching stacks for each&lt;br&gt;
variable.  I will explain it in more detail a little bit later.&lt;/p&gt;

&lt;p&gt;Then we mark that it's not a thunk.  Because it's just a regular&lt;br&gt;
user-defined function.&lt;/p&gt;

&lt;p&gt;Then we walk through the linked list of conses and we append the&lt;br&gt;
elements to the &lt;code&gt;params&lt;/code&gt; field.  Then we need to take care of the case&lt;br&gt;
of the dotted parameter.  It's really complicated and not very well&lt;br&gt;
tested.  I wish it wasn't a required feature of Scheme.&lt;/p&gt;
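
&lt;p&gt;What handling the dotted parameter amounts to can be sketched like&lt;br&gt;
this (Python, invented names; the real code walks the cons cells as in&lt;br&gt;
&lt;code&gt;Make-Function&lt;/code&gt; above):&lt;/p&gt;

```python
# A sketch of binding a Scheme-style parameter list with a dotted
# parameter (illustrative Python, invented names): fixed parameters are
# bound one-to-one, and the dotted parameter, if present, collects the
# leftover arguments as a list.

def bind_params(params, dot_param, args):
    if len(args) < len(params):
        raise TypeError("too few arguments")
    env = dict(zip(params, args))      # one-to-one bindings
    rest = list(args[len(params):])    # whatever is left over
    if dot_param is not None:
        env[dot_param] = rest          # e.g. (lambda (a b . rest) ...)
    elif rest:
        raise TypeError("too many arguments")
    return env

print(bind_params(["a", "b"], "rest", [1, 2, 3, 4]))
# {'a': 1, 'b': 2, 'rest': [3, 4]}
```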

&lt;p&gt;Then, at last, we insert the body into the function and we wrap the&lt;br&gt;
function in an &lt;code&gt;Exp&lt;/code&gt;.  A function is not an &lt;code&gt;Exp&lt;/code&gt; itself, but it can&lt;br&gt;
be included in one.  Remember, an &lt;code&gt;Exp&lt;/code&gt; is a type that can hold any&lt;br&gt;
Lisp value, including functions.&lt;/p&gt;

&lt;p&gt;Some functions are built in.  These are functions that perform some&lt;br&gt;
input and output (like reading files or writing to the console),&lt;br&gt;
arithmetic functions, type checking functions, and the &lt;code&gt;EXIT&lt;/code&gt;&lt;br&gt;
function.&lt;/p&gt;

&lt;p&gt;These functions are defined in the &lt;code&gt;System.ps1&lt;/code&gt; file.  I call it a&lt;br&gt;
unit: I would like to call it a module, but that word means something&lt;br&gt;
else in PowerShell.&lt;/p&gt;

&lt;p&gt;So now, let's look at some examples:&lt;/p&gt;

&lt;p&gt;Let's begin with a simple function whose job is to add two numbers.&lt;br&gt;
In my implementation only integers are supported:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function SysPlus($a) {
    $result = 0
    foreach ($num in $a) {
        $result += $num.value
    }
    return New-Object Exp -ArgumentList ([ExpType]::Number), $result
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We receive an array of integers, which we have to add together.  So&lt;br&gt;
we add them one by one and then enclose the sum in an &lt;code&gt;Exp&lt;/code&gt; of&lt;br&gt;
type &lt;code&gt;Number&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The next function is also easy to understand:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function SysWriteLn($a) {
    foreach ($exp in $a) {
        Write-Host -NoNewline $exp
    }
    Write-Host ""
    return $null
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It has almost the same structure.  It returns &lt;code&gt;$null&lt;/code&gt;, which&lt;br&gt;
signifies the lack of a return value.  The user should not expect&lt;br&gt;
functions that exist only for side effects to return a value.&lt;/p&gt;

&lt;p&gt;The next function is a funny one, it's called &lt;code&gt;EVAL&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function SysEval($a) {
    return Evaluate $a[0] (Make-Global-Environment) (New-Object Environment "#&amp;lt;eval&amp;gt;") $false
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Yes, it calls &lt;code&gt;Evaluate&lt;/code&gt; from the evaluator and passes it the&lt;br&gt;
value it receives, though with a fresh global environment.  The second&lt;br&gt;
environment is the dynamic environment, which is empty.&lt;/p&gt;

&lt;p&gt;Apply, on the other hand, is a little bit larger:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function SysApply($funExp, $argsExp) {
    $function = Evaluate $funExp $env $denv $false
    $env = Make-Global-Environment
    $denv = New-Object Environment "#&amp;lt;apply&amp;gt;"
    if ($function.type -eq "BuiltIn") {
        return Call-BuiltIn $function.value $argsExp $env $denv $false
    } elseif ($function.type -eq "Function") {
        return Invoke $function $argsExp $env $denv $false
    }
    return $null
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And it uses either &lt;code&gt;Invoke&lt;/code&gt; or &lt;code&gt;Call-BuiltIn&lt;/code&gt; depending on whether&lt;br&gt;
it's a built-in function or not.&lt;/p&gt;

&lt;p&gt;This is my exception-handling function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function SysCrash($args) {
    Evaluate $args[0] $env $denv $false
    throw [ExitException] "CRASH"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I know it looks like cheating because I implement exceptions using&lt;br&gt;
exceptions, but it's rather a difficult thing to implement and&lt;br&gt;
requires more work than simply changing a few things here and there.&lt;/p&gt;

&lt;p&gt;And here is the function used to read from standard input.  It&lt;br&gt;
displays a prompt and waits for a string followed by a newline:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function SysRead($a, $env, $denv) {
    if ($a.length -eq 1) {
        $msg = $a[0].value
    } else {
        $msg = "&amp;gt;"
    }
    $text = Read-Host $msg
    $tokens = Get-Tokens $text
    $exps = Parse-Tokens $tokens
    $exps | ForEach-Object {
        try {
            $exp = Evaluate $_ $env $denv $false
        } catch [EvaluatorException] {
            Write-Output ("Exception in SysRead: " + $($_.Exception.msg))
        }
    }
    return $exp
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This function needs to first read from input, then go through the&lt;br&gt;
steps of tokenization, parsing and evaluation.  Then there is a loop&lt;br&gt;
in case the user decided to put several expressions on the same line.&lt;br&gt;
The return value is the last expression evaluated.&lt;/p&gt;

&lt;p&gt;Next is an example of a function that checks whether a value is a&lt;br&gt;
pair.  Here you can see how boolean values are put into an&lt;br&gt;
&lt;code&gt;Exp&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function SysIsPair($a) {
    return New-Object Exp -ArgumentList ([ExpType]::Boolean), ($a[0].type -eq "Pair")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And here is the &lt;code&gt;EXIT&lt;/code&gt; function.  Of course there is nothing&lt;br&gt;
surprising about it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function SysExit() {
    throw [ExitException] "EXIT"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a id="org353091c"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Local Functions and Thunks
&lt;/h3&gt;

&lt;p&gt;Scheme has a construct for defining recursive functions that are&lt;br&gt;
local: it's called &lt;code&gt;letrec&lt;/code&gt;.  Here is an example that illustrates&lt;br&gt;
how it works:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(letrec ((odd
           (lambda (lst)
            (cond
             ((empty? lst) empty)
             (else (cons (car lst) (even (cdr lst)))))))
         (even
           (lambda (lst)
            (cond
             ((empty? lst) empty)
             (else (odd (cdr lst)))))))
  (writeln (even '(1 2 3 4 5 6)))
  (writeln (odd '(1 2 3 4 5 6))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output of this expression is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(2 4 6)
(1 3 5)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;So it defines two local functions in terms of each other.  There is&lt;br&gt;
a problem with implementing &lt;code&gt;letrec&lt;/code&gt;: &lt;code&gt;let&lt;/code&gt; bindings are&lt;br&gt;
evaluated in the enclosing environment and don't have access to the&lt;br&gt;
other bindings of the same &lt;code&gt;let&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here we see that each of these local functions wants to call the&lt;br&gt;
other, which means both functions have to know about each other.&lt;/p&gt;

&lt;p&gt;So, what to do?  A simple solution would be to define an environment&lt;br&gt;
for the let parameters, add these names with a temporary value to be&lt;br&gt;
overwritten, and then assign the lambda functions to these variables.&lt;br&gt;
This way, when they are defined, they are in an environment visible to&lt;br&gt;
these local functions.&lt;/p&gt;

&lt;p&gt;This strategy works reasonably well.  But, unfortunately,&lt;br&gt;
I found a problem with this approach.  Let's look at&lt;br&gt;
the following code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(letrec ((f (lambda (x) (+ (g (+ x 1)) 1)))
         (x (g 4))
         (g (lambda (x) (* x 10))))
  (writeln (+ x (f 7))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Indeed, one of the parameters, &lt;code&gt;x&lt;/code&gt;, is not a function, but it depends&lt;br&gt;
on the existence of the function &lt;code&gt;g&lt;/code&gt; nevertheless!  What happens is&lt;br&gt;
that at the moment values are assigned, the function &lt;code&gt;g&lt;/code&gt; is not available&lt;br&gt;
yet, but the value of &lt;code&gt;x&lt;/code&gt; should already be defined: it's 40.  And the value&lt;br&gt;
of the whole expression is &lt;code&gt;(+ 40 (+ (g 8) 1)) = (+ 40 81) = 121&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;How can we make it work?  It's actually easy.  We have seen that it&lt;br&gt;
works with functions, which means that if we transform &lt;code&gt;(g 4)&lt;/code&gt; into a&lt;br&gt;
function, it will work.  So how do we do that?  That's also&lt;br&gt;
easy: &lt;code&gt;(lambda () (g 4))&lt;/code&gt;, and nothing more is needed.  This kind of&lt;br&gt;
function is called a thunk.  The name is a jocular past tense of&lt;br&gt;
"think": the program still has to do a little thinking session&lt;br&gt;
in order to set the value.  That's why it's called a thunk.&lt;/p&gt;

&lt;p&gt;This is how a thunk is created:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function Make-Thunk($exp, $env) {
    $function = New-Object Fun
    $function.defEnv = $env
    $function.params = @()
    $function.isThunk = $true
    $nil = New-Object Exp -ArgumentList ([ExpType]::Symbol), "NIL"
    $cons = New-Object Exp -ArgumentList ([ExpType]::Cons), $exp, $nil
    $function.body = $cons
    return New-Object Exp -ArgumentList ([ExpType]::Function), $function
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;So it's a function without parameters, evaluated in the lexical&lt;br&gt;
environment, marked as a thunk, with the corresponding expression as&lt;br&gt;
the only element of the body; it is then wrapped in an &lt;code&gt;Exp&lt;/code&gt; and&lt;br&gt;
returned.&lt;/p&gt;

&lt;p&gt;But actually with this approach there is another problem.  If we&lt;br&gt;
simply transform it into a function, the references will refer to a&lt;br&gt;
function and will not work anymore: an expression like &lt;code&gt;(+ x 10)&lt;/code&gt; is&lt;br&gt;
meaningless if &lt;code&gt;x&lt;/code&gt; is a function.  In this case it should be&lt;br&gt;
transformed into &lt;code&gt;(+ (x) 10)&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For this reason a special kind of function is needed, which I called a&lt;br&gt;
thunk in my implementation (elsewhere this term can have a&lt;br&gt;
slightly different meaning).  This is how it works: the first time&lt;br&gt;
&lt;code&gt;x&lt;/code&gt; is needed, the thunk function is executed, the result is&lt;br&gt;
stored in the &lt;code&gt;x&lt;/code&gt; variable, overwriting the thunk, and the value is&lt;br&gt;
returned.  The next time &lt;code&gt;x&lt;/code&gt; is evaluated, the stored value is used.  That&lt;br&gt;
means a thunk lives only for a single use; afterwards it is&lt;br&gt;
not needed anymore.&lt;/p&gt;
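
&lt;p&gt;The self-overwriting behaviour can be sketched outside PowerShell as well.  Here is a minimal Python illustration (the names &lt;code&gt;make_thunk&lt;/code&gt; and &lt;code&gt;lookup&lt;/code&gt; are mine, not PowerScheme's): the binding starts out holding a thunk, and the first lookup forces it and replaces the slot with the computed value:&lt;/p&gt;

```python
# Minimal sketch of a self-replacing thunk (illustrative names, not PowerScheme's).
def make_thunk(env, name, expr):
    """Store a zero-argument function that, when forced, replaces itself."""
    def thunk():
        value = expr()       # evaluate the delayed expression only now
        env[name] = value    # overwrite the thunk with the computed value
        return value
    env[name] = thunk

def lookup(env, name):
    """Force the thunk on first access; afterwards return the plain value.
    (Simplification: real values are never Python callables here.)"""
    slot = env[name]
    return slot() if callable(slot) else slot

env = {}
make_thunk(env, "x", lambda: g(4))  # g is not defined yet, as in the letrec example
def g(n):                           # now g exists
    return n * 10
print(lookup(env, "x"))             # → 40: the thunk runs only now
print(lookup(env, "x"))             # → 40: the slot already holds the value
```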

&lt;p&gt;&lt;a id="org46bbbbe"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Environment
&lt;/h2&gt;

&lt;p&gt;An environment is a data structure that, given a name, provides the&lt;br&gt;
corresponding variable entity.  This is what makes it possible to&lt;br&gt;
use variables that hold values which can be modified.&lt;/p&gt;

&lt;p&gt;So an environment has three basic operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  declare a variable&lt;/li&gt;
&lt;li&gt;  assign a value to a variable&lt;/li&gt;
&lt;li&gt;  get a value of a variable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In order to allow variables to be used in blocks, there are two&lt;br&gt;
additional operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  enter a scope&lt;/li&gt;
&lt;li&gt;  leave a scope&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After entering a scope, we can declare new variables without modifying&lt;br&gt;
the old ones, while the old ones are still visible.  These variables&lt;br&gt;
are created on the same level.  When leaving a scope, all the&lt;br&gt;
variables of the current level are forgotten (not necessarily destroyed).&lt;/p&gt;

&lt;p&gt;&lt;a id="orgbff7113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Lexical and Dynamic Environments
&lt;/h3&gt;

&lt;p&gt;PowerScheme has two scopes: lexical and dynamic.  In order to have&lt;br&gt;
both I use two environments: a lexical environment and a dynamic&lt;br&gt;
environment.&lt;/p&gt;

&lt;p&gt;They can share variables, but the lexical environment in my&lt;br&gt;
implementation is used first to lookup a value.&lt;/p&gt;

&lt;p&gt;In order to tell which one to use when defining variables, I use two&lt;br&gt;
different keywords: &lt;code&gt;define&lt;/code&gt; is used for lexically scoped variables,&lt;br&gt;
and &lt;code&gt;dynamic&lt;/code&gt; is used for dynamically scoped variables.  They are used&lt;br&gt;
the same way.  Here are some examples:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define a 1)
(define b 2)
(dynamic c 3)
(dynamic d 4)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It's also possible to declare lexically and dynamically scoped&lt;br&gt;
functions.  Because functions are just values, there is no need for&lt;br&gt;
restrictions here.  Moreover, dynamically scoped functions can have&lt;br&gt;
some interesting uses, like for example, for handling exceptions.&lt;/p&gt;

&lt;p&gt;So here are some examples of function declarations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (lex-fun b)
  (+ 10 b))
(dynamic (dyn-fun b)
  (+ 100 b))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, I tried to replicate as closely as possible the&lt;br&gt;
lexical scoping syntax for dynamic scope.&lt;/p&gt;

&lt;p&gt;&lt;a id="org4d6d863"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Differences between Lexical and Dynamic Scope
&lt;/h3&gt;

&lt;p&gt;Now I would like to compare lexical and dynamic scope from the point&lt;br&gt;
of view of the implementation.&lt;/p&gt;

&lt;p&gt;There are two important differences between the two.  When functions&lt;br&gt;
are created, only the lexical environment of the creation is saved&lt;br&gt;
inside the function.  It's like when you write a book: throughout its&lt;br&gt;
whole life it will refer to the same events as when it was written.&lt;/p&gt;

&lt;p&gt;Here is the code snippet from the &lt;code&gt;Evaluate&lt;/code&gt; function that implements&lt;br&gt;
&lt;code&gt;define&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"DEFINE" {
    if ($cdr.car.Type -eq "Symbol") {
        $name = $cdr.car.Value
        $value = Evaluate $cdr.cdr.car $env $denv $false
        $env.Declare($name, $value)
        return $null
    } else {
        # (define (&amp;lt;name&amp;gt; . &amp;lt;params&amp;gt;) &amp;lt;body&amp;gt;)
        $name = $cdr.car.car.value
        $params = $cdr.car.cdr
        $body = $cdr.cdr
        $function = (Make-Function $name $env $params $body)
        $env.Declare($name, $function)
        return $null
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It's part of a &lt;code&gt;switch&lt;/code&gt;, so the first line checks whether we have&lt;br&gt;
encountered &lt;code&gt;define&lt;/code&gt;.  From there, I think the code is easy to follow.  For&lt;br&gt;
&lt;code&gt;dynamic&lt;/code&gt; the code is very similar.  It uses the dynamic environment&lt;br&gt;
for the definition, with specialized environment functions,&lt;br&gt;
because declaring dynamic entities can differ from declaring&lt;br&gt;
lexical entities.  Either way, &lt;code&gt;$null&lt;/code&gt; is returned in order not to pass&lt;br&gt;
more values than necessary.&lt;/p&gt;

&lt;p&gt;On the other hand, when the function is called, the current lexical&lt;br&gt;
environment is not used at all, it's as if we wanted to isolate the&lt;br&gt;
function from what happens now and let it live in a controlled way by&lt;br&gt;
giving only the necessary values.&lt;/p&gt;

&lt;p&gt;Contrary to the current lexical environment, the contents of the&lt;br&gt;
dynamic environment can be used by the function, and that's where its&lt;br&gt;
name comes from: it's dynamically provided to the function.&lt;/p&gt;

&lt;p&gt;Here is the &lt;code&gt;Invoke&lt;/code&gt; function that is used to call user-defined&lt;br&gt;
functions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function Invoke($function, $argsExp, $env, $denv, $tco) {
    $funVal = $function.value
    $params = $funVal.params
    $defEnv = $funVal.defEnv
    $denv.EnterScope()
    $argList, $dotValue = Eval-Args $argsExp $params.length $env $denv
    if (!$tco) {
        $defEnv.EnterScope()
        Extend-With-Args $argList $dotValue $function $defEnv $denv
        $result = Eval-Body $function.value.body $defEnv $denv $true
        $defEnv.LeaveScope()
    } else {
        Extend-With-Args $argList $dotValue $function $defEnv $denv
        $result = Eval-Body $function.value.body $defEnv $denv $true
    }
    $denv.LeaveScope()
    return $result
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The parameter &lt;code&gt;$env&lt;/code&gt;, which represents the current lexical environment&lt;br&gt;
is used only for the evaluation of the arguments.  Then, instead of&lt;br&gt;
it, the definition environment, that was saved during the creation of&lt;br&gt;
the function, is used.  The dynamic environment, &lt;code&gt;$denv&lt;/code&gt;, on the other&lt;br&gt;
hand is used as it is and passed to the &lt;code&gt;Eval-Body&lt;/code&gt; function directly.&lt;/p&gt;
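
&lt;p&gt;To make the role of the saved definition environment concrete, here is a small Python sketch (my own illustrative names, not the real &lt;code&gt;Invoke&lt;/code&gt;): the body of a closure sees only its definition environment plus the argument bindings, never the caller's lexical environment:&lt;/p&gt;

```python
# Sketch: a closure carries its definition environment (illustrative, not the real Invoke).
def make_function(def_env, param, body):
    def invoke(arg, caller_env):
        # The caller's lexical environment is received but deliberately unused:
        # the body only ever sees the definition environment plus the bindings.
        local = dict(def_env)   # enter a fresh scope over the saved environment
        local[param] = arg
        return body(local)
    return invoke

outer = {"n": 10}
add_n = make_function(outer, "x", lambda env: env["x"] + env["n"])
caller = {"n": 999}             # the caller's n is invisible to the body
print(add_n(5, caller))         # → 15, not 1004
```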

&lt;p&gt;&lt;a id="org3c539cb"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Tail Call Optimization
&lt;/h3&gt;

&lt;p&gt;My interpreter, PowerScheme, also implements tail call&lt;br&gt;
optimization.  When a call is the last expression of a function&lt;br&gt;
body, it's possible to perform a simplified version of the call.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;Invoke&lt;/code&gt; function above you can see that for tail calls, that&lt;br&gt;
is when &lt;code&gt;$tco&lt;/code&gt; is true, we do not enter nor leave the definition&lt;br&gt;
environment.  We simply evaluate the body as if it were in the&lt;br&gt;
continuation of our presently interpreted code.&lt;/p&gt;

&lt;p&gt;So this is the &lt;code&gt;Eval-Body&lt;/code&gt; function, the one that decides&lt;br&gt;
whether a particular evaluation is the last or not:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function Eval-Body($body, $env, $denv) {
    Define-Defines $body $env
    $cons = $body
    $result = New-Object Exp -ArgumentList ([ExpType]::Symbol), "NIL"
    while ($cons.type -eq "Cons") {
        $tco = ($cons.cdr.type -eq "Symbol")
        $result = Evaluate $cons.car $env $denv $tco
        $cons = $cons.cdr
    }
    return $result
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When we have the last cons, its &lt;code&gt;cdr&lt;/code&gt; is &lt;code&gt;NIL&lt;/code&gt;.  That is what we&lt;br&gt;
test.  Of course, the body itself must be the last item of the&lt;br&gt;
function.  A body is the right unit for tail-call&lt;br&gt;
optimization, because a body has its own scope.  And since there is&lt;br&gt;
no such thing as a program stack in the interpreter, I only need to&lt;br&gt;
care about the variables.  PowerShell itself has a stack, of course, but from&lt;br&gt;
this code I cannot force it to perform tail-call optimization.  For&lt;br&gt;
this project, tail-call optimization simply means that values do&lt;br&gt;
not accumulate in the environment when that is not needed.  Without&lt;br&gt;
access to the real stack, true TCO is difficult to implement.&lt;/p&gt;
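
&lt;p&gt;The tail-position test itself is easy to isolate.  Here is a Python sketch of the same idea (names are mine): a body is a cons chain ending in &lt;code&gt;NIL&lt;/code&gt;, and an expression is in tail position exactly when its &lt;code&gt;cdr&lt;/code&gt; is &lt;code&gt;NIL&lt;/code&gt;:&lt;/p&gt;

```python
# Sketch of the last-expression test in Eval-Body: a body is a cons chain
# terminated by NIL, and an expression is in tail position when its cdr is NIL.
NIL = "NIL"

def body_with_tail_flags(body):
    """Yield (expression, is_tail) pairs for a body given as nested (car, cdr) pairs."""
    cons = body
    while isinstance(cons, tuple):
        car, cdr = cons
        yield car, cdr == NIL
        cons = cdr

body = ("a", ("b", ("c", NIL)))   # the body (a b c)
print(list(body_with_tail_flags(body)))
# → [('a', False), ('b', False), ('c', True)]
```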

&lt;p&gt;The purpose of &lt;code&gt;Define-Defines&lt;/code&gt; is to make sure that all the defines&lt;br&gt;
of a given lexical level are available to everything inside the block.&lt;br&gt;
I will say more later about the problems this solves.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgc98fb63"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Environment Representation
&lt;/h3&gt;

&lt;p&gt;Now let's look at how the environment is represented internally.&lt;br&gt;
When I wrote my first interpreter, &lt;em&gt;MUSH&lt;/em&gt;, I implemented the&lt;br&gt;
environment as a hash-table whose keys were variable names together&lt;br&gt;
with their lexical levels (the lexical level was a way to avoid&lt;br&gt;
saving the definition environment of functions: each variable&lt;br&gt;
was assigned a lexical level in addition to its name).  The values&lt;br&gt;
were stacks of values.  Later I found out that there are several&lt;br&gt;
other ways to represent the environment: a single list, one list&lt;br&gt;
per frame, and so on.  It also seems that having&lt;br&gt;
a stack per name is a rather common way of implementing the&lt;br&gt;
environment.  So, in contrast to the Kotlin interpreter, where I used&lt;br&gt;
a list per lexical level, here I decided to use an associative array&lt;br&gt;
from names to stacks of values.&lt;/p&gt;

&lt;p&gt;Now I will describe how the basic operations on the environment are&lt;br&gt;
performed.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Variable declaration.&lt;/p&gt;

&lt;p&gt;The environment keeps track of the current lexical level.  A variable&lt;br&gt;
is declared in the current level if it was not declared before, or&lt;br&gt;
assigned if the variable name and the level match.  If the variable&lt;br&gt;
exists, but with a level lower than the current one, the new value is&lt;br&gt;
pushed on top of the old one: the new value is put in the array and&lt;br&gt;
the old one is linked to it as its &lt;code&gt;next&lt;/code&gt; value.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Variable update.&lt;/p&gt;

&lt;p&gt;The variable is looked up by name and, if found, it is assigned.&lt;br&gt;
Very simple.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Variable lookup.&lt;/p&gt;

&lt;p&gt;Similar to update: just find by name and return the value.  If it's&lt;br&gt;
not in the array, then the name cannot be resolved.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enter scope.&lt;/p&gt;

&lt;p&gt;It's done by increasing the current lexical level.  Also not really&lt;br&gt;
complicated.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Leave scope.&lt;/p&gt;

&lt;p&gt;All the variables of the current level are deleted: each is popped&lt;br&gt;
from its stack, and if it was the last value, the name is removed from&lt;br&gt;
the array.  The lexical level is then decreased.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
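
&lt;p&gt;The five operations above can be sketched in a few lines of Python (a simplified model of the described scheme, not PowerScheme's actual code):&lt;/p&gt;

```python
# Sketch of the environment described above: an associative array from names
# to stacks of (level, value) pairs (names and layout are mine, not PowerScheme's).
class Env:
    def __init__(self):
        self.level = 0
        self.table = {}   # name -> list of (level, value), top of stack at the end

    def enter_scope(self):
        self.level += 1

    def leave_scope(self):
        # Forget every variable declared on the current level.
        for name in list(self.table):
            stack = self.table[name]
            if stack and stack[-1][0] == self.level:
                stack.pop()
            if not stack:
                del self.table[name]
        self.level -= 1

    def declare(self, name, value):
        stack = self.table.setdefault(name, [])
        if stack and stack[-1][0] == self.level:
            stack[-1] = (self.level, value)    # redeclaration on the same level assigns
        else:
            stack.append((self.level, value))  # otherwise push on top of the old value

    def update(self, name, value):
        stack = self.table[name]
        stack[-1] = (stack[-1][0], value)

    def lookup(self, name):
        return self.table[name][-1][1]

env = Env()
env.declare("x", 1)
env.enter_scope()
env.declare("x", 2)      # shadows the outer x
print(env.lookup("x"))   # → 2
env.leave_scope()
print(env.lookup("x"))   # → 1
```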

&lt;p&gt;&lt;a id="org4de7a5c"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Branching Stacks
&lt;/h3&gt;

&lt;p&gt;These are not the only operations performed on the&lt;br&gt;
environment.  There is also an operation called &lt;code&gt;Duplicate&lt;/code&gt;, used&lt;br&gt;
when creating a new function.  A copy of the environment is made:&lt;br&gt;
not a deep copy, only a copy of the array.  The reason is that&lt;br&gt;
if the array is later modified (for example, the parent block ends and&lt;br&gt;
the level is decreased), we still have access to everything we need&lt;br&gt;
for as long as the function is executed.&lt;/p&gt;

&lt;p&gt;What this means is that the stacks of the variables can now have&lt;br&gt;
branches.  Since it is not a deep copy, the objects in both arrays are&lt;br&gt;
the same, and the new function and its parent can each push new&lt;br&gt;
elements on top of the old ones; thus branches are created.&lt;/p&gt;
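
&lt;p&gt;A small Python sketch of the branching (illustrative, not the real implementation): the shallow copy shares the cells, and each side can then push its own new cell on top of the shared one:&lt;/p&gt;

```python
# Sketch of branching stacks: variables are linked cells, and Duplicate copies
# only the array of name -> top-cell references, not the cells themselves.
class Cell:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next   # the older cell underneath this one

parent = {"x": Cell(1)}
child = dict(parent)                 # shallow copy: both share the same Cell(1)

parent["x"] = Cell(2, parent["x"])   # the parent pushes on top of the shared cell
child["x"] = Cell(3, child["x"])     # the child pushes its own branch

print(parent["x"].value, parent["x"].next.value)  # → 2 1
print(child["x"].value, child["x"].next.value)    # → 3 1
```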

&lt;p&gt;&lt;a id="org935c8df"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Define-Defines
&lt;/h3&gt;

&lt;p&gt;One of the implications of branching stacks is that a function is not&lt;br&gt;
defined in its own environment, which is a problem for recursive and&lt;br&gt;
mutually recursive functions.  To fix this problem I wrote a function&lt;br&gt;
called &lt;code&gt;Define-Defines&lt;/code&gt;.  It runs before the evaluation of a&lt;br&gt;
block, in the function &lt;code&gt;Eval-Body&lt;/code&gt;.  It makes a quick scan of the&lt;br&gt;
whole block looking for defines.  When it finds one, it adds it to the&lt;br&gt;
environment with a temporary value.  This value is not null; it's&lt;br&gt;
a structure I call a cell, which contains the lexical level&lt;br&gt;
of definition, the value, and a pointer to the next cell.  When&lt;br&gt;
the function is created, it duplicates the array, which copies the&lt;br&gt;
reference to the cell without copying the cell itself.  This is important&lt;br&gt;
because once all the defines are evaluated, these cells are filled and&lt;br&gt;
their new values become visible both to the function and to the block&lt;br&gt;
where the function is defined.&lt;/p&gt;
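
&lt;p&gt;The mechanism can be sketched like this in Python (a simplified model with my own names): the cell is shared between the block's array and the function's copy, so filling it later is visible to both:&lt;/p&gt;

```python
# Sketch of the Define-Defines idea: names are pre-declared with empty cells,
# a function's copied environment shares those cells, and filling a cell later
# makes the value visible through every copy.
class Cell:
    def __init__(self):
        self.value = None    # placeholder, filled when the define is evaluated

block_env = {"helper": Cell()}   # scanned ahead: helper will be defined in this block
closure_env = dict(block_env)    # function creation copies the array only

block_env["helper"].value = lambda n: n + 1   # the define is finally evaluated

print(closure_env["helper"].value(41))        # → 42: the closure sees it too
```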

&lt;p&gt;&lt;a id="org7ccf692"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Global and Local Arrays
&lt;/h3&gt;

&lt;p&gt;This &lt;code&gt;Define-Defines&lt;/code&gt; function shows a fundamental difference between&lt;br&gt;
the top level, the zero lexical level, and the other levels.  Other levels&lt;br&gt;
can easily be scanned ahead before the real work in order to add the&lt;br&gt;
defines to the environment.  For the global environment, on the other&lt;br&gt;
hand, this is not possible, because the current block is constantly&lt;br&gt;
expanding.&lt;/p&gt;

&lt;p&gt;Let's say I want to define mutually recursive functions on the&lt;br&gt;
top-level.  Let's take an example of a depth-first search of a tree,&lt;br&gt;
where each node has a value assigned to it and a list of sub-nodes,&lt;br&gt;
which can be empty.  A node is represented by a &lt;code&gt;cons&lt;/code&gt;, which has the&lt;br&gt;
number in the &lt;code&gt;car&lt;/code&gt;, and the list in the &lt;code&gt;cdr&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So, here is my version of an implementation that returns the sum of&lt;br&gt;
all the numbers in the tree:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (tree-sum tree)
 (let ((n (car tree))
       (sub (cdr tree)))
  (+ n (tree-sum-list sub))))

(define (tree-sum-list list)
 (if (null? list)
  0
  (+ (tree-sum (car list)) (tree-sum-list (cdr list)))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, it has two mutually recursive functions, which is not&lt;br&gt;
surprising since the data itself consists of two types: nodes and&lt;br&gt;
lists.  So, &lt;code&gt;tree-sum&lt;/code&gt; corresponds to nodes, and &lt;code&gt;tree-sum-list&lt;/code&gt;&lt;br&gt;
corresponds to lists.  The implementation is straightforward.&lt;/p&gt;

&lt;p&gt;So, let's test it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(tree-sum '(5 . ((6 . ()) (9 . ()))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For this tree it returns 20.&lt;/p&gt;

&lt;p&gt;Another example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(tree-sum '(1 . ((2 . ((4 . ((8 . ()) (9 . ()))) (5 . ())))
                 (3 . ((6 . ()) (7 . ()))))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And this one returns 45, as expected.&lt;/p&gt;
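
&lt;p&gt;For comparison, here is the same pair of mutually recursive functions in Python, with a node represented as a &lt;code&gt;(value, children)&lt;/code&gt; tuple:&lt;/p&gt;

```python
# The same mutually recursive pair in Python; a node is (value, [children]).
def tree_sum(tree):
    n, sub = tree
    return n + tree_sum_list(sub)

def tree_sum_list(nodes):
    if not nodes:
        return 0
    return tree_sum(nodes[0]) + tree_sum_list(nodes[1:])

print(tree_sum((5, [(6, []), (9, [])])))   # → 20
```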

&lt;p&gt;Now let's look a little closer at the problem of mutual recursion.&lt;br&gt;
The function &lt;code&gt;tree-sum&lt;/code&gt; duplicates the environment, which does not&lt;br&gt;
contain &lt;code&gt;tree-sum-list&lt;/code&gt;.  And it will not appear there, because, in&lt;br&gt;
the REPL, it has not been entered yet.  So how do we solve this&lt;br&gt;
problem?&lt;/p&gt;

&lt;p&gt;When it's something local, we can just scan ahead and look for all the&lt;br&gt;
defines, but here the definition isn't there yet, so we can't.  One&lt;br&gt;
solution would be to store &lt;code&gt;tree-sum&lt;/code&gt; in some kind of waiting list, so&lt;br&gt;
that when &lt;code&gt;tree-sum-list&lt;/code&gt; is defined, we can complete &lt;code&gt;tree-sum&lt;/code&gt;.  It&lt;br&gt;
would be easy if they weren't mutually recursive functions.&lt;/p&gt;

&lt;p&gt;So, here is my solution: we do not duplicate the global,&lt;br&gt;
zero-lexical-level environment.  We just say that it's global and&lt;br&gt;
the same for everyone.  This way, when &lt;code&gt;tree-sum-list&lt;/code&gt; is defined, it&lt;br&gt;
can be used by &lt;code&gt;tree-sum&lt;/code&gt;, because &lt;code&gt;tree-sum&lt;/code&gt; didn't make a copy and&lt;br&gt;
didn't try to isolate itself from the world, so it knows what's&lt;br&gt;
happening and, when needed, can call &lt;code&gt;tree-sum-list&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgd0d76a5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Declare and Update for Lexical and Dynamic variables
&lt;/h3&gt;

&lt;p&gt;Now there are still a few things I would like to cover in my&lt;br&gt;
overview of the implementation details of PowerScheme, namely how&lt;br&gt;
variables are updated.&lt;/p&gt;

&lt;p&gt;One observation is that the dynamic scope cannot be updated using the&lt;br&gt;
same method as the lexical scope.  It's interesting to notice&lt;br&gt;
that reading and writing lexical variables are somewhat&lt;br&gt;
symmetric: when we read, the information flows towards us, and when we&lt;br&gt;
write, it flows from us.  And it uses the same path, like a pendulum.&lt;/p&gt;

&lt;p&gt;Dynamic variables, on the other hand, are not symmetric.  When we read,&lt;br&gt;
we read either from above or from the same level, but we write only&lt;br&gt;
to the current level.  It is comparable to how we treat&lt;br&gt;
information: we can read old or new books, but we can only write new&lt;br&gt;
books.&lt;/p&gt;

&lt;p&gt;For this reason there are two different methods for the update&lt;br&gt;
functionality: &lt;code&gt;Update&lt;/code&gt; is for the lexical environment, and&lt;br&gt;
&lt;code&gt;UpdateDynamic&lt;/code&gt; is for the dynamic environment.  What &lt;code&gt;UpdateDynamic&lt;/code&gt;&lt;br&gt;
does is first check whether the variable we want to update is in the&lt;br&gt;
array; if so, it either overwrites the value, if it is on the same&lt;br&gt;
dynamic level, or creates a new value linked to the previous one.&lt;/p&gt;
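
&lt;p&gt;A minimal Python sketch of this asymmetry (my own names, not the real &lt;code&gt;UpdateDynamic&lt;/code&gt;):&lt;/p&gt;

```python
# Sketch of the asymmetry: a dynamic update overwrites only when the binding
# is on the current dynamic level; otherwise it pushes a new value on top,
# keeping the old one linked underneath.
def update_dynamic(table, name, value, current_level):
    stack = table.setdefault(name, [])        # stack of (level, value), top last
    if stack and stack[-1][0] == current_level:
        stack[-1] = (current_level, value)    # same level: overwrite in place
    else:
        stack.append((current_level, value))  # outer level: shadow, keep the old

table = {"v": [(0, "old")]}
update_dynamic(table, "v", "new", 2)
print(table["v"])   # → [(0, 'old'), (2, 'new')]
```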

&lt;p&gt;&lt;a id="org459a244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion about the implementation details
&lt;/h2&gt;

&lt;p&gt;So this was a quick overview of my implementation of Scheme.  It's&lt;br&gt;
interesting that there are always a lot of different implementation&lt;br&gt;
possibilities, and surprises when something does not work as&lt;br&gt;
expected.  I think that by implementing such programs it's possible to&lt;br&gt;
learn more not only about Scheme, but also about Lisp in general&lt;br&gt;
and about the language used for the implementation.  I believe I have&lt;br&gt;
covered the main details that require a comment, and not many parts of the&lt;br&gt;
source code and functionality of PowerScheme remain unexplained.&lt;/p&gt;

&lt;p&gt;&lt;a id="org39a85de"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  What Else PowerScheme can do
&lt;/h1&gt;

&lt;p&gt;In this section I will describe some interesting things that you can&lt;br&gt;
do with the PowerScheme interpreter.  I described the intended use in&lt;br&gt;
the Features and Functionality parts, but very often you can do some&lt;br&gt;
cool things with a programming language beyond its intended&lt;br&gt;
use.&lt;/p&gt;

&lt;p&gt;&lt;a id="org6e552d5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A REPL
&lt;/h2&gt;

&lt;p&gt;A REPL, or read-eval-print loop, is a program that continuously&lt;br&gt;
reads expressions from standard input, evaluates them, and prints the&lt;br&gt;
results.  It is useful for testing and for running small Lisp&lt;br&gt;
sessions interactively.&lt;/p&gt;

&lt;p&gt;The code for doing it is really simple:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (toplevel)
 (writeln (eval (read)))
 (toplevel))
(toplevel)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And this is an example of a REPL session:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ZSH:gproj/scheme-lisp/chap02pwsh&amp;gt;pwsh PwScheme.ps1 tests/toplevel.scm
&amp;gt;: (let ((x 5) (y 3)) (+ x (* y 10))))
35
&amp;gt;: (define (lola x) (writeln 35) (writeln x))

&amp;gt;: (lola 40)
35
40

&amp;gt;: (exit)
EXIT invoked
ZSH:gproj/scheme-lisp/chap02pwsh&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, it behaves more or less the same way a normal Lisp&lt;br&gt;
should.  But it's somewhat limited: it doesn't work when an&lt;br&gt;
expression spans several lines.&lt;/p&gt;

&lt;p&gt;&lt;a id="org0952fb5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Y-Combinator
&lt;/h2&gt;

&lt;p&gt;The Y-combinator is a function that, when applied to another function,&lt;br&gt;
returns its fixed point: a value which, when the original function is&lt;br&gt;
applied to it, gives back itself.&lt;/p&gt;

&lt;p&gt;This is useful for implementing loops in lambda calculus: by writing a&lt;br&gt;
function so that it calls its argument with the next value, it can&lt;br&gt;
perform looping calculations.  Here is an example from the book &lt;em&gt;Lisp&lt;br&gt;
in Small Pieces&lt;/em&gt; (of course there are many other examples in other&lt;br&gt;
books, like addition, subtraction, and so on).  It calculates the&lt;br&gt;
factorial.&lt;/p&gt;

&lt;p&gt;First the Y-combinator:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define fix
  (let ((d (lambda (w)
           (lambda (f)
            (f (lambda (x) (((w w) f) x)))))))
  (d d)))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And here is the modified factorial to be used with the &lt;code&gt;fix&lt;/code&gt; function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (meta-fact f)
 (lambda (n)
  (if (= n 0)
   1
   (* n (f (- n 1))))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, it calls its argument with the next value.  Its&lt;br&gt;
argument is the fixed point of &lt;code&gt;meta-fact&lt;/code&gt;, which means it reproduces&lt;br&gt;
the same call with a lower value.&lt;/p&gt;

&lt;p&gt;Here is how it's invoked:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;((fix meta-fact) 3))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The important thing here is that &lt;code&gt;(fix meta-fact)&lt;/code&gt; is equivalent to&lt;br&gt;
&lt;code&gt;f&lt;/code&gt; in &lt;code&gt;meta-fact&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;By definition of fix, &lt;code&gt;(fix f) = (f (fix f))&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So, &lt;code&gt;((fix meta-fact) 3)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;is equal to &lt;code&gt;((meta-fact (fix meta-fact)) 3)&lt;/code&gt;,&lt;/p&gt;

&lt;p&gt;which in turn is equal to &lt;code&gt;(* 3 ((fix meta-fact) 2))&lt;/code&gt; if you&lt;br&gt;
substitute 3 into the definition of &lt;code&gt;meta-fact&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;And it continues this way until it returns the value of the factorial.&lt;/p&gt;
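
&lt;p&gt;The same construction can be transcribed almost literally into Python, which makes it easy to check the expansion above:&lt;/p&gt;

```python
# Direct transcription of the Scheme fix-point construction into Python.
fix = (lambda d: d(d))(lambda w: lambda f: f(lambda x: w(w)(f)(x)))

def meta_fact(f):
    # Calls its argument f with the next (lower) value, exactly as in the Scheme.
    return lambda n: 1 if n == 0 else n * f(n - 1)

print(fix(meta_fact)(3))   # → 6
```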

&lt;p&gt;&lt;a id="org490cb0a"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Function Objects
&lt;/h2&gt;

&lt;p&gt;As I described before, my implementation uses lexical scope.  Here&lt;br&gt;
I would like to illustrate one of its most powerful uses, one which, I&lt;br&gt;
would say, makes full use of lexical scope and the lexical&lt;br&gt;
environment.&lt;/p&gt;

&lt;p&gt;In fact, in many cases it would be possible to replace closures with&lt;br&gt;
global functions, or to only let closures capture constant values,&lt;br&gt;
as Java does.  So, in order to illustrate the full power of the&lt;br&gt;
lexical environment, I give an example of function objects, that is,&lt;br&gt;
functions which have an environment associated with them.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (counter n)
 (let ((c 0))
  (lambda (cmd)
   (case cmd
     ((inc) (set! c (+ c n)))
     ((dec) (set! c (- c n)))
     ((pr) (writeln c))))))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;What this function does is return a function that accepts&lt;br&gt;
commands.  It initializes a variable &lt;code&gt;c&lt;/code&gt; to 0, and saves the&lt;br&gt;
parameter &lt;code&gt;n&lt;/code&gt;, which represents by how much we want to modify &lt;code&gt;c&lt;/code&gt; at a&lt;br&gt;
time.&lt;/p&gt;

&lt;p&gt;Commands are &lt;code&gt;inc&lt;/code&gt; to increment &lt;code&gt;c&lt;/code&gt; by &lt;code&gt;n&lt;/code&gt;, &lt;code&gt;dec&lt;/code&gt; to decrement &lt;code&gt;c&lt;/code&gt; by&lt;br&gt;
&lt;code&gt;n&lt;/code&gt;, and &lt;code&gt;pr&lt;/code&gt; to print the current value.&lt;/p&gt;

&lt;p&gt;This is why I find the name function objects well-suited for these&lt;br&gt;
functions: they contain fields (&lt;code&gt;c&lt;/code&gt; and &lt;code&gt;n&lt;/code&gt; in this case), and&lt;br&gt;
methods (&lt;code&gt;inc&lt;/code&gt;, &lt;code&gt;dec&lt;/code&gt;, and &lt;code&gt;pr&lt;/code&gt; here).&lt;/p&gt;

&lt;p&gt;Here is an example of how it's used:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define cnt (counter 3))
(cnt 'pr)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We declare a new counter with step 3; &lt;code&gt;(cnt 'pr)&lt;/code&gt; prints 0.&lt;/p&gt;

&lt;p&gt;Let's try to increment it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(cnt 'inc)
(cnt 'pr)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now it prints 3.  Let's increment it again, but this time twice!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(cnt 'inc)
(cnt 'inc)
(cnt 'pr)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As expected, it prints 9, which is 3 + 2 * 3.  Let's check how&lt;br&gt;
decrementing works:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(cnt 'dec)
(cnt 'pr)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It prints 6, which means the value is correctly saved.&lt;/p&gt;

&lt;p&gt;And decrementing twice brings us back to 0:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(cnt 'dec)
(cnt 'dec)
(cnt 'pr)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a id="orgf66757c"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  What is missing?
&lt;/h1&gt;

&lt;p&gt;Of course, no Lisp interpreter can have all the features: some&lt;br&gt;
choices have to be made.  In this part I will describe some things&lt;br&gt;
that are missing from my implementation of Lisp.  There were several&lt;br&gt;
reasons for not implementing a feature: perhaps it was too&lt;br&gt;
complicated, in other cases it was not really useful, or I simply&lt;br&gt;
didn't have time.  So here I will say a few words about those&lt;br&gt;
unimplemented features I thought were worth talking about.&lt;/p&gt;

&lt;p&gt;&lt;a id="org6827570"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Vectors
&lt;/h2&gt;

&lt;p&gt;So let's begin with vectors.  In fact they are not really needed&lt;br&gt;
for a minimalist implementation of Scheme, because everything they&lt;br&gt;
can do, such as reading an element, setting an element, or appending&lt;br&gt;
an element, can be done with pairs and conses.  In many cases lists&lt;br&gt;
have the advantage of being easier to use and easier to implement.&lt;br&gt;
The way vectors work is often a little obscure and often feels like a&lt;br&gt;
way to get out of Lisp and talk to the lower levels of the operating&lt;br&gt;
system.  I think the main reason Lisp implementations support vectors&lt;br&gt;
is that vectors are faster.  Lists are linked lists in most&lt;br&gt;
implementations (perhaps it's possible to do it differently, but I&lt;br&gt;
don't think it's common), while vectors are backed by an array behind&lt;br&gt;
the scenes.  For this reason they can be much faster for random&lt;br&gt;
access, and they take less memory because they don't need pointers to&lt;br&gt;
the next (and sometimes previous) element.  So it was possible to&lt;br&gt;
implement them, but it was not a priority for me.  It would have been&lt;br&gt;
much more interesting to implement macros or continuations.&lt;/p&gt;

&lt;p&gt;&lt;a id="orga574e9d"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let*
&lt;/h2&gt;

&lt;p&gt;Another feature that I did not implement is &lt;code&gt;let*&lt;/code&gt;.  It is useful&lt;br&gt;
in many cases, but a lot of people don't like it: they say that using&lt;br&gt;
it makes code feel more like imperative programming.  I don't really&lt;br&gt;
know if I can agree with that.  Perhaps the fact that it imposes an&lt;br&gt;
order of evaluation is bad?  But for me it's more like a shortcut for&lt;br&gt;
nested &lt;code&gt;let&lt;/code&gt;s, where the inner level depends on the outer level.&lt;br&gt;
Many functional programming languages allow such constructs, and&lt;br&gt;
there it is not considered bad.  Anyway, I didn't implement it&lt;br&gt;
because it's easy to replace with nested &lt;code&gt;let&lt;/code&gt;s, so its absence is&lt;br&gt;
not really worrying.&lt;/p&gt;
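
&lt;p&gt;As a small sketch, this is what such a rewrite looks like: a &lt;code&gt;let*&lt;/code&gt;&lt;br&gt;
whose second binding depends on the first is just two nested &lt;code&gt;let&lt;/code&gt;s:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(let* ((a 1)
       (b (+ a 1)))
  (* a b))

;; equivalent form using only let:
(let ((a 1))
  (let ((b (+ a 1)))
    (* a b)))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both expressions evaluate to 2.&lt;/p&gt;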

&lt;p&gt;&lt;a id="orgc444199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Caar and so on
&lt;/h2&gt;

&lt;p&gt;There are a lot of these functions.  Really a lot.  So many that in&lt;br&gt;
my Kotlin interpreter I support any length for these names: you can&lt;br&gt;
type something like &lt;code&gt;caaaaaaar&lt;/code&gt; and it will still be interpreted&lt;br&gt;
correctly.  But in this interpreter it was not so easy to implement&lt;br&gt;
this feature without making the code needlessly complicated, so I&lt;br&gt;
decided not to.  Only &lt;code&gt;car&lt;/code&gt; and &lt;code&gt;cdr&lt;/code&gt;, and that's it.  If you want&lt;br&gt;
more, you just take the &lt;code&gt;car&lt;/code&gt; of the &lt;code&gt;cdr&lt;/code&gt; of a &lt;code&gt;cdr&lt;/code&gt; and so on.&lt;br&gt;
That should be enough for most cases.  Either way, these functions&lt;br&gt;
are a bit confusing and not easy to read, and you should not abuse&lt;br&gt;
them when writing code; an explicit composition will not hurt from&lt;br&gt;
time to time.&lt;/p&gt;
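
&lt;p&gt;For example, &lt;code&gt;caddr&lt;/code&gt;, which takes the third element of a list, can&lt;br&gt;
be spelled out with only &lt;code&gt;car&lt;/code&gt; and &lt;code&gt;cdr&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(car (cdr (cdr '(1 2 3 4))))   ; same as (caddr '(1 2 3 4)), gives 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;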

&lt;p&gt;&lt;a id="org954a996"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Set-car! and Set-cdr!
&lt;/h2&gt;

&lt;p&gt;Yes, these are also not there.  It was perhaps not the right choice,&lt;br&gt;
but it is well known that immutability is a good thing in functional&lt;br&gt;
programming languages.  Why would you need something mutable when you&lt;br&gt;
can have the same thing immutable?  In the code examples I used for&lt;br&gt;
tests, these functions were not needed at all.  It's surprising how&lt;br&gt;
much you can do without them or &lt;code&gt;set!&lt;/code&gt; and similar functions.  My&lt;br&gt;
implementation does have &lt;code&gt;set!&lt;/code&gt;, but even that is almost never used,&lt;br&gt;
and modifying the &lt;code&gt;car&lt;/code&gt; or the &lt;code&gt;cdr&lt;/code&gt; of an existing &lt;code&gt;cons&lt;/code&gt; feels&lt;br&gt;
like the wrong thing to do.  Is it really needed?  Can it be avoided?&lt;br&gt;
Let's just do it another way…  And so on.  Conses were born the way&lt;br&gt;
they were born, and mutating them into something else just feels&lt;br&gt;
wrong.  That is why &lt;code&gt;set!&lt;/code&gt; is easier to tolerate: it replaces one&lt;br&gt;
whole value by another, so one can look at it as a permanent&lt;br&gt;
shadowing.&lt;/p&gt;

&lt;p&gt;&lt;a id="orgb96da6d"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Continuations
&lt;/h2&gt;

&lt;p&gt;I really wanted to implement continuations.  I have spent a lot of&lt;br&gt;
time thinking about what they mean and about ways to implement them.&lt;br&gt;
People say that they are useless and dangerous to use&lt;br&gt;
&lt;sup id="ac05a6d7a93b970d1c352195724dbc39"&gt;against_callcc&lt;/sup&gt;, but in my opinion they are a very interesting&lt;br&gt;
concept: an alternative way to look at the execution flow.  And the&lt;br&gt;
execution flow is a very important concept to know and to understand,&lt;br&gt;
especially for people interested in programming languages and&lt;br&gt;
compilers.  Perhaps continuations seem complicated because the way&lt;br&gt;
the execution flow works is not well understood.  Matt Might calls&lt;br&gt;
them the least understood of all control flow constructs&lt;br&gt;
&lt;sup id="38e133a1cd7512af65c3c2d0e08cf2b4"&gt;continuations_by_example&lt;/sup&gt;.  They are not just an abstraction&lt;br&gt;
that allows other control flow constructs to be implemented; they are&lt;br&gt;
an important theoretical concept which reflects an interesting point&lt;br&gt;
of view on problems related to control flow.&lt;/p&gt;

&lt;p&gt;In fact continuations make it possible to implement many existing&lt;br&gt;
control flow mechanisms, such as &lt;code&gt;goto&lt;/code&gt; &lt;sup id="fb86ca9c88ee182adb1381f2ae05aa3f"&gt;friedman_applications_1988&lt;/sup&gt;,&lt;br&gt;
&lt;code&gt;if&lt;/code&gt;, loops, exceptions, coroutines &lt;sup id="80d7d94a08c21b535d0282b28932cbc6"&gt;reynolds_definitional_1998&lt;/sup&gt;&lt;br&gt;
and so on.  These constructs were not defined with continuations when&lt;br&gt;
they first appeared; it was only shown later that they can be&lt;br&gt;
implemented with continuations.  So my idea is this: if we understand&lt;br&gt;
continuations well enough, perhaps we can come up with control flow&lt;br&gt;
mechanisms that have not been described yet.  This idea seems&lt;br&gt;
interesting, because we don't have a lot of these mechanisms, and&lt;br&gt;
they are all different and not very organized.  If we at least learn&lt;br&gt;
how to categorize them and find out which kind of control flow is&lt;br&gt;
needed in which situation, we can define other control flow&lt;br&gt;
mechanisms that better suit the problem at hand.&lt;/p&gt;

&lt;p&gt;So what are continuations?  If you read about them, you will find&lt;br&gt;
that the first steps to understanding them are rather difficult.&lt;br&gt;
Everybody describes them differently; people seem to talk about&lt;br&gt;
different things.  Some see them as jumps and describe them from the&lt;br&gt;
assembly point of view &lt;sup id="8c65371f0890cafb32d166636ae9c2f4"&gt;reynolds_discoveries_1993&lt;/sup&gt;.  Others&lt;br&gt;
see them as a way to save a point inside a computation, and yet&lt;br&gt;
others say it's a way to pause the program and continue later.&lt;br&gt;
George Springer, for example, calls them &lt;em&gt;the rest of the&lt;br&gt;
computation&lt;/em&gt; &lt;sup id="66354b446824fe7b5ff348e76d47facc"&gt;springer_scheme&lt;/sup&gt;.  Finding what is common to all&lt;br&gt;
these descriptions is not that easy.  Also, the examples shown&lt;br&gt;
usually have 3 or 4 nested lambda functions, which in itself is not&lt;br&gt;
easy to understand.  So now I will give a quick description of what&lt;br&gt;
continuations are.  I hope to actually implement them in the next&lt;br&gt;
project, which I will do for the third chapter of Lisp in Small&lt;br&gt;
Pieces, and then I will describe them as thoroughly as I can.  That's&lt;br&gt;
why I will not say a lot this time.&lt;/p&gt;

&lt;p&gt;A continuation is a point in execution.  If you have a running&lt;br&gt;
virtual machine and you pause it and then duplicate it, that's not&lt;br&gt;
what continuations are: continuations concern themselves only with&lt;br&gt;
the execution, not with the data.  It's like you go back in time, but&lt;br&gt;
it's only you; the rest of the world continues as it always has.  In&lt;br&gt;
other words, a continuation is similar to the address of a label.&lt;br&gt;
Good examples of how continuations work are exceptions and &lt;code&gt;longjmp&lt;/code&gt;&lt;br&gt;
&lt;sup id="9c8c2c5116eab591c2f5c88c44eee2ad"&gt;wikic2_call&lt;/sup&gt;: you don't really go back in time, since data that&lt;br&gt;
has changed will not revert to its previous state, but you do go back&lt;br&gt;
to the position you were at before.  Continuations are in fact more&lt;br&gt;
general than that, but these are good examples.&lt;/p&gt;
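
&lt;p&gt;In standard Scheme (not in my interpreter), this escaping behaviour&lt;br&gt;
is what &lt;code&gt;call/cc&lt;/code&gt; gives you: the captured continuation acts like a&lt;br&gt;
label you can jump back to, here used as an early exit:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (find-first pred lst)
  (call/cc
   (lambda (return)
     (for-each (lambda (x)
                 (when (pred x) (return x)))   ; jump out of the loop
               lst)
     #f)))                                     ; reached only if nothing matched

(find-first even? '(1 3 4 5))   ; returns 4 without visiting 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;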

&lt;p&gt;I then wanted to find a way to include them in the interpreter, and&lt;br&gt;
I thought about how I would do it.  My reasoning was like this: I can&lt;br&gt;
save the &lt;code&gt;cons&lt;/code&gt; that has to be executed next.  Is that a good way to&lt;br&gt;
represent a continuation?  The answer is no.  I can perhaps finish&lt;br&gt;
the execution of one block, but what do I do next?  The block doesn't&lt;br&gt;
tell me.  What would it have to tell me?  It would have to inform me&lt;br&gt;
where it came from and what it wanted to do when it finished.  But&lt;br&gt;
why must the block itself carry this information?  Because a&lt;br&gt;
continuation can be taken at any point of the program, and it's&lt;br&gt;
impossible to know everything about everybody.  So the item that is&lt;br&gt;
currently executing has to know what must be done next.  But how can&lt;br&gt;
it know?  By itself it knows nothing; normally it's the caller that&lt;br&gt;
decides what to do after the current block returns.  As you can see,&lt;br&gt;
it becomes complicated.  In fact it's not the caller who decides what&lt;br&gt;
to do next: it's the callee that has to remember what the next step&lt;br&gt;
was and tell it to the caller, so that the caller executes what needs&lt;br&gt;
to be executed.  This is the complete opposite of how everything was&lt;br&gt;
written up until now.  So let's think one more time about what it&lt;br&gt;
would look like if I added continuations to my implementation.  The&lt;br&gt;
caller knows what has to be done when the callee returns, so it tells&lt;br&gt;
the callee what has to be done next.  Then the caller can forget&lt;br&gt;
everything it told the callee, because it's up to the callee to&lt;br&gt;
remember.  When the callee returns, it says &lt;em&gt;hello&lt;/em&gt; to the caller&lt;br&gt;
and says: you wanted to do this when I returned.  Then the caller&lt;br&gt;
executes it.  From this you can see that the one who executes what&lt;br&gt;
happens after the callee returns does not have to be the original&lt;br&gt;
caller: it can be a simple loop that reads the next action and&lt;br&gt;
executes it.  In this situation it becomes possible to actually save&lt;br&gt;
a continuation, store it in the computation, and execute it whenever&lt;br&gt;
the user needs it.  For these reasons I didn't implement&lt;br&gt;
continuations in my interpreter: it would have made it too complex if&lt;br&gt;
I wanted to keep everything else as it is.  Perhaps there are better&lt;br&gt;
strategies for implementing continuations.&lt;/p&gt;

&lt;p&gt;What I described here is similar to what Hayo Thielecke calls&lt;br&gt;
&lt;em&gt;continuation passing style&lt;/em&gt; &lt;sup id="b08a5828a0696686c7bba7da048f0bf7"&gt;thielecke_continuations&lt;/sup&gt;.  The next&lt;br&gt;
thing to execute is systematically passed to the callee, and this&lt;br&gt;
way, in order to store a continuation, all one needs to do is copy&lt;br&gt;
the stack, because these next steps are stored as arguments to each&lt;br&gt;
function: each function has the next thing to do passed in by its&lt;br&gt;
caller.&lt;/p&gt;
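
&lt;p&gt;Here is a small sketch of that style (the names are only for&lt;br&gt;
illustration): every function takes an extra argument &lt;code&gt;k&lt;/code&gt;, the thing&lt;br&gt;
to do next, and calls it instead of returning:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;;; direct style would be: (+ (* x x) (* y y))
(define (square-k x k) (k (* x x)))
(define (sum-of-squares-k x y k)
  (square-k x (lambda (x2)
    (square-k y (lambda (y2)
      (k (+ x2 y2)))))))

(sum-of-squares-k 3 4 display)   ; prints 25
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Capturing a continuation then amounts to holding on to the current&lt;br&gt;
&lt;code&gt;k&lt;/code&gt;.&lt;/p&gt;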

&lt;p&gt;&lt;a id="orgf73afea"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Exceptions
&lt;/h2&gt;

&lt;p&gt;Having said a few words about continuations, I will now say why I&lt;br&gt;
didn't implement exceptions.  Exceptions are actually easier to&lt;br&gt;
implement than continuations: the purpose is to execute a handler and&lt;br&gt;
fall through the stack down to the level where the handler was set.&lt;br&gt;
This can be implemented with continuations, by saving the point where&lt;br&gt;
the exception handler was set, but it can also be implemented by&lt;br&gt;
returning a special value.  The caller then checks for this value and&lt;br&gt;
returns it to its own caller, all the way up to the function that&lt;br&gt;
handles exceptions.  As you can see, this implementation also needs a&lt;br&gt;
lot of modifications to the code.&lt;/p&gt;
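
&lt;p&gt;A sketch of that special-value approach (the names here are only&lt;br&gt;
for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(define (safe-div a b)
  (if (zero? b) 'error (/ a b)))

;; every caller has to check for 'error and pass it along
(define (sum-of-inverses a b)
  (let ((x (safe-div 1 a)))
    (if (eq? x 'error)
        'error
        (let ((y (safe-div 1 b)))
          (if (eq? y 'error)
              'error
              (+ x y))))))

(sum-of-inverses 2 0)   ; 'error, propagated up through the caller
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;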

&lt;p&gt;&lt;a id="orge7ffb1c"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Macros
&lt;/h2&gt;

&lt;p&gt;In Common Lisp, one of the most interesting and complicated features&lt;br&gt;
is macros.  They make it possible to do things that are really&lt;br&gt;
unusual from the perspective of other programming languages.  And I&lt;br&gt;
don't have a lot of experience with macros.  I know more or less how&lt;br&gt;
they work: they take arguments without evaluating them, the body does&lt;br&gt;
something with those arguments and generates an expression, and that&lt;br&gt;
expression is then evaluated.  Sounds easy.  But I don't know enough&lt;br&gt;
examples to test that they work correctly.  There is also another&lt;br&gt;
problem: I am writing a Scheme interpreter, not a Common Lisp&lt;br&gt;
interpreter, which makes testing them even more complicated, because&lt;br&gt;
I know even less about macros in Scheme.  This problem, in contrast&lt;br&gt;
to continuations, I will not try to solve right now.  Other chapters&lt;br&gt;
of Lisp in Small Pieces deal with macros (mainly chapter 9).&lt;/p&gt;

&lt;p&gt;&lt;a id="org3ab34b2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;So, for the conclusion of this post I will say a few words about what&lt;br&gt;
this project gave me, what it allowed me to learn and where I can go&lt;br&gt;
from here as a result of having finished it.&lt;/p&gt;

&lt;p&gt;I used PowerShell for the first time.  If not for this project, it's&lt;br&gt;
hard to imagine what could have motivated me to learn this language.&lt;br&gt;
Not because I don't find it interesting or don't like learning new&lt;br&gt;
things; it's more that I find it difficult to come up with ideas for&lt;br&gt;
new projects, especially for a shell language like PowerShell, which&lt;br&gt;
feature-wise doesn't seem too revolutionary (more or less the reason&lt;br&gt;
I still haven't learned fish).  I know PowerShell is often described&lt;br&gt;
as the first language to use pipes with objects, but I find this to&lt;br&gt;
be not much more than a little extra, like syntactic sugar or an&lt;br&gt;
interesting toy.  Even though I theoretically understand how&lt;br&gt;
revolutionary it might be for certain tasks, I don't use the pipe on&lt;br&gt;
Linux enough to miss the object-oriented functionality; furthermore,&lt;br&gt;
it's not something that played a vital part in this project.&lt;/p&gt;

&lt;p&gt;So, I have written my third interpreter, and my second Scheme&lt;br&gt;
interpreter.  My first interpreter was &lt;em&gt;MUSH&lt;/em&gt;, a mash-up between CCP&lt;br&gt;
and PL/M.  I really like the result, and I think I will re-implement&lt;br&gt;
it one more time with more features.  My second interpreter was a&lt;br&gt;
Scheme, for which I used Kotlin.  So this is my third one, and I am&lt;br&gt;
actually starting to see a pattern: I am beginning to understand the&lt;br&gt;
main structural components of interpreters and how they should be&lt;br&gt;
written.  Of course I'm still a beginner, and I still have several&lt;br&gt;
interpreters and compilers to write before I feel comfortable in this&lt;br&gt;
domain.&lt;/p&gt;

&lt;p&gt;I have managed to write an interpreter that, from my point of view,&lt;br&gt;
can do some rather interesting things: it has &lt;code&gt;letrec&lt;/code&gt;, which allows&lt;br&gt;
mutual recursion.  For some reason my &lt;code&gt;letrec&lt;/code&gt; test works neither on&lt;br&gt;
Racket nor on Guile; perhaps I have implemented something that was&lt;br&gt;
not intended to be implemented in a normal Scheme…&lt;/p&gt;
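
&lt;p&gt;The classic shape of mutual recursion with &lt;code&gt;letrec&lt;/code&gt; looks like this&lt;br&gt;
(with fresh names so as not to shadow the built-in predicates):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(letrec ((my-even? (lambda (n) (if (zero? n) #t (my-odd? (- n 1)))))
         (my-odd?  (lambda (n) (if (zero? n) #f (my-even? (- n 1))))))
  (my-even? 10))   ; #t
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;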

&lt;p&gt;Mutual recursion can also be done using &lt;code&gt;define&lt;/code&gt; statements on the&lt;br&gt;
top-level or on a local level.&lt;/p&gt;

&lt;p&gt;The interpreter also has higher-order functions, which I still&lt;br&gt;
consider something rather mysterious and risky, at least when trying&lt;br&gt;
to execute them in the interpreter.&lt;/p&gt;

&lt;p&gt;There is also support for function objects.  That is, I can create a&lt;br&gt;
counter function and, by passing different messages to it, manipulate&lt;br&gt;
the counter: increase its value, print its value, and so on.  It&lt;br&gt;
makes the function object somehow similar to a living organism; it's&lt;br&gt;
a bit weird and not easy to describe.&lt;/p&gt;

&lt;p&gt;Another example of what my interpreter can do is the Y combinator,&lt;br&gt;
which you can also find among the examples in the directory with the&lt;br&gt;
source code of my interpreter.  This is something that, every time I&lt;br&gt;
see it working, makes me wonder how it can possibly work.  It still&lt;br&gt;
surprises me when I see it print the right answer.&lt;/p&gt;

&lt;p&gt;There are some interesting concepts this project allowed me to learn.&lt;br&gt;
For example, this is my first attempt at making an interpreter with&lt;br&gt;
tail-call optimization.  Now I see that it has two parts: the data and&lt;br&gt;
the control flow.  They are not necessarily connected.&lt;/p&gt;

&lt;p&gt;It's interesting to inspect the differences between lexical scope&lt;br&gt;
and dynamic scope: the lexical environment is captured when a&lt;br&gt;
function is created, but the dynamic environment is passed when the&lt;br&gt;
function is called.  A good way to understand these concepts is to&lt;br&gt;
compare them, and there is no better way to make a comparison than&lt;br&gt;
including both in the same project!  It is necessary to recognize&lt;br&gt;
that dynamic scope is not something buggy that should be avoided.&lt;br&gt;
Both lexical scope and dynamic scope make sense, and both are complex&lt;br&gt;
concepts.  What is dangerous is to approach one with the mindset of&lt;br&gt;
the other; then we can easily get lost.  But by itself, dynamic scope&lt;br&gt;
is very logical, useful and totally coherent.  Richard Stallman says&lt;br&gt;
that it makes things possible that would otherwise be impractical&lt;br&gt;
&lt;sup id="b05c615e43a6b1cd31b76c84ca79dd8a"&gt;stallman_emacs:_1981&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Something I didn't initially plan to do is the exception handling&lt;br&gt;
part.  Reading more about the way exception handling is related to&lt;br&gt;
dynamic scoping, it occurred to me that it wouldn't be so bad if I&lt;br&gt;
implemented it.  But as I have neither continuations nor long jumps,&lt;br&gt;
the only thing I could do is &lt;em&gt;crash&lt;/em&gt; the program; that's why the&lt;br&gt;
exception handlers in my interpreter are called &lt;em&gt;crashers&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Having implemented all these things, I realize that there are still&lt;br&gt;
a lot of things missing and a lot that can be improved.  It's an&lt;br&gt;
interesting journey, and every time I finish such a project, more&lt;br&gt;
interesting doors open for improvements.&lt;/p&gt;

&lt;p&gt;What is obvious, of course, is that the type system is very limited.&lt;br&gt;
Not only are vectors missing, for which examples of use are extremely&lt;br&gt;
easy to find, but floating point numbers and fractions would also be&lt;br&gt;
useful, as well as support for very big numbers, which some languages&lt;br&gt;
provide.&lt;/p&gt;

&lt;p&gt;Some features that are important in the tradition of Lisp could also&lt;br&gt;
be included: continuations and macros.  The third chapter of Lisp in&lt;br&gt;
Small Pieces, which I will read next, deals with continuations, so I&lt;br&gt;
intend to include them in the next interpreter, which I have decided&lt;br&gt;
to write in Swift.&lt;/p&gt;

&lt;p&gt;Another interesting thing to do is to write a Scheme compiler.  It's&lt;br&gt;
also in the tradition of Lisp to combine compiled code with&lt;br&gt;
interpreted code.  Lisp was at first a fully interpreted language,&lt;br&gt;
but it had compilers very early, so compiled and interpreted code&lt;br&gt;
have existed side by side throughout the whole history of Lisp, in&lt;br&gt;
many of its forms: MacLisp, Common Lisp, Racket, Emacs Lisp, and so&lt;br&gt;
on.  So it seems impossible to understand and appreciate Lisp without&lt;br&gt;
having done both an interpreter and a compiler.&lt;/p&gt;

&lt;p&gt;So, this is the end of this post about the PowerScheme interpreter.&lt;br&gt;
I hope you enjoyed reading it; see you in the next post.&lt;/p&gt;

&lt;p&gt;&lt;a id="org3ab34b3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Bibliography
&lt;/h1&gt;

&lt;p&gt;&lt;a id="queinnec_lisp"&gt;&lt;/a&gt;[queinnec_lisp] Queinnec, Lisp in Small Pieces, . ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="given_re:_2009"&gt;&lt;/a&gt;[given_re:_2009] @miscgiven_re:_2009,&lt;br&gt;
    title = Re: Coroutines and Go,&lt;br&gt;
    url = &lt;a href="http://lua-users.org/lists/lua-l/2009-11/msg00576.html" rel="noopener noreferrer"&gt;http://lua-users.org/lists/lua-l/2009-11/msg00576.html&lt;/a&gt;,&lt;br&gt;
    urldate = 2019-03-24,&lt;br&gt;
    author = Given, David,&lt;br&gt;
    month = nov,&lt;br&gt;
    year = 2009,&lt;br&gt;
    keywords = compilers,&lt;br&gt;
 ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="graham_roots_2002"&gt;&lt;/a&gt;[graham_roots_2002] @miscgraham_roots_2002,&lt;br&gt;
    title = The Roots of Lisp,&lt;br&gt;
    author = Graham, Paul,&lt;br&gt;
    month = jan,&lt;br&gt;
    year = 2002,&lt;br&gt;
 ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="jones_learn_2017"&gt;&lt;/a&gt;[jones_learn_2017] Jones &amp;amp; Hicks, Learn Windows PowerShell in a month of lunches, Manning (2017). ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="the_devops_collective_monad_2016"&gt;&lt;/a&gt;[the_devops_collective_monad_2016] The DevOps Collective, The Monad Manifesto, Annotated, Leanpub (2016). ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="wikic2_homoiconic"&gt;&lt;/a&gt;[wikic2_homoiconic] @miscwikic2_homoiconic,&lt;br&gt;
    title = Homoiconic Languages,&lt;br&gt;
    url = &lt;a href="http://wiki.c2.com/?HomoiconicLanguages" rel="noopener noreferrer"&gt;http://wiki.c2.com/?HomoiconicLanguages&lt;/a&gt;,&lt;br&gt;
    urldate = 2019-03-09,&lt;br&gt;
    keywords = lisp,&lt;br&gt;
 ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="tanter_beyond"&gt;&lt;/a&gt;[tanter_beyond] Tanter, Beyond static and dynamic scope, ,  11 . ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="payette_windows_2007"&gt;&lt;/a&gt;[payette_windows_2007] Payette, Windows PowerShell in action, Manning ; Pearson Education &lt;a href="https://dev.to2007"&gt;distributor&lt;/a&gt;. ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="against_callcc"&gt;&lt;/a&gt;[against_callcc] @miscagainst_callcc,&lt;br&gt;
    title = An argument against call/cc,&lt;br&gt;
    url = &lt;a href="http://okmij.org/ftp/continuations/against-callcc.html" rel="noopener noreferrer"&gt;http://okmij.org/ftp/continuations/against-callcc.html&lt;/a&gt;,&lt;br&gt;
    urldate = 2019-03-06,&lt;br&gt;
    keywords = continuations, scheme,&lt;br&gt;
 ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="continuations_by_example"&gt;&lt;/a&gt;[continuations_by_example] @misccontinuations_by_example,&lt;br&gt;
    title = Coroutines, exceptions, time-traveling search, generators and threads: Continuations by example,&lt;br&gt;
    author = Might, Matt,&lt;br&gt;
    url = &lt;a href="http://matt.might.net/articles/programming-with-continuations-exceptions-backtracking-search-threads-generators-coroutines/" rel="noopener noreferrer"&gt;http://matt.might.net/articles/programming-with-continuations-exceptions-backtracking-search-threads-generators-coroutines/&lt;/a&gt;,&lt;br&gt;
    urldate = 2019-03-07,&lt;br&gt;
    keywords = continuations, scheme,&lt;br&gt;
 ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="friedman_applications_1988"&gt;&lt;/a&gt;[friedman_applications_1988] Friedman, Applications of continuations, in edited by (1988) ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="reynolds_definitional_1998"&gt;&lt;/a&gt;[reynolds_definitional_1998] Reynolds, Definitional interpreters for higher-order programming languages, in edited by (1998) ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="reynolds_discoveries_1993"&gt;&lt;/a&gt;[reynolds_discoveries_1993] Reynolds, The discoveries of continuations, LISP and Symbolic Computation, &lt;b&gt;6(3-4)&lt;/b&gt;, 233-247 (1993). &lt;a href="http://link.springer.com/10.1007/BF01019459" rel="noopener noreferrer"&gt;link&lt;/a&gt;. &lt;a href="http://dx.doi.org/10.1007/BF01019459" rel="noopener noreferrer"&gt;doi&lt;/a&gt;. ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="springer_scheme"&gt;&lt;/a&gt;[springer_scheme] Springer &amp;amp; Friedman, Scheme and the Art of Programming, . ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="wikic2_call"&gt;&lt;/a&gt;[wikic2_call] @miscwikic2_call,&lt;br&gt;
    title = Call With Current Continuation,&lt;br&gt;
    url = &lt;a href="http://wiki.c2.com/?CallWithCurrentContinuation" rel="noopener noreferrer"&gt;http://wiki.c2.com/?CallWithCurrentContinuation&lt;/a&gt;,&lt;br&gt;
    urldate = 2019-03-01,&lt;br&gt;
    keywords = continuations, scheme,&lt;br&gt;
 ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="thielecke_continuations"&gt;&lt;/a&gt;[thielecke_continuations] @miscthielecke_continuations,&lt;br&gt;
    title = Continuations, functions and jumps,&lt;br&gt;
    shorttitle = Logiccolumn8.pdf,&lt;br&gt;
    url = &lt;a href="http://www.cs.bham.ac.uk/%7Ehxt/research/Logiccolumn8.pdf" rel="noopener noreferrer"&gt;http://www.cs.bham.ac.uk/~hxt/research/Logiccolumn8.pdf&lt;/a&gt;,&lt;br&gt;
    urldate = 2019-03-04,&lt;br&gt;
    author = Thielecke, Hayo,&lt;br&gt;
    keywords = continuations,&lt;br&gt;
 ↩&lt;/p&gt;

&lt;p&gt;&lt;a id="stallman_emacs:_1981"&gt;&lt;/a&gt;[stallman_emacs:_1981] Stallman, EMACS: The Extensible, Customizable Display Editor, in edited by (1981) ↩&lt;/p&gt;

</description>
      <category>scheme</category>
      <category>lisp</category>
      <category>powershell</category>
      <category>interpreter</category>
    </item>
  </channel>
</rss>
