<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hrishikesh kumar</title>
    <description>The latest articles on DEV Community by Hrishikesh kumar (@hrishi2710).</description>
    <link>https://dev.to/hrishi2710</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F169970%2Fc1170f84-4d5a-4e4d-9541-27767baa7afc.png</url>
      <title>DEV Community: Hrishikesh kumar</title>
      <link>https://dev.to/hrishi2710</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hrishi2710"/>
    <language>en</language>
    <item>
      <title>Multi-threading in Operating System</title>
      <dc:creator>Hrishikesh kumar</dc:creator>
      <pubDate>Thu, 19 Sep 2019 10:36:16 +0000</pubDate>
      <link>https://dev.to/hrishi2710/threading-in-operating-system-3gjb</link>
      <guid>https://dev.to/hrishi2710/threading-in-operating-system-3gjb</guid>
      <description>&lt;p&gt;We will go through the basics of single threading and multi-threading, along with the advantages and disadvantages of each.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What is a thread?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A thread is the basic unit of CPU utilization, i.e. how an application uses the CPU. Let's understand it through an example.&lt;br&gt;
When we open a web page, we often see the text content being loaded before the images and videos. Here, loading the web page is a &lt;a href="https://dev.to/hrishi2710/processes-in-operating-system-551h"&gt;process&lt;/a&gt;, and the process contains two threads: one for loading the text content and the other for loading the image content.&lt;/p&gt;

&lt;p&gt;Let's take another example from a basic coding problem. Suppose we want the sum of the numbers in an array of length N. We can make this simple task multi-threaded by using two threads: one sums the first half of the array and the other sums the second half, and then the two partial sums are added together. The threads summing the two halves are &lt;strong&gt;child threads&lt;/strong&gt;, and the thread holding the final sum is called the &lt;strong&gt;parent thread&lt;/strong&gt;.&lt;/p&gt;
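&lt;p&gt;The array-sum idea above can be sketched with Python's standard &lt;code&gt;threading&lt;/code&gt; module (the array contents and the split point are illustrative):&lt;/p&gt;

```python
import threading

def partial_sum(chunk, out, idx):
    # Child thread: sum one half of the array.
    out[idx] = sum(chunk)

nums = list(range(10))   # N = 10; the full sum is 45
mid = len(nums) // 2
out = [0, 0]

# The parent thread creates one child thread per half.
t1 = threading.Thread(target=partial_sum, args=(nums[:mid], out, 0))
t2 = threading.Thread(target=partial_sum, args=(nums[mid:], out, 1))
t1.start(); t2.start()
t1.join(); t2.join()     # parent waits for both children

total = out[0] + out[1]  # parent combines the partial sums
print(total)             # 45
```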

&lt;p&gt;A process can have as many threads as required (limited by hardware and efficiency overheads). The code, data, and files belonging to a process are common to all the threads of that multi-threaded process.&lt;/p&gt;

&lt;p&gt;But each thread has its own thread ID, program counter, register set, and stack, as illustrated in the following diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fwww.csc.villanova.edu%2F~mdamian%2Fthreads%2Fthread.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fwww.csc.villanova.edu%2F~mdamian%2Fthreads%2Fthread.jpg" alt="single thread vs multi thread"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Why do we need multi threading?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Responsiveness&lt;/strong&gt; - In the web-page example above, suppose a large image is taking its time to load. Because the process is multi-threaded, loading the image does not block loading the text content, making the page more responsive to the user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource sharing&lt;/strong&gt; - Threads share the memory and resources of their process by default, allowing an application to have several different threads within the same address space.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Economy&lt;/strong&gt; - Because threads share the memory and resources of their process, creating and context-switching threads is more economical than creating processes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;What are the challenges of multi threading?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Identifying tasks&lt;/strong&gt; worth multi-threading so that the application actually becomes more efficient.&lt;/li&gt;
&lt;li&gt;Maintaining &lt;strong&gt;data integrity&lt;/strong&gt;, as there may be situations where the same data is manipulated by different threads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Balancing cost&lt;/strong&gt; - It is important to share the application's workload equally among threads; otherwise some threads do less work than others, creating overhead.&lt;/li&gt;
&lt;li&gt;It is much easier to &lt;strong&gt;test and debug&lt;/strong&gt; a single-threaded application than a multi-threaded one.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Parallelism&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We will come across two terms very frequently: parallelism and concurrency. Generally, the two go hand in hand. But what exactly do they mean?&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Parallelism vs Concurrency&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;A system is &lt;strong&gt;parallel&lt;/strong&gt; if it can perform more than one task simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Concurrency&lt;/strong&gt; is when more than one task makes progress. Even on a single-core system, the CPU scheduler rapidly switches between processes, creating the illusion of parallelism and allowing different tasks to make progress. &lt;/p&gt;

&lt;p&gt;Therefore, it is important to note that &lt;strong&gt;concurrency can occur without parallelism.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Types of parallelism&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data parallelism&lt;/strong&gt; - The same operation is performed on different subsets of the data on different cores.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Task parallelism&lt;/strong&gt; - Different operations are performed on the same data on different cores.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
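&lt;p&gt;A small Python sketch of the two kinds of parallelism (the data and the chosen operations are illustrative; with threads the work may be time-sliced rather than truly parallel, but the division of labour is the same):&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(8))

# Data parallelism: the same operation (sum) applied to different
# chunks of the data on different workers.
with ThreadPoolExecutor(max_workers=2) as ex:
    partials = list(ex.map(sum, [data[:4], data[4:]]))
total = sum(partials)

# Task parallelism: different operations (min and max) applied to
# the same data on different workers.
with ThreadPoolExecutor(max_workers=2) as ex:
    lo = ex.submit(min, data)
    hi = ex.submit(max, data)
    results = (lo.result(), hi.result())

print(total, results)  # 28 (0, 7)
```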

&lt;h3&gt;
  
  
  &lt;strong&gt;Multi-threading models&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Many to one model&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fprofessormerwyn.files.wordpress.com%2F2015%2F08%2Fmany-to-one.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fprofessormerwyn.files.wordpress.com%2F2015%2F08%2Fmany-to-one.jpg" alt="Many to one"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Many user-level threads (threads created in the application using a thread library, explained later) are mapped onto a single kernel-level thread. This model has the following problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A blocking call on one user-level thread blocks all the threads.&lt;/li&gt;
&lt;li&gt;No true concurrency.&lt;/li&gt;
&lt;li&gt;Inefficient use of multi-core architectures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;One to one model&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fprofessormerwyn.files.wordpress.com%2F2015%2F08%2Fone-to-one.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fprofessormerwyn.files.wordpress.com%2F2015%2F08%2Fone-to-one.jpg" alt="one to one"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each user-level thread is mapped onto its own kernel-level thread. This has the following advantages over the many-to-one model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A blocking call on one thread does not block any other thread.&lt;/li&gt;
&lt;li&gt;True concurrency.&lt;/li&gt;
&lt;li&gt;Efficient use of multi-core system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But it has the problem of the overhead of creating one kernel-level thread for every user-level thread.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Many to many model&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.cs.uic.edu%2F~jbell%2FCourseNotes%2FOperatingSystems%2Fimages%2FChapter4%2F4_07_ManyToMany.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.cs.uic.edu%2F~jbell%2FCourseNotes%2FOperatingSystems%2Fimages%2FChapter4%2F4_07_ManyToMany.jpg" alt="Many to many"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Many user-level threads are mapped onto an equal or smaller number of kernel-level threads. This solves the overhead problem of creating one kernel-level thread per user-level thread.&lt;/p&gt;

&lt;p&gt;This model has a variant, the two-level model, which combines the many-to-many model with the one-to-one model. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.cs.uic.edu%2F~jbell%2FCourseNotes%2FOperatingSystems%2Fimages%2FChapter4%2F4_08_TwoLevel.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.cs.uic.edu%2F~jbell%2FCourseNotes%2FOperatingSystems%2Fimages%2FChapter4%2F4_08_TwoLevel.jpg" alt="2 level model"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, certain threads of a process are bound to a specific kernel-level thread until they finish execution.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Thread Library&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A thread library provides an API for programmers to create and manage threads in their applications.&lt;/p&gt;

&lt;p&gt;There are two approaches to implementing a thread library:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The first approach is to provide a &lt;strong&gt;library entirely in user space&lt;/strong&gt; with no kernel support. All code and data structures for the library exist in user space. This means that invoking a function in the library results in a local function call in user space and not a system call.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The second approach is to implement a &lt;strong&gt;kernel-level library&lt;/strong&gt; supported directly by the operating system. In this case, code and data structures for the library exist in kernel space. Invoking a function in the API for the library typically results in a system call to the kernel.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are &lt;strong&gt;three main thread libraries&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;POSIX Pthreads&lt;/strong&gt; - May be provided as a user-level or a kernel-level library. Mostly used by Linux/UNIX-based operating systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Windows&lt;/strong&gt; - Kernel level.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Java&lt;/strong&gt; - Threads are created and managed directly in Java programs. Since the JVM itself runs on a host OS, Java threads are typically implemented using the thread library available on that OS.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Thread creation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;There are two strategies for thread creation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Asynchronous&lt;/strong&gt; - The parent thread creates the child and then resumes execution independently; there is little data sharing between parent and child.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Synchronous&lt;/strong&gt; - The parent thread waits for the child thread to finish its execution. More data sharing is typically done here.&lt;/li&gt;
&lt;/ol&gt;
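&lt;p&gt;In Python's &lt;code&gt;threading&lt;/code&gt; module, the two strategies correspond to whether or not the parent calls &lt;code&gt;join()&lt;/code&gt; (a rough analogue of &lt;code&gt;pthread_join()&lt;/code&gt;; the sleep is just a stand-in for work):&lt;/p&gt;

```python
import threading
import time

def child():
    time.sleep(0.05)  # simulate some work

# Synchronous creation: the parent waits for the child to finish.
t = threading.Thread(target=child)
t.start()
t.join()                      # parent blocks until the child is done
assert not t.is_alive()

# Asynchronous creation: the parent continues independently.
t = threading.Thread(target=child)
t.start()                     # no join; parent and child run concurrently
print("parent keeps running while the child works")
t.join()                      # joined here only so the example exits cleanly
```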

&lt;h4&gt;
  
  
  &lt;strong&gt;Pthreads&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;POSIX standard for thread creation and synchronization.&lt;/li&gt;
&lt;li&gt;It is a specification for thread behaviour, not an implementation.&lt;/li&gt;
&lt;li&gt;Mostly implemented by UNIX-like systems.&lt;/li&gt;
&lt;li&gt;Windows doesn't support it natively.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Windows threads&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Similar to Pthread creation and management in many ways.&lt;/li&gt;
&lt;li&gt;Differences in method names. For example, the &lt;code&gt;pthread_join()&lt;/code&gt; function is implemented here as &lt;code&gt;WaitForSingleObject()&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Java threads&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;There are two techniques for implementing Java threads:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Derive a new class from the &lt;code&gt;Thread&lt;/code&gt; class and override its &lt;code&gt;run()&lt;/code&gt; method.&lt;/li&gt;
&lt;li&gt;Implement the &lt;code&gt;Runnable&lt;/code&gt; interface.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The JVM hides the implementation details of the underlying OS and provides a consistent, abstract environment that allows Java programs to run on any platform.&lt;/p&gt;

&lt;p&gt;All of the above thread-library-based creation and management falls under &lt;strong&gt;explicit threading&lt;/strong&gt;, where the programmer creates and manages threads. &lt;br&gt;
Another way is to transfer thread creation and management from application developers to compilers and run-time libraries. This strategy is known as &lt;strong&gt;implicit threading&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Two common strategies for implicit threading are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Thread Pool&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;OpenMP&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Thread Pool&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Explicit threading raises a few difficulties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How many threads to create in order to use a multi-core architecture efficiently?&lt;/li&gt;
&lt;li&gt;The time taken to create each thread.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The general idea behind a &lt;strong&gt;thread pool&lt;/strong&gt; is to create a number of threads at process startup and place them into a pool, where they sit and wait for work. When a server receives a request, it awakens a thread from this pool—if one is available—and passes it the request for service. Once the thread completes its service, it returns to the pool and awaits more work. If the pool contains no available thread, the server waits until one becomes free.&lt;/p&gt;

&lt;p&gt;Benefits of thread pool:-&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Servicing a request with an existing thread is faster than waiting to create a thread.&lt;/li&gt;
&lt;li&gt;It limits the number of threads, which benefits systems that cannot support a large number of them.&lt;/li&gt;
&lt;/ul&gt;
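&lt;p&gt;Python's standard library exposes exactly this pattern through &lt;code&gt;concurrent.futures.ThreadPoolExecutor&lt;/code&gt; (the worker count and the request handler are illustrative):&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(req_id):
    # Stand-in for servicing one request.
    return f"served {req_id}"

# Four worker threads are created up front and reused; each submitted
# request is handed to an idle worker instead of spawning a new thread.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(handle_request, i) for i in range(10)]
    replies = [f.result() for f in futures]

print(replies[0], replies[-1])  # served 0 served 9
```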

&lt;h4&gt;
  
  
  &lt;strong&gt;OpenMP&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;OpenMP is a set of compiler directives and an API that provide support for parallel programming.&lt;/li&gt;
&lt;li&gt;It identifies parallel regions in the code and executes them in parallel.&lt;/li&gt;
&lt;li&gt;We can also control the number of threads created and the data shared between threads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, it's time to delve into the issues that arise while threading.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Threading issues&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;&lt;code&gt;fork()&lt;/code&gt; and &lt;code&gt;exec()&lt;/code&gt; system calls&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The question is whether &lt;code&gt;fork()&lt;/code&gt; should duplicate all the threads of the process or only the calling thread, leaving the child single-threaded.&lt;br&gt;
   The &lt;code&gt;exec()&lt;/code&gt; call still works almost the same way: the program specified as its parameter replaces the whole process, including all its threads. &lt;/p&gt;
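&lt;p&gt;CPython on POSIX systems follows the "duplicate only the calling thread" behaviour. A hedged sketch (POSIX-only; it relies on CPython resetting its threading bookkeeping in the child after &lt;code&gt;fork()&lt;/code&gt;):&lt;/p&gt;

```python
import os
import threading
import time

# Keep a second thread alive in the parent.
t = threading.Thread(target=lambda: time.sleep(5), daemon=True)
t.start()

pid = os.fork()
if pid == 0:
    # Child: only the thread that called fork() survives.
    ok = threading.active_count() == 1
    os._exit(0 if ok else 1)
else:
    _, status = os.waitpid(pid, 0)
    print("child is single-threaded:", os.WEXITSTATUS(status) == 0)
```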

&lt;h4&gt;
  
  
  &lt;strong&gt;Signal handling&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Signals&lt;/strong&gt; are used to notify a process that a particular event has occurred during its execution.&lt;br&gt;
   There are two types of handlers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Default signal handler&lt;/strong&gt; - Run by the kernel when handling the signal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User-defined signal handler&lt;/strong&gt; - A user-defined handler overrides the
default signal handler.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a single-threaded program, all signals are delivered to the process.&lt;br&gt;
   In a multi-threaded program, there are four options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deliver the signal to the thread to which the signal applies.&lt;/li&gt;
&lt;li&gt;Deliver the signal to every thread in the process.&lt;/li&gt;
&lt;li&gt;Deliver the signal to certain threads in the process.&lt;/li&gt;
&lt;li&gt;Assign a specific thread to receive all signals for the process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Synchronous signals need to be delivered to the thread that caused them.&lt;br&gt;
Asynchronous signals that affect all the threads are delivered to every thread; signals that affect only a certain thread are delivered to that thread.&lt;br&gt;
Windows does not explicitly provide signal handling but emulates it through &lt;strong&gt;asynchronous procedure calls (APCs)&lt;/strong&gt;. An APC is delivered to a particular thread rather than to the process.&lt;/p&gt;
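&lt;p&gt;CPython effectively takes the "assign a specific thread" option: no matter which thread raises a signal, the Python-level handler always runs in the main thread (POSIX signal names assumed):&lt;/p&gt;

```python
import signal
import threading
import time

caught_in = []

def handler(signum, frame):
    # Record which thread actually ran the handler.
    caught_in.append(threading.current_thread().name)

signal.signal(signal.SIGUSR1, handler)

# Raise the signal from a secondary thread...
t = threading.Thread(target=lambda: signal.raise_signal(signal.SIGUSR1))
t.start()
t.join()

# ...but the handler still executes in the main thread.
for _ in range(200):
    if caught_in:
        break
    time.sleep(0.01)
print(caught_in)  # ['MainThread']
```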

&lt;h4&gt;
  
  
  &lt;strong&gt;Thread cancellation&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The thread which is to be cancelled is known as the &lt;strong&gt;target thread&lt;/strong&gt;.&lt;br&gt;
There are two strategies for thread cancellation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Asynchronous cancellation&lt;/strong&gt; - One thread immediately terminates the target thread, resulting in abrupt termination.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deferred cancellation&lt;/strong&gt; - The target thread periodically checks whether it should terminate, so it terminates in an orderly fashion.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;pthread_cancel(tid)&lt;/code&gt; only requests cancellation of a thread. Actual cancellation depends on how the target thread is set up to handle the request, i.e. deferred or asynchronous.&lt;/p&gt;
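&lt;p&gt;Python has no direct analogue of &lt;code&gt;pthread_cancel()&lt;/code&gt;, but deferred cancellation is easy to sketch with an &lt;code&gt;Event&lt;/code&gt; acting as the cancellation request:&lt;/p&gt;

```python
import threading
import time

stop = threading.Event()      # the cancellation request flag

def worker(log):
    # Deferred cancellation: the target thread periodically checks
    # whether it has been asked to terminate.
    while not stop.is_set():
        time.sleep(0.01)      # one unit of work
    log.append("cleaned up")  # orderly shutdown point

log = []
t = threading.Thread(target=worker, args=(log,))
t.start()
stop.set()   # like pthread_cancel(): only a request to terminate
t.join()
print(log)   # ['cleaned up']
```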

&lt;h4&gt;
  
  
  &lt;strong&gt;Thread local Storage&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Threads belonging to a process share the data of the process. However, in some circumstances, each thread might need its own copy of certain data. We call such data &lt;strong&gt;thread-local storage (TLS)&lt;/strong&gt;. Most thread libraries, including Windows and Pthreads, provide some form of support for thread-local storage. Java provides support as well.&lt;/p&gt;
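&lt;p&gt;In Python the same facility is exposed as &lt;code&gt;threading.local()&lt;/code&gt;: each thread sees only its own copy of the attributes it sets (the attribute and thread names are illustrative):&lt;/p&gt;

```python
import threading

tls = threading.local()  # attributes set on this object are per-thread

def worker(name, seen):
    tls.value = name          # private to this thread
    seen[name] = tls.value    # read back our own copy

seen = {}
threads = [threading.Thread(target=worker, args=(n, seen)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(seen)  # each name maps to its own value; neither thread clobbered the other
```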

&lt;h4&gt;
  
  
  &lt;strong&gt;Scheduler activation&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;For user-level threads to be executed, the thread library has to communicate with the kernel. The scheme for this communication is known as scheduler activation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fzitoc.com%2Fwp-content%2Fuploads%2F2019%2F02%2Fthreading-issues.png%3Ffit%3D385%252C395%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fzitoc.com%2Fwp-content%2Fuploads%2F2019%2F02%2Fthreading-issues.png%3Ffit%3D385%252C395%26ssl%3D1" alt="scheduler activation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The kernel provides the application with a set of virtual processors known as lightweight processes (LWPs).&lt;/li&gt;
&lt;li&gt;The application can schedule user threads onto the LWPs.&lt;/li&gt;
&lt;li&gt;The kernel must inform the application about certain events; this is called an &lt;strong&gt;upcall&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Upcalls are handled by the thread library with an upcall handler, and upcall handlers must run on a virtual processor.&lt;/li&gt;
&lt;li&gt;For example, when a thread is about to block, the kernel makes an upcall to the application informing it of this and identifying the specific thread. &lt;/li&gt;
&lt;li&gt;The kernel then allocates a new virtual processor to the application. &lt;/li&gt;
&lt;li&gt;The application runs an upcall handler on this new virtual processor, which saves the state of the blocking thread and relinquishes the virtual processor on which the blocking thread is running. &lt;/li&gt;
&lt;li&gt;The upcall handler then schedules another thread that is eligible to run on the new virtual processor. &lt;/li&gt;
&lt;li&gt;When the event that the blocking thread was waiting for occurs, the kernel makes another upcall to the thread library informing it that the previously blocked thread is now eligible to run. &lt;/li&gt;
&lt;li&gt;The upcall handler for this event also requires a virtual processor, and the kernel may allocate a new virtual processor. &lt;/li&gt;
&lt;li&gt;After marking the unblocked thread as eligible to run, the application schedules an eligible thread to run on an available virtual processor.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it for the basics of threading. Hope you had a good read.&lt;/p&gt;

</description>
      <category>computerscience</category>
      <category>explainlikeimfive</category>
    </item>
    <item>
      <title>JVM architecture in a nutshell</title>
      <dc:creator>Hrishikesh kumar</dc:creator>
      <pubDate>Tue, 13 Aug 2019 14:38:46 +0000</pubDate>
      <link>https://dev.to/hrishi2710/jvm-architecture-in-a-nutshell-2jj1</link>
      <guid>https://dev.to/hrishi2710/jvm-architecture-in-a-nutshell-2jj1</guid>
      <description>&lt;p&gt;We write code in our IDE. How is it executed? How does it produce the output we want (well, not always!)? These questions have perplexed many beginners as well as some advanced coders. Here, I will try to answer them to some extent with respect to Java.&lt;/p&gt;

&lt;p&gt;So, it’s a well-established fact that all the dirty work of compiling and executing the code is done by the Java Virtual Machine (JVM). But what exactly does the JVM consist of? How does it execute code?&lt;/p&gt;

&lt;p&gt;Whatever we write in the IDE is stored in a Java source file (.java file). It is then compiled using the Java compiler (the javac command). This generates a Java class file (.class file), which is then fed to the class loader subsystem.&lt;/p&gt;

&lt;p&gt;The whole JVM architecture looks as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p2wZ68n7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://javainterviewpoint.com/wp-content/uploads/2016/01/JVM-Architecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p2wZ68n7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://javainterviewpoint.com/wp-content/uploads/2016/01/JVM-Architecture.png" alt="JVM architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Class Loader Subsystem&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;It takes .class file and performs three operations on it:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Loading&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All core Java API classes are loaded by the bootstrap class loader. The classes present inside the extensions folder are loaded by the extension class loader. Finally, the application class loader loads classes from the application class-path.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Linking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here, the loaded class file is prepared for execution. The verify step checks that the bytecode is well formed. In the prepare step, static variables are allocated memory and initialized (attention: initialized with default values, not the assigned ones). Finally, in the resolve step, symbolic references are replaced with direct references from the method area.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Initialization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finally, the static variables are initialized with their assigned values, and all the static blocks are executed.&lt;br&gt;
Every class and operation requires a memory area during run-time; the JVM provides various run-time data areas for this.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Run-time Data Areas&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Method Area&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All .class files are dumped here. It also contains all the static variables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Heap Area&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It consists of all the instance variables or object data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stack Area&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For every thread, a separate run-time stack is created, which resides in the stack area. For every method call, one entry is pushed onto that stack; the entry is called a &lt;strong&gt;stack frame&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Each stack frame consists of three parts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local variable array:-&lt;/strong&gt; As the name suggests, all the local variables of a method and their values are stored here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operand stack:-&lt;/strong&gt; This memory area is used for intermediate operations within a method.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frame data:-&lt;/strong&gt; All symbols used in the method are stored here. If an exception occurs, the corresponding catch-block information is also stored here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PC Registers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For every thread, a separate PC register is created in this area. A PC register contains the address of the next instruction to be executed by that thread.&lt;/p&gt;

&lt;p&gt;Lastly, the native method stack area holds all native method instructions.&lt;/p&gt;

&lt;p&gt;Here we can easily see that, since each thread gets its own run-time stack, the stack area is thread-safe. By contrast, the whole JVM has only one method area and one heap area, so neither of them is thread-safe.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Execution Engine&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;So far, the class has been loaded and allocated memory. All the dirty work of executing the code is done by the execution engine. Just like a CPU, it executes the program line by line. It, too, consists of different parts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interpreter&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It reads, interprets and executes the code line by line.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JIT compiler&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It comes into action only for repeatedly invoked methods, not for all methods. The profiler does the job of identifying these repeatedly invoked, or &lt;strong&gt;hotspot&lt;/strong&gt;, methods.&lt;/p&gt;

&lt;p&gt;Then there is the garbage collector, which finds objects that are no longer reachable in the code and frees their memory.&lt;/p&gt;

&lt;p&gt;At last, there is the native method interface, or Java Native Interface (JNI), which simply provides an interface for native method libraries to be loaded during run-time.&lt;br&gt;
So, this is what makes up the JVM and its functions.&lt;/p&gt;

&lt;p&gt;Hope, it gives you a basic idea of what sorcery is going on within the machine!! Enjoy.&lt;/p&gt;

</description>
      <category>computerscience</category>
      <category>explainlikeimfive</category>
      <category>java</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Processes in Operating System</title>
      <dc:creator>Hrishikesh kumar</dc:creator>
      <pubDate>Fri, 31 May 2019 12:27:32 +0000</pubDate>
      <link>https://dev.to/hrishi2710/processes-in-operating-system-551h</link>
      <guid>https://dev.to/hrishi2710/processes-in-operating-system-551h</guid>
      <description>&lt;p&gt;Here we will discuss the concept of processes in operating systems. For the basics of operating systems, you can visit the following article:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/hrishi2710" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F169970%2Fc1170f84-4d5a-4e4d-9541-27767baa7afc.png" alt="hrishi2710"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/hrishi2710/basic-operating-system-services-and-structures-275e" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Basic Operating System services and structures.&lt;/h2&gt;
      &lt;h3&gt;Hrishikesh kumar ・ May 27 '19&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#computerscience&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#explainlikeimfive&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;So, let's get going with the concept of processes.&lt;/p&gt;

&lt;p&gt;A process is a program in execution. It is not only the program code; it also includes the program counter, the current activity, the heap, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Process concept&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Process state&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A process can be in one of five states at any time:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;New - when the process is being created.&lt;/li&gt;
&lt;li&gt;Running - when the processor is executing the process.&lt;/li&gt;
&lt;li&gt;Waiting - when the process is waiting for something, say input from a device.&lt;/li&gt;
&lt;li&gt;Ready - when the process is in the queue, ready to be dispatched to the CPU.&lt;/li&gt;
&lt;li&gt;Terminated - when execution is over and the corresponding resources and memory are de-allocated.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Process state diagram:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvf7m5dcv24b9wfbjlw0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvf7m5dcv24b9wfbjlw0.jpg" alt="process state diagram" width="400" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Process Control Block(PCB)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Process control block consists of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Process state:- As illustrated above.&lt;/li&gt;
&lt;li&gt;Program counter:- A pointer to the address of the next instruction to be executed.&lt;/li&gt;
&lt;li&gt;CPU registers:- The saved state of the CPU registers.&lt;/li&gt;
&lt;li&gt;CPU scheduling information:- Used to decide when to send the process for execution and when to keep it waiting.&lt;/li&gt;
&lt;li&gt;Memory management information:- The memory limits of the process, such as the values of the base and limit registers.&lt;/li&gt;
&lt;li&gt;Accounting information:- Data such as the time taken to run, the memory used by the process, etc.&lt;/li&gt;
&lt;li&gt;I/O status&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Threads&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Within a process, several smaller streams of execution (not to be confused with child processes) can run simultaneously; these are known as threads. For example, a word-processing program can take input from the keyboard and run the spellchecker simultaneously. Those are two threads in a single process.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Process scheduling&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Process scheduling diagram:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dyPMJFSA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.cs.odu.edu/%7Ecs471w/spring10/lectures/Processes_files/image028.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dyPMJFSA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.cs.odu.edu/%7Ecs471w/spring10/lectures/Processes_files/image028.jpg" alt="processing queue" width="566" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Any incoming program (or job) is first placed in the &lt;strong&gt;job queue&lt;/strong&gt;, then moved to the ready queue when it is ready for execution. The &lt;strong&gt;ready queue&lt;/strong&gt; is typically implemented as a linked list whose head holds the address of the first PCB (process control block) and whose tail holds that of the last PCB waiting to be dispatched to the CPU. &lt;/p&gt;

&lt;p&gt;When a process's turn for execution comes, the scheduler dispatches it to the CPU, where it runs. If the job is purely computational, it executes to completion and the process is terminated; but if it requires I/O, or creates a child process whose execution must complete first, the process waits for the time being and another process is allocated to the CPU in the meantime. &lt;/p&gt;

&lt;p&gt;Once the interrupt/&lt;strong&gt;I/O&lt;/strong&gt;/child process finishes its execution, the process is queued back to the ready queue, and it is terminated after the CPU finishes processing it.&lt;/p&gt;
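
&lt;p&gt;The cycle described above (dispatch, wait for I/O, requeue, terminate) can be sketched as a toy simulation. This is not how a real scheduler is written; the "PCBs" are plain dictionaries with invented fields:&lt;/p&gt;

```python
from collections import deque

# Toy ready queue: each "PCB" is a dict; field names are illustrative only.
ready = deque([
    {"pid": 1, "needs_io": False},   # purely computational job
    {"pid": 2, "needs_io": True},    # will wait for I/O once, then finish
])
waiting = deque()
finished = []

while ready or waiting:
    if ready:
        proc = ready.popleft()           # short-term scheduler dispatches
        if proc["needs_io"]:
            proc["needs_io"] = False     # I/O request issued
            waiting.append(proc)         # process waits; CPU goes to the next one
        else:
            finished.append(proc["pid"]) # execution over: terminate
    if waiting:
        ready.append(waiting.popleft())  # I/O completed: back to the ready queue
```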

&lt;p&gt;&lt;strong&gt;Scheduler&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are two kinds of schedulers: &lt;strong&gt;job scheduler/ long term scheduler&lt;/strong&gt; and &lt;strong&gt;CPU scheduler/ short term scheduler&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;job scheduler&lt;/strong&gt; decides how a job is admitted to the ready queue for execution. It acts only when the ready queue has space, or when an existing process completes execution and is terminated, so it runs comparatively infrequently; hence the name long term scheduler.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;CPU scheduler&lt;/strong&gt; must be faster, as it dispatches programs from the ready queue to the CPU for execution. It is invoked much more frequently, since each program's execution (or part of it) takes less time; hence the name short term scheduler.&lt;/p&gt;

&lt;p&gt;Most of the processes can be divided into two categories namely &lt;strong&gt;I/O bound&lt;/strong&gt; and &lt;strong&gt;CPU bound&lt;/strong&gt;. I/O bound processes require less of computational work and more of input from outside whereas the CPU bound tends to be purely computational.&lt;/p&gt;

&lt;p&gt;Schedulers collectively have to dispatch a mix of both kinds of processes so that resources are used efficiently. If mostly I/O bound processes are admitted, the ready queue will be nearly empty and the CPU will sit idle, as the processes spend their time waiting for I/O. On the other hand, if mostly CPU bound processes are dispatched, the I/O queues will be empty, and the I/O devices are wasted.&lt;/p&gt;

&lt;p&gt;There is one more category of scheduler used in many modern operating systems, the &lt;strong&gt;medium term scheduler&lt;/strong&gt;. An OS using this kind of scheduler admits processes directly to the ready queue; the medium term scheduler can later swap a partially executed process out of memory (for example, to reduce the load on the system) and swap it back in to resume execution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context switch&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;context switch&lt;/strong&gt; occurs whenever the CPU is switched from one process to another, for example when an interrupt occurs. The current state (or context) of the running process is saved (known as &lt;strong&gt;state save&lt;/strong&gt; or &lt;strong&gt;context save&lt;/strong&gt;), and once the process is resumed, the state is restored (&lt;strong&gt;state restore&lt;/strong&gt;).&lt;/p&gt;
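
&lt;p&gt;A minimal sketch of state save and state restore, with a "CPU" reduced to a small dictionary of registers (the register names and process names are invented for the example):&lt;/p&gt;

```python
# Saved contexts for two toy processes: a program counter and one register each.
contexts = {
    "P1": {"pc": 100, "acc": 7},
    "P2": {"pc": 200, "acc": 0},
}
cpu = dict(contexts["P1"])         # P1 is currently running on the CPU

def context_switch(old, new):
    contexts[old] = dict(cpu)      # state save of the interrupted process
    cpu.clear()
    cpu.update(contexts[new])      # state restore of the dispatched process

cpu["pc"] = 101                    # P1 executes one instruction
context_switch("P1", "P2")         # interrupt: the CPU now runs P2
```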

&lt;h3&gt;
  
  
  &lt;strong&gt;Operation on processes&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Process creation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A process can create a &lt;strong&gt;child process&lt;/strong&gt; which in turn can create another child process of its own. A tree is used to maintain the link between them. Every process created is identified by an ID known as Process ID(&lt;strong&gt;pid&lt;/strong&gt;).&lt;/p&gt;

&lt;p&gt;2 types of process execution:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Parent and child are concurrently executed.&lt;/li&gt;
&lt;li&gt;Parent waits for the child process to finish execution.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;2 types of address space used:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Child process is duplicate of the parent using the same resources.&lt;/li&gt;
&lt;li&gt;Child process acts as a new program using its own resources.&lt;/li&gt;
&lt;/ol&gt;
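
&lt;p&gt;On Unix-like systems, both ideas above can be seen with &lt;code&gt;fork()&lt;/code&gt;: the child starts as a duplicate of the parent, and the parent can wait for it to finish. A minimal sketch (Unix-only; the exit status 7 is arbitrary):&lt;/p&gt;

```python
import os

# fork() duplicates the calling process: it returns 0 in the child
# and the child's pid in the parent.
pid = os.fork()
if pid == 0:
    # Child: a real program might call an exec() variant here to run new code.
    os._exit(7)                      # terminate with an arbitrary status
else:
    _, status = os.waitpid(pid, 0)   # parent waits for the child to finish
    child_code = os.WEXITSTATUS(status)
```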

&lt;p&gt;&lt;strong&gt;Process termination&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A process is terminated when its execution is finished or any other process forces it to terminate.&lt;/p&gt;

&lt;p&gt;In some programs, termination of the parent process causes all its child processes to terminate. This is known as &lt;strong&gt;cascading termination&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Once a child process terminates, it remains in the process table until its parent collects its exit status through a &lt;code&gt;wait()&lt;/code&gt; call. Till then the child process is called a &lt;strong&gt;zombie process&lt;/strong&gt;. And in case the parent is terminated while the child is not, the child process is known as an &lt;strong&gt;orphan process&lt;/strong&gt;. In Linux and Unix, orphans are re-parented to the &lt;code&gt;init&lt;/code&gt; process, which waits on them so they can terminate cleanly.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Interprocess communication(IPC)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;There are two types of processes vis-a-vis communication. &lt;strong&gt;Independent processes&lt;/strong&gt; which do not require communication and &lt;strong&gt;cooperating processes&lt;/strong&gt; which require communication.&lt;/p&gt;

&lt;p&gt;Communication can be done in 2 ways (or models):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Shared memory model&lt;/li&gt;
&lt;li&gt;Message passing model&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Shared memory model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To illustrate this model, consider the classic producer-consumer problem, where the producer produces information to be consumed by the consumer. For both processes to run concurrently, there has to be a buffer of items in shared memory which can be filled by the producer and emptied by the consumer. &lt;/p&gt;

&lt;p&gt;Two types of buffer can be used for the aforementioned process. The &lt;strong&gt;unbounded buffer&lt;/strong&gt; places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items. The &lt;strong&gt;bounded buffer&lt;/strong&gt; assumes a fixed buffer size. In this case, the consumer must wait if the buffer is empty and the producer must wait if the buffer is full.&lt;/p&gt;
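
&lt;p&gt;The bounded-buffer behaviour can be sketched with threads standing in for the two processes and Python's &lt;code&gt;queue.Queue&lt;/code&gt; standing in for the shared-memory buffer: &lt;code&gt;put()&lt;/code&gt; blocks when the buffer is full and &lt;code&gt;get()&lt;/code&gt; blocks when it is empty, exactly the two waits described above:&lt;/p&gt;

```python
import threading, queue

# Bounded buffer of size 3: the producer blocks when it is full,
# the consumer blocks when it is empty.
buffer = queue.Queue(maxsize=3)
consumed = []

def producer():
    for item in range(10):
        buffer.put(item)     # blocks while the buffer already holds 3 items
    buffer.put(None)         # sentinel: nothing more to produce

def consumer():
    while True:
        item = buffer.get()  # blocks while the buffer is empty
        if item is None:
            break
        consumed.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```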

&lt;p&gt;&lt;strong&gt;Message passing model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There should exist a communication link in order to pass the message between two processes. Issues related to these are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Naming&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Processes that want to communicate must have a way to refer to each other. They can use either direct or indirect communication.&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;direct communication&lt;/strong&gt;, each process that wants to communicate must explicitly name the recipient or sender of the message. It can be defined as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;send(P, message)&lt;/code&gt; - Send a message to process P.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;receive(Q, message)&lt;/code&gt; - Receive a message from process Q.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There is a sense of &lt;strong&gt;symmetry&lt;/strong&gt; in this scheme, as both processes know the name of the other. There is an &lt;strong&gt;asymmetrical variation&lt;/strong&gt; of this, in which the sender names the recipient but the recipient need not name the sender. It can be illustrated as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;send(P, message)&lt;/code&gt; - Send a message to process P&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;receive(id, message)&lt;/code&gt; - Receive a message from any process. The variable &lt;code&gt;id&lt;/code&gt; is set to the name of the process with which communication has taken place.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Under &lt;strong&gt;indirect communication&lt;/strong&gt;, the messages are sent to a mailbox or port from where the recipient of the message can get it. It can be illustrated as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;send (A, message)&lt;/code&gt; - Send a message to mailbox A.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;receive (A, message)&lt;/code&gt; - Receive a message from mailbox A.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So it's clear that for two processes to communicate this way, they must share a mailbox. Also, unlike direct communication, which links exactly two processes, more than two processes can communicate here, since several processes may receive messages from the same mailbox.&lt;/p&gt;

&lt;p&gt;Now, let's say processes P, Q, R all share the same mailbox, where P sends and Q and R receive. Which process will receive the message sent by P, and how is the conflict avoided? It can be decided by the following methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allow a link to be associated with two processes at most.&lt;/li&gt;
&lt;li&gt;Allow at most one process at a time to execute a &lt;code&gt;receive()&lt;/code&gt; operation.&lt;/li&gt;
&lt;li&gt;Let the system select which process (Q or R, but not both) receives each message, for example in round-robin fashion.&lt;/li&gt;
&lt;/ul&gt;
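
&lt;p&gt;A small sketch of the mailbox scenario, using threads for Q and R and a &lt;code&gt;queue.Queue&lt;/code&gt; as the mailbox: the queue itself enforces that each message sent by P is received by exactly one of the two receivers (which one is up to scheduling):&lt;/p&gt;

```python
import threading, queue

# Shared mailbox: P sends, Q and R both receive. Each message is
# delivered to exactly one receiver, resolving the conflict above.
mailbox = queue.Queue()
received = {"Q": [], "R": []}

def receiver(name):
    while True:
        msg = mailbox.get()
        if msg is None:          # stop sentinel
            break
        received[name].append(msg)

threads = [threading.Thread(target=receiver, args=(n,)) for n in ("Q", "R")]
for t in threads:
    t.start()
for i in range(6):
    mailbox.put(i)               # P sends six messages
mailbox.put(None)
mailbox.put(None)                # one stop sentinel per receiver
for t in threads:
    t.join()
```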

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Synchronisation&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Message passing may be either &lt;strong&gt;blocking&lt;/strong&gt; or &lt;strong&gt;nonblocking&lt;/strong&gt; - also known as &lt;strong&gt;synchronous&lt;/strong&gt; and &lt;strong&gt;asynchronous&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Blocking send - The sending process is blocked until the message is received by the process or mailbox.&lt;/li&gt;
&lt;li&gt;Nonblocking send - The sending process sends the message and resumes the operation.&lt;/li&gt;
&lt;li&gt;Blocking receive - The receiver blocks until a message is available.&lt;/li&gt;
&lt;li&gt;Nonblocking receive - The receiver retrieves either a valid message or a null.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A combination of send and receive calls can be used.&lt;/p&gt;
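
&lt;p&gt;The difference between a blocking and a nonblocking receive shows up directly in &lt;code&gt;queue.Queue&lt;/code&gt;: a nonblocking &lt;code&gt;get&lt;/code&gt; on an empty mailbox reports "no message" immediately instead of waiting:&lt;/p&gt;

```python
import queue

mb = queue.Queue()

# Nonblocking receive: on an empty mailbox it raises Empty at once,
# playing the role of "retrieves either a valid message or a null".
try:
    mb.get(block=False)
    first = "message"
except queue.Empty:
    first = "empty"

mb.put("hello")
second = mb.get()   # blocking receive: waits until a message is available
```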

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Buffering&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Whether communication is direct or indirect, messages exchanged by processes reside in a temporary queue. Such queues can be implemented in 3 ways.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Zero capacity - The queue has max length of zero, so no message can wait, so sender must block until the message is received.&lt;/li&gt;
&lt;li&gt;Bounded capacity - The queue has max length of n, so sender will block only when the n messages have filled the queue.&lt;/li&gt;
&lt;li&gt;Unbounded capacity - The sender never blocks.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Communication in client-server system&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Apart from the above IPCs, three other models can be used for the client-server system: &lt;strong&gt;sockets&lt;/strong&gt;, &lt;strong&gt;remote procedure calls(RPC)&lt;/strong&gt;, &lt;strong&gt;pipes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sockets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A socket is an endpoint of communication. A pair of communicating processes needs a pair of sockets, one for each process. A socket is identified by an IP address followed by a port number. For example, 162.0.0.1:80 refers to port 80 on the machine with IP address 162.0.0.1.&lt;/p&gt;

&lt;p&gt;Servers implementing specific services (like FTP, HTTP) listen to well-known ports (FTP on port 21, HTTP on port 80). All ports numbered less than 1024 are &lt;strong&gt;well known ports&lt;/strong&gt;. Sockets allow only an unstructured stream of bytes to be exchanged, so they are considered a &lt;strong&gt;lower level of communication&lt;/strong&gt;; the responsibility for structuring the data lies with the client or server application.&lt;/p&gt;
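
&lt;p&gt;A minimal socket pair on one machine, with an echo server standing in for a real service. Note the port here is picked by the OS rather than being a well-known one, and the raw bytes come back with no structure imposed:&lt;/p&gt;

```python
import socket, threading

# Server socket bound to the loopback address; port 0 lets the OS choose.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))    # unstructured byte stream echoed back as-is
    conn.close()

threading.Thread(target=serve).start()

# Client socket: the other endpoint of the communication.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
```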

&lt;p&gt;&lt;strong&gt;Remote Procedure Calls(RPC)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;RPC is loosely based on the message-passing model: a message containing all the necessary information (such as the listening port address, the parameters for the function, etc.) is sent to the system with the listening port. The message is received and executed there, and if there is any output to be returned, it is sent back in the same manner.&lt;/p&gt;

&lt;p&gt;One issue that needs to be dealt with is the difference in data representation on the client and server machines. Consider the representation of 32-bit integers. Some systems (known as &lt;strong&gt;big-endian&lt;/strong&gt;) store the most significant byte first, while other systems (known as &lt;strong&gt;little-endian&lt;/strong&gt;) store the least significant byte first. To resolve the differences, many RPC systems define a machine-independent representation of data. One such representation is known as &lt;strong&gt;external data representation (XDR)&lt;/strong&gt;. On the client or server side, XDR data can be converted to the machine's native representation.&lt;/p&gt;
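
&lt;p&gt;The two byte orders are easy to see in Python with &lt;code&gt;int.to_bytes&lt;/code&gt;. The last two lines sketch the XDR-style convention only in spirit (always transmit big-endian, convert on arrival); they do not use the actual XDR library:&lt;/p&gt;

```python
# The same 32-bit integer laid out under the two byte orders.
x = 0x12345678

big = x.to_bytes(4, "big")        # most significant byte first (big-endian)
little = x.to_bytes(4, "little")  # least significant byte first (little-endian)

# XDR-style convention: the wire format is fixed as big-endian,
# and each machine converts to/from its own representation.
wire = x.to_bytes(4, "big")
decoded = int.from_bytes(wire, "big")
```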

&lt;p&gt;Another concern is how the client and server learn each other's port, given that they don't share any memory. There are two approaches for this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The information is predetermined, i.e. the same port is used every time. But then, after compilation, the port can't be changed.&lt;/li&gt;
&lt;li&gt;The port information can be obtained dynamically by sending a request for the same and then getting it from other side.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Pipes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pipes act as a conduit allowing the processes to communicate with each other.&lt;br&gt;
There can be following issues while implementing a pipe:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is the pipe bidirectional or unidirectional, i.e. is the message flow allowed from both sides or only one?&lt;/li&gt;
&lt;li&gt;If two-way communication is allowed, is it half duplex (the message is allowed to travel only one way at a time) or full duplex (both ways allowed at once)?&lt;/li&gt;
&lt;li&gt;Must there be a relationship (such as parent and child) between the communicating processes?&lt;/li&gt;
&lt;li&gt;Can the pipes communicate over a network, or only on the same machine?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Two kinds of pipes are discussed here:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ordinary pipes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Also known as &lt;strong&gt;anonymous pipes&lt;/strong&gt; on Windows. This pipe is unidirectional and requires a parent-child relationship between the communicating processes, which also means it can only be used for communication on the same machine. Once the communication is over, the pipe ceases to exist.&lt;/p&gt;
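
&lt;p&gt;An ordinary pipe in action, sketched with &lt;code&gt;os.pipe()&lt;/code&gt; and &lt;code&gt;fork()&lt;/code&gt; (Unix-only): one end is for reading, the other for writing, and the two processes are parent and child on the same machine:&lt;/p&gt;

```python
import os

# os.pipe() returns a (read, write) pair of file descriptors.
read_fd, write_fd = os.pipe()
pid = os.fork()
if pid == 0:
    os.close(read_fd)                 # child uses only the write end
    os.write(write_fd, b"hi parent")
    os.close(write_fd)
    os._exit(0)
else:
    os.close(write_fd)                # parent uses only the read end
    data = os.read(read_fd, 1024)
    os.close(read_fd)
    os.waitpid(pid, 0)                # reap the child
```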

&lt;p&gt;&lt;strong&gt;Named pipes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are bidirectional pipes, and they allow communication between any two processes (unlike ordinary pipes, which connect only parent and child processes). The pipe continues to exist after a particular communication is over and can be used by other processes afterwards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hope that the basic concepts of processes are clear after this.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>computerscience</category>
      <category>explainlikeimfive</category>
      <category>operatingsystem</category>
    </item>
    <item>
      <title>Basic Operating System services and structures.</title>
      <dc:creator>Hrishikesh kumar</dc:creator>
      <pubDate>Mon, 27 May 2019 07:52:32 +0000</pubDate>
      <link>https://dev.to/hrishi2710/basic-operating-system-services-and-structures-275e</link>
      <guid>https://dev.to/hrishi2710/basic-operating-system-services-and-structures-275e</guid>
      <description>&lt;p&gt;Operating System(OS) provides an interface to the user in order to communicate with the hardware.&lt;/p&gt;

&lt;p&gt;OS has mainly three functions:-&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To provide services&lt;/li&gt;
&lt;li&gt;System calls&lt;/li&gt;
&lt;li&gt;User Interface&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Services&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The OS provides services like file maintenance, I/O, resource allocation, communication, etc. These services are carried out with the help of system calls.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;System calls&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;System calls provide an interface for the services provided by the OS. Few of the system calls are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Process control - Any program loaded into memory and ready for execution is known as &lt;strong&gt;process&lt;/strong&gt;. OS provides control on these processes like reading inputs, error detection etc.&lt;/li&gt;
&lt;li&gt;File manipulation - Reading, writing, deleting file etc.&lt;/li&gt;
&lt;li&gt;Device manipulation - Reading from devices, Setting appropriate parameters in the memory to write to devices etc.&lt;/li&gt;
&lt;li&gt;Information manipulation - Transferring information between user program and OS such as &lt;em&gt;time()&lt;/em&gt; , &lt;em&gt;date()&lt;/em&gt; etc.&lt;/li&gt;
&lt;li&gt;Communication - Setting up communication protocols for passing data between devices. This can be done in 2 ways: the &lt;strong&gt;message passing model&lt;/strong&gt; and the &lt;strong&gt;shared memory model&lt;/strong&gt;. In the former model the message is passed between devices using &lt;em&gt;hostid()&lt;/em&gt; and &lt;em&gt;clientid&lt;/em&gt;. In the latter model, data is placed in shared memory. The message passing model is mainly used for smaller amounts of data, as it takes more time to communicate than the shared memory model, in which data is shared at the speed of memory transfer. &lt;/li&gt;
&lt;li&gt;Protection &amp;amp; security - Protecting system from malicious softwares, errors etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;User and OS interface&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;User interfaces are basically of 3 types: &lt;strong&gt;command line UI&lt;/strong&gt; (eg. UNIX), &lt;strong&gt;Batch&lt;/strong&gt;, and &lt;strong&gt;GUI&lt;/strong&gt; (eg. Windows).&lt;/p&gt;

&lt;p&gt;OS interface consists of the programs which are used to read the user command and act accordingly. This is done through command interpreter. There are 2 methods through which command interpreter can work:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The command interpreter itself contains the code and sets the parameters accordingly.
Eg:- A command to delete a file may cause the command interpreter to jump to a section of its own code that sets up the parameters for deletion and then makes the appropriate system calls.&lt;/li&gt;
&lt;li&gt;The command interpreter uses system calls and separate programs to get the work done.
Eg:- Let's say the command &lt;code&gt;rm file.txt&lt;/code&gt; is given.
The interpreter doesn't understand the command in any way. It will search for the file named &lt;code&gt;rm&lt;/code&gt;, load it into memory, and execute it with the parameter &lt;code&gt;file.txt&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
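
&lt;p&gt;The second method can be sketched in a few lines: the interpreter merely searches for an executable with the command's name and runs it with the given arguments. Here &lt;code&gt;echo&lt;/code&gt; stands in for a command like &lt;code&gt;rm&lt;/code&gt; (so the sketch has no side effects), and &lt;code&gt;shutil.which&lt;/code&gt; plays the role of the interpreter's search:&lt;/p&gt;

```python
import shutil, subprocess

# Method 2: the interpreter does not understand the command itself.
# It locates an executable with that name, loads it, and runs it
# with the parameters typed by the user.
command, args = "echo", ["hello"]
path = shutil.which(command)          # search for the file named "echo"
result = subprocess.run([path] + args, capture_output=True, text=True)
output = result.stdout.strip()
```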

&lt;h4&gt;
  
  
  &lt;strong&gt;OS structure&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;1.Simple structure - DOS has this type of structure.&lt;/p&gt;

&lt;p&gt;Structure of DOS: &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sz0rqOgK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://faculty.salina.k-state.edu/tim/ossg/_images/dos_struct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sz0rqOgK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/http://faculty.salina.k-state.edu/tim/ossg/_images/dos_struct.png" alt="alt text" width="327" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In DOS, a user application may bypass the OS and access the hardware directly, which is more prone to errors.&lt;/p&gt;

&lt;p&gt;2.Layered approach - The OS services are modeled into layers with hardware as the lowest layer and user interface as the topmost layer.&lt;/p&gt;

&lt;p&gt;Layered structure: &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtltvpia6ysobj6puxry.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtltvpia6ysobj6puxry.jpg" alt="alt text" width="479" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One thing to note here is that each layer can communicate only with the layers below it, so the layering has to be planned properly. Also, debugging is easier in this structure, as it is done layer by layer, so the exact location of a problem can be ascertained.&lt;/p&gt;

&lt;p&gt;3.Microkernel - The kernel is made smaller to make the OS fast to reboot and to avoid conflicts. The microkernel is written to ROM so that it can't be modified. Only the most important system services and calls are embedded into the microkernel; the rest of the system calls are written to and loaded from the disk.&lt;/p&gt;

&lt;p&gt;4.Modules - The microkernel approach has a drawback: the system calls kept on disk sit alongside user applications, so they are prone to being modified and susceptible to fatal errors. To get around this, system services are written to disk in the form of modules. These modules are loaded with the kernel when needed, and the kernel itself is written in EPROM (erasable programmable ROM) instead of ROM so that it can be modified when necessary.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;OS debugging&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Whenever an error is encountered, the OS writes error information into a &lt;strong&gt;log file&lt;/strong&gt; which can later be used to debug. The OS can also take a &lt;strong&gt;core dump&lt;/strong&gt; - a capture of the memory of the process - and store it in a file for later analysis. A failure in the kernel is called a &lt;strong&gt;crash&lt;/strong&gt;. When a crash occurs, error information is saved to a log file, and the memory state is saved to a &lt;strong&gt;crash dump&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance tuning&lt;/strong&gt; also forms a part of debugging which is done through many tools such as Windows task manager, DTrace etc.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;System boot&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The process of starting a computer by loading the kernel is known as &lt;strong&gt;booting&lt;/strong&gt; the system. On most computers, a small piece of code known as the &lt;strong&gt;bootstrap loader&lt;/strong&gt; locates the kernel, loads it into memory, and starts its execution.&lt;br&gt;
At this point the system is said to be &lt;strong&gt;running&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>computerscience</category>
      <category>explainlikeimfive</category>
    </item>
  </channel>
</rss>
