<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Igor Rubinovich</title>
    <description>The latest articles on DEV Community by Igor Rubinovich (@igorrubinovich).</description>
    <link>https://dev.to/igorrubinovich</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1103841%2F2a90feaf-37ef-478f-b79a-726369b57b18.jpeg</url>
      <title>DEV Community: Igor Rubinovich</title>
      <link>https://dev.to/igorrubinovich</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/igorrubinovich"/>
    <language>en</language>
    <item>
      <title>Books that shaped me as a software engineer</title>
      <dc:creator>Igor Rubinovich</dc:creator>
      <pubDate>Thu, 22 Feb 2024 15:21:25 +0000</pubDate>
      <link>https://dev.to/igorrubinovich/books-that-shaped-me-professionally-433e</link>
      <guid>https://dev.to/igorrubinovich/books-that-shaped-me-professionally-433e</guid>
      <description>&lt;p&gt;For a while I had the idea to list books that I consider as those that shaped my understanding of an aspect of the software industry.&lt;br&gt;
Note I don't say "programming", because part of the understanding is to see the place of programming, the process of conceiving and writing actual code, in the broader picture.&lt;br&gt;
Some of the books will look trivial to many, but the point is  not what's more advanced - it's about what worked well along the way.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The C Programming Language - Brian Kernighan and Dennis Ritchie
&lt;/h3&gt;

&lt;p&gt;While I ended up never writing a single line of commercial code in C, reading this book in high school has defined the tactical part of my code thinking to this day. I had done some very basic development before, but it was just that: BASIC, or Pascal (see what I did there with "basic"?). &lt;br&gt;
K&amp;amp;R showed me things that were simply impossible in my previous languages in terms of the terseness of the code and its general wit. I actually did each chapter's exercises in order before proceeding to the next chapter, as instructed in the introduction. This way I could really follow the teaching.&lt;br&gt;&lt;br&gt;
The wise progression of exercises has also served as an inspirational guideline when I've taught programming on various occasions.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Introduction to Algorithms - Thomas Cormen
&lt;/h3&gt;

&lt;p&gt;This is not an easy read, but if you have never seen an algorithm laid out formally, you have work to do, and this book will guide you. And if you have seen some, there are always more to explore. It's too big to swallow in one go (in fact, it's a lifelong learning exercise), but going over at least some of it helps you understand, evaluate and come up with new algorithms.&lt;br&gt;
It might seem obvious that common algorithms are a fundamental topic to address early in one's career. But to my surprise, while interviewing a frontend dev with good references and about 3 years out of college, I asked him about the difference between breadth-first and depth-first search, and he replied that it sounded like something he had heard in class, but it was too long ago to remember the details. He did a decent job, which may sound like an endorsement of treating the subject as too abstract, but it's not. I firmly believe that, given enough time and coffee, anyone who does software engineering should be able to find their way out of a labyrinth with their eyes closed, in at least two different ways. A matter of survival.&lt;/p&gt;
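
&lt;p&gt;For readers who share that long-ago-in-class feeling, here is a minimal sketch (mine, not from the book) of both ways out of the labyrinth. The only difference between the two strategies is whether the frontier is treated as a queue (breadth-first) or a stack (depth-first).&lt;/p&gt;

```javascript
// Two ways out of a labyrinth: the maze is a grid of '.' (open) and
// '#' (wall). Both searches explore from `start` and report whether
// `exit` is reachable; only the frontier discipline differs.
function escapeMaze(maze, start, exit, breadthFirst) {
  const frontier = [start];
  const seen = new Set([String(start)]);
  while (frontier.length > 0) {
    // BFS takes from the front (FIFO), DFS from the back (LIFO).
    const [r, c] = breadthFirst ? frontier.shift() : frontier.pop();
    if (String([r, c]) === String(exit)) return true;
    for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const cell = [r + dr, c + dc];
      if (maze[cell[0]]?.[cell[1]] !== '.') continue; // wall or off-grid
      if (seen.has(String(cell))) continue;           // already visited
      seen.add(String(cell));
      frontier.push(cell);
    }
  }
  return false; // every reachable cell explored, no way out
}

const maze = ['..#', '.#.', '...'];
const viaBfs = escapeMaze(maze, [0, 0], [2, 2], true);  // true
const viaDfs = escapeMaze(maze, [0, 0], [2, 2], false); // true
```

&lt;p&gt;Breadth-first visits cells in order of distance from the start, so it also finds a shortest route; depth-first commits to one corridor at a time and backtracks.&lt;/p&gt;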

&lt;h3&gt;
  
  
  3. UML Distilled - Martin Fowler
&lt;/h3&gt;

&lt;p&gt;If you don't know Fowler, maybe you should review your list of software engineering influencers. He is the modern incarnation of the whole Gang of Four in a single person (and if you are him, I hope you are OK with the metaphor and I won't be expelled from the industry).&lt;br&gt;
The book's title is completely misleading, since he uses UML as a pretext for an efficient overview of the software/service lifecycle. He really tells you how to start thinking about a software project that doesn't yet exist: from idea to architecture on a napkin, through use cases, design patterns, iterations, testing, maintenance and even sunsetting.&lt;br&gt;
Last but not least, he explains lean/agile/scrum, not so much the specific rituals as the mindset of choosing iterative action and accepting imperfection over excessive planning and analysis paralysis. It's also quite curious how, over the years, these ideas have propagated from software into other areas.&lt;br&gt;
You don't become an architect or a project manager immediately after reading this, but you get a boost in both directions, gain a better understanding of the context of your job, and know what to look at next: GoF.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Design Patterns - Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides, a.k.a. the Gang of Four, a.k.a. GoF
&lt;/h3&gt;

&lt;p&gt;It is not really a book to enjoy as fun, light reading, nor do I believe anyone knows every pattern by heart or can always tell a facade from a proxy. But spending enough time with the book provides a perspective for recognizing higher-level abstractions that tend to repeat in seemingly different branches of software and across completely unrelated industries.&lt;br&gt;
Like the Algorithms book, it also provides value in terms of expressing and communicating these ideas, and therefore ideas about software design/architecture in general.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. JavaScript Allongé - Reginald Braithwaite
&lt;/h3&gt;

&lt;p&gt;Instead of focusing on the good and bad parts of JavaScript, Braithwaite focuses on its functional aspect. If you come from the world of C++/Java/C#, then once you understand prototype-based inheritance you might think you now understand the language, because you finally understand how to do all the things you learned about classes - in JavaScript. This book has some news for you.&lt;br&gt;
There is a lot of functional influence in today's JavaScript, so maybe for some people it's obvious, but if you feel lost when you see higher-order functions or React components, or, God forbid, combinators, consider this book and you will instantly become a better version of your developer self.&lt;br&gt;
If you are not a complete beginner and will read just one book from this list, pick this one.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. The Little Lisper - Daniel P. Friedman, Matthias Felleisen
&lt;/h3&gt;

&lt;p&gt;If you were to read JavaScript Allongé, you would want to check out The Little Lisper, if only because Braithwaite drew inspiration from it and mentions it on multiple occasions. &lt;br&gt;
The Little Lisper is the oldest book on this list. It's a brilliant piece of literary and educational art. It will tempt you to run a few examples in Lisp (there's an app for that, so you can write short Lisp programs on your phone while bored on a plane or at a picnic). It will take you further into functional land. It's full of brain teasers and will tickle and talk to your inner child that, long ago, just wanted to play with computers and tech and dreamed of doing it for real. Someday.&lt;/p&gt;

&lt;p&gt;Be warned: if you actually read enough of books 5 and 6 you will see Lisp jump out of JavaScript's C clothing and even bite you. Extremely susceptible individuals may feel the urge to replace every for loop with recursion. Resist the urge. There are reasons why functional libraries like lodash have for loops under the hood.&lt;/p&gt;
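
&lt;p&gt;To make the warning concrete, here is a small sketch (my own, not from either book) contrasting the two styles. The recursive version is what the functional itch produces; the loop is what libraries like lodash actually run.&lt;/p&gt;

```javascript
// Lisp-flavored sum: recurse on the head plus the sum of the rest.
function sumRec(xs) {
  return xs.length === 0 ? 0 : xs[0] + sumRec(xs.slice(1));
}

// The pragmatic version: a plain loop, like the ones functional
// libraries keep under the hood. Same result, no call-stack growth.
function sumLoop(xs) {
  let total = 0;
  for (const x of xs) total += x;
  return total;
}

const oneToThousand = Array.from({ length: 1000 }, (_, i) => i + 1);
sumRec(oneToThousand);  // 500500
sumLoop(oneToThousand); // 500500, and still fine on huge arrays,
// where sumRec would eventually overflow the call stack: JavaScript
// engines do not guarantee tail-call optimization.
```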

&lt;p&gt;I'm sure everyone's mileage varies, and I was myself tempted to list more books, but they can be a topic for another post.&lt;/p&gt;

&lt;p&gt;Which books did it for you and why?&lt;/p&gt;

</description>
      <category>books</category>
      <category>architecture</category>
      <category>programming</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Concurrency in Systems Integration</title>
      <dc:creator>Igor Rubinovich</dc:creator>
      <pubDate>Wed, 07 Feb 2024 16:51:22 +0000</pubDate>
      <link>https://dev.to/igorrubinovich/concurrency-in-systems-integration-3hik</link>
      <guid>https://dev.to/igorrubinovich/concurrency-in-systems-integration-3hik</guid>
      <description>&lt;h2&gt;
  
  
  Abstract
&lt;/h2&gt;

&lt;p&gt;Concurrent access to resources presents a frequent challenge in systems integration. WebSemaphore is a scalable, serverless solution that aims to address a niche set of concerns related to concurrency and time-optimized allocation of resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Concurrency in systems integration context
&lt;/h2&gt;

&lt;p&gt;More often than not, every system or API we need to communicate with has some concurrency limit. Examples include the maximum number of simultaneous connections to a database, the actual performance capacity of a web server, or artificial limits imposed by a provider based on your usage plan.&lt;/p&gt;

&lt;p&gt;While some products and frameworks provide tools to manage concurrency, they serve best the customers who are already using a relevant part of the stack.&lt;/p&gt;

&lt;p&gt;Some examples are HashiCorp Consul, which requires setting up a cluster and provides no queueing mechanism, or Redis, which also requires a cluster and involves dealing with special primitives. If the tooling exists and is already maintained, works well within an enterprise or department, and can address the issue, enterprise architects will in most cases choose the shortest path and use their default tool.&lt;/p&gt;

&lt;p&gt;The premise of WebSemaphore is that none of the existing solutions focus on concurrency and optimization as their primary concern; in particular, none addresses it in a cloud-first, SaaS/IaC manner.&lt;/p&gt;

&lt;p&gt;Microservices-based architectures in their variety are here to stay, with serverless technologies contributing to and facilitating the trend, and are increasingly used by companies of all sizes. As a result, systems in different parts of organizations evolve in an independent manner, essentially similar to startups in terms of their technological independence. Due to the varying needs and competencies, they often use different stacks and platforms, interconnected in complex integration graphs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5f5v3sk17pgbwcffhknb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5f5v3sk17pgbwcffhknb.png" alt="A sample integration graph" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fig 1. An actual integration graph demonstrating 20 systems with 46 integrations in only 2 enterprise projects. The blurred items are confidential.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One of the outcomes is that a language- or environment-specific implementation and its associated complexity can stand in the way of adoption of such solutions across different stacks.&lt;/p&gt;

&lt;p&gt;We believe IaC microservices in the style of modern clouds are the way of the future, as they abstract away the technology and implementation details, instead providing a focused set of capabilities that can be easily integrated into any solution.&lt;/p&gt;

&lt;p&gt;WebSemaphore fits the definition with its specific, restricted scope and stack independence, and contributes its functionality to the overall mesh of IaC concepts in the cloud/serverless ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  WebSemaphore
&lt;/h2&gt;

&lt;p&gt;The value proposition of WebSemaphore is to provide an IaC-style, serverless, zero-setup solution that enables developers to seamlessly solve concurrency challenges.&lt;/p&gt;

&lt;p&gt;Due to its minimalistic design, it can upgrade an existing naive/retry-based flow to an async flow by applying only a handful of implementation changes, along the lines of Fig 3 as shown below.&lt;/p&gt;

&lt;h2&gt;
  
  
  When do we need WebSemaphore
&lt;/h2&gt;

&lt;p&gt;Consider a flow F where consumer C wants to access a limited resource R from multiple processes. The processes could either compete for the resource by trial and error, or queue by some metric, time of arrival being the most frequent and identical to FIFO ordering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktu8w4i0roxdpstooprn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktu8w4i0roxdpstooprn.png" alt="Image description" width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fig 2. Flow F. Consumer C is invoking Provider R directly. Note that the Consumer and Provider may or may not be in different organizations&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  No queueing mechanism / Sync
&lt;/h4&gt;

&lt;p&gt;In the absence of a control mechanism on the consumer side, the next best way to handle capacity failures is to retry with some strategy, &lt;a href="https://en.wikipedia.org/wiki/Exponential_backoff"&gt;exponential backoff&lt;/a&gt; being one of the most popular choices.&lt;/p&gt;
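
&lt;p&gt;As a minimal illustration (a sketch, not a production-grade client), exponential backoff simply doubles the wait after each failed attempt until the retry budget is exhausted:&lt;/p&gt;

```javascript
// Retry with exponential backoff: the delay grows as baseMs * 2^attempt
// after every failure, until the retry budget is spent.
async function withBackoff(operation, retries = 5, baseMs = 100) {
  let attempt = 0;
  for (;;) {
    try {
      return await operation();
    } catch (err) {
      if (attempt === retries) throw err; // budget spent, give up
      const delayMs = baseMs * 2 ** attempt; // 100, 200, 400, 800, ...
      attempt += 1;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

&lt;p&gt;Production versions usually add jitter (a random component to the delay) so that many clients backing off in lockstep don't retry in synchronized waves.&lt;/p&gt;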

&lt;p&gt;It’s easy to see how transactions may be lost with this approach, especially during activity peaks. This may be the goal in some scenarios, such as locking a unique item in an eshop while it’s in someone’s active basket: we don’t want anyone else to be able to lock it unless the current order expires. However, it fits the order processing flow poorly. You don’t really want your customer to click “retry” during payment; a much better strategy, when applicable, is to accept the order and process it when capacity is available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam9pblvk4j2h0p2ykvoz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam9pblvk4j2h0p2ykvoz.png" alt="Image description" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fig 3. Access pattern without a queueing mechanism. The system rejects overcapacity traffic during peaks, and is idle during the lows. For a live, configurable simulation check out &lt;a href="https://www.websemaphore.com/demos/simulation"&gt;https://www.websemaphore.com/demos/simulation&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;From the consumer’s perspective, even in cases where retries are undesirable, they may still be chosen as an intermediate solution due to the relative complexity of the alternative. One might even get away with this as long as there is excess capacity most of the time.&lt;/p&gt;

&lt;p&gt;From the provider’s perspective, the gaps in processing translate into lost business opportunity and reduced return on investment. Moreover, if the system has no awareness of its own capacity, excessive requests can endanger the execution of requests currently being processed, similar to what happens during a DDoS attack.&lt;/p&gt;

&lt;h4&gt;
  
  
  With a queueing mechanism / Async
&lt;/h4&gt;

&lt;p&gt;The async approach breaks the flow into (1) an initiator step and (2) a processor step. The initiator invokes the processor indirectly, via an intermediate mechanism that controls the throughput.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tjrtza7fbufoxh4elpf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tjrtza7fbufoxh4elpf.png" alt="Image description" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fig 4. Asynchronous Flow F. Consumer C is invoking Provider R indirectly&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To allow accepting requests regardless of capacity, a queue is needed. However, a queue alone does nothing to satisfy the concurrency control requirement, and that’s where the need for an atomic, consistent counter emerges.&lt;/p&gt;
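
&lt;p&gt;The queue-plus-counter idea can be sketched in a few lines of single-process JavaScript, where atomicity comes for free from the single-threaded runtime; a distributed implementation needs the same two pieces backed by a consistent shared store. This is an illustration of the concept, not WebSemaphore's actual implementation:&lt;/p&gt;

```javascript
// Queue plus counter in miniature: callers are never rejected, but at
// most `maxConcurrent` of them hold the semaphore at any moment.
class QueuedSemaphore {
  constructor(maxConcurrent) {
    this.available = maxConcurrent; // the atomic, consistent counter
    this.waiting = [];              // FIFO queue of parked callers
  }
  acquire() {
    if (this.available > 0) {
      this.available -= 1;
      return Promise.resolve(); // capacity free: proceed immediately
    }
    // No capacity: park the caller instead of failing the request.
    return new Promise((resolve) => this.waiting.push(resolve));
  }
  release() {
    const next = this.waiting.shift();
    if (next) next(); // hand the freed slot to the oldest waiter
    else this.available += 1;
  }
}
```

&lt;p&gt;Note how the release path hands the freed slot directly to the oldest waiter, preserving FIFO order rather than returning the slot to the pool.&lt;/p&gt;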

&lt;p&gt;Below is the chart for running the data from Fig 3 through an asynchronous semaphore.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrerev0dr6rce54fs9x4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrerev0dr6rce54fs9x4.png" alt="Image description" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fig 5. Async request simulation - semaphore performance over time. Note how lockValue stays high long after the traffic peaks and how eventually peaks of waiting messages are exhausted.&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Is async better than sync?
&lt;/h4&gt;

&lt;p&gt;As in many “which is better” debates, the answer depends on the situation; here, on the use case. But if you are using sync where async would be reasonable, you are missing out on business opportunities. How much you are missing depends on the traffic patterns. As an illustration, here are the execution totals for the simulations presented in Fig 3 and Fig 5 respectively:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp85nb3l0qr60t3rrvrva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp85nb3l0qr60t3rrvrva.png" alt="Image description" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fig 6. Sync (left) and Async (right) request simulation - totals over time. The final result on the left is 42.9% rejections; note this strongly depends on the actual environment and flow. There are 0 rejections on the right. &lt;a href="https://www.websemaphore.com/demos/simulation"&gt;Check our simulation for an interactive, customizable comparison.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you find yourself implementing a variation of the async mode and your tools don’t cut it for you out of the box, consider using &lt;a href="https://www.websemaphore.com"&gt;WebSemaphore&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;WebSemaphore doesn’t force you to choose sync over async - both modes can be used interchangeably on the same semaphore without breaking consistency. Thus, for example, existing solutions can stay with the sync model and phase the migration to async mode across the various critical paths in the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Some queues are more equal
&lt;/h2&gt;

&lt;p&gt;There comes a time when the solution requires predictable concurrency management. Armed with a queueing solution lying around the project, the adventurous engineer starts digging into &lt;a href="https://stackoverflow.com/questions/28414484/how-to-limit-concurrent-message-consuming-based-on-a-criteria"&gt;how to achieve this&lt;/a&gt;. Another astute developer realizes that we are in fact in consensus land and recalls that ZooKeeper is used elsewhere in the organization. Now she needs to figure out how to use its primitives to achieve the result; find a way to keep persistent connections, which is especially challenging in a serverless model; and finally, make sure the cluster is sufficient for the traffic. A third developer reads an article about how this can be achieved with Redis or DynamoDB.&lt;/p&gt;

&lt;p&gt;Congratulations, you (or your engineers) have just gotten completely distracted from your use case - see you in a few weeks or months. And it's not about skills or competencies - rather, it's about focus.&lt;/p&gt;

&lt;p&gt;Assuming they had the time, budget and skill, what would they build?&lt;/p&gt;

&lt;p&gt;Let’s list the desired features based on the discussion above. We would like a self-contained, scalable service that installs a queue in front of a semaphore, making them work as a synchronized unit, so that we fulfill the following&lt;/p&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Concurrency Control:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Limit concurrent throughput&lt;/li&gt;
&lt;li&gt;  Allow lock acquisition for an arbitrary duration&lt;/li&gt;
&lt;li&gt;  Maximize capacity usage over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Queue Management:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Pause processing while keeping ingress&lt;/li&gt;
&lt;li&gt;  Multiplexable (think SQS FIFO GroupId)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Failure Handling and Recovery:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Recover from failure, including traffic reprocessing and redriving&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Setup and Integration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Easily reconfigure the queue, on-the-fly where it makes sense&lt;/li&gt;
&lt;li&gt;  Minimal to no setup&lt;/li&gt;
&lt;li&gt;  A simple API that feels native in the embedding code&lt;/li&gt;
&lt;li&gt;  Stack-independent&lt;/li&gt;
&lt;li&gt;  Scalability: effortless, preferably serverless&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The range of queueing mechanisms is wide, and there are also a few respected standalone products providing semaphores within a mostly unrelated feature set.&lt;/p&gt;

&lt;p&gt;For example, the few queueing solutions that allow suspension are not serverless, and the ones that are serverless will only let messages wait for a certain maximum amount of time. Systems that include semaphoring/signaling mechanisms, in their turn, mostly don’t provide queues, or require their clients (or their proxies) to maintain a persistent connection.&lt;/p&gt;

&lt;p&gt;Some larger integration products potentially include similar capabilities, but carry the overhead of committing to an overly complex solution. This comes with a higher license fee for the extra features. It also involves a learning curve and at least a partial disruption of the stack (tools, language, often Java). Unless such a product is already in use where you are, this is comparable to buying a cannon, and then learning to operate it, in order to shoot a fly. Additionally, many of the established products in this space are better suited for ETL and scheduled jobs than for near real-time integrations. Finally, they target complex integration environments at the organization level, whereas you may be solving a specific challenge where the integration is but one of many implementation details.&lt;/p&gt;

&lt;p&gt;While the landscape of available solutions is diverse, few if any of the existing offerings explicitly target concurrency control and throughput optimization in near real-time, distributed communication as their primary area of focus. This highlights an opportunity for a solution like WebSemaphore to address this specific need.&lt;/p&gt;

&lt;h2&gt;
  
  
  The unknown unknowns
&lt;/h2&gt;

&lt;p&gt;Concurrency issues are not typically stated in the initial business/project requirements, unless they represent key features of a product. Instead, they surface during analysis iterations. How soon depends on the architect’s level of experience, the time available to collect and analyze project environment data, and the technological readiness and policies of the organization.&lt;/p&gt;

&lt;p&gt;To see how a simple, barely technical set of requirements unfolds into a distributed concurrency problem in a global project, see my next article coming up next week.&lt;/p&gt;

&lt;p&gt;WebSemaphore is intended to organically merge into an existing/developing solution in the form of a few API calls, and cause minimum disruption to the existing development flow.&lt;/p&gt;

&lt;h2&gt;
  
  
  What concerns does WebSemaphore address
&lt;/h2&gt;

&lt;p&gt;Below is a summary of the benefits of using the flow described above.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flow consistency:&lt;/strong&gt; Concurrency limiting/isolation of long flows (such as those involving AWS state machine executions, long computations, physical devices or offline activities).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data consistency:&lt;/strong&gt; FIFO execution preserving order of events for event-based architectures&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Concurrency control:&lt;/strong&gt; Adhering to concurrency limits implied by process, hardware or provider limits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failure tolerance / Disaster Recovery:&lt;/strong&gt; The service will handle temporary outages and allow suspension of processing for failing streams until resolution. Ingress of inbound events for such streams will continue. Processing is simple to restart once the issue is resolved. Redirecting traffic to a functioning destination is an upcoming feature.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Near-real-time and optimal resource utilization:&lt;/strong&gt; Processing capacity must not sit idle; most of the time, requests should be processed almost immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dynamic capacity management:&lt;/strong&gt; Where scaling is possible (such as with EC2 autoscaling), it may take some time to provision additional processing units. Imagine that such provisioning takes even longer because of some online or even human-driven process, such as an approval. In these cases WebSemaphore is the ideal tool to provide the required elasticity while the extra capacity is stood up. Once the extra units are available, a single configuration call is sufficient for WebSemaphore to start processing at a higher rate and catch up with the backlog.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article we looked at some specialized challenges in enterprise systems integration and the unique feature set WebSemaphore offers to address them. We concluded with a concise summary of the concerns WebSemaphore aims to address.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;WebSemaphore is in beta and we are actively looking for pilot customers to try out all of the features. As an early partner you will have the exclusive opportunity to influence product priorities as we make it fit your needs.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Looking forward to hearing from you at &lt;a href="https://www.websemaphore.com/contact"&gt;https://www.websemaphore.com/contact&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  See also
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://www.websemaphore.com/docs/v1"&gt;WebSemaphore documentation&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://dev.to/igorrubinovich/introducing-websemaphore-22ih"&gt;Introducing WebSemaphore&lt;/a&gt; on dev.to&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>concurrency</category>
      <category>cloud</category>
      <category>semaphore</category>
      <category>api</category>
    </item>
    <item>
      <title>Introducing WebSemaphore</title>
      <dc:creator>Igor Rubinovich</dc:creator>
      <pubDate>Fri, 02 Feb 2024 12:19:47 +0000</pubDate>
      <link>https://dev.to/igorrubinovich/introducing-websemaphore-22ih</link>
      <guid>https://dev.to/igorrubinovich/introducing-websemaphore-22ih</guid>
      <description>&lt;p&gt;Today, I'd like to introduce a new product that was in the works for the last few months. It’s called WebSemaphore and it will help you manage concurrency in your API communication for profit and business continuity.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is concurrency control
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Concurrency control ensures that correct results for concurrent operations are generated, while getting those results as quickly as possible (Wikipedia).&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Concurrency management, in particular management of concurrent access to a resource, shows up frequently when dealing with databases, file systems and external resources. The term “resource” is often used in this context to represent anything, be it a service, a device, a software license or a physical space such as a meeting room - as long as there is an API to control it. Some terms that point to concurrency without directly mentioning it are locks, mutexes, synchronization and others.&lt;/p&gt;

&lt;p&gt;In the context of modern, highly distributed systems, WebSemaphore provides a SaaS version of the &lt;a href="https://www.websemaphore.com/docs/v1/concepts/terms" rel="noopener noreferrer"&gt;semaphore&lt;/a&gt; construct known from the realm of databases, operating systems and, more recently, cloud environments. To allow for elasticity and flexibility, it also combines the traits of a message broker and an integration platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why manage concurrency?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Data consistency
&lt;/h3&gt;

&lt;p&gt;Most frequently and naturally, concurrency limits show up when updating data. Imagine two processes that would like to update a database entry at the same time, incrementing its value by one. To do this, both would read the current value, add one, and write the result back. Regardless of whether the action is performed in the database (e.g. with an SQL statement) or outside it by some wrapping code, both processes risk using stale data and overwriting each other’s results unless special measures are taken.&lt;/p&gt;
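
&lt;p&gt;Here is a minimal sketch of the race and one common remedy, optimistic locking with a version column (illustrative in-memory JavaScript, not tied to any particular database):&lt;/p&gt;

```javascript
// The lost update in miniature: two writers read the same row, add one,
// and write back. Without a guard, one increment silently disappears.
let row = { value: 0, version: 0 };

function naiveIncrement(snapshot) {
  row = { value: snapshot.value + 1, version: snapshot.version };
}

// Optimistic locking: the write only succeeds if the row still carries
// the version the writer originally read; a stale writer must re-read.
function guardedIncrement(snapshot) {
  if (row.version !== snapshot.version) return false; // stale snapshot
  row = { value: snapshot.value + 1, version: snapshot.version + 1 };
  return true;
}

// Two concurrent writers take the same snapshot...
const a = { ...row };
const b = { ...row };
naiveIncrement(a);
naiveIncrement(b);
// row.value is now 1, not 2: b overwrote a's increment.

// With the guard, the second writer is told to retry instead.
row = { value: 0, version: 0 };
const c = { ...row };
const d = { ...row };
guardedIncrement(c); // true: row becomes { value: 1, version: 1 }
guardedIncrement(d); // false: d's snapshot is stale, must re-read
```

&lt;p&gt;In a real database the same guard is typically an atomic conditional write (an UPDATE qualified by the version read) or an in-database increment.&lt;/p&gt;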

&lt;h3&gt;
  
  
  Capacity/load management
&lt;/h3&gt;

&lt;p&gt;Any given system can process only so many requests at a time. The number may be high, but it is always finite. As maximum capacity is reached, the system will crash unless the load is slowed down or redirected to another instance. In such scenarios, systems become unresponsive and return errors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Failure protection
&lt;/h3&gt;

&lt;p&gt;Elastically managing capacity in terms of concurrent processes provides clients with protection against failing APIs. If you think about it, a service that doesn’t respond is equivalent to a service with zero throughput. If we can buffer input when the throughput is arbitrarily low, we can also handle the case when there’s no throughput at all.&lt;/p&gt;

&lt;h3&gt;
  
  
  Explicit limitations
&lt;/h3&gt;

&lt;p&gt;Sometimes concurrency is simply an outcome of a use case. A simple example is a printer, which can only print one page at a time. The only options are to reject new print jobs while one is in progress, or to queue the incoming jobs.&lt;/p&gt;
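&lt;p&gt;The printer case is the classic single-worker queue: submissions are never rejected, and jobs are executed one at a time in arrival order. A minimal sketch with Python’s standard &lt;code&gt;queue&lt;/code&gt; module:&lt;/p&gt;

```python
import queue
import threading

jobs = queue.Queue()
printed = []

def printer_worker():
    # A single worker drains the queue, so only one job "prints" at a time
    # and jobs come out in the order they were submitted.
    while True:
        job = jobs.get()
        if job is None:        # sentinel value: shut the worker down
            break
        printed.append(job)

worker = threading.Thread(target=printer_worker)
worker.start()

for i in range(5):
    jobs.put(f"page-{i}")      # callers never block or get rejected
jobs.put(None)
worker.join()

print(printed)  # ['page-0', 'page-1', 'page-2', 'page-3', 'page-4']
```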

&lt;p&gt;Many present-day APIs enforce limits that prevent a client from exceeding a certain number of concurrent requests, as specified by the user’s license or SLA.&lt;/p&gt;

&lt;h2&gt;
  
  
  When WebSemaphore can help
&lt;/h2&gt;

&lt;p&gt;WebSemaphore wraps any API and smooths traffic so that the total number of processes accessing the API never exceeds a preconfigured limit.&lt;/p&gt;

&lt;h3&gt;
  
  
  During request traffic spikes
&lt;/h3&gt;

&lt;p&gt;Instead of being rejected, requests are queued and delivered once current processing completes.&lt;/p&gt;
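&lt;p&gt;The difference between rejecting and queuing comes down to how a semaphore is acquired: a non-blocking acquire fails fast, while a blocking acquire waits for a slot. A generic sketch with Python threads, not the WebSemaphore API:&lt;/p&gt;

```python
import threading
import time

sem = threading.Semaphore(1)             # only one request processed at a time
results = []

def handle(i, queue_excess):
    if queue_excess:
        sem.acquire()                    # queued: wait for the slot to free up
    elif not sem.acquire(blocking=False):
        results.append((i, "rejected"))  # classic fail-fast behavior under load
        return
    try:
        time.sleep(0.01)                 # simulated processing
        results.append((i, "done"))
    finally:
        sem.release()

# With queuing enabled, all five concurrent requests eventually complete.
threads = [threading.Thread(target=handle, args=(i, True)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(i for i, status in results if status == "done"))  # [0, 1, 2, 3, 4]
```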

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.broadmind.eu%2Fshared-media%2FSmooth_Spikes_With_Websemaphore.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.broadmind.eu%2Fshared-media%2FSmooth_Spikes_With_Websemaphore.png" alt="Meet customer demand at spikes with WebSemaphore"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  During outages
&lt;/h3&gt;

&lt;p&gt;If your system goes down, WebSemaphore keeps accepting requests. As soon as you’ve fixed the issue, it resumes delivering the requests in the order they arrived.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.broadmind.eu%2Fshared-media%2FBusiness_Continuity_During_Outages_WebSemaphore.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.broadmind.eu%2Fshared-media%2FBusiness_Continuity_During_Outages_WebSemaphore.png" alt="Keep the requests coming during outages with WebSemaphore"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Smooth and efficient scaling of systems
&lt;/h3&gt;

&lt;p&gt;Modern clouds allow easy scaling of the number of instances when demand goes up. Depending on your application and the time it takes to spin up an instance (which may sometimes require human approval), the traffic that comes in the meantime may be rejected. WebSemaphore will keep accepting requests until the new instances are ready. You can then dynamically increase the concurrency limit and process the backlog.&lt;/p&gt;

&lt;p&gt;With WebSemaphore, you can take your time scaling up without losing incoming requests. Conversely, you can scale down before a spike has fully subsided, and WebSemaphore will let you catch up on the backlog during the lower-traffic period.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.broadmind.eu%2Fshared-media%2FSmooth_Scaleup_With_WebSemaphore.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.broadmind.eu%2Fshared-media%2FSmooth_Scaleup_With_WebSemaphore.png" alt="Buffer traffic while scaling up and down with WebSemaphore"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Optimize capacity/throughput
&lt;/h3&gt;

&lt;p&gt;While scaling up is easy with modern providers, it is also expensive. Provisioning for peak capacity gives the best performance but wastes money most of the time, while provisioning for minimal capacity will not withstand traffic spikes. WebSemaphore lets you provision for average traffic while keeping the resource utilized at optimal capacity at all times.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.broadmind.eu%2Fshared-media%2FOptimize_Spend_With_WebSemaphore.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.broadmind.eu%2Fshared-media%2FOptimize_Spend_With_WebSemaphore.png" alt="Optimize resource utilization with WebSemaphore"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  When the use case requires it
&lt;/h3&gt;

&lt;p&gt;Sometimes an entire flow must be constrained so that no more than one (or N) such flows run at the same time. This is true even with ephemeral resources such as AWS Lambda functions, which are limited by default to 1,000 concurrent executions per region in a single account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why WebSemaphore
&lt;/h2&gt;

&lt;p&gt;As a cloud-native SaaS product, WebSemaphore requires no setup or infrastructure maintenance. We have also kept the API simple and easy to understand. Because it works over HTTPS and WebSockets, it integrates immediately with almost any stack you are currently using.&lt;/p&gt;

&lt;p&gt;To see whether WebSemaphore could work for you, please head to &lt;a href="http://www.websemaphore.com" rel="noopener noreferrer"&gt;www.websemaphore.com&lt;/a&gt;. If you’d like to chat, use &lt;a href="https://www.websemaphore.com/contact" rel="noopener noreferrer"&gt;https://www.websemaphore.com/contact&lt;/a&gt; to schedule a call.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;WebSemaphore is in beta and is actively looking for pilot customers to help shape the product decisions. Get in touch if you are interested.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>concurrency</category>
      <category>cloud</category>
      <category>distributedsystems</category>
      <category>semaphore</category>
    </item>
  </channel>
</rss>
