Vitaliy Akimov

Originally published at Medium

Decouple Business Logic using Async Generators

Async generators are new in JavaScript, and I believe they are a remarkable extension. They provide a simple but powerful tool for splitting programs into smaller parts, making sources easier to write, read, maintain and test.

The article shows this using an example: a typical front-end component implementing drag and drop operations. The technique itself is not limited to front-ends; it is hard to find a place where it cannot be applied. I use the same approach in two big compiler projects, and I'm excited by how much it simplifies them.

Drag and Drop

You can drag boxes from the palette at the top and drop them into any of the gray areas. Each drop area has its own specific actions. Several items can be selected, and yellow ones have inertial movement.
All the features are independent. They are split into stages, and some stages compute information shared by a few features. This does introduce some dependency, but it can be avoided or controlled. All the features are simple to enable, disable, develop, test and debug separately, and a few developers or teams could work on them in parallel very efficiently.

I assume some basic knowledge of async generators (or at least of async functions and generators separately) and some fundamentals of the HTML DOM (at least knowing what it is). There are no dependencies on third-party JavaScript libraries.
For the demo, let's pretend we don't know the full set of requirements and add a new feature only after we finish something and it works. Playing with already working software at intermediate stages typically boosts creativity, and it is one of the main components of agile software development. I'd rather write something not perfectly designed but working first; we can improve it with refactoring at any time. Async generators will help.

Typically, at the beginning of a project, I don't want to spend time choosing the right framework, library or even an architecture. I don't want to overdesign. With the help of async iterators, I can delay the hard decisions to a point where I have enough knowledge to make a choice. The earlier I commit to some option, the more chances there are for mistakes. Maybe I won't need anything at all.

I'll describe only a couple of the steps here. The other steps are small and can be read directly from the code; they are just a matter of working with the DOM, not the subject of this article. Unlike the transpiled final demo above, the demos in the fiddles below work only in a browser supporting async generators, for example Chrome 63 or Firefox 57. The first examples also use a pointer events polyfill, which is replaced in the last example.

Async generators

All the samples share the nano-framework sources. It is developed once, at the beginning, and copy-pasted without any change. In a real project these would be separate modules, imported where needed. The framework does one thing: it converts DOM events into async iterator elements.
An async iterator has the same next method as a plain ECMAScript iterator, but it returns a Promise resolving to an object with value and done fields.

An async generator function is an extended function returning an async iterator, just as an ordinary generator is a function returning a plain iterator.

Async generators combine async function and generator functionality. In the bodies of such functions we can use await together with yield expressions, and they do exactly what these expressions do in async functions and generators respectively: await suspends execution until the Promise in its argument is resolved, and yield outputs a value and suspends until the caller requests the next one.
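
For illustration (this example is not from the demo), here is a tiny async generator and a manual read of its iterator:

async function* ticks(count, ms) {
  for (let i = 0; i < count; ++i) {
    // suspend until the timeout fires, like in an async function
    await new Promise(resolve => setTimeout(resolve, ms))
    // suspend until the caller requests the next value, like in a generator
    yield i
  }
}

const iter = ticks(3, 100)
iter.next().then(v => console.log(v)) // { value: 0, done: false }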

Here's a preliminary framework implementation, with the first version of the business logic:

It is a working sample; press Result there to see it in action. There are four elements you can drag within the page. The main components are the send, produce and consume functions. The application subscribes to DOM events and redirects them into the framework using the send function. The function converts its arguments into elements of the async iterator returned by the produce call. The iterator never ends; produce is called at the module's top level.

There is a for(;;) loop in produce. I know it looks suspicious; you may even have it banned in your team's code review checklist or by some lint rule, since for readability we want the exit condition of a loop to be obvious. This loop should never exit, it is supposed to be infinite. But it doesn't consume CPU cycles, since most of the time it sleeps in the await and yield expressions there.
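
The real produce and send are in the fiddle; a minimal sketch of the idea, assuming a single shared queue of events, could look like this:

let wakeUp            // resolver of the promise produce is currently sleeping on
const queue = []      // events sent but not yet consumed

function send(event) {
  queue.push(event)
  if (wakeUp) {
    wakeUp()
    wakeUp = null
  }
}

async function* produce() {
  for (;;) {
    // sleep until send is called
    if (!queue.length)
      await new Promise(resolve => { wakeUp = resolve })
    while (queue.length)
      yield queue.shift()
  }
}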

There is also a consume function. It reads any async iterator passed as its argument, doing nothing with the elements and never returning. We need it to keep the framework running.

async function consume(input) {
  for await(const i of input) {}
}

It is an async function (not a generator), but it uses the new for-await-of statement, an extension of the for-of statement. It reads async iterators rather than plain ECMAScript iterators and awaits each element. A simplified transpilation of the consume code above could look something like this:

async function consume(input) {
  const iter = input[Symbol.asyncIterator]()
  for(let i; !(i = await iter.next()).done;) {}
}

The main function is the entry point of the application's business logic. It is called between produce and consume at the module's top level:

consume(main(produce()))

There is also a small share function. We need it to use the same iterator in a few for-await-of statements.
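
The fiddle has its own implementation; a minimal sketch, assuming share only needs to hand out the same underlying iterator and keep it open when a for-await-of loop exits, might look like this:

function share(input) {
  const iter = input[Symbol.asyncIterator]()
  return {
    // always hand out the same underlying iterator
    [Symbol.asyncIterator]() { return this },
    next(value) { return iter.next(value) },
    // don't close the source when one for-await-of loop exits
    return(value) { return Promise.resolve({ value, done: true }) }
  }
}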

The first monolithic version of the business logic is fully defined in main. Even with this example you can already see the power of async generators. The application state (the x and y variables recording where we started dragging) is kept in local variables, encapsulated inside the function. Besides data state, there is also execution control state: a kind of implicit local variable storing the position where the generator is suspended (either on await or on yield).
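
For reference, here is a sketch of the generator-based main, reconstructed from the fiddle and the state-machine rewrite shown below (share from the framework is what makes the nested loops over the same input possible):

async function* main(input) {
  for await (const i of input) {
    if (i.type === "pointerdown") {
      const element = i.target.closest(".draggable")
      if (element) {
        // remember the offset between the element and the pointer
        const box = element.getBoundingClientRect()
        const x = box.x + window.pageXOffset - i.x
        const y = box.y + window.pageYOffset - i.y
        for await (const j of input) {
          if (j.type === "pointermove") {
            element.style.left = `${j.x + x}px`
            element.style.top = `${j.y + y}px`
          }
          yield j
          if (j.type === "pointerup")
            break
        }
        continue
      }
    }
    yield i
  }
}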

The same function could be rewritten without generators, for example into something like this:

function main(state) {
  for(;;) {
    switch(state.control) {
    case "init":
      state.action = "read"
      state.control = "loop1"
      return
    case "loop1":
      const i = state.value 
      if (i.type === "pointerdown") {
        const element = state.element = i.target.closest(".draggable")
        if (element) {
          const box = element.getBoundingClientRect()
          state.x = box.x + window.pageXOffset - i.x
          state.y = box.y + window.pageYOffset - i.y
          state.control = "loop2"
          state.action = "read"
          return
        }
      }
      state.control = "loop1"
      state.action = "yield"
      state.value = i
      return
    case "loop2":
      const j = state.value
      if (j.type === "pointerup") {
        state.control = "loop1"
        break
      }
      if (j.type === "pointermove") {
        state.element.style.left = `${j.x + state.x}px`
        state.element.style.top = `${j.y + state.y}px`
      }
      state.action = "yield"
      state.control = "loop1"
      state.value = j
      return
    }
  }
}

It is much more verbose compared to the main function in the original version, isn't it? It is also less readable: the execution control isn't clear, and it is not immediately obvious how execution reaches some particular state.

There are quite a few other implementation options. For example, instead of a switch statement we could use callbacks for the control state, or we could use closures to store the state, but that wouldn't change much. To run the function, we also need a framework that interprets the action the function asks to perform ("read", "yield" in the example), composes the stages, etc.

Splitting

The small size of the functions and the lack of framework requirements are not the only advantages of async generators. The real magic begins when we combine them.

The most often used function combination is composition: for functions f and g it is a => f(g(a)). Composition doesn't need any framework; it is a plain JavaScript expression.

If we compose two plain functions, the second function starts doing its job only after the first one exits. If they are generators, both functions run simultaneously.

A few composed generator functions make a pipeline. As in any manufacturing, say of cars, splitting the job into a few steps on an assembly line significantly increases productivity. Similarly, in a pipeline based on async generators, a function may send messages to the next one using the values its resulting iterator yields. The following function may do something application-specific depending on the content of a message, or pass it on to the next stage.

These functions are the components of the business logic. More formally, such a component is any JavaScript function taking an async iterator as its parameter and returning another async iterator as its result. In most cases this will be an async generator function, but not necessarily; someone may write combinator functions building the resulting object with the async iterator interface manually.

There are many names commonly in use for such functions now, for example Middleware, Epic, etc. I like the name Transducer more and will use it in this article.

Transducers are free to do whatever they want with the input stream. Here are examples of what a transducer can do when a message arrives (a small sketch combining a few of these follows the list):

  • pass through to the next step (with yield i)
  • change something in it and pass next (yield {...i, one: 1})
  • generate a new message (yield {type: "two", two: 2})
  • don't yield anything at all, thus filtering the message out
  • update encapsulated state (local variables) based on the message field values
  • buffer the messages in some array and output them on some condition (yield* buf), e.g., delaying drag start to avoid a false response
  • do some async operations (await query())
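
A small hypothetical transducer combining a few of the options above (the message types and fields here are made up):

async function* tagAndFilter(input) {
  let count = 0                    // encapsulated state in a local variable
  for await (const i of input) {
    if (i.type === "noise")
      continue                     // filter the message out
    if (i.type === "pointerdown")
      await new Promise(resolve => setTimeout(resolve, 50)) // some async operation
    yield { ...i, count: ++count } // change something and pass it on
  }
}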

Transducers mostly listen for incoming messages in for-await-of loops. There may be a few such loops in a single transducer body. This utilizes the execution control state to implement business logic requirements.

Let's see how it works. Let's split the monolithic main function from the sample above into two stages. One, makeDragMessages, converts DOM events into drag and drop messages (types "dragstart", "dragging", "drop"); the other, setPositions, updates DOM positions. The main function is just a composition of the two.
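
Here is a hedged sketch of the split, reconstructed from the monolithic version above (the exact code is in the fiddle; again, share makes the nested loops over the same input possible):

async function* makeDragMessages(input) {
  for await (const i of input) {
    if (i.type === "pointerdown") {
      const element = i.target.closest(".draggable")
      if (element) {
        yield { type: "dragstart", element, x: i.x, y: i.y }
        for await (const j of input) {
          if (j.type === "pointermove") {
            yield { type: "dragging", element, x: j.x, y: j.y }
          } else if (j.type === "pointerup") {
            yield { type: "drop", element, x: j.x, y: j.y }
            break
          } else {
            yield j
          }
        }
        continue
      }
    }
    yield i
  }
}

async function* setPositions(input) {
  for await (const i of input) {
    if (i.type === "dragstart") {
      // remember the offset between the element and the pointer
      const box = i.element.getBoundingClientRect()
      const x = box.x + window.pageXOffset - i.x
      const y = box.y + window.pageYOffset - i.y
      for await (const j of input) {
        if (j.type === "dragging") {
          j.element.style.left = `${j.x + x}px`
          j.element.style.top = `${j.y + y}px`
        }
        yield j
        if (j.type === "drop")
          break
      }
      continue
    }
    yield i
  }
}

const main = input => setPositions(makeDragMessages(input))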

I split the program here because I want to insert some new message handlers between the two stages. In the same way, when writing new software I wouldn't focus too much on how to split the code correctly before I understand why I need to. The parts should satisfy some reasonable size constraint, and they should be separated along logically different features.

The main function here is actually a transducer too (it takes an async iterator and returns an async iterator). It is an example of a transducer which is not an async generator itself. A larger application could inject main from this module into other pipelines.

This is the final version of the nano-framework. Nothing there needs to change regardless of what new features we add. The new features are functions inserted somewhere in the chain in main.

First features

Now back to the new features. We want to do something more than just dragging things around a page. We have special message names for dragging ("dragstart", "dragging", "drop"), and the next transducers can use them instead of mouse/touch events. For example, at any point later we could add keyboard support without changing anything here.

Let's create some means to make new draggable items, some area we can drag them from, and something to remove them. We'll also flavor it with animation when an item is dropped on the trash area or outside of any area.

First, everything starts with the palette transducer. It detects a drag start on one of its elements, clones that element into a new one and substitutes the clone into all the following drag messages. It is absolutely transparent to all the next transducers: they know nothing about the palette, and for them this looks like another drag operation on an existing element.
Next, the assignOver transducer does nothing visible to the end user, but it helps the transducers after it. It detects the HTML element a user drags an item over and adds it to all messages as the over property. This information is used in the trash and validateOver transducers to decide whether we need to remove the element or cancel the drag. These transducers don't do that themselves but rather send "remove" or "dragcancel" messages to be handled by something further down. The cancel message is converted to "remove" by removeCancelled, and "remove" messages are finally handled in applyRemove by removing the elements from the DOM.
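
For illustration, a hypothetical assignOver could look roughly like this (the drop-area selector and message fields are assumptions; the real implementation is in the fiddle):

async function* assignOver(input) {
  for await (const i of input) {
    let over = null
    if (i.element) {
      // hide the dragged element for a moment so it doesn't cover the area below it
      const saved = i.element.style.visibility
      i.element.style.visibility = "hidden"
      const el = document.elementFromPoint(i.x, i.y)
      over = el && el.closest(".drop-area")
      i.element.style.visibility = saved
    }
    yield { ...i, over }
  }
}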

By introducing other message types, we can inject new feature implementations in the middle without replacing anything in the original code. In this example it is animation. On "dragcancel" the item moves back to its original position, and on "remove" its size shrinks to zero. Disabling or enabling the animation is just a matter of removing or inserting transducers at some specific position.
The animation will continue to work if something else generates "dragcancel" or "remove". We may stop thinking about where to apply it; our business logic moves to a higher and higher level.

The animation implementation also utilizes async generators, but not in the form of transducers. It is a function yielding values from zero to one on animation frames over a specified duration, defaulting to 200 ms. The caller uses the values in whatever way it likes. Check the animRemove function in the fiddle above for a demo.
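
A sketch of such a generator, here called anim as in the article, assuming it is driven by requestAnimationFrame:

async function* anim(duration = 200) {
  // requestAnimationFrame passes a timestamp to its callback, so awaiting
  // new Promise(requestAnimationFrame) resolves to that timestamp
  const start = await new Promise(requestAnimationFrame)
  for (;;) {
    const now = await new Promise(requestAnimationFrame)
    const t = (now - start) / duration
    if (t >= 1) {
      yield 1
      return
    }
    yield t
  }
}

// e.g. shrinking a removed item:
// for await (const t of anim()) item.style.transform = `scale(${1 - t})`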

Many other animation options are simple to add. The values may not be linear but shaped by some spline function, or the animation may be based not on a duration but on velocity. None of this matters to the functions invoking anim.

Multi-select

Now let's incrementally add another feature. We start from scratch, from the nano-framework, and we will merge all the steps at the end effortlessly. This way the code from the previous step does not interfere with the new development, it is much easier to debug and write tests for, and there are no unwanted dependencies.

The next feature is multi-select. I highlight it here because it requires another higher-order function combination. At first, though, it appears straightforward to implement: the idea is to simulate drag messages for all selected elements when a user drags one of them.

The implementation is very simple, but it breaks the next steps in the pipeline. Some transducers (like setPositions) expect an exact message sequence. For a single item there should be a "dragstart" followed by a few "dragging" messages and a "drop" at the end. This is no longer true.

A user now drags a few elements at the same time, so there will be messages for several elements simultaneously. But there is only one start coordinate in setPositions' x and y local variables, and its control flow is defined for only one element. After a "dragstart" it is in the nested loop and doesn't recognize any further "dragstart" until exiting that loop on "drop".

The problem could be solved by resorting to storing state, including the control state, in some map keyed by each element currently being dragged. This would obviously throw away all the async generator advantages. I have also promised there would be no changes to the nano-framework, so this is not the solution.

What we need here is to run the transducers that expect to work with a single element in a kind of separate thread. There is a byElement function for this. It multiplexes the input into several instances of the transducer passed as its argument. The instances are created by calling that transducer with a filtered source iterator; the source for each instance outputs only messages with the same element field. The outputs of all the instances are merged back into one stream. All we need to do is wrap such transducers with byElement.

First, makeSelectMessages converts DOM events into application-specific messages. The second step, selectMark, adds a selection indicator and highlights the selected items after the selection ends. Nothing is new in the first two. The third transducer, propagateSelection, checks whether a user drags a highlighted item; if so, it takes all other highlighted items and generates drag and drop messages for each of them. Finally, setPositions runs in a thread per element.
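
A hedged sketch of how this pipeline could be wired together (the exact composition is in the fiddle):

const main = input =>
  byElement(setPositions)(
    propagateSelection(
      selectMark(
        makeSelectMessages(input))))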

Final result

Once the multi-selection feature is implemented, it is implemented once and for everything. All we need to do is add it to main and wrap the other transducers with byElement where needed. This may be done either in main or in the module the transducers are imported from.

Here is the fiddle with the final demo with all the features merged:

All the transducers are in fact very lightweight threads. Unlike real threads they are deterministic, but since they use non-deterministic DOM events as a source, they must be considered non-deterministic as well.

This unfortunately makes all the typical problems of multi-threaded environments possible: races, deadlocks, serialization issues, etc. Fortunately, they are simple to avoid: just don't use mutable shared data.

I violate this constraint in the demo by querying and updating the DOM tree. It doesn't lead to problems here, but in a real application it is something to care about. To fix it, some initial stage could read everything needed from the DOM and pack it into messages, and the final step could perform DOM updates based on the messages received. This could be some virtual DOM renderer, for example.

Communicating with messages only allows isolating the threads even more. A thread could then be a Web Worker, or even a remote server.

But again, I wouldn't worry before it becomes a problem. Thanks to async iterators, the program is a set of small, isolated and self-contained components, and it is straightforward to change anything when (if) a problem appears.

The technique is compatible with other design techniques. It works for OOP or FP, and any classic design pattern applies. When the main function grows big, we can add some dependency injection to manage the pipeline, for example.

In the example, byElement calls the abstract threadBy. In practice, you'll accumulate more and more such abstract utilities. I wrote a concrete implementation for grouping streams by element first and only afterward abstracted it. That was very simple to do, as the concrete implementation was very small.

The technique reduces worrying about the application's architecture. Just write a specific transducer for each feature you need to implement. Abstract common parts into stand-alone transducers. Split a transducer into a few if something else has to be done in the middle. Generalize some parts into abstract reusable combinators only when (if) you have enough knowledge for it.

Relation to other libraries

If you are familiar with node streams or functional reactive libraries such as RxJS, you have probably already spotted many similarities. They just use different stream interfaces.

Transducers are not required to be async generators either. A transducer is just a function taking a stream and returning another stream, regardless of the stream's interface. The same technique for splitting business logic can be applied to any other stream interface; async generators just provide an excellent syntax extension for it.

Someone familiar with Redux may notice the message handlers are very similar to middleware or reducer composition. An async iterator can be converted into a Redux middleware as well. Something similar, for example, is done in the redux-observable library, but for a different stream interface.

This violates Redux principles, though. There is no longer a single store: each async generator has its own encapsulated state. Even if it doesn't use local variables, the state is still there; it is the current control state, the position in the code where the generator was suspended. This state is also not serializable.

The framework still fits nicely with the patterns underlying Redux, though, say, Event Sourcing. We can have a specific kind of message propagating global state diffs, and transducers can react accordingly, updating their local variables if needed.

In the JavaScript world, the name transducer is typically associated with Clojure-style transducers. At a higher level both are the same thing: transformers of stream objects with different interfaces. However, Clojure transducers transform stream consumers, while the async iterator transducers from this article transform stream producers. A few more details are in the Simpler Transducers for JavaScript article.

We could transform consumers with async iterators as well, by transforming the arguments arriving in the next/throw/return methods of the iterators. In this case we wouldn't be able to utilize for-await-of, though, and there are no evident benefits.

Extensions

I'm currently working on a transpiler for embedding effects in JavaScript. It can handle the ECMAScript async, generator and async generator function syntax extensions and overload their default behavior.

In fact, the transpiled demo above was built with it. Unlike similar tools such as regenerator, it is abstract: any other effect can be seamlessly embedded in the language using a library implementing its abstract interface. This can significantly simplify JavaScript programs.

At the moment there are libraries only for standard effects implementation. There’ll be more soon.

Possible applications include, for example: faster standard effects; saving the current execution state to a file or DB and restoring it on a different server or after a hardware failure; moving control between front-end and back-end; re-executing only the relevant part of a program when its input data changes; transactions; logical programming techniques; even the Redux principles for async generators could be recovered.

The compiler implementation itself uses the technique described in this article. It uses non-async generators, since it doesn't have any asynchronous message source. The approach significantly simplified the previous compiler version, which was built with visitors. The compiler now has almost a hundred options; their implementations are almost independent, and the code is still simple to read and extend.
