About a year ago Nikita Prokopov published his article-manifesto "Software disenchantment". Judging by the positive feedback, developers do want to care about the quality of the products they build. So maybe it's time to start acting?
In this post, I want to talk about my project, which, in my opinion, can cure the main performance problems of the modern Web and make users a little bit happier: large JS bundles, long time-to-interactive, and high RAM and CPU consumption.
Before reading further, follow the link and try playing several matches, preferably from a desktop.
A bit of history
In the beginning, the creators of the Web designed the browser as a thin client for web servers. The browser displayed hypertext pages received from a server. It was simple and elegant. As is often the case, the beautiful idea confronted reality, and after a few years browser vendors added support for a scripting language. At first, it was intended only for decoration. Until the mid-2000s, it was considered good practice to build websites where JS was merely optional.
The modern approach to website development is the result of ever-growing demands for user interface interactivity. The tasks of improving interactivity fell on the shoulders of template designers, who often had neither the competence nor the authority to develop a "cross-cutting" solution. Template designers learned JS and became front-end engineers. Logic gradually began to flow from the server to the client. It is convenient for the front-end guy to write everything on the client side. For the back-end guy, it is convenient not to think about the user: "I'll give you JSON, and then I don't care," they say. Two years ago, serverless architecture became popular, suggesting that JS applications would work directly with the database and message queues.
Nowadays, a typical website is a complex application written in JS plus a simple API server. The main logic runs on a fat client, while the server part degenerates into a database proxy.
Technical debt on the server side may not affect your user directly, but on the client side it certainly will. If your startup "takes off" and starts to earn, then as the load grows, the performance situation will only get worse. Requirements will change, the code base will swell, and turnover in the team will increase. The page will grow fat with dependencies. The website will keep loading obsolete JSON. Background tasks will multiply, each running for a few milliseconds every second, which after a while will cause lags and heat the iPad of an unfortunate user enough to fry eggs on it. No one will dare to fix it for fear of breaking the system. It will end with burnt-out front-end guys coming to the manager and proposing to drop the old ugly framework and rewrite everything from scratch on a new shiny one. The manager will refuse, and the front-end guys will end up using both together.
How Korolev works
So, what if we go back to the turning point? To the moment when someone came up with the idea of updating content without reloading the page, and historical inevitability spawned AJAX? What if we leave everything on the server and make a thin client? The best sites already pre-render pages on the server so that a user can see the interface before JS loads. We can go further and leave on the client only the code responsible for processing I/O, while still meeting modern interactivity requirements. Thoughts like these led me to the Korolev project.
How does this work from the client's point of view? The user opens the page. The server sends the generated HTML and a small script (about 6 kB uncompressed), which connects back to the server via a WebSocket. When the user triggers an event (a click, for example), the script sends it to the server. The server processes the event and sends back a list of commands like "add a new <div> there," "add a class to this element," "delete that element." The client applies the list of commands to the DOM. No HTML string manipulation happens on the client - the script works directly with the DOM, so don't worry that a form or the scroll position will be reset.
What happens on the server? When a request for a page comes from a browser, Korolev creates a new session. An initial state is made and stored in a cache. HTML is rendered from this state and sent to the client as the response to the request; the server also stores the "virtual DOM" in the session. After receiving the page, the browser requests to open a WebSocket, and Korolev associates the open socket with the session. Each event coming from the client can change the state related to the session (but cannot modify the DOM directly). Each state change triggers a call to the render function, which creates a new "virtual DOM" that is compared with the old version. The result of the comparison is the list of commands sent to the client.
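The whole server-side cycle can be summarized in a few lines. The types and the handleEvent function below are a conceptual sketch of the flow just described, not Korolev's API: an event changes the state, the state is rendered to a new virtual DOM, and the diff against the previous one becomes the command list for the client.

```scala
object SessionCycleSketch {
  type VDom    = String        // stand-in for a virtual DOM tree
  type Command = String        // stand-in for a DOM update command

  final case class Session[S](state: S, lastVDom: VDom)

  // What a session needs: a way to render a state and a way to diff two trees.
  trait Renderer[S] {
    def render(state: S): VDom
    def diff(oldVDom: VDom, newVDom: VDom): List[Command]
  }

  // One turn of the loop: event -> new state -> new virtual DOM -> commands.
  def handleEvent[S, E](session: Session[S], event: E, update: (S, E) => S,
                        renderer: Renderer[S]): (Session[S], List[Command]) = {
    val newState = update(session.state, event)              // handlers change only the state
    val newVDom  = renderer.render(newState)                 // render the new state
    val commands = renderer.diff(session.lastVDom, newVDom)  // diff yields commands for the client
    (Session(newState, newVDom), commands)                   // keep the new tree for the next diff
  }
}
```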
What happens in the code and in the developer's head? What is written above may remind you of React, with the difference that everything happens on the server. Korolev takes a similar approach. So if you have worked with React or another "virtual DOM" library, Korolev's style of work will feel familiar. If you are not familiar with React, imagine that you have a data model and a template mapped onto it. Event handlers change the data, and the page changes by itself.
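As a concrete illustration of this programming model, here is a tiny, framework-agnostic counter in Scala. It is not Korolev's API; it only mimics the style: the markup is a pure function of the state, and the event handler does nothing but produce a new state.

```scala
object CounterSketch {
  final case class State(counter: Int)

  // "A template mapped onto the data model": render is a pure function of State.
  def render(state: State): String =
    s"""<div>
       |  <p>Clicked ${state.counter} times</p>
       |  <button data-event="increment">Click me</button>
       |</div>""".stripMargin

  // Event handlers change the data; the framework re-renders and diffs for us.
  def onEvent(state: State, eventName: String): State = eventName match {
    case "increment" => state.copy(counter = state.counter + 1)
    case _           => state
  }

  def main(args: Array[String]): Unit = {
    val before = State(0)
    val after  = onEvent(before, "increment")
    println(render(after)) // the framework would diff render(before) against render(after)
  }
}
```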
Performance
There are two popular questions about Korolev: "what if the latency is high?" and "how much does it load my server?" Both are very reasonable.
The front-end guy is used to the fact that his or her program runs on the user's local machine, meaning that changes are applied as soon as the JS engine finishes executing the code and the browser starts rendering. That is exactly why I showed an example at the beginning. If you didn't stop reading, I guess the experience was good - especially considering that the server is hosted in Moscow. If you live in San Francisco, the theoretical minimum round-trip time is 62 ms. Also, you can read the report about UX and response time limits. Check the average client latency of any of your websites: if it is less than 100 ms, latency is excellent. I hope I have dispelled doubts about the possibility of lags.
Back-end guys usually ask about the load on the server. The change inference engine works very fast: about 10 thousand diffs per second for two arbitrary trees of 500 nodes each on a 2013 MacBook. Static rendering also gives a pretty good result: up to 1 million pages per second. Each "virtual DOM" is stored and processed in a special serialized representation and occupies about 128 KB of heap for an average web page. The rendering process is heavily optimized and has no extra memory or GC overhead.
As for development speed, Korolev gives excellent benefits here. No need to write an extra layer between the database and the server. No need to negotiate a protocol between the client and the server. No need to worry about JS bundle size - the weight of the JS on the client always stays the same. And there is no extra work needed to support server-side events: accept a message from the queue and change the session state, and Korolev will render and deliver.
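For example, server push in this model could look roughly like the sketch below: a message-queue consumer only updates per-session state, and a callback standing in for the framework re-renders and pushes the diff. All the names here (onStateChanged, onQueueMessage, the session map) are hypothetical and only illustrate the pattern, not Korolev's API.

```scala
import java.util.concurrent.ConcurrentHashMap
import scala.jdk.CollectionConverters._

object ServerPushSketch {
  final case class ChatState(messages: Vector[String])

  // sessionId -> current state; in a real framework this bookkeeping is done for you.
  private val sessions = new ConcurrentHashMap[String, ChatState]()

  // Stand-in for the framework: called after every state change, it would
  // re-render the page, diff it, and push the commands over the web socket.
  private def onStateChanged(sessionId: String, state: ChatState): Unit =
    println(s"[$sessionId] re-render and push diff, ${state.messages.size} messages now")

  // The message-queue consumer: no extra client/server protocol work,
  // it only changes the state of every open session.
  def onQueueMessage(text: String): Unit =
    sessions.asScala.foreach { case (id, state) =>
      val updated = state.copy(messages = state.messages :+ text)
      sessions.put(id, updated)
      onStateChanged(id, updated)
    }
}
```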
The Cost
But the advantages have a cost. You will have to break some habits and acquire some new ones. For example, you will have to give up JS animations and settle for CSS animations. You will have to learn how to make your infrastructure geo-distributed from the start if you want to serve users from different countries well. You will have to drop JS and switch to Scala.
I'm a little ashamed (actually I'm not) that I misled the reader and did not say right away that Korolev is written in Scala. Would you have read this far if I had told you about it above? Talking about Korolev, I have to overcome two stereotypes. The first is that server-side rendering is perceived as something slow and non-interactive. The second is that Scala is something complicated. Neither has anything to do with reality.
Moreover, programming in the React style in Scala is more convenient than in JS. Modern JS tends toward functional programming, and Scala gives it to you out of the box. For example, a case class in Scala has a copy() method, which lets you copy an object while changing some of its fields. Immutable collections are included in the standard library.
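A short example of these two features (nothing here is specific to Korolev, it's just standard Scala):

```scala
object ImmutabilityExample {
  final case class User(name: String, age: Int, tags: Set[String] = Set.empty)

  def main(args: Array[String]): Unit = {
    val alice  = User("Alice", 30)
    val older  = alice.copy(age = 31)               // new value; `alice` is untouched
    val tagged = older.copy(tags = older.tags + "admin")

    val xs = List(1, 2, 3)                          // collections are immutable by default
    val ys = xs.map(_ * 2)                          // a new list; `xs` is unchanged

    println((alice, older, tagged, xs, ys))
  }
}
```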
Conclusion
Korolev has been developed for three years by several contributors, and many early-stage problems have already been solved. The project has detailed documentation and examples covering all the functionality. I suggest starting to use Korolev for small independent projects. I hope Korolev will help make the Web less frustrating.
Top comments (5)
Sometimes I have a feeling that web developers exist only in the Web 2.0 space, somewhere between Angular, Node.js, and WebSockets.
But in the real world, classical Web 1.0 is still alive and in good health.
And IMHO, the number of Web 1.0 sites is even greater than the number of Web 2.0 sites (or, well, I may have the reverse delusion).
Yes, they are not so visible from Web 2.0, because there are no "share on Facebook" buttons, OAuth, mobile-friendly design, or all those social interaction features.
But such websites exist, they are actively maintained, and they have regular visitors who actively read and create content.
I know this because a huge share of the websites I visit regularly are exactly of this type. And I know people who browse the Web with JS switched off permanently. (I have a JS-disabling plugin as well and actively test how my websites work without JS.)
Well, do you have any statistics? How many of those people are out there?
No. As I already said, it is possibly my personal delusion. But well, most of the websites I visit regularly are plain Web 1.0 sites.
Congratulations!
You have developed an ASP.NET Web Forms analogue.
Are you sure you read the article? Does ASP.NET Web Forms support change inference (like React)? Does it support server push? Does it support routing? Does it use a functional approach?