aziz abdullaev
Network optimization (4x WS message size reduction) for sending a lot of data from LiveView to client using push_event

In another article, I discussed using Task.async_stream to concurrently fetch data in chunks and send it to the client to render on a map.

When I implemented the aforementioned optimizations locally, the results were phenomenal. 12,000+ points would be fetched from my locally running DB and rendered within a second. However, once deployed, it was not nearly as fast. In fact, it was really slow: so slow that it scored 0 for performance in the Lighthouse report.

I analyzed every moving part of the application to track down the bottleneck. My DB and VM were deployed in the same country, which meant little latency and fast query times. JavaScript (JS) execution time was ruled out, since I opened the page on the same machine (my laptop) in both cases. When analyzing the timeline of JS execution along with rendering, I found that it would take up to 1 second for a new chunk of data to arrive for JS to execute.

The network was the bottleneck. And a slow network is known to be the Achilles' heel of LiveView’s architecture.

My configs are:

Elixir 1.15.7
OTP 26
LiveView 0.20.1
Phoenix 1.7.10
Phoenix LiveView 0.18.18
Ecto 3.11
All running on my MacBook Air M2 8GB Ram

Network optimizations were needed to enhance the performance. Here is what I came up with:

  1. Using a single push_event, as given in the docs (duh)

I looked through the WebSocket (WS) messages and found that each message with 2,000 projects weighed about 700kb. Looking at the message content, I found my first obvious mistake.

In Part 1, I sent the data using Enum.reduce, calling push_event once per project:

  def handle_info({:data_received, data}, socket) do
    socket =
      Enum.reduce(data, socket, fn %Project{} = project, socket ->
        # one push_event per project, so the event name repeats for every project
        socket
        |> push_event("add_project", form_project_map(project))
      end)

    {:noreply, socket}
  end

This resulted in WS messages with a repetitive event name, following this pattern:

“add_project: [PROJECT], add_project: [PROJECT], add_project: [PROJECT]...”

So, the function was changed to this:

  def handle_info({:data_received, data}, socket) do
    projects = Enum.map(data, &form_project_map/1)

    # a single push_event carrying all projects at once
    socket = push_event(socket, "add_many_markers_for_project", %{data: projects})

    {:noreply, socket}
  end

The message size dropped from 750kb to 650kb.


  2. Minimizing the data being transferred

I dropped unused fields from the data I query from the database. It was relatively easy: just add select: to the query.

      from i in Investment,
        where: not is_nil(i.latitude) and not is_nil(i.longitude),
        limit: ^count,
        order_by: i.id,
        select: %{
          project_name: i.project_name,
          source_name: i.source_name,
          type: i.type,
          latitude: i.latitude,
          longitude: i.longitude
        }

Next, I shortened the keys:

        select: %{
          n: i.project_name,
          s: i.source_name,
          t: i.type,
          lat: i.latitude,
          lon: i.longitude
        }

But wait, why bother with shortened keys when we can get rid of the keys altogether?

        select: [
          i.project_name,
          i.source_name,
          i.type,
          i.latitude,
          i.longitude
        ]

WS message size was reduced from 650kb to around 400kb:


All the data and messages transferred between LiveView and the client are JSON encoded anyway, so our data ends up as one long string that is decoded on the client side. What if we could compress that string before sending it?

Good news: I did not even need any dependencies, as :zlib ships with Erlang and is easy to use. After JSON encoding my data, I compressed it with :zlib.compress. The catch is that :zlib.compress produces a raw binary, so I had to Base64-encode it to get a string that can travel inside the WS message.

So, the compression was achieved with 3 lines of code without any dependencies:

  def compress(data) do
    data
    |> Jason.encode!()   # encode to a JSON string
    |> :zlib.compress()  # zlib-compress (returns a raw binary)
    |> Base.encode64()   # Base64-encode so the binary travels as a string
  end

This technique reduced the message size to only around 100kb (4x less!).


Now, the hardest part: handling the data on the client side with JS.

A couple of dependencies need to be installed (imports sketched below):

pako - for zlib decompression
js-base64 - for decoding the Base64-encoded string
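For reference, a minimal import sketch, assuming both packages were added via npm (the names above are the published package names):

// Assumed setup: npm install pako js-base64
import pako from "pako";
import { Base64 } from "js-base64";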

Here is the function that decompresses the zlib-compressed, Base64-encoded string:

const decompress_to_json = (base64string) => {
    // Base64 -> binary string
    let base64_decoded = Base64.atob(base64string);
    // binary string -> array of byte values that pako can inflate
    let charCodes = [];
    for (let i = 0; i < base64_decoded.length; i++) {
        charCodes.push(base64_decoded.charCodeAt(i));
    }
    // zlib-inflate back into the original JSON string
    let inflatedData = pako.inflate(charCodes, { to: 'string' });

    const data = JSON.parse(inflatedData);
    return data;
}

and my event handler for rendering the points on the map looks like this:

        this.handleEvent("add_markers_for_project", ({ data }) => {
            const projects = decompress_to_json(data);
            for (let i = 0; i < projects.length; i++) {
                const project = objectifyProject(projects[i]);
                // skip entries without coordinates instead of bailing out of the whole handler
                if (!project.latitude || !project.longitude) {
                    continue;
                }
                addMarker(project, map, markersLayerGroup);
            }
        });

where objectifyProject() simply builds an object with keys from the positional array, i.e. turning [name, type, lat, lon] into {name: name, type: type, lat: lat, lon: lon}.
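For illustration, a hypothetical sketch of such a helper, assuming the array order matches the select: list in the query above (the field names here follow that query, not the shortened example):

// Hypothetical sketch: rebuild a project object from the positional array
// sent by the server; the order must match the Ecto select: list.
const objectifyProject = ([project_name, source_name, type, latitude, longitude]) => ({
    project_name,
    source_name,
    type,
    latitude,
    longitude,
});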

Discussion

Let’s talk about the tradeoffs. The JS bundle size has increased because of the additional dependencies. And yes, we are doing more work on both the server side (Elixir) and the client side. But is it worth performing extra computation in order to optimize for the network? Absolutely.

The compress() function takes around 2.2ms to compress and encode 4,000 entries.
decompress_to_json() takes around 20ms to decode and decompress 4,000 entries. The execution time is negligible compared to the benefit it produces.
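If you want to reproduce the client-side number, here is a rough timing sketch using console.time (data being the payload received in the event handler):

// Rough client-side timing sketch; numbers will vary by machine and payload size.
console.time("decompress_to_json");
const projects = decompress_to_json(data); // `data` from the push_event payload
console.timeEnd("decompress_to_json");     // prints e.g. ~20ms for ~4000 entries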

As a result, my deployed map now renders at a speed comparable to localhost. It is almost instantaneous, and the Lighthouse performance score is 93+.

Top comments (4)

Simon McConnell

It would be interesting to see how MessagePack compares to compressed json.

elixirforum.com/t/how-to-use-messa...

aziz abdullaev

I just tried compressing using Msgpax, with this function:

  defp pack(data) do
    data
    |> Msgpax.pack!()
    |> :zlib.compress()
    |> Base.encode64()
  end

The lengths of the msgpax-compressed and JSON-compressed (above) strings are the following:

Msgpax String.length(pack(data)) #=> 129468
Jason String.length(compress(data)) #=> 124496

Msgpax String.length(pack(data)) #=> 110776
Jason String.length(compress(data)) #=> 105508

Msgpax String.length(pack(data)) #=> 114380
Jason String.length(compress(data)) #=> 108620

Seems like msgpax results in a slightly longer string.

aziz abdullaev

I also measured execution time using :timer.tc. Here are the results (in microseconds):

Msgpax time_msgpax #=> 8186
Jason time_json #=> 12130

Msgpax time_msgpax #=> 13473
Jason time_json #=> 12740

Msgpax time_msgpax #=> 8074
Jason time_json #=> 10551

Seems like Msgpax packing takes about 1.5x less time at best, which is great, but at best it only amounts to a ~4ms win over JSON.

aziz abdullaev

I also tried using msgpack as the encoder (following everything mentioned in the article). Encoding/decoding of the messages sent from LiveView to the client stayed the same. That's probably because I am sending one huge string over the wire, which is the same string no matter what messaging protocol is used. If I were sending an array of heterogeneous data, it would be a different story.