Good Bye Web APIs

Manuel Vila

When building a single-page application or a mobile application, we usually need to implement a web API (REST, GraphQL, etc.) to connect the frontend and the backend. Technically, it's not very difficult, but it has some unfortunate consequences.

Imagine two planets. The planet "frontend" speaks JavaScript and the planet "backend" also speaks JavaScript or any other advanced language.

Now let's say that these planets need to collaborate extensively to form a whole called "application".

Unfortunately, the planets are unable to communicate with each other directly using their native language and they have to rely on a third party called "web API" which speaks a much less sophisticated language.

Indeed, the language of most web APIs is limited to a combination of URLs, a few HTTP verbs (GET, POST, DELETE, etc.), and some JSON.
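
To make this concrete, here is a rough sketch of the plumbing a typical REST endpoint involves for a single operation. The Express route and fetch call below are illustrative assumptions, not code from any particular project:

// server.js (sketch) — the method call has to be flattened into a URL, a verb, and some JSON
import express from 'express';

const app = express();
app.use(express.json());

app.post('/counters/:id/increment', (req, res) => {
  const value = Number(req.body.value) + 1;
  res.json({id: req.params.id, value});
});

app.listen(3210);

// client.js (sketch) — ...and reassembled by hand on the other side
const response = await fetch('http://localhost:3210/counters/abc123/increment', {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify({value: 0})
});
const {value: incremented} = await response.json(); // => 1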

Frontend + Web API + Backend

The web APIs that speak GraphQL are more advanced but they remain far behind the possibilities of a programming language such as JavaScript:

  • The programming paradigm is procedural or functional (no object-oriented programming).
  • Only the most basic types are supported (forget about Date, Map, Set, etc.).
  • The concept of reference is missing (you can only pass objects by value).

Placing a rudimentary language between the frontend and the backend adds a lot of boilerplate and ruins the development experience.

Another problem is that a web API is an extra layer to worry about. It must be designed, implemented, tested, documented, etc. And all this is frankly a pain in the ass.

But the worst thing is that building a web API generally forces you to degrade the quality of your codebase. Indeed, it's quite challenging to keep your code DRY and cohesive when your frontend and your backend are separated by a web API.

Now imagine that we could get rid of the web API. Imagine that the frontend could communicate directly with the backend using its native language. Wouldn't it be great?

Frontend + Backend

The good news is that it's possible today thanks to a set of libraries called Layr.

Hello, Layr!

With Layr, the frontend and the backend are physically separated (they run in different environments) but logically reunited (it's as if they were in the same environment).

How does it work?

  1. The backend is composed of one or more classes, some of whose attributes and methods are explicitly exposed to the frontend.
  2. The frontend generates some proxies to the backend classes and can use these proxies as if they were regular JavaScript classes.

Under the hood, Layr relies on an RPC mechanism. So, superficially, it can be seen as something like CORBA, Java RMI, or .NET WCF.

But Layr is radically different:

  • It's not a distributed object system. A Layr backend is stateless, so there are no shared objects across the stack.
  • It doesn't involve any boilerplate code, generated code, configuration files, or artifacts.
  • It uses a simple but powerful serialization protocol (Deepr) that enables unique features such as chained invocation, automatic batching, or partial execution.

Layr starts its journey in JavaScript/TypeScript, but the problem it tackles is universal, and it could be ported to any object-oriented language.

Example

Let's implement the classic "Counter" example to see what it looks like to build a full-stack application with Layr.

First, we implement the "data model" and the "business logic" in the backend:



// backend.js

import {
  Component,
  primaryIdentifier,
  attribute,
  method,
  expose
} from '@layr/component';
import {ComponentHTTPServer} from '@layr/component-http-server';

class Counter extends Component {
  // We need a primary identifier so a Counter instance
  // can be transported between the frontend and the backend
  // while keeping its identity
  @expose({get: true, set: true}) @primaryIdentifier() id;

  // The counter value is exposed to the frontend
  @expose({get: true, set: true}) @attribute() value = 0;

  // And the "business logic" is exposed as well
  @expose({call: true}) @method() increment() {
    this.value++;
  }
}

// Lastly, we serve the Counter class through an HTTP server
const server = new ComponentHTTPServer(Counter, {port: 3210});
server.start();



Oh my! All that code just for a simple "Counter" example? Sure, it seems overkill, but we've actually implemented a full-fledged backend with a data model, some business logic, and an HTTP server exposing the whole thing.

Now that we have a backend, we can consume it from a frontend:



// frontend.js

import {ComponentHTTPClient} from '@layr/component-http-client';

(async () => {
  // We create a client to connect to the backend server
  const client = new ComponentHTTPClient('http://localhost:3210');

  // We get a proxy to the Counter backend class
  const Counter = await client.getComponent();

  // Lastly, we consume the Counter
  const counter = new Counter();
  console.log(counter.value); // => 0
  await counter.increment();
  console.log(counter.value); // => 1
  await counter.increment();
  console.log(counter.value); // => 2
})();



What's going on here? By invoking the counter.increment() method, the counter value is incremented. Note that this method does not exist in the frontend. It is implemented in the backend and is therefore executed in that environment. But from the perspective of the frontend, the actual execution environment doesn't matter. The fact that the method is executed remotely can be seen as an implementation detail.
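
Conceptually (this is a simplification, not the actual Deepr wire format), the proxy serializes the instance's exposed attributes, ships them to the backend along with the method name, and merges the returned attributes back into the local instance:

// A rough mental model of one remote call (illustrative pseudo-implementation)
async function callRemoteMethod(instance, methodName) {
  const payload = {
    component: instance.constructor.name, // e.g. 'Counter'
    attributes: {id: instance.id, value: instance.value},
    method: methodName // e.g. 'increment'
  };

  const response = await fetch('http://localhost:3210', {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify(payload)
  });

  // The backend executes the method and sends the updated attributes back...
  const {attributes} = await response.json();

  // ...which are merged into the local instance, so `counter.value` reflects the change
  Object.assign(instance, attributes);
}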

The Counter class in the frontend can be extended to implement features that are specific to the frontend. Here's an example of how to override the increment() method to display a message when the counter reaches a certain value:



class ExtendedCounter extends Counter {
  async increment() {
    // We call the `increment()` method in the backend
    await super.increment();

    // We execute some additional code in the frontend
    if (this.value === 3) {
      console.log('The counter value is 3');
    }
  }
}



This is what it looks like when the frontend and the backend are reunited. Pretty cool, isn't it?

What's the Catch?

Why does everyone build web APIs when we could do without them?

There is one good reason to implement a web API: when you want to expose your backend to external developers through an established protocol such as REST. But let's be honest, the vast majority of applications don't have this requirement. And if it turns out that you need a web API, it is possible to add it afterward while continuing to use the "API-less" approach for all your internal needs.
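
For instance, here is a sketch of what adding a REST facade afterward could look like: a hypothetical Express route that consumes the same backend class (illustrative only; it assumes backend.js exports Counter):

// rest-facade.js (sketch) — a thin REST layer that reuses the existing business logic
import express from 'express';
import {Counter} from './backend.js'; // assumption: the Counter class is exported

const app = express();

app.post('/counters/increment', express.json(), async (req, res) => {
  // Reuse the business logic instead of re-implementing it for the API
  const counter = new Counter();
  counter.value = Number(req.body.value) || 0;
  await counter.increment();
  res.json({value: counter.value});
});

app.listen(3000);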

Another reason is if you work on a large-scale application with millions of users. Indeed, the convenience provided by Layr doesn't come without a cost, so if you want the most optimized application possible, you'd better go with a lower-level solution.

Finally, if you want to implement a frontend or a backend in a language other than JavaScript, you can still use Layr on one side of the stack, but you will then have to implement an API client or server that can speak the Deepr protocol on the other side of the stack.

Conclusion

Removing the web API allows you to build a full-stack application much faster while increasing the quality of your codebase.

By using Layr on several projects, including some production projects, I was able to reduce the amount of code by 50% on average and greatly increase my productivity.

Another important aspect is the development experience. Since the frontend and the backend are no longer separated by a web API, you get a feeling similar to developing a standalone application, and it's a lot more fun.

Top comments (142)

stereobooster

So there is no catch. Really.

There is always a catch.

  • What happens if server code is updated, but client still uses old code?
  • Is there a way to reload data from server?
  • How does it solve waterfall of requests problem? Or n+1 query problem?
  • Is there a way to subscribe to changes on the server (long polling or websockets)?

Manuel Vila

Thanks, @stereobooster , you raise important points.

What happens if server code is updated, but client still uses old code?

The server only exposes the type of the attributes and the signature of the methods. So it's no different than a traditional web API. If you change your API endpoints, you break the client. You can make incremental changes though. In case of breaking changes (which is generally not recommended), there is a way to specify a version number when you expose your backend so that the frontend can automatically reload itself.

Is there a way to reload data from server?

Yes, all database related methods offer a reload option.

How does it solve waterfall of requests problem? Or n+1 query problem?

The frontend and the backend communicate with the Deepr protocol. For now, you can send Deepr queries manually (like you would do with GraphQL) to solve the n+1 query problem, but in a near future, I'd like to implement sugar to make the operation easier.

Is there a way to subscribe to changes on the server (long polling or websockets)?

Not yet, but it is on the road map.

Derk-Jan Karrenbeld

The server only exposes the type of the attributes and the signature of the methods. So it's no different than a traditional web API. If you change your API endpoints, you break the client. You can make incremental changes though. In case of breaking changes (which is generally not recommended), there is a way to specify a version number when you expose your backend so that the frontend can automatically reload itself.

Traditional APIs are very different. For example, versioning through content-negotiation has none of these problems. Having a server do content-negotiation is a very powerful method to serve different agents (for example browsers, or app version, or xxx) different responses, based on their capabilities. This seems completely impossible with Layr.

Even when you use versioning, what happens if you do a rolling update with Layr? That is, you've updated half your servers, but not the other half, with a load balancer in front of it? Will half of it fail?

It also sounds like the entire HTTP Caching protocol would conflict with Layr, that is, Layr will try to do its own synchronisation, ignoring or conflicting with what was previously possible using HTTP Caching. Why is that important? CDN and proxy caching. GraphQL in general suffers from this.

But the worst thing is that the API layer generally forces you to degrade the quality of your codebase. Indeed, it's quite challenging to keep your code DRY and cohesive when your frontend and your backend are separated by a web API.

Isn't this conflating use cases of SSR or those where the backend is very similar to the front-end? In the majority of the work I did the past 10 years, the backend and frontend are completely different, and should be. It feels like the authors of the library haven't had good experiences with trying to write maintainable backend and frontend code and ran into these issues (which is understandable, I've had similar issues too!), but are trying to solve it on the incorrect/unfortunate abstraction.

Manuel Vila

Traditional APIs are very different. For example, versioning through content-negotiation has none of these problems. Having a server do content-negotiation is a very powerful method to serve different agents (for example browsers, or app version, or xxx) different responses, based on their capabilities. This seems completely impossible with Layr.

Let's be real. In practice, how many apps have multiple API versions? My guess is that 95% of the apps have only one API version that serves browsers, mobile, etc. API versioning is a pain in the ass to implement and maintain so we usually go with a single API that supports backward-compatible changes.

It also sounds like the entire HTTP Caching protocol would conflict with Layr, that is, Layr will try to do its own synchronisation, ignoring or conflicting with what was previously possible using HTTP Caching. Why is that important? CDN and proxy caching. GraphQL in general suffers from this.

Layr is made to build web apps, not websites, so HTTP Caching is not an essential feature. It might be supported in the future, but it is not on the priority list.

Isn't this conflating use cases of SSR or those where the backend is very similar to the front-end? In the majority of the work I did the past 10 years, the backend and frontend are completely different, and should be. It feels like the authors of the library haven't had good experiences with trying to write maintainable backend and frontend code and ran into these issues (which is understandable, I've had similar issues too!), but are trying to solve it on the incorrect/unfortunate abstraction.

The author of the library (me) has more than 25 years of experience building full-stack applications.

Derk-Jan Karrenbeld

Let's be real. In practice, how many apps have multiple API versions? My guess is that 95% of the apps have only one API version that serves browsers, mobile, etc.

API versioning is a pain in the ass to implement and maintain so we usually go with a single API that supports backward-compatible changes.

We write applications for millions of daily users, with clients that can be as old as 2 years (updating them is not in our control). Maintenance of up to 5 versions of the representations of the resources of our endpoints is trivial, because we barely ever have to touch old versions. They just keep working.

I think that the reason so many APIs only have 1 version is because versioning is hard, not because they shouldn't / don't want to have them. So yes, I agree with you that many APIs only have one version, but that's probably more a consequence of tooling not allowing for it, such as this, than that people shouldn't have it.

Having backwards compatible APIs (for example by only adding fields, and never removing/changing fields, which is definitely good practice) doesn't mean you shouldn't/won't have versioning.

  • Stripe has versioning (and mostly backwards compatible changes)
  • Facebook has versioning (and mostly backwards compatible changes)
  • GitHub has versioning (and mostly backwards compatible changes)
  • Twitter has versioning (and mostly backwards compatible changes)

That said, the statement that traditional APIs are no different than Layr-enabled APIs is just false. It just doesn't hold up. I don't know why you're stating anecdotal experience or opinion as fact.

Layr is made to build web apps, not websites, so HTTP Caching is not an essential feature. It might be supported in the future, but it is not on the priority list.

Dismissing HTTP Caching because you think web apps don't need it / don't primarily benefit from it means we can't discuss this.

The author of the library (me) has more than 25 years of experience building full-stack applications.

Then your post and this library seems more out-of-place than I thought before.

Manuel Vila

I think the misunderstanding comes from the fact that we are not building the same kind of applications.

I cannot speak about building an application for millions of users. I've never done that. I build small-to-medium applications and I designed Layr for this use case.

So Layr is probably not good for Stripe, Facebook, Twitter, etc. But I believe it is great for the vast majority of developers that don't work on large-scale applications.

Thanks for pointing that out. I edited the "What's the catch?" section to make it clear that Layr is not made for large-scale applications.

Derk-Jan Karrenbeld

It would be helpful if experienced people stopped making statements that are opinions as if they're fact. This only spreads misinformation and less experienced people are going to take it at face value and run with it.

Good luck with the endeavour.

stereobooster

The frontend and the backend communicate with the Deepr protocol. For now, you can send Deepr queries manually (like you would do with GraphQL) to solve the n+1 query problem, but in a near future, I'd like to implement sugar to make the operation easier.

Nice. This makes sense. Deepr looks very similar to GraphQL. Does it mean it has the same "problem" with client-side cache resolution? E.g. when you have a list of objects and then you update one of them, you need to have resolution, otherwise the update won't be represented in the list.

Manuel Vila

The cache problem is out of the scope of Deepr, but Layr (which is built on top of Deepr) partially solves this problem with an identity map. So, all object updates should be automatically synchronized. What remains to be solved though is the case of object additions or removals in collections. For that, I need to add the concept of query subscriptions. It is on the road map but it is not an easy task and it will take a while to be implemented.
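
For readers unfamiliar with the pattern, here is a conceptual sketch of what an identity map does (illustrative only, not Layr's actual implementation):

// Deserialized objects are keyed by their primary identifier, so two queries
// returning the same record yield the same in-memory instance.
const identityMap = new Map();

function resolve(id, attributes) {
  let instance = identityMap.get(id);
  if (instance === undefined) {
    instance = {id, ...attributes};
    identityMap.set(id, instance);
  } else {
    // Updating the shared instance propagates the change to every holder of a reference
    Object.assign(instance, attributes);
  }
  return instance;
}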

WORMSS

Is there a way to subscribe to changes on the server (long polling or websockets)?

Not yet, but it is on the road map.

Yeah, that is a MAJOR massive catch. That pretty much makes this completely unusable for anything other than meaningless apps that store a user's data in a silo. E.g., a note-keeping app that doesn't have the ability to share notes between users.

Manuel Vila

@wormss , I think you misunderstood. Not being able to synchronize clients in real-time with web sockets doesn't mean there is no shared storage. It's no different than any web app, the backend stores data in a database, so any data can be shared between users.

WORMSS

If I am reading your example above correctly, "value" is synchronous, so it would never know if someone else has incremented it? So reading it client-side will be wrong the moment someone else adds to it.

Manuel Vila

The backend is stateless, so in the example, the counter is unique for each user. To share the counter among all users, the backend must save it in a database.

WORMSS

And again, I ask how the UI gets to know that it has changed.

Manuel Vila

When you execute a remote method on an object, the attributes of the object are automatically transported between the frontend and the backend. So, when you change an attribute in the backend, the change is reflected in the frontend.
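
For example (a hypothetical extension of the Counter from the article, reusing only constructs already shown there):

// In the backend's Counter class (from the article), a method that mutates an attribute
class Counter extends Component {
  // ...identifier and value attributes as in the article...

  @expose({call: true}) @method() reset() {
    this.value = 0; // changed in the backend...
  }
}

// In the frontend, after obtaining the Counter proxy as in the article:
await counter.reset();
console.log(counter.value); // => 0 — the updated attribute came back with the response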

leob

This is interesting, but the way you're presenting it doesn't work - the messaging is too negative, which I think pushes people AWAY from a potentially very interesting solution ... :-)

Basically, you're telling people "Web APIs suck, you've been doing it wrong, my solution is superior" ... I think psychologically that doesn't work - people will reject your stuff because they're perceiving you as unfairly trashing REST and GraphQL, meaning they'll be biased against it from the start.

So the problem is you're focusing on "your stuff is bad" rather than "my stuff is good". It works much better if your message is a positive one, rather than negative.

In other words, don't tell people "Web APIs suck", tell them that you've developed a great full-stack framework which can (potentially) vastly improve a developer's productivity. THEN you'll get them interested and they'll start listening to you.

(my first reaction when reading your article was, okay, so what is this - it's just RPC - remember CORBA, remember Enterprise JavaBeans, remember SOAP, remember Java RMI and so on? All of that was "RPC", and we moved away from it because there were problems with it ... so that gave me a serious "déjà vu" feeling)

Just my 2 cents :-)

Manuel Vila

You should read the post more carefully.

Under the hood, Layr relies on an RPC mechanism. So, superficially, it can be seen as something like CORBA, Java RMI, or .NET WCF.

But Layr is radically different:

  • It's not a distributed object system. A Layr backend is stateless, so there are no shared objects across the stack.
  • It doesn't involve any boilerplate code, generated code, configuration files, or artifacts.
  • It uses a simple but powerful serialization protocol (Deepr) that enables unique features such as chained invocation, automatic batching, or partial execution.

leob

Hold on, I've read your post, but more importantly I've modified and expanded my original comment - read it again :-)

The problem (as I try to explain) is not that your framework isn't good, I think it's very interesting. The problem is that your message comes across as negative, as unfairly trashing REST and GraphQL. I think that hurts your goal of getting people interested in your solution.

I've looked at your docs and the fact that you don't have an explicit API layer but "direct object execution" is only a part of the whole thing, all of the features built around and on top of it are probably much more interesting.

People aren't interested in you telling them that they've been doing it wrong, they're interested to hear what value you can add - and looking at your docs, adding value is what you're able to do.

I'd say dump the polarizing title "Goodbye Web APIs" and the divisive message "Web APIs suck". Well as clickbait to get people to read your post the title does work, but if you'd change it to "Web APIs? Maybe there's a better way" then people might be more inclined to take you seriously.

Manuel Vila

Thanks, @leob , for explaining why you rejected my post in the first place. I agree that the title is a bit "click-baity". But I don't regret it. It's sad, but it's the only thing that works today.

Rob Sherling

I understand what you're saying, and you do you, but I don't think I'd use a budding technology solution with someone who openly has this negative mentality at the helm.

"It's the only thing that works today."
"You should read the post more carefully."

These are not good responses to carefully crafted, constructive criticism. If you look at the likes for @leob 's comments vs. the likes for yours, I think it's pretty apparent that this isn't the best approach for you to take.

leob

Well you're absolutely right that your click-baity title worked, because your post attracted a lot of views and comments. But at the same time I think it's off-putting to many people, which is a shame because I think the work you've done is very interesting. Anyway just my 2 cents, no worries :-)

leob

Couldn't have said it better Rob Sherling, this is exactly the point - the project might be fantastic but someone with a negative or defensive attitude at the helm is going to put people off ... really what I'd sincerely advise is, flip the switch, lose the negativity, and the project might be doing well and might attract enthusiastic users and collaborators!

Manuel Vila

Thanks, @leob , I got your point. I wish I could use a less provocative tone, but I feel this is a necessary evil to get this project off the ground. This is not the first time I write about Layr (see my previous posts). I used a more consensual tone then, and it didn't work at all.

leob

Well you're right about that and you absolutely do get attention this way - you see that people start discussing, no doubt about it. So well yes, maybe sometimes this is the only way ... but, now that you do have the attention, maybe you should consider trying to change the tone of the discussion - make it less provocative :-)

Manuel Vila

OK. I'll do my best! :)

leob

Haha good, you rock, you're a star!

EmmyMay

The title can be click-baity but the content doesn't have to be.

leob

True and that's the case, to a degree :-)

Andrew Robida

Jesus Christ, when someone provides free code can we refrain from bashing it? This might not solve your problem, but it could be very helpful to people that want to build web apps quickly and are not really worried about scaling. For example, a website that is primarily for booking clients. It is not necessary to use something like GraphQL or to build a REST API for a local business to book clients. I am pretty sure barbershops or salons don't need to worry about this type of scaling. This would be perfect for something like that, but serverless is a good option as well. Good job Manuel Vila, thank you for sharing!

Amer Mallah

It's not really bashing... He led with a headline and article designed to spark interest, made some broad generalizations in his description, and is personally engaging with the debate. His answers are a bit defensive, sure, but I think he (and other noobs who are reading this) are getting a lot of information about the theory behind separation of concerns and the pros/cons of various remoting tech. I think this both improves his product and educates bystanders!

leob

There's no bashing going on from what I can see, just people pointing out the pros and cons of this approach.

Alex Sarafian

If I understand this correctly, the client pushes its state over the "wire" to the server. Isn't this what ASPX Forms did in the past? Everyone loved the convenience, but everyone ran away in the end?

  1. Serialize state
  2. Transmit (POST with state)
  3. Deserialize
  4. Process on server
  5. Serialize state
  6. Transmit (Response with state)
  7. Deserialize
  8. Render

ASPX is almost 25 years old; it was built for wired intranets and failed with the advance of the public internet. One of the reasons was that the client often had to transmit too much data, which was slow and had other problems as well. Back then JavaScript was not easy, but JS evolved just to be able to drive a client's intelligence within the browser and only ask for the data that is necessary to drive that intelligence. ASPX Forms was also customizable and you could control what goes in and what goes out, but we all know how it goes. Even before Google-driven development became a thing, people would just copy code if it seemed to work somewhere.

Obviously things are different nowadays and MS is doing something similar with Blazor, where you get to choose if "server-side" rendering happens. As far as I understood, Layr doesn't do server-side rendering, only server-side processing.

I personally consider a frontend and a backend API the same. I don't understand why people make these distinctions. The same NFR (non-functional requirement) rules should apply, like versioning. The only difference is who the audience of the API is or what business it tries to serve. With the frontend, that is data optimized for one audience (the client), and the backend is for all. This difference changes many of the NFRs and the lifecycle as well, but conceptually it is the same.

I always consider layers important. Maybe not convenient to developers, but still very important with longevity in mind. Layr takes this transparency away and that makes it a problem for this requirement. I expect that, for most with an architectural background, removing/hiding the FE API would raise red flags.

The counter example is very easy and doesn't automatically raise the alarms for many, but experience does, and my advice is to market this with more complicated examples and make sure that you show that you are primarily in control of what is put on the wire. Gut feeling, this code will lead you back to the

But the worst thing is that the API layer generally forces you to degrade the quality of your codebase. Indeed, it's quite challenging to keep your code DRY and cohesive when your frontend and your backend are separated by a web API.

I hope you find the feedback useful, even if I'm not correct. History has lots of good paradigms to learn from. I'm not saying this is silly. God knows, when I was young, I made some attempts to framework some things, and I learned from the effort, the feedback, the failures and the successes.

Manuel Vila

Thanks, @sarafian , for this long comment, but I'm not sure I fully understand your point.

Except for the ability to expose a backend to some third-party developers, what are the benefits provided by an API layer?

Alex Sarafian

Structure, control, real segregation of concerns.

I'm not sure what you are asking. I mentioned that BE/FE APIs are both important, and if you try to hide/remove the FE API on the principle of developer convenience, then many architects would get worried.

Very simplified: just because it is the FE API doesn't mean it should be removed because it is inconvenient. :)

Though some words might be considered judgemental, I'll remind you that they are not used like that.

Manuel Vila

I am sorry but you didn't convince me. I can agree that for some complex projects with a large team an API layer can provide some useful guardrails, but for 95% of the web apps, I believe that an API layer is just an unnecessary burden.

Alex Sarafian

In my original comment I mentioned the long term, and there lies our difference in mental approach and probably our professional engagement.

There is no argument against the context you describe but I consider them under the classifications of "quick and dirty", "fire and forget".

I'm not trying to convince you. Just sharing experience. :) I probably wouldn't have convinced my younger self either.

Manuel Vila

I won't bet you're older than me. :)

Ronald Roe

The separation of concerns in applications is intentional. I don't think most devs would have a problem with blurring the lines a bit, or further abstracting some of the lower-level minutiae, but there's a purpose behind the separation between the two. The nasty, ugly mess that PHP-based platforms became, and in some ways continue to be, is a prime example. Maintaining data separate from presentation lets us not only develop each separately, but also allow some failover. The frontend doesn't care what the backend does. You can step in, completely replace the backend with an entirely different platform, and if the data looks the same, the frontend doesn't care. If you completely swap out the frontend, the backend doesn't care as long as the requests look the same.

It's fine if you want to push a new platform you've found (or made) that shakes things up a bit. That's how we innovate - by questioning what we already have. However, none of that means better paradigms suddenly become obsolete, or that this shiny new thing is necessarily better.

Manuel Vila

I agree that separating the frontend and the backend is a good thing and Layr does not question this.

With Layr, the frontend and the backend can still be developed and deployed independently. You simply avoid a lot of boilerplate.

peerreynders

Now let's say that these planets need to collaborate extensively to form a whole called "application".
...
Now imagine that we could get rid of the web API. Imagine that the frontend could communicate directly with the backend using its native language. Wouldn't it be great?

I think there is some confusion here about what a "Web API" is supposed to accomplish.

The type of interface being proposed here is more closely aligned to a BFF (Backend For Frontends) than a "Web API".

Given that a BFF is specialized to one particular frontend, Contract-to-Implementation coupling isn't considered to be a problem - but the assumption is that the contract is dictated by the needs of the frontend - not by the needs of the backend (see also Consumer-Driven Contracts).

A "Web API" on the other hand exposes a generalized interface that is expected to be consumed by any number of consumers that need to interact with it and that shouldn't be constrained to any particular implementation language (or any of its idiosyncrasies). These type of services are guided by the Service Loose Coupling design principle:

Service contracts impose low consumer coupling requirements and are themselves decoupled from their surrounding environment.

i.e. Contract-to-Implementation coupling is considered to be undesirable.

Another problem is that a web API is an extra layer to worry about. It must be designed, implemented, tested, documented, etc. And all this is frankly a pain in the ass.

Decoupling comes at a cost - but failure to decouple also has its downsides - once again it's all about trade-offs - so you have to pick your poison.

  • Removing the decoupling layer lets you move faster initially until later when things are slowed down by the fact that the updated frontend and backend have to be deployed simultaneously (because both are inexorably linked into a distributed monolith).
  • Excessive decoupling is a waste especially if you only ever have to support one type of consumer - however some minimal degree of decoupling is advisable if just to acknowledge the existence of the natural network boundary.

In most cases it makes more sense to treat the frontend and backend as separate bounded contexts - so they should share just enough detail to get the job done but no more.

To quote Ted Neward from back in the SOAP days (2006):

Start from your code, just sprinkle some web service magic pixie dust on it and lo and behold you have a web service, bad things, bad, bad, bad, bad. I need to beat the vendors over the head to stop doing that... That’s just not going to work.

Also there seems to be the desire to pretend that the network boundary (with the increased likelihood of failure) doesn't exist - that is just another can of worms - Convenience Over Correctness (2008).

Manuel Vila

Thanks, @peerreynders , for pointing out a possible confusion with the term "web API".

My definition of a web API is something that connects the frontend layer and the backend layer.

Your definition of a web API is also something that connects the frontend and the backend, but the connection goes through an intermediate layer that enables a logical decoupling.

Having an intermediate layer can be useful for some applications, but for most applications, I believe that it is fine to go without.

Note that Layr removes the pain of building a web API (according to my definition) but it doesn't prevent you from implementing an intermediate API layer if this is what you want. Instead of exposing the business layer directly, you can expose an API layer that consumes the business layer.

peerreynders

but the connection goes through an intermediate layer that enables a logical decoupling.
Having an intermediate layer can be useful for some applications, but for most applications, I believe that it is fine to go without.

With that statement you have essentially talked yourself out of needing an SPA.

The whole justification for an SPA is that the frontend needs to be so complex that it has become an application in its own right at which point it has a definitive boundary against the backend. At this point it becomes necessary to deliberately control (i.e. design) the granularity and frequency of interaction between the frontend and the backend - this is essentially the contract that

must be designed, implemented, tested, documented,

work that you are trying to avoid and

the language of most web APIs is limited

which is entirely by design in order to constrain the type of coupling (or dependency) that either party can develop.

So when adopting an SPA, Contract-to-Logic, Contract-to-Technology, Contract-to-Implementation, and Consumer-to-Implementation coupling is considered to be negative coupling - typically avoided by adopting a contract-first development approach.

This is why it is so important to deliberately design the interactions that occur over the network (via REST, GraphQL or whatever) - the responsibilities of the frontend and backend are very different - so the coupling should be minimized to give both sides some "breathing room" to evolve (in 2012 Netflix even went so far to pull the generic client-to-server boundary down into the server so that a device specific HTTP interface was exposed at the network boundary - the server-based client adapters being essentially BFFs).

The Layr approach also glosses over the fact that the client-server environment isn't homogeneous - this isn't about two Node.js processes running on servers, talking to one another. So while Node.js and browsers both support JavaScript and may even be running the same JavaScript engine (V8), on a public-facing app there is no control over the client device's computational capability or connection quality. Browsers parse and process HTML and CSS at native speeds, perhaps on multiple threads (and possibly separate cores), while parsing and processing JavaScript is more computationally expensive and restrictive.

The web browser's default serialization format is HTML. So in the "it's all just one application" arena Turbolinks has been leveraging that since 2013 (or unpoly for more fine-grained, fragment-level control; since 2015) without ever needing to parse any JSON or XML (as a "Web API" isn't needed anyway).

I would also expect Layr to lead to rather "chatty" traffic over the network which is generally not desirable. If for whatever reason you are in a position where "chattiness" doesn't matter there are other more server-centric approaches like Phoenix LiveView (Elixir) or Laravel Livewire (PHP) which act more like "one application".

Furthermore the emerging trend is to do more on the browser side with less JavaScript. eBay's Marko is even planning to ditch the VDOM for granular compile-time reactivity in order to reduce JavaScript download, parsing and execution costs (and has been supporting streaming async fragments since 2014). If anything I would imagine that the Layr approach encourages developers to use more JavaScript on the browser - especially if they don't make a point of keeping a close eye on the dependencies being pulled into the client code base.

Much has been made of the perceived benefits of Isomorphic JavaScript or Universal JavaScript since 2013 - but so far for me it's right up there with the failed promise of OO.

We know one language is a pipe dream - so giving up the option to use the best possible environment on the backend is a significant sacrifice.

That's not to say that there won't be any use cases for Layr - but failing to actually design the network interface has the tendency to eventually catch up with the product and the team maintaining it.

Manuel Vila

Thanks again, @peerreynders , for your deep comments! I hope you will not find my answer disappointing.

I need to build SPAs because my applications have rich user interfaces and it is a lot more convenient to handle the UI where it belongs (the browser).

However, I don't want to carry the burden of an API layer because that considerably decreases my development velocity. By removing the API layer, I can get the same experience as if I were building an SSR or standalone application while keeping the advantages of a SPA.

I totally understand the benefits of an API layer, but it is just not for me. Like the vast majority of developers, I don't build applications for millions of users, and I cannot speak about this kind of development. From my experience, what I can say is that for a lot of small-to-medium applications, removing the API layer is a completely viable option.

peerreynders

I need to build SPAs because my applications have rich user interfaces

This is stated as an absolute truth and yet this is the assumption that really needs to be challenged. SPAs have proliferated largely not because they are needed but because of developer attitudes:

"I just wanna write JavaScript applications and not deal with all that messy web stuff".

and it is a lot more convenient to handle the UI where it belongs (the browser).

There are two separate statements here.

"handle the UI where it belongs (on the browser)" - according to who? The browser is primarily a window to display stuff. Basic HTML gives you access to some stock controls and you can use JavaScript to create custom controls. None of that implies that all of the UI logic has to live in the browser - it potentially could but is doesn't have to. The X Window System predates the World Wide Web - graphical applications running remotely were essentially telling the local X Server to manipulate the graphical content on the local window - i.e. the UI logic was be running on a remote machine while the graphical content was manipulated through an intermediary.

"and it is a lot more convenient" - I see this as the core of the statement: developer convenience - not whether or not the solution is appropriate to the problem at hand but "I wanna do it this way...". The fact is there is a whole range of rendering options on the web and consequentially there is a whole continuum of solution options ranging from statically served web documents to ridiculously interactive graphical applications.

Furthermore SPA solutions tend to be highly framework-centric and specific (possibly opinionated), so the "developer convenience" is even more constrained to "that framework that I invested all that time learning". All too often framework specialists aren't actually aware of what is even possible in the browser (or the web in general). It's the old "I have a hammer and everything else is a nail".

Back in 2015 PPK wrote: Web vs. native: let’s concede defeat - this wasn't conceding "native is always superior to the web" but that an inordinate amount of effort is being wasted to emulate native on the web that would be better spent on developing web-(and mobile-)friendly alternatives that are superior to native in their particular context.

For example introduction of the ServiceWorker on the browser has given a boost to "multi-page apps" because the ServiceWorker can act as a proxy to the server to fulfill requests locally, potentially reusing aspects of the server application. For an example look at Beyond Single Page Apps Google I/O 2018 (Why PWAs Are Not SPAs).

One could say that SPAs have led to the enterprise-ification of web development - and that is not a compliment.

Even the culture around SPAs can be inflammatory (The Great Divide).

Like the vast majority of developers, I don't build applications for millions of users

But it's the companies which "build applications for millions of users" that have been driving the adoption of "native envy" SPAs - they have the funds and resources to push through waste and potentially bad decisions.

So the core question should actually be:

"Should the vast majority of [web] developers be building SPAs - which invariably leads to the implementation of an over-engineered (e.g. GraphQL) web api?"

Betteridge's law of headlines:

Any headline that ends in a question mark can be answered by the word no.

If not SPAs, What?

Manuel Vila

Let me clarify what I mean by web applications with rich user interfaces. This is the type of application that offers such a level of interaction that it cannot, given the network latency, be implemented other than locally.

Such applications are, for example, Google Docs and the like.

For this type of application, I hope you agree that the SPA model is the only valid option.

Anyway, the conversation is drifting, isn't it? Layr is made for building SPAs and mobile/desktop applications, so the topic is not whether to build this kind of application or not. :)

peerreynders

For this type of applications, I hope you agree that the SPA model is the only valid option.

And all the examples you offer are "applications for millions of users" and also offer generic "Web APIs" for integration purposes.

Anyway, the conversation is drifting, isn't it?

Not really. Building successful SPAs is always a high intensity effort because of the "level of interaction". Designing an API is supposed to help manage that complexity. So if you don't want to "design, implement, test, and document" an API then maybe you shouldn't be embarking on building an SPA in the first place.

Also the direction this thread was going - all the examples you cited have collaboration features - highlighting that in products that justifiably involve SPAs the responsibilities of the frontend and backend are very different - making them separate collaborating applications, not one single application. That collaboration needs to be designed.

Also using Layr isn't "API-less".

interaction that cannot, given the network latency, be implemented other than locally.

A Note on Distributed Computing:

Differences in latency, memory access, partial failure, and concurrency make merging of the computational models of local and distributed computing both unwise to attempt and unable to succeed.

leading to

Distributed objects are different from local objects, and keeping that difference visible will keep the programmer from forgetting the difference and making mistakes.

i.e. the difference between local and remote invocations in the client code needs to be crystal clear from the maintenance perspective. From the above example:

  // Lastly, we consume the Counter
  const counter = new Counter();
  console.log(counter.value); // => 0
  await counter.increment();
  console.log(counter.value); // => 1
  await counter.increment();
  console.log(counter.value); // => 2

The only hint of any remote interaction is the await and frankly interactions with any JavaScript runtime environment are frequently asynchronous so that's not good enough. One way to mitigate this is to hide all Layr interaction behind a Remote Façade. In doing so you are creating an API.

All the Remote Facade does is translate coarse-grained methods onto the underlying fine-grained objects.

By identifying the "coarse-grained methods" - there better be a good reason why and a well-timed opportunity when you are crossing that network boundary - you are starting to design an API. And it's that API that has to be "designed, tested and documented". Also the example doesn't show exception handling which is necessary because the underlying network calls can fail for any number of reasons - so as such the example misrepresents the "ease" of remote interaction.
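
As a sketch of the commenter's suggestion (not part of Layr itself), a thin façade over the proxy class could make the network boundary and its failure modes explicit:

// counter-facade.js — hypothetical Remote Façade around the Layr proxy
class CounterFacade {
  constructor(counter) {
    this.counter = counter; // the proxy instance obtained via client.getComponent()
  }

  // One coarse-grained, explicitly remote operation with error handling
  async incrementOrThrow() {
    try {
      await this.counter.increment();
      return this.counter.value;
    } catch (error) {
      // Network failures surface here instead of leaking into UI code
      throw new Error(`Remote increment failed: ${error.message}`);
    }
  }
}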

So when I see claims of

and greatly increase my productivity.

I have to wonder whether this is largely based on "decrease time to initial success" shortcuts - i.e. not addressing technical debt in a timely fashion (LOC isn't a productivity measure - typically it's faster and easier to write verbose code, well crafted, concise code is much slower to produce).

That is not to say that it isn't possible with Layr to implement a well crafted communication layer - but that still requires "designing, implementing (with Layr and a Remote Façade), testing, and (for your own sanity) documenting" while at the same time locking into all the downsides of a non-interoperable HTTP interface. So while Layr may seem initially more convenient from a JS developer perspective, from a product development perspective it doesn't move any closer to Falling Into The Pit of Success - in fact encouraging the delay of API design will move away from it.

Manuel Vila

@peerreynders , I admire the effort you put into trying to convince me that an API layer is required to build a SPA but I don't buy it, sorry.

When you build an old-school SSR web app with something like Ruby on Rails or Laravel there is no API layer, right? The UI layer has direct access to the business layer, and it is perfectly fine.

This is the type of architecture I try to achieve with Layr. For me, the fact that the UI layer (the frontend) and the business layer (the backend) are physically separated is an implementation detail.

You mentioned the problem of network errors. It is not difficult to distinguish this type of error and abstract it away with an automatic retry strategy or a modal dialog offering the user the option to retry.
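
A minimal sketch of the kind of retry wrapper meant here (illustrative, not a built-in Layr feature):

// Retry a remote call a few times before surfacing the error to the UI
async function withRetry(remoteCall, {attempts = 3, delayMs = 500} = {}) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await remoteCall();
    } catch (error) {
      if (i === attempts - 1) throw error; // give up and let the UI offer a manual retry
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: await withRetry(() => counter.increment());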

The examples I offered (Google Docs, etc.) have of course millions of users. But there are tons of SPAs that have the same characteristics while being far less popular.

peerreynders

I admire the effort you put into trying to convince me that an API layer is required to build a SPA.

I'm not trying to convince you of anything.

It is clear by the amount of effort you have poured into this that you are convinced that this is the right path for you to pursue - so you need to follow your inclination wherever it may lead. With the article, however, you are also trying to convince others (possibly less experienced developers) that it is reasonable to expect to embark on building an SPA product with the expectation of not having to design an API - that is irresponsible (unless you are in a situation where you just consume already existing APIs).

  • By definition an SPA is a "client-side application" so the backend is a separate (support) application - so in between there will be an API - it doesn't matter whether you use Layr to build it; if you don't design it and "let it just happen" it typically leads to an undesirable outcome.

  • So if you can't or don't want to commit to the full monty of an SPA + backend via API then you should be looking for another non-SPA solution.

When you build an old-school SSR web app with something like Ruby on Rails or Laravel there is no API layer, right?

First of all the term SSR is typically used for "server-side rendered client-side applications" (as opposed to plain CSR) - stock Ruby on Rails and Laravel have no "client-side application".

  • Stock dynamic web sites and static web sites that deliver pure HTML/CSS to the browser don't need an API because all handling of network requests is the responsibility of the browser. Even pages with JavaScript are OK as long as facilities like fetch, XMLHttpRequest, jQuery.ajax(), etc. are avoided. However with a dynamic web site you could make the argument that the web site itself is the API and the browser is the client and all the interactions are governed by the HTTP protocol. But the "application" lives entirely on the server.

  • The moment any JavaScript uses facilities like fetch, handling of network errors, (timeout, unexpected status codes, etc.) is not the responsibility of the browser but of the code written by the developer. This is the reason why there should never be the opportunity to mistake a remote interaction for a local one.

  • The moment facilities like fetch (aka "ajax") are used to access server-side facilities under your control you are using an API that you are responsible for. So in effect even for something like unpoly, optimizing server responses for fragment updates is building an API - it just happens that the API responds to HTTP headers and the payload is HTML, not JSON.

  • The internet is rife with accounts where RoR installations had a "short time to initial success" but invariably maintenance-wise descended into a Big Ball of Mud - so you may want to be careful what you compare your approach to. Active Record is at the core of many Rails application's DB interaction - and many consider it an anti-pattern with regard to application-to-database interaction (Repository being preferred - essentially the API to your persistence layer). So Rails isn't the best source for architectural solution best practices.

  • The design philosophy behind RoR essentially enabled quick transformation of database tables to web based entry forms for CRUD applications - hardly something you should be juxtaposing to "a product" or "domain" claiming to be so complex that it requires an SPA application/architecture.

So in conclusion - your comparison with Ruby on Rails or Laravel doesn't work on any level.

This is the type of architecture I try to achieve with Layr.

Again - with a dynamic web site the browser is the client, HTML/CSS the data, and the interactions are governed by the HTTP protocol (which is a REST architecture); with an SPA your client-side application is the client (which just happens to run inside a browser) taking on all sorts of responsibilities handled by the browser in the former case; and while interactions with the backend go over HTTP there is a lot more latitude regarding the semantics of the interactions - which is where the API design comes in. Apples and Oranges are more similar than those two scenarios.

For me, the fact that the UI layer (the frontend) and the business layer (the backend) are physically separated is an implementation detail.

The backend isn't the business layer. It's largely infrastructure that allows the frontend to interact with the business logic via the web (i.e. API support). Conflation of the delivery system and the business logic is one of the core issues with many Rails applications (Bring clarity to your monolith with Bounded Contexts, Architecture the Lost Years ) - and is exactly the concern I would have with mindless application of a technology like Layr.

Also "physical separation" is never an implementation detail - it's the "laws of physics" that get in the way - the ones responsible for the fallacies of distributed computing, well, being fallacies.

It is not difficult to distinguish this type of errors and abstract them away with an automatic retry strategy or a modal dialog offering the user to retry.

There's a whole mountain of literature dedicated to the fact that these types of errors aren't easy to abstract away (and often modal boxes are bad UX).

But there are tons of SPAs that have the same characteristics while being far less popular.

And my point is that successful SPAs are a lot more resource intensive than other approaches - making their ROI dubious in many situations - they shouldn't be attempted on a shoestring budget or with constrained resources and time. PWAs don't have to be implemented as SPAs, and if you truly need native-like capabilities, commit and go native.

Manuel Vila

Again, thanks a lot for your detailed answer but I am afraid we will never agree.

It's like a left brain speaking to a right brain:

  • You are obsessed with architecture flexibility.
  • I am obsessed with development velocity.

Both approaches are valuable and choosing one over the other really depends on the project you are working on.

peerreynders
  • You are obsessed with architecture flexibility.
  • I am obsessed with development velocity.

My mindset:

"Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live" (link)

And from my experience software products are annoyingly long lived while needing to be continuously adapted to ever changing circumstances in order to remain valuable.

The type of development velocity you seem to be interested in is referred by J.B. Rainsberger as "the scam":

The cost of the first few features is actually quite a bit higher than it is doing it the "not so careful way" ... eventually you reach the point where the cost of getting out of the blocks quickly and not worrying about the design is about the same as the cost of being careful from the beginning ... and after this being careful is nothing but profit.

He acknowledges that "the scam" is initially incredibly seductive - and the approach that you describe in the article has that same seductive appeal. The velocity of that approach moves quickly to the point where the cost of continuing is higher than the cost of starting again.

So the value of that approach can only be realized if the product is decommissioned before the "breakeven point" (my guess is less than two years, depending on the project type). The only other option is to make the product "somebody else's problem" before that breakeven point is reached (which obviously isn't doing them any favours).

I'm only interested in going faster than "so called fast" - going well for long enough so that I'll beat fast all the time.

Jim Montgomery

Manuel Vila

I think that using SQL as an API language is a terrible idea but it is nice to see someone arguing against GraphQL. 😉

Jim Montgomery

I think SQL is a brilliant idea, providing of course the proper constraints are in place. Simonw goes on to show how it actually works based on his own experiments--he was skeptical too. I imagine it really depends on the application. And I agree with the author that GraphQL is reinventing the wheel while not appearing to improve on REST for resource usage (perhaps it's easier to develop relative to rolling restful and rpc endpoints--initially, not sure about maintenance, I think it's a wash or worse there). See also Owen Rubel's related answer--I tend to totally appreciate his insights, he has a lot of good material in this area: rest vs rpc; API chaining part 1of2 and part 2.

Manuel Vila

Thanks, @jimmont , I will check this out.

Sean Allin Newell

This kinda feels like how serverless still has servers: Layr is API-less but still has network calls and a separable frontend/backend.

Seems cool, I'm really getting into the idea of using F# + Fable and gRPC for the transport.

Manuel Savage

Are we trying to recreate JSP and PHP again?

Manuel Vila

Not at all. Layr is quite the opposite. The frontend (UI) and the backend (data model and business logic) are physically separated.

Olivier Guimbal

It even looks very much like .Net Remoting to me !

Manuel Vila

Except for the "API-less" approach, Layr is very different than .Net Remoting. Layr removes the API layer but it keeps the client/server (with a stateless server) architecture of web applications.

Olivier Guimbal

Indeed 😄, I got that, one cannot really compare it with an almost pre-web technology, but the feeling of it remains (it looks like it ... attributes, proxies & stuff).

I guess that's something that object-oriented RPC libs have in common.

mikeyGlitz

How does this approach address communications from both ends from a security standpoint? Would there be a way to authenticate with RPC?
How can you ensure your client isn't being intercepted à la man-in-the-middle?
How do you ensure your communication is encrypted?

Manuel Vila

Conceptually, authentication works the same as with typical web APIs. Instead of passing a token through a cookie or an HTTP header, you pass it through a static class attribute. You can find an example of implementation here: github.com/layrjs/react-layr-realw...
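
A rough sketch of the idea (hypothetical code, not verified against the real Layr API; the RealWorld example linked above shows the actual implementation):

// Hypothetical: an exposed static attribute carrying the session token.
// Every query sent to the backend would include this class-level attribute,
// playing the role a cookie or Authorization header plays in a REST API.
class Session extends Component {
  @expose({get: true, set: true}) @attribute() static token;

  @expose({call: true}) @method() static async signIn({email, password}) {
    // Verify the credentials (details omitted) and set the token
    this.token = await generateTokenFor(email, password); // hypothetical helper
  }
}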

About security concerns, you can expose your Layr backend through HTTPS.

mikeyGlitz

Just out of curiosity, does the framework have parameters for ordering your requests, such as ETag? That's important for things like versioning.

mikeyGlitz

Manuel Vila

Then your concern is about caching? Layr is a solution for building web apps, not websites. Caching backend responses at the HTTP level is essential for websites but not so useful for web apps. Layr might support ETags in the future, but it is not something on the priority list.

Baskarrao Dandlamudi

There are various use cases where caching is implemented at the API level, like metadata which does not change too often. These are usually cached at the API level and returned to clients without hitting the database to fetch the metadata. Does Layr support this?

Manuel Vila

Not yet but this may be implemented in the future.

Manuel Vila

I am sorry, but I am not sure I understand your question. What do you mean by "ordering your requests"?
