DEV Community

Discussion on: Good Bye Web APIs

peerreynders • Edited

but the connection goes through an intermediate layer that enables a logical decoupling.
Having an intermediate layer can be useful for some applications, but for most applications, I believe that it is fine to go without.

With that statement you have essentially talked yourself out of needing an SPA.

The whole justification for an SPA is that the frontend needs to be so complex that it has become an application in its own right at which point it has a definitive boundary against the backend. At this point it becomes necessary to deliberately control (i.e. design) the granularity and frequency of interaction between the frontend and the backend - this is essentially the contract that

must be designed, implemented, tested, documented,

work that you are trying to avoid and

the language of most web APIs is limited

which is entirely by design in order to constrain the type of coupling (or dependency) that either party can develop.

So when adopting an SPA, Contract-to-Logic, Contract-to-Technology, Contract-to-Implementation, and Consumer-to-Implementation coupling is considered to be negative coupling - typically avoided by adopting a contract-first development approach.

This is why it is so important to deliberately design the interactions that occur over the network (via REST, GraphQL or whatever) - the responsibilities of the frontend and backend are very different - so the coupling should be minimized to give both sides some "breathing room" to evolve (in 2012 Netflix even went so far to pull the generic client-to-server boundary down into the server so that a device specific HTTP interface was exposed at the network boundary - the server-based client adapters being essentially BFFs).

The Layr approach also glosses over the fact that the client-server environment isn't homogeneous - this isn't about two Node.js processes running on servers, talking to one another. While Node.js and browsers both support JavaScript and may even run the same JavaScript engine (V8), in a public-facing app there is no control over the client device's computational capability or connection quality. Browsers parse and process HTML and CSS at native speeds, perhaps on multiple threads (and possibly separate cores), while parsing and processing JavaScript is more computationally expensive and restrictive.

The web browser's default serialization format is HTML. So in the "it's all just one application" arena Turbolinks has been leveraging that since 2013 (or unpoly for more fine-grained, fragment-level control; since 2015) without ever needing to parse any JSON or XML (as a "Web API" isn't needed anyway).

I would also expect Layr to lead to rather "chatty" traffic over the network which is generally not desirable. If for whatever reason you are in a position where "chattiness" doesn't matter there are other more server-centric approaches like Phoenix LiveView (Elixir) or Laravel Livewire (PHP) which act more like "one application".

Furthermore the emerging trend is to do more on the browser side with less JavaScript. eBay's Marko is even planning to ditch the VDOM for granular compile-time reactivity in order to reduce JavaScript download, parsing and execution costs (and has been supporting streaming async fragments since 2014). If anything I would imagine that the Layr approach encourages developers to use more JavaScript on the browser - especially if they don't make a point of keeping a close eye on the dependencies being pulled into the client code base.

Much has been made of the perceived benefits of Isomorphic JavaScript or Universal JavaScript since 2013 - but so far for me it's right up there with the failed promise of OO.

We know one language is a pipe dream - so giving up the option to use the best possible environment on the backend is a significant sacrifice.

That's not to say that there won't be any use cases for Layr - but failing to actually design the network interface has the tendency to eventually catch up with the product and the team maintaining it.

Manuel Vila Author

Thanks again, @peerreynders, for your deep comments! I hope you will not find my answer disappointing.

I need to build SPAs because my applications have rich user interfaces and it is a lot more convenient to handle the UI where it belongs (the browser).

However, I don't want to carry the burden of an API layer because it considerably decreases my development velocity. By removing the API layer, I can get the same experience as if I were building an SSR or standalone application while keeping the advantages of an SPA.

I totally understand the benefits of an API layer, but it is just not for me. Like the vast majority of developers, I don't build applications for millions of users, and I cannot speak about this kind of development. From my experience, what I can say is that for a lot of small-to-medium applications, removing the API layer is a completely viable option.

peerreynders • Edited

I need to build SPAs because my applications have rich user interfaces

This is stated as an absolute truth and yet this is the assumption that really needs to be challenged. SPAs have proliferated largely not because they are needed but because of developer attitudes:

"I just wanna write JavaScript applications and not deal with all that messy web stuff".

and it is a lot more convenient to handle the UI where it belongs (the browser).

There are two separate statements here.

"handle the UI where it belongs (the browser)" - according to whom? The browser is primarily a window to display stuff. Basic HTML gives you access to some stock controls and you can use JavaScript to create custom controls. None of that implies that all of the UI logic has to live in the browser - it potentially could, but it doesn't have to. The X Window System predates the World Wide Web - graphical applications running remotely were essentially telling the local X Server to manipulate the graphical content in the local window - i.e. the UI logic was running on a remote machine while the graphical content was manipulated through an intermediary.

"and it is a lot more convenient" - I see this as the core of the statement: developer convenience - not whether or not the solution is appropriate to the problem at hand but "I wanna do it this way...". The fact is there is a whole range of rendering options on the web and consequently a whole continuum of solution options, ranging from statically served web documents to ridiculously interactive graphical applications.

Furthermore, SPA solutions tend to be highly framework-centric and specific (possibly opinionated), so the "developer convenience" is even more constrained to "that framework that I invested all that time learning". All too often framework specialists aren't actually aware of what is even possible in the browser (or the web in general). It's the old "I have a hammer and everything else is a nail".

Back in 2015 PPK wrote: Web vs. native: let’s concede defeat - this wasn't conceding "native is always superior to the web" but that an inordinate amount of effort is being wasted to emulate native on the web that would be better spent on developing web-(and mobile-)friendly alternatives that are superior to native in their particular context.

For example, the introduction of the ServiceWorker in the browser has given a boost to "multi-page apps" because the ServiceWorker can act as a proxy to the server, fulfilling requests locally and potentially reusing aspects of the server application. For an example look at Beyond Single Page Apps Google I/O 2018 (Why PWAs Are Not SPAs).

One could say that SPAs have led to the enterprise-ification of web development - and that is not a compliment.

Even the culture around SPAs can be inflammatory (The Great Divide).

Like the vast majority of developers, I don't build applications for millions of users

But it's the companies which "build applications for millions of users" that have been driving the adoption of "native envy" SPAs - they have the funds and resources to push through waste and potentially bad decisions.

So the core question should actually be:

"Should the vast majority of [web] developers be building SPAs - which invariably leads to the implementation of an over-engineered (e.g. GraphQL) web API?"

Betteridge's law of headlines:

Any headline that ends in a question mark can be answered by the word no.

If not SPAs, What?

Manuel Vila Author

Let me clarify what I mean by web applications with rich user interfaces. These are applications that offer such a level of interaction that, given the network latency, it cannot be implemented other than locally.

Such applications are for example:

For this type of application, I hope you agree that the SPA model is the only valid option.

Anyway, the conversation is drifting, isn't it? Layr is made for building SPAs and mobile/desktop applications, so the topic is not whether to build this kind of application or not. :)

peerreynders

For this type of application, I hope you agree that the SPA model is the only valid option.

And all the examples you offer are "applications for millions of users" and also offer generic "Web APIs" for integration purposes.

Anyway, the conversation is drifting, isn't it?

Not really. Building successful SPAs is always a high intensity effort because of the "level of interaction". Designing an API is supposed to help manage that complexity. So if you don't want to "design, implement, test, and document" an API then maybe you shouldn't be embarking on building an SPA in the first place.

Also the direction this thread was going - all the examples you cited have collaboration features - highlighting that in products that justifiably involve SPAs the responsibilities of the frontend and backend are very different - making them separate collaborating applications, not one single application. That collaboration needs to be designed.

Also using Layr isn't "API-less".

interaction that cannot, given the network latency, be implemented other than locally.

A Note on Distributed Computing:

Differences in latency, memory access, partial failure, and concurrency make merging of the computational models of local and distributed computing both unwise to attempt and unable to succeed.

leading to

Distributed objects are different from local objects, and keeping that difference visible will keep the programmer from forgetting the difference and making mistakes.

i.e. the difference between local and remote invocations in the client code needs to be crystal clear from the maintenance perspective. From the above example:

  // Lastly, we consume the Counter
  const counter = new Counter();
  console.log(counter.value); // => 0
  await counter.increment();
  console.log(counter.value); // => 1
  await counter.increment();
  console.log(counter.value); // => 2

The only hint of any remote interaction is the await and frankly interactions with any JavaScript runtime environment are frequently asynchronous so that's not good enough. One way to mitigate this is to hide all Layr interaction behind a Remote Façade. In doing so you are creating an API.

All the Remote Facade does is translate coarse-grained methods onto the underlying fine-grained objects.

By identifying the "coarse-grained methods" - there better be a good reason why and a well-timed opportunity when you are crossing that network boundary - you are starting to design an API. And it's that API that has to be "designed, tested and documented". Also the example doesn't show exception handling which is necessary because the underlying network calls can fail for any number of reasons - so as such the example misrepresents the "ease" of remote interaction.
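To make the Remote Façade point concrete, here is a minimal sketch of what wrapping the `Counter` example behind such a façade might look like. The `CounterService` name, the `incrementBy` method, and the locally simulated `Counter` are all hypothetical illustrations, not Layr APIs:

```javascript
// Stand-in for the Counter from the example above; in the real
// thing increment() would be a remote invocation that can fail.
class Counter {
  constructor() {
    this.value = 0;
  }
  async increment() {
    this.value += 1; // pretend this crosses the network
  }
}

// Hypothetical Remote Façade: the single, deliberately coarse-grained
// place where client code crosses the network boundary.
class CounterService {
  constructor(counter) {
    this.counter = counter; // the fine-grained (remote-backed) object
  }

  // One coarse-grained method translated onto fine-grained calls.
  async incrementBy(n) {
    try {
      for (let i = 0; i < n; i += 1) {
        await this.counter.increment(); // each of these may fail
      }
      return this.counter.value;
    } catch (error) {
      // Network failures surface here, at the boundary,
      // instead of leaking into every call site.
      throw new Error(`incrementBy(${n}) failed: ${error.message}`);
    }
  }
}
```

The moment you decide what `incrementBy` should look like - its granularity, its failure semantics - you are doing API design, whether you call it that or not.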

So when I see claims of

and greatly increase my productivity.

I have to wonder whether this is largely based on "decrease time to initial success" shortcuts - i.e. not addressing technical debt in a timely fashion (LOC isn't a productivity measure - typically it's faster and easier to write verbose code; well-crafted, concise code is much slower to produce).

That is not to say that it isn't possible with Layr to implement a well crafted communication layer - but that still requires "designing, implementing (with Layr and a Remote Façade), testing, and (for your own sanity) documenting" while at the same time locking into all the downsides of a non-interoperable HTTP interface. So while Layr may seem initially more convenient from a JS developer perspective, from a product development perspective it doesn't move any closer to Falling Into The Pit of Success - in fact encouraging the delay of API design will move away from it.

Manuel Vila Author

@peerreynders, I admire the effort you put into trying to convince me that an API layer is required to build an SPA but I don't buy it, sorry.

When you build an old-school SSR web app with something like Ruby on Rails or Laravel there is no API layer, right? The UI layer has direct access to the business layer, and it is perfectly fine.

This is the type of architecture I try to achieve with Layr. For me, the fact that the UI layer (the frontend) and the business layer (the backend) are physically separated is an implementation detail.

You mentioned the problem of network errors. It is not difficult to distinguish these types of errors and abstract them away with an automatic retry strategy or a modal dialog offering the user the chance to retry.
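Sketching what such an automatic retry strategy might look like (the `withRetry` name, attempt count, and backoff values are illustrative, not from Layr) - though note that even this "simple" wrapper has to decide which errors are transient and whether the wrapped operation is safe to repeat:

```javascript
// Illustrative automatic retry with exponential backoff.
// Hidden decisions: how many attempts, how long to back off,
// and - crucially - whether the operation is idempotent.
async function withRetry(operation, { attempts = 3, baseDelayMs = 100 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i += 1) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        // Exponential backoff: 100ms, 200ms, 400ms, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```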

The examples I offered (Google Docs, etc.) have of course millions of users. But there are tons of SPAs that have the same characteristics while being far less popular.

peerreynders

I admire the effort you put into trying to convince me that an API layer is required to build a SPA.

I'm not trying to convince you of anything.

It is clear from the amount of effort you have poured into this that you are convinced that this is the right path for you to pursue - so you need to follow your inclination wherever it may lead. With the article, however, you are also trying to convince others (possibly less experienced developers) that it is reasonable to embark on building an SPA product with the expectation of not having to design an API - that is irresponsible (unless you are in a situation where you just consume already existing APIs).

  • By definition an SPA is a "client-side application" so the backend is a separate (support) application - so in between there will be an API - it doesn't matter whether you use Layr to build it; if you don't design it and "let it just happen" it typically leads to an undesirable outcome.

  • So if you can't or don't want to commit to the full monty of an SPA + backend via API then you should be looking for another non-SPA solution.

When you build an old-school SSR web app with something like Ruby on Rails or Laravel there is no API layer, right?

First of all, the term SSR is typically used for "server-side rendered client-side applications" (as opposed to plain CSR) - stock Ruby on Rails and Laravel have no "client-side application".

  • Stock dynamic web sites and static web sites that deliver pure HTML/CSS to the browser don't need an API because all handling of network requests is the responsibility of the browser. Even pages with JavaScript are OK as long as facilities like fetch, XMLHttpRequest, jQuery.ajax(), etc. are avoided. However with a dynamic web site you could make the argument that the web site itself is the API and the browser is the client and all the interactions are governed by the HTTP protocol. But the "application" lives entirely on the server.

  • The moment any JavaScript uses facilities like fetch, handling of network errors (timeouts, unexpected status codes, etc.) is not the responsibility of the browser but of the code written by the developer. This is the reason why there should never be the opportunity to mistake a remote interaction for a local one.

  • The moment facilities like fetch (aka "ajax") are used to access server-side facilities under your control you are using an API that you are responsible for. So in effect even for something like unpoly, optimizing server responses for fragment updates is building an API - it just happens that the API responds to HTTP headers and the payload is HTML, not JSON.

  • The internet is rife with accounts where RoR installations had a "short time to initial success" but invariably descended, maintenance-wise, into a Big Ball of Mud - so you may want to be careful what you compare your approach to. Active Record is at the core of many Rails applications' DB interaction - and many consider it an anti-pattern with regard to application-to-database interaction (Repository being preferred - essentially the API to your persistence layer). So Rails isn't the best source for architectural solution best practices.

  • The design philosophy behind RoR essentially enabled quick transformation of database tables to web based entry forms for CRUD applications - hardly something you should be juxtaposing to "a product" or "domain" claiming to be so complex that it requires an SPA application/architecture.
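To make concrete how much error handling lands on the developer the moment fetch is involved, here is a minimal sketch; the `getFragment` name and the timeout default are illustrative placeholders:

```javascript
// Once fetch is in play, timeouts, non-2xx statuses and outright
// network failures are the application's problem, not the browser's.
async function getFragment(url, { timeoutMs = 5000 } = {}) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(url, { signal: controller.signal });
    if (!response.ok) {
      // Unexpected status codes must be handled explicitly.
      throw new Error(`Unexpected status ${response.status} for ${url}`);
    }
    return await response.text(); // e.g. an HTML fragment
  } finally {
    clearTimeout(timer);
  }
}
```

Every caller of a function like this has to decide what a timeout or a 503 means for the user - exactly the kind of decision a browser-driven page never has to surface in application code.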

So in conclusion - your comparison with Ruby on Rails or Laravel doesn't work on any level.

This is the type of architecture I try to achieve with Layr.

Again - with a dynamic web site the browser is the client, HTML/CSS the data, and the interactions are governed by the HTTP protocol (which is a REST architecture); with an SPA your client-side application is the client (which just happens to run inside a browser) taking on all sorts of responsibilities handled by the browser in the former case; and while interactions with the backend go over HTTP there is a lot more latitude regarding the semantics of the interactions - which is where the API design comes in. Apples and Oranges are more similar than those two scenarios.

For me, the fact that the UI layer (the frontend) and the business layer (the backend) are physically separated is an implementation detail.

The backend isn't the business layer. It's largely infrastructure that allows the frontend to interact with the business logic via the web (i.e. API support). Conflation of the delivery system and the business logic is one of the core issues with many Rails applications (Bring clarity to your monolith with Bounded Contexts, Architecture the Lost Years ) - and is exactly the concern I would have with mindless application of a technology like Layr.

Also "physical separation" is never an implementation detail - it's the "laws of physics" that get in the way - the ones responsible for the fallacies of distributed computing, well, being fallacies.

It is not difficult to distinguish these types of errors and abstract them away with an automatic retry strategy or a modal dialog offering the user the chance to retry.

There's a whole mountain of literature dedicated to the fact that these types of errors aren't easy to abstract away (and modal boxes are often bad UX).

But they are tons of SPAs that have the same characteristics while being far less popular.

And my point is that successful SPAs are a lot more resource-intensive than other approaches - making their ROI dubious in many situations - they shouldn't be attempted on a shoestring budget or with constrained resources and time. PWAs don't have to be implemented as SPAs, and if you truly need native-like capabilities, commit and go native.

Manuel Vila Author • Edited

Again, thanks a lot for your detailed answer but I am afraid we will never agree.

It's like a left brain speaking to a right brain:

  • You are obsessed with architecture flexibility.
  • I am obsessed with development velocity.

Both approaches are valuable and choosing one over the other really depends on the project you are working on.

peerreynders
  • You are obsessed with architecture flexibility.
  • I am obsessed with development velocity.

My mindset:

"Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live" (link)

And from my experience software products are annoyingly long lived while needing to be continuously adapted to ever changing circumstances in order to remain valuable.

The type of development velocity you seem to be interested in is referred to by J.B. Rainsberger as "the scam":

The cost of the first few features is actually quite a bit higher than it is doing it the "not so careful way" ... eventually you reach the point where the cost of getting out of the blocks quickly and not worrying about the design is about the same as the cost of being careful from the beginning ... and after this being careful is nothing but profit.

He acknowledges that "the scam" is initially incredibly seductive - and the approach you describe in the article has that same seductive appeal. It quickly reaches the point where the cost of continuing is higher than the cost of starting again.

So the value of that approach can only be realized if the product is decommissioned before the "breakeven point" (my guess: less than two years, depending on project type). The only other option is to make the product "somebody else's problem" before that breakeven point is reached (which obviously isn't doing them any favours).

I'm only interested in going faster than "so called fast" - going well for long enough so that I'll beat fast all the time.