
Does your website really need to be larger than Windows 95?

tux0r on September 23, 2018

A few days ago, an important blog article was published: "Look around: our portable computers are thousands of times more powerful than the ones t..."
 

I wanted to make a quick JS admin/client for an API.

I got a boilerplate with Webpack/Babel/SCSS/React/ESlint. NPM installed 1300 packages.

The main problem is that most of these tools exist to fix the Web's problems. So you create something that is broken, then you build more tools to fix it, then more tools to fix those. This is where we are.

Also, the Docker images of the two most popular languages are 655 MB (Node) and 976 MB (Java). All that just to run 2 KB of code. Of course the Alpine images are way smaller, but still...

 

Which web problems are solved with 1300 packages?

 

I bet that not all of them :))

The ones that are solved by those packages are things like: users don't upgrade their clients (browsers) and still use old OSs/browsers, so we have to compile to old JS, which is a problem.

CSS evolving slower than developers' needs is a problem solved by SCSS, and so on.
The fact that JS is not compiled is another problem in itself, one that is at the root of others.

I sacrificed performance and size for speed of development, but I think we took it too far.

users don't upgrade their clients (browsers) and still use old OSs/browsers, so we have to compile to old JS

This could be solved by using less JS for trivial things. Do you really require your users to allow you to execute arbitrary code on their computers?

No, not really, because less is not zero. If you use even one ES6 feature, the website will not work. One line of code and it is all over.
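To make the point concrete, here is a minimal sketch (the function names are made up for illustration): one line of ES2015 is enough to break a pre-2015 browser, and papering over that is exactly what transpilers like Babel are for.

```javascript
// ES2015 (ES6): arrow function + template literal + const.
// A browser that predates ES2015 throws a SyntaxError on this single line.
const greetModern = (name) => `Hello, ${name}!`;

// Roughly what a transpiler would emit for legacy browsers:
var greetLegacy = function (name) {
  return "Hello, " + name + "!";
};

// Same behavior, very different browser support.
console.log(greetModern("web"));
console.log(greetLegacy("web"));
```

Both functions produce identical output; the only difference is which JS engines can parse them.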

Trust me, the entire ecosystem is rotten; there is no escape, at least not with the current technologies.

there is no escape

Stop using JavaScript.

I'm getting tired of the "Stop using JavaScript" rant.

Just to make something clear before I elaborate: I don't like JS, and even though I'm a web developer working daily with modern web tech, I don't like what the web looks like today.

Yes it would be nice if it could be more optimized, but optimization takes time. And guess what: time costs money. Because today software is a business. Not an art, a business. And in a business, your goal is to sell as much as possible as quickly as possible while satisfying the client.

So if your client wants the most modern JS-ridden website, unless you can achieve the same result, in the same time, for the same cost but optimized and written in C or whatever pleases you (spoiler alert: you can't), they'll go somewhere else. Because the developer next door will use JS and make the site the client wants and be done with it. Because it sells and because optimization, as of today, is not required to sell a site. That may be sad, but that's true.

So yeah, sure, stop using JS if you want, but that won't change the fact that today, we are caught in a trap where JS is necessary. That's because we live in a world influenced by business and business only, not by the satisfaction of producing beautiful and optimized code. Welcome to 2018, welcome to capitalism.

unless you can achieve the same result, in the same time, for the same cost but optimized and written in C or whatever pleases you (spoiler alert: you can't)

Why not?

That's because we live in a world influenced by business and business only

And it is our job as developers to show businesses how things could be made better than they were.

Why not?

You are seriously telling me you can pick any modern (that term is relative but you get what I mean) website and do the same thing from scratch in C (or whatever) with the exact same features in the same time, for the same cost (and therefore the same price) AND do the same kind of maintenance on it once it's done?

You know what, let's say you can. Congratulations. But can I do it? Probably not. Can most developers do it? Also probably not. Why? Because that's not the way we learned to do it. We learned the simpler way: JS and all.

That's the goal of JS and all the modern frameworks: to make things easier. So yeah, let's make a crusade and teach every single web dev (because you can't do it alone) to do the same job with different, more optimized tools. But learning takes time too. Which also costs money. For what? Optimized code? Ok, will it sell more? Nope.

Again, the only thing you have to gain is satisfaction (for the developers, since no one else cares), which makes the whole thing irrelevant business-wise.

And it is our job as developers to show businesses how things could be made better than they were.

And it's the job of the engineer who designed your dishwasher to make it durable, but once again, that's not good for business, planned obsolescence is.

So, sure, show businesses how things could be better. But what you are suggesting increases complexity in many ways and does not guarantee better profit, so I doubt businesses will care.

Either you are delusional, or you are about to publish a paper on how to make websites without JS in the same amount of time, with the same productivity. Please do: many of us want a JS alternative.

Not all of us can afford to live in a self-made bubble; we have to solve real business problems, like building admin panels that generate value for customers.

many of us want a JS alternative.

The "JS alternative" predates JavaScript by decades: If you really need to run code on your users' machines for whatever reason you might make up to rectify being paid for that, write a desktop application. Problem solved.

You forget one thing: users don't want desktop apps anymore.

Like I said, you live in a self-made delusional bubble world, the real world has changed.

There is no use talking further until you understand the value that JS brought to developers, businesses and customers (productivity, portability, a huge range of platforms and much more).

After that you can offer a real alternative. Not mobile, not native, not desktop, not CLI, not cronjobs, not salesforce scripts, pure browser apps.

One of the problems might be the term "apps", blurring the thick line between desktop applications, websites and mobile crapps. (Yes, I have just invented a new term. Public domain, of course.)

Users want software that works. Their web browser is a desktop application, their Solitaire is a desktop application, their Word/Excel/Outlook (with Outlook being more and more replaced by web cruft...) is probably a desktop application. Who are "the users"? The ones you know? The ones I know?

There is no "all users want $thing". Users use whatever fits their needs - and their needs are not "we need everything in our web browser", because else Electron would not be a thing. Their needs are fulfilled by software that is easily accessible. Of course, web cruft can be sold easier (because you own the servers and the users are forced to rent access instead of buy the full thing)... but is that the most important aspect to you?

Their web browser is a desktop application, their Solitaire is a desktop application, their Word/Excel/Outlook (with Outlook being more and more replaced by web cruft...) is probably a desktop application

Yeah, and just like you said: Outlook is being more and more replaced by an online solution, Word/Excel are getting the same treatment as time passes, and I'm pretty sure a good deal of people have traded their Solitaire for an online version (actually, they play Clash of Clans on their smartphone instead *sigh*).

Users use whatever fits their needs - and their needs are not "we need everything in our web browser", otherwise Electron would not be a thing.

That's true: for some things users still require desktop software, and Electron is just like most of the JS stuff, a convenience for the developers. If you developed a web app that needs a desktop version, Electron is a quicker solution than writing a real desktop application. Once again, I'm not saying it's a good thing, nor that I like Electron (I truly don't), but once again, business-wise, Electron is relevant.

but is that the most important aspect to you?

That's what I'm trying to say: what's important to me is not relevant. I started programming with C, and for years I wanted to write desktop apps and hated the web for almost the same reasons as you. Then I realized that I need to make money with programming (and that the web is actually fun despite its many flaws, but that's another debate), hence my current job involving JS (among others). What's important to me does not matter; the reality is here: JS is a tool answering the needs of today, and we have to stick with it for this exact reason.

I need to make money (...) with programming

Apple makes money with programming without having a single "web application" that would be worth a cent.

Apple is Apple; you are not, I am not, and no one else is. What applies to them cannot be generalized.

You are talking about a company that makes money by having its own ecosystem and its own marketing target. They don't sell products, they sell a brand. Their phones are not the best (none are), yet they sell, because their customers are ready to buy just for the brand. The same goes for their software, be it desktop or not. They could switch everything to the cloud tomorrow and they'd still make a shitload of money, because their customers would remain.

Apple is Apple, you are not, I am not, and no one else is

No one else can make money with software that does not run in your web browser? Are you serious?

Even some of my (free) desktop software - not even actively marketed by me - generates donations because people want it. What am I doing wrong?

I'm not saying no one else can, I'm saying Apple is an irrelevant example because of their very specific way of doing things.

Of course you can make money by developing something that runs outside a web browser, I'm just saying this has nothing to do with the whole JS topic.

JS is needed for some things and desktop apps are not always an alternative, that was the original debate.

As long as we're here, non-browser apps generally reach a smaller audience since there's a slightly higher barrier to entry. The user has to go and install it. Is that a problem for a well-targeted application? Probably not, depending on the space it's trying to get users in. Is that a problem if you're trying to reach a broad audience? Yes. I know plenty of people who resist downloading and installing an application as long as they can. Download one browser, access millions of websites and associated content! Now that is a user experience that just feels good.

Making a desktop application that has the same reach as a web app seems practically impossible. You write a webapp, and it works on Windows, on every Linux distribution that runs a desktop with Firefox or Chromium, on iOS, OSX, Android, and Chromebooks. You write a desktop application, and just making sure it is portable to all of those operating systems takes careful design. Sure, you may not care about every single one of those OSs, but then you probably aren't trying to reach that broad an audience in the first place, and writing a desktop application was a design decision you made. People who are trying to reach that broad an audience made the design decision to write a webapp. Every webapp developer who's commented on this post says they don't like JS, but they still use it. You can bet a great sum of money that they're not using JS just to be masochistic.

Another benefit of the webapp frontend is the uniformity of the user experience. Slack is the same if I use the desktop wrapper, a web browser, the Android app, or the iOS app. To me as a user, that's a beautiful thing! You can use other means of achieving that cross-platform uniformity, but only if you can afford to target a smaller user base.

Webapps and JS have carved out a big niche on the modern Internet, and it will take a lot of effort to dig JS out.

You write a webapp, and it works on Windows, every Linux distribution that runs a desktop with Firefox or Chromium, iOS, OSX, Android, and Chromebooks. You write a desktop application, and just making sure it is portable to all of those operating systems takes careful design.

It has become much harder to target all major web browsers (with all major JavaScript versions...) than to target all major operating systems. JavaScript can break tomorrow. The Windows API probably won't.

Even with the disparity in abstraction level between desktop and mobile operating system development environments, making cross-learning and cross-development more difficult? Your first step in trying to generically cross all of those operating systems is surely to make a uniform abstraction layer. But that's what the web app platform is. To supersede the web app platform, you have to do it better and market it to developers really well. Maybe HTML5 is the answer, I don't know; I write embedded code for the most part. But I sure don't expect developers to abandon all the really nice JS frameworks in 5 years.

Also, even if what you say is taken for granted, that doesn't address how much more difficult the Windows API is to learn than JS.

Anyways, what I'm saying is, give us an open source framework better than Qt, and stop telling current JS developers what they already know.

Even Qt and JavaFX are "polluted" by CSS. Like JS, CSS is not a perfect language, but it is the best we have for now for client apps.

Even with the disparity in abstraction level between desktop and mobile operating system development environments, making cross-learning and cross-development more difficult?

With the advent of FireMonkey (Delphi for Android/iOS) and Xamarin (.NET for Android/iOS), this can safely be considered a solved problem.

that doesn't address how much more difficult the Windows API is to learn than JS.

The Windows API was an example. Even if you use wxWidgets or (my favorite GUI framework) IUP, you can be sure that there will be no major OS upgrade in the next two weeks that breaks everything. This can not be said about Chrome's and Firefox's JavaScript interpreters.

Okay, well then evangelize your favorite cross-platform GUI framework instead of lecturing JS devs about JS.

Lmao when have Chrome's or Firefox's JavaScript interpreters ever broken a single thing?

"Lmao" (please try to keep your questions at least relatively civil, "lmao" looks dumb and won't make you win this argument), you are aware that both V8 and SpiderMonkey (or whatever is the current name of that thing) are constantly updated and extended with non-standard "ECMAScript" features? You are aware that every single Chrome update has - so far - broken one or more websites?

Feel free to use your favorite search engine instead of lmao'ing at me.

I didn't realize I'm in any sort of argument right now
I was just requesting specifics, of which I received none so far.

Can you give me an example of serious breakage from a Chrome or FF update in recent history?

I googled the last four versions of Chrome before I got bored, and found one bug in time zones that was fixed within that same version.

I didn't realize I'm in any sort of argument right now

Neither did I, before you started laughing at me. That was not nice.

Can you give me an example of serious breakage from a Chrome or FF update in recent history?

This one?

 
 

Waiting for Electron applications written in Rust...

 

Or much better, a desktop app written in Rust + Elm without Electron:

In fact, this application only uses 0-3% CPU and the bundle size is >800KB on macOS

Like... JS Elm?
That's weird.
Would make more sense if DenisKolodin/yew was inside the WebView.
With flatbuffer messages over WebSocket 😂👌

If I ever need a real-life example for over-engineering, I'll choose that.

Not using web technologies for the desktop, I guess.

So, shipping my own renderer?
To what graphics API abstraction?
What about other OSes?

There already are cross-platform solutions for that...!

Like FireMonkey, Xamarin, wxWidgets, IUP ... :-) it all depends on which platforms you "need" to support.

I was specifically talking about platforms I don't "need" to support.
Obviously, it goes without saying that all of the listed have a barrier to entry several orders of magnitude higher than web technology's.
wxWidgets seems fun for some uses, but how would one go about making it look appropriate for end users?

I personally know a couple of end users who don't know what the URL bar in their web browser does, but they can easily use wxWidgets applications. Why should it be harder?

Because end users want something that looks like Discord and not something that looks like Windows Administrative Tools

And you cannot develop something that looks like Discord with existing desktop GUI technologies, because...?

(I honestly doubt that users "want" that - users have not designed it in the first place. They take what they get.)

It will take a gorillion more development hours, which means you won't be first to market with features, so no one will use it.
Or it won't look as "good", so even if it has well-needed features, it will quickly be overtaken by a project that steals the features and provides a better-looking but less performant frontend.

Also, I forgot to mention this before: not having to install the app (we aren't talking about dependencies here, but even downloading an executable and opening it from the filesystem / app store) is a key factor in user acquisition.
So reusing UI components between a less featureful web app and a fully featured hybrid app is a big benefit, both for user consistency and developer productivity.

It will take a gorillion more development hours

Huh? Why?

which means you won't be first to market with features, so no one will use it.

Slack became popular a quarter century after IRC, and yet people moved to it.

Or it won't look as "good"

The look of an application is not controlled by the chosen framework. There are really ugly Electron applications and really beautiful native applications.

Beautiful native applications are a lot of work.
It makes sense for Photoshop or Ableton Live to have custom native UI, but most apps such as social media, commerce, communication, and most interfaces to brick-and-mortar businesses don't need to invest in that complexity.

Beautiful native applications are a lot of work.

Beautiful web applications are a lot of work.

Work that someone else has already mostly done for you.

Which is true for desktop applications as well. (That said, I wouldn't consider "glueing together other people's code" to be "programming"...)

Business doesn't care if you call it "programming" but you better be focusing on what makes you different and not reinventing Redux.

Business doesn't care whether I glue together desktop or web code as long as it runs anywhere.

Does your desktop code run in the browser?
Or will that be a complete duplication?

No, you won't need a browser for a good desktop application. But it could communicate over HTTP(S) if you feel like it.

You need a version of your application in the browser anyway, otherwise how will people start deriving value from your application and want to install it?

Good question. How does Photoshop sell so well? It has no browser version.

Just make good software.

Perhaps because it's software for professionals(1), for deep work(2), that is clientside resource-limited(3), with a decades long legacy of brand recognition(4) from before anything resembling the modern web, with legacy native code on multiple platforms(5) by a company that has 4 gorillion senior programmers(6) and whose core business is clientside software and not another service(7).

Thunk

And you have the chance to make a new one. Don't waste it on the web.

 

This is something I've debated quite a bit. One of the problems is that businesses started to settle for "good enough" and relied heavily on better hardware. At the same time, developers started to get lazy and came up with solutions that just worked first. Most companies don't care about application performance, especially if it is good enough and they have a huge backlog (which is very common).

The first question that I ask at the beginning of every interview is:

How can you swap two variables without using a third one?

Usually, those who have no idea will have an interesting reaction or get into an argument about the validity of the question. If they don't answer it, they end up learning something new.

 

At the same time, developers started to get lazy and came up with solutions that just worked first.

This is the big difference between coders and engineers, I guess. :-)

How can you swap two variables without using a third one?

I admit I had to use Notepad for that - I never had to solve this problem in my programming career. Ha! But it can be done in three elegant lines of pseudo-code. Thank you for the question, really.
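For anyone curious, here is a sketch of the classic answers, written in JS for the sake of this thread (function names are illustrative; both variants assume integer inputs):

```javascript
// XOR swap: three assignments, no temporary variable.
function swapXor(a, b) {
  a = a ^ b;
  b = a ^ b; // b is now the original a
  a = a ^ b; // a is now the original b
  return [a, b];
}

// Arithmetic variant (beware of overflow in fixed-width integer languages):
function swapAdd(a, b) {
  a = a + b;
  b = a - b;
  a = a - b;
  return [a, b];
}
```

In day-to-day JS you would just write `[a, b] = [b, a]`, which also uses no explicit third variable; the bitwise and arithmetic tricks are mostly of interview and embedded-programming interest.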

 

In a real-world situation where this would be needed (i.e. there is a performance issue that needs to be resolved), my first inclination would be to see if this can be addressed by modifying the higher-level architecture, rather than needing to resort to low-level mathematical trickery.

 
 
 

Yes. No magic involved.
You could scare kids with the question, not software developers.

Move on.

Nothing to see here :D

 

It is a complex question, because software became more complex: the number of features increased, and with all that the size will increase too.
I have used computers since Windows 95/98, and one application that was always present, making for a comparison between then and now, is Microsoft Word, or any Office product. From then until now I haven't noticed an increase in initialization time, therefore both hardware and software evolved together in this little example.
I don't know... it is really something we could think about for weeks, not just one night...

 

software became more complex: the number of features increased, and with all that the size will increase too.

Does it, though? What is the one "complex" additional "feature" that justifies the Slack text chat client being 100 times the size of a decent IRC text chat client? Or does the Atom text editor really have more features than an operating system from the 90s?

therefore both hardware and software evolved together in this little example.

Is the same performance on much faster hardware really an achievement worth having?

 

The editing part of Atom isn't 100s of MB. The actual useful part of any Electron app isn't 100s of MB.

So the issue is that Electron is bloated, right? What if it had been a shared library instead? Then you could use Slack, Spotify, and Atom/VSCode while paying that bloat cost only once. That's much better.

Or what if there had existed a GUI toolkit/lib that ticked all the boxes that Electron does: 100% cross-platform. Unified appearance everywhere. Rapid and easy development.

Electron's popularity is a sign of failure for the existing GUI toolkits. It would never have taken off if GTK or Qt had been able to compete on the same terms.

It took off because it enabled web developers to write desktop applications without having to learn a new language. It still has its problems.

A shared Electron library would add another: backwards compatibility.

My point of view on Electron is purely as a user, although a more knowledgeable one for being a developer, and to me its performance is the main issue. For my main development tasks I use IntelliJ IDEA, but I use VSCode for some tasks I find easier there, and to keep things like saved API responses without creating files in the project. Okay, I use it and then I leave it open there, since I will need it at some point later. When the time comes and I summon the window, it takes a bit more time to come up than I think is really necessary; it takes less time to start it again than to come back to it later. With Postman it is even worse: I literally have to close and open it again to make it usable. I found out later that all of them have Electron in common. I have read that Electron has memory management problems, and my own experience tells me it is true.
IntelliJ takes longer to start, but once it is done, the performance is constant; I can keep it open the entire week without worries. For an IDE and development tools, this is very important.

Okay, I use it and then I leave it open there (...) I can keep it open the entire week without worries.

That's fine if your machine has enough "free resources" and you don't care about the power usage (and the environmental consequences). But some day your IntelliJ will require more resources than your computer can offer - and then? Time for a new computer, scrapping the old one?

When I say the entire week, I mean that I use it all day, suspend the notebook at the end of the day, and come back to it in the morning every day; on Friday I shut down the notebook, and we are back on Monday. It uses 1-2 GB of memory; I bet Chrome uses more or less the same, and I don't think VSCode could do any better (and with fewer features) if I used it to develop my apps, so... Well-managed 2 GB is better than badly managed 2 GB.

The problem is that you consider 2 GB acceptable for any application.

I didn't say I consider it acceptable for any application; I'm talking about IntelliJ IDEA, arguably one of the best and most feature-rich IDEs. I don't think Chrome's memory usage is acceptable, for example. It is a matter of give and take. We cannot go back to Windows 95; things evolved and accumulated, and applications will get bigger and consume more memory. Whether that is fair or not is a case-by-case issue.

We cannot go back to Windows 95

And this is the core of the discussion: Why not? Why do we always need to make software fill all resources? What's wrong with efficiency?

 

I suspected you were talking about Electron-based apps. If, to deploy a simple serverless function, I have to ship almost 100 MB of Node libraries along with it, I cannot imagine what those apps need and how big they get...

If the software did much more than before, yes. If not, then it's inexcusable indeed.

 

If Moore's law hadn't died I wonder if we'd be having this conversation?

There's a reason that "necessity is the mother of invention" is a saying. The biggest reason why programs back in the day had tiny footprints or performed well on slow hardware was because they had to - there was no other option, and so they found a way. Especially when examples like the Apollo program are brought up: they were trying to accomplish something that the human race had never done before. Not exactly comparable to a web site where people share cat videos. You make a mistake in the former and people die while the world is watching. In the latter? A few people are annoyed, and if they hit F5 it'll probably work.

 

The biggest reason why programs back in the day had tiny footprints or performed well on slow hardware was because they had to

And what is the reason to have giant footprints today? Do they also have to?

Software that "probably works" is broken for me.

 

They don't have to be large. But optimisation isn't free. I can write a bloated program in one hour that uses 1 MB. In one day I can write a functionally equivalent program that uses 1 KB. 10x the time for 1000x the benefit. Sounds amazing! Except, what if you have 1 GB available? At that point, MB vs. KB seems like a rounding error. Why wouldn't someone develop 5-10 apps instead of one?

Unless your position is that being efficient has zero cost?

Unless your position is that being efficient has zero cost?

Generally speaking, everything has a cost; the main question is who pays it. In your example, the developer choosing to do 5-10 apps instead of one does so by valuing their own time more highly than other people's computational resources. Sure, those extra costs may be "rounding errors", but those compound.
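To put a rough number on how those "rounding errors" compound, here is a back-of-envelope sketch; every figure in it is a made-up assumption for illustration:

```javascript
// Hypothetical numbers: 1 MB of avoidable bloat, shipped to a million users,
// re-downloaded once a week for a year.
const extraMBPerLoad = 1;      // avoidable bloat per download
const users = 1_000_000;       // audience of one moderately popular app
const loadsPerYear = 52;       // one uncached refresh per week

// Aggregate transfer caused by that single "rounding error"
// (using 1 TB = 1,000,000 MB):
const totalTB = (extraMBPerLoad * users * loadsPerYear) / 1_000_000;
console.log(totalTB + " TB/year"); // 52 TB/year, versus one developer-day saved
```

One megabyte per user is invisible; fifty-two terabytes of yearly transfer, paid for by users' bandwidth, batteries, and data caps, is not.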

Sure, those extra costs may be "rounding errors", but those compound.

Which is true for both computational resources and developer time.

Of course it is. The question is really: do we value the computational resources of the many, or the time resources of the few?

This is why I have always found it problematic to call software development "engineering". I don't think any engineer would be happy with, say, a 10x increase in performance/features at 1000x the required resources, yet in software people seem to think this is totally acceptable.

Cars on the other hand constantly get new features, improved safety AND decrease their resource usage (or in other words increase their mileage).

Engineers also constantly deal with trade-offs in effort vs. ROI: material A is 10x as strong and/or durable as material B, but for this use case the durability of material B already covers the buffer factor, so there is no point in using material A.

Cars have progressively reduced their resource usage because:

  • The resource is scarce enough for its cost to factor into the equation
  • Using the resource itself has an ecological cost, so legislation requires optimizations

Fair point, though it seems that for most software performance is not taken into account at all anymore, which is not a trade-off.

And I hope you're not trying to imply that wasted computational resources have no ecological costs?

Yeah, I do agree on the general point, that we should also focus on performance, and not do pointless/suboptimal things when it takes little or no extra effort to do things better.

And indeed, wasted computational resources have an ecological cost, particularly when we're dealing with calculation-intensive tasks like simulations on supercomputers and things like Bitcoin (though there, the inefficiency is supposedly a feature). For the most part, though, this kind of discussion tends to revolve around (clever) low-level micro-optimizations, while the need for (more transparent) architectural optimizations is left out of the discussion.

Believe me, I'm the last person to promote useless micro-optimizations. A while ago I posted a disassembly example in a comment here, illustrating that any reasonable C compiler will perform them (and many others that are much harder to do by hand) anyway, so one should always strive for clarity in the code first.

I do however take objection to some recent trends in the industry, especially in regard to technology and framework choices, since I believe that many of them are not at all motivated by needs, but by the availability of cheap and easily replaceable labor.

Overall I think we're on the same page, thanks for a productive discussion and have a nice day. :-)

Yes, I think we're in agreement here, on both the topic and the productive discussion, so a nice day to you as well :-).

 

I spend a large part of my time speaking, mentoring, and writing, trying to tune people into this fact. We tend to have this mentality that, if the space is there, we must occupy it with our software.

We're already reaping the whirlwind on that habit, since the average computer is now annoyingly, if not unusably, slow after the Spectre/Meltdown patches. If we had coded for reasonable efficiency, those patches would have made little to no difference in performance for the average user.

If you think about it, our software doesn't actually do that much new. I like to use the example of video editing software. In terms of features and workflow, little difference exists between early 90s non-linear home video editing software, and modern non-linear home video editing software, yet the modern software is many times more demanding of resources.

Whenever we waste memory and CPU cycles on stupid things - grotesque abuse of dynamic allocation, habitual use of double when we barely needed the precision of float, overdesigned interfaces that focus more on effects than usability - we have wasted resources that could be used for true innovation. I like to tell young game designers: if you waste your resources on stupid things, those are resources you aren't giving to improved graphics and innovative responsive gameplay.

In other news, we're about to hit the wall on Moore's Law, but with Gates' Law still very much in play, we're about to be outperformed by your average 256 MB Windows 98 PC.

Oh, wait, we already literally are.

 

But did you edit 4k or 8k content back in 90s? 🤔

 

That shouldn't translate into a five-minute program load time on an 8GB computer (seriously), a 30GB install size, or 2GB of RAM usage at idle.

I have to agree with this.

I often hear a similar comparison made with web browsers: in the past they did one thing, open web pages, but in reality they do way more than that now. Streaming content, development tools, payments, security, and more.

There's probably a reason why the amount of resources consumed by the software you listed keeps increasing, but it has probably increased more than it needed to.

 

Adding a very quick note here: The Apollo 11 example is pretty interesting. That effort was massively about optimizing code to deal with very limited resources. A lot of the current development technology stack is also "optimized" for working with limited resources, but the limits are different. Looking at our own environment, I see that by now we can handle server machines, workstations, bandwidth, ... pretty well. Not that they are inexpensive or there for the free taking, but managing them isn't much of a problem compared to something else: there simply aren't anywhere near enough people around who are skilled, qualified, and trained well enough to write code for these machines that does something meaningful.

We have a market with increasingly demanding customers. Optimizing for the kind of "efficiency" that lets developers build meaningful applications faster is, these days, far more challenging than writing applications optimized for runtime performance, given that most of the computers sold today (from servers to smartphones) are by far too powerful for the average user's use cases anyhow.

(Note: I thoroughly dislike the whole messy and fragile JavaScript tool chain. And I am pretty much "disenchanted" here, as well, seeing that we didn't come up with a better "developer productivity" tool that works cross-platform and on "the web" without relying upon this whole stack...)

 

This is because we use high-level languages where CPU and RAM are totally abstracted away.

 

That's the thing with abstractions: they have their costs, but at the same time they reduce the effort of resolving lower-level problems which enables us to address higher-level problems.

Computers as a concept work like this:

  • we create devices to do calculations in a mechanical way, so we don't have to do it manually
  • we reduce the mechanics to electron movements so that we need to move less stuff around
  • we created bytecode so that we don't need to manually move electrons around
  • we created low-level languages so that we don't need to manually create our bytecode all the time
  • we created high-level languages so we don't need to continuously think about memory allocation and memory management, and can instead direct our capacity for cognitive load towards logic and architecture
  • we created libraries, so that we can subtract lower-level logic from our cognitive load and focus more on higher-level architecture
  • we created frameworks, so that we can subtract boilerplate higher-level logic from our cognitive load and focus more on business-specific architecture
 

It's a bit more subtle, but it's certainly part of the problem. From another angle, abstracting the hardware resources is the way to get super portable code, so there's a tradeoff even if you don't consider developer productivity.

 

Just to clarify, I'd argue it's the behavior encouraged by high-level languages that's more dangerous than the language's performance characteristics. As a very small example, some garbage collectors can have negligible overhead, but kill you when you hit bad edge cases. If you didn't know that, you'd never think of it; and a garbage-collected language does exactly that, encourage you to forget the garbage collector exists.

 

When C came out it was considered a high-level language too. ;-) That aside, Haskell is very high-level and gets close to C performance (GHC compiles to machine code, has full type erasure like C or C++, etc.). I think it has more to do with the fact that people got complacent and assumed that Moore's Law would work in their favor forever.

 

But by that logic, we should all technically be programming in assembly. Find your target machine's instruction set and program using that.

When C came out it was considered a high-level language. You would use C, but never for anything where speed mattered. In time that turned around. Same with C++ when it came out.

There are PLENTY of systems you use every day that have great performance but are using high level languages such as Python, C#, and more.

The notion that a high-level language can't produce optimized code is outdated and, in my mind, should be thrown away.

 

One thing more - here's a link to a transcript from the recent FFS Conf in London...

docs.google.com/document/d/e/2PACX...

I disagree that we're going to be saved by the server side code taking over the frontend space, but it's a good read.

 

ahahha the transcript cracked me up, it must have been a really good talk:

FLOOR: I don't have a question, I have something crazy for you. If you type in create-react-app it installs more package dependencies than I have lines of Go code in my server. I leave you with that.

AHHAAH it's definitely true.

Frontend dev is way harder than before. I hope it's just a cycle. Because there's no real reason for all of this complexity to be surfaced back to the developer.

I don't share the transcript's opinion on CSS. I find it funny that every single developer hates Electron:

I'm going to add to the rant as well and say you missed out on the glorious pinnacle of modern desktop designer in Electron where we bring the simplicity of the browser to replace the joys of BBE for desktop development

Judging from the comments of the people who spoke up it must have been also a cathartic talk :D

There's a lot of "it was better when I was young" talk, which I always try not to buy into, because it's not always true by definition. We're the Facebook generation: move fast and break things. Facebook realized its motto was crappy, and software developers are starting to realize it as well. It'll take time.

I like the statement that someone gave against the current:

I'm going to go against the rant and say that I think the browser is saving our industry. It is allowing us to deliver software and democratising software delivery in an environment that's increasingly locked down. There's more and more app stores and more and more restrictions on what you can run on your machine, and there seems to be a trend towards stopping you running software on your machine, whereas the browser is a way that we can actually bypass that and deliver and share software.

I hadn't thought about this, but it's true. App stores have committees behind them and/or entrance fees, so getting your "utility app" to users is certainly harder than before. BUT it's also true that the bar is higher now: you can't just slap two ugly buttons on a container and call it an app. Nobody will ever use it unless there's no alternative, which is less and less the case. The ability to release a new version (on the web) and have all your users see it at the same time is also very, very important.

 

Bloat imposes an opportunity cost on users: if your app were half the resource hog, the user could do twice as much, more or less.
But of course that means using someone else's app, not yours, so there's little incentive for a dev, right?

 

That linked blog post was one of the most enjoyable things I've read in a while.

Some things I didn't see anybody mention yet (sorry if you did):

  • The familiarity factor tends to trump resource efficiency, and even ease of use, quite a lot when companies choose what to build on top of (libraries, frameworks, platforms, etc) or with (tool chains, programming languages, etc).

  • In the quest for abstraction, modularity, and whatnot, we've made quite a lot of tools that are easier to use than they are efficient to run.

  • Abstraction tools are just so much better nowadays that we get lulled into thinking all of them are as good as the best ones. Optimizing compilers can, in a lot of cases, outperform assembly programmers; garbage collectors add negligible penalty in a lot of use cases (contrived examples, don't eat me). But just because they're not a problem a lot of the time doesn't mean they can't screw you over some of the time. We tend to forget that more and more as the tools get better and better.

On a related note, one of my college teachers shared a quote with us that went something like this:

The only problem that can't be solved with a layer of indirection is too many layers of indirection.

On the flip side though, I do believe there are new people entering the field who are very interested in efficiency, and the screeching halt of transistor miniaturization will certainly force everybody to optimize more.

 

I have ranted about this in my own way too.

I wonder about how to fix this, I feel like it's a cultural thing.

Sometimes things take off; there is a segment of the web development community that really zoomed in on time to first byte, developing all sorts of libraries and frameworks. I think it's admirable, but it still often results in pushing a huge amount of data and ends up missing the performance mark overall.

Another example of something that helped change the culture, which I mentioned in the article, was CSS Zen Garden. That website let people show off how good their CSS skills were. But what it really demonstrated was that you can separate presentation from markup, which at the time was huge.

What's the CSS Zen Garden of web performance? Could that be what's needed?

 

I feel the same way...

My personal bugbear is the use of large, 'industrial', 'enterprise' technologies for serving what is, essentially, a document full of text. Such problems do not need to be solved with a large frontend stack, or a large server framework. But the conventional wisdom is to lay it all on thick and greasy at every level, and then try to make it act like a normal web browser.

code of conduct - report abuse