johnfound

Your powerful dev computer is your weakness.

Developers usually like to use powerful computers.

They can give many reasons for this, but the truth is that it is easier to work on a faster computer, and developers are usually well paid enough to afford a high-end machine. Or their employers are.

But, come on! Today, even the slowest low-end netbook has huge (I mean HUGE!) computational power.

If we, the experts, can't set up our processes to work great on this extremely fast hardware, how can we ask our users to run our software on their slow, old, cheap machines?

In addition, my experience clearly shows that programmers improve their software only until it works acceptably fast on their own computers.

Beyond this point, the programmer will cook up a thousand excuses and explanations, but will not optimize further.

They will claim it is impossible, will talk about "premature optimization", and will warn about very expensive and difficult support.

The truth is that optimization is a tedious process, and when you can't feel the speedup yourself, you will never understand why you should do this work instead of programming something fun.

So, working on a slower machine, the programmer will always create software of higher quality. This effect works automatically and unconsciously.

That is why, for programming, I use the slowest computer I was able to buy that still has more or less modern hardware supporting the latest CPU extensions.

It is an ASUS X102BA netbook with an AMD A4-1200 CPU (2 cores at 1 GHz), 4 GB of RAM, and a 500 GB HDD.

It is running Manjaro Linux with XFCE.

When I am in the office, I connect an external HDMI display (2560x1440) and a good mechanical keyboard.

In addition to the strong positive effect on the quality of my programs, the netbook has another advantage: it is very portable, small (266x185x23 mm) and lightweight (1.1 kg). I carry it with me everywhere and can start programming at any moment.

What do you think about this point of view?

Top comments (36)

Austin S. Hemmelgarn

I think the issue isn't that developers use really powerful computers, it's that they don't test on cheap, low-end systems anywhere near as much as they should.

Being able to work efficiently is critical these days for software developers, full stop. It's the whole reason IDEs exist in the first place, it's why stuff like make exists, and it's a significant part of why high-level languages even exist (not having to worry about things like memory management frees up mental capacity for handling the actual project).

I encourage you to try building webkit-gtk locally on that laptop you have, I suspect it will take multiple days, if not possibly weeks (it takes almost two hours on the laptop I'm typing this from, which has a Core i7-7700HQ CPU and 16GB of DDR4-2400 RAM). While webkit-gtk is a pathologically bad case of problems with build efficiency, it's really not unusual for big projects to take an hour or more to compile even on big fancy development machines (same machine I mentioned above, GCC, LLVM, GHC, IcedTea, Rust, Boost, LibreOffice, Firefox and Thunderbird also all take almost an hour to build, if not more). If you can't build your own software in a reasonable amount of time, you can't test it efficiently during development, and quality goes down significantly.

johnfound • Edited

I think the issue isn't that developers use really powerful computers, it's that they don't test on cheap, low-end systems anywhere near as much as they should.

This is true, and the reason is that the whole process of refusing to optimize further is totally unconscious. Yes, the programmer can be forced to test on a slow machine. But even then, he will accept as "acceptable" results that he would never tolerate on his own working computer.

And after declaring the speed "acceptable", nothing can make him work on it further.

Actually, the goal of my post is not to make everyone downgrade their computer (that would be great, but I am realistic), but to underline exactly how unconscious this process is, and in this way to help people recognize the effect and maybe counteract it intentionally.

I encourage you to try building webkit-gtk locally

Well, I have never compiled WebKit, but recently I worked on a patch for WINE, and the full build took something like 10 hours. So I understand what you mean. Nevertheless, the patch was committed successfully, and now there is one bug fewer.

Austin S. Hemmelgarn

It's not as purely unconscious as you seem to think, at least not for real world development tasks.

The thing is, as a general rule, optimization is hard (yes, there are some trivial cases, but they are in the minority). It's also very difficult to determine reliably if any given piece of code really could be more efficient than it already is. Together, this means that once you meet the minimum performance requirements for realistic usability, it's often a better use of your time to work on things that are guaranteed to improve the software than it is to chase that extra few milliseconds of faster load time, especially when the latter is far more likely to introduce new bugs in existing functionality than the former.

Now, where that minimum required level of performance lies is not easy to figure out. It varies from project to project, sometimes wildly, and often over time too (for example, if you're in a saturated market, the minimum level of performance you need is 'roughly as good as your most popular competitors'), but it's always there, whether you like it or not.

Jean-Michel Plourde

I totally disagree with that kind of argument. As a software developer, it is my job to ship good software as fast as I can. Why on earth would I purposely use a tool that slows me down in the process? I want the best tool I can afford so that I can concentrate on delivering, not on cursing and patching.

There are tools, metrics, and theories available to make sure what you ship works on slower machines. There is no way I'm compromising my efficiency for something that can be tested otherwise. I don't mean you need to blow $6k on a MacBook. I'm using a T470 with an i5, a 500 GB SSD, and 16 GB of RAM running Fedora 29 as my daily driver, and it's plenty to develop stable, fast, low-resource-cost software.

johnfound • Edited

I am sure the IntelliJ and Atom developers share the same opinion.

Mihail Malo

Exactly, and their target audience is developers, meaning they have even looser performance standards.

So how can you hope to be productive when you need to use those?

johnfound

Exactly, and their target audience is developers, meaning they have even looser performance standards.

Don't you think this is a kind of vicious circle that needs to be broken somehow?

Jean-Michel Plourde

In my opinion, no, unless there is negligence. Companies providing tools should not hold back features on the premise that users need a powerful machine (there is a limit, yes, but in general IDEs use resources decently).

Mihail Malo

Sure, it's a positive feedback loop. All I'm saying is they're acting rationally.

More rationally, at least, than boycotting those tools would be for developers whose productivity at work would be severely impacted by doing so.

Kyle Agronick

Making sure your software works on low end devices is important but I don't think crippling your development platform is the way to get there.

I have issues with swapping and running out of memory with 16 GB of RAM. I don't know how I'd get by with 4. Developing requires running a lot more software than your end users will. Builds already take a considerable amount of time, and I'd rather not slow them down more.

pndyjack

Yeah, I feel the same. But it also depends on what kind of software we write. If running two instances of Visual Studio, one of Android Studio, one each of Firefox and Chrome, and SSMS is your requirement (I've had to run various combinations of these apps), then 4 GB won't cut it. But the author's laptop would be plenty for SSH'ing into a server and using terminal-based programs.

Stefanos Kouroupis

Yes, I would love to work on a slow machine, so that when I have to compile a humongous project I can mess around longer.

When I joined my current company, they gave me an old, slow laptop. Our main project took 35 min to compile. A few months ago we switched to new, powerful laptops, and now it takes 3 minutes... guess which one caused more frustration.

Not to mention that no client will ever have to go through that process. I deploy my backend apps on AWS micro instances, so yes, they need to be optimised and fast. I deploy my Angular front end in S3 buckets; the client never goes through the pain of building the project, so he doesn't necessarily need a fast machine.

johnfound

Our main project took 35 min to compile.

Interesting, but all developers complain about slow compilation and loss of productivity on a slow computer, yet somehow forget about incremental builds. If your project is correctly structured, you have to build it in full only once. All future compilations touch only the modules you changed and the modules that depend on them.

So, if you are working on a slow computer and really, really want to do the job, you can always set up your build environment properly and build fast.
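To make this concrete, here is a minimal sketch (in Python, with purely hypothetical module names) of the timestamp check that make and similar tools perform: an object file is recompiled only when it is missing or older than one of the sources or headers it depends on.

```python
# Minimal sketch of an incremental rebuild check, in the spirit of make.
# Module names and dependency lists are hypothetical.
import os
import subprocess

MODULES = {
    # object file -> sources/headers it depends on (first entry is the .c file)
    "parser.o":  ["parser.c", "parser.h"],
    "backend.o": ["backend.c", "backend.h", "parser.h"],
}

def needs_rebuild(target, deps):
    """Rebuild only if the target is missing or older than any dependency."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(dep) > target_mtime for dep in deps)

for target, deps in MODULES.items():
    if needs_rebuild(target, deps):
        subprocess.run(["gcc", "-c", deps[0], "-o", target], check=True)
    else:
        print(f"{target} is up to date, skipping")
```

After the first full build, touching only backend.c recompiles only backend.o, while touching parser.h recompiles both objects.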

Even if you are working on a module that all other modules use, you can create a small test project that uses this module and work with that, without building the whole project at all.

There is always a solution if you want to do the job. And there is always an excuse if you don't.

Stefanos Kouroupis

This is indeed true, and it can work fine for a few hours, but in real-world scenarios it doesn't easily hold. If you work on a specific branch all alone without being bothered, it works. When you have 15 people working on the same project and you need to rebase every couple of hours, plus Visual Studio's untrustworthiness, you either need to keep track of other people's changes and rebuild specific modules, or rebuild the whole thing. Unit tests are again a good way around it, but the same problem applies: rebuilding the test project takes time. I am not complaining, but it's an issue I'd rather not worry about, instead of going "oh my *** I need to rebuild again... and I just made coffee".

Ahmed Khaled • Edited

I totally agree with you.

A few months ago I read an article about the relationship between text editors and interview acceptance rates. It showed that most people don't use vi/emacs (we know why), but vi/emacs users have a higher acceptance rate. (Please, if you have read that article, reply with the URL, because I can't find it.)

The explanation is simple: most vi/emacs users write code to solve their own daily problems instead of just installing plugins by clicking.

The same goes for GNU/Linux or BSD versus Mac or Windows.

anpos231

You made some valid points here, but it's not always the case.

I am a full stack developer.
When I work on the back-end, I don't have to worry THAT MUCH about performance. In fact, if it runs on my machine, then it will definitely run on a server that is 100 times faster.
But when I work on the front-end, I do put a lot of care into making websites as responsive as possible. Chrome provides tools for that: you've got the Performance tab, where you can simulate a 6x CPU slowdown and measure how well the website behaves. You can also test your website on a separate smartphone or tablet.
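If you want that check automated rather than clicked through by hand, the same throttling can be driven from a script over the Chrome DevTools Protocol. A minimal sketch, assuming Selenium with chromedriver is installed (the URL is only a placeholder):

```python
# Sketch: apply the same 6x CPU throttling the DevTools Performance tab offers,
# but from a test script, via the Chrome DevTools Protocol.
from selenium import webdriver

driver = webdriver.Chrome()
# CDP command behind the Performance tab's CPU throttling control
driver.execute_cdp_cmd("Emulation.setCPUThrottlingRate", {"rate": 6})
driver.get("https://example.com")  # placeholder URL; run your measurements here
driver.quit()
```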

Having a powerful development workstation allows us developers to accomplish our tasks faster. For example, you can use powerful IDEs (like IntelliJ). Compiling code is also much faster.

In addition to the strong positive effect on the quality of my programs, the netbook has another advantage: it is very portable, small (266x185x23 mm) and lightweight (1.1 kg). I carry it with me everywhere and can start programming at any moment.

Are you coding while commuting?

johnfound • Edited

When I work on the back-end, I don't have to worry THAT MUCH about performance. In fact, if it runs on my machine, then it will definitely run on a server that is 100 times faster.

My servers are usually not so fast. Right now I am using one CPU core at 2.4 GHz with 1 GB of RAM. In addition, the back-end is exactly the kind of software that needs to be optimized to the fullest extent, because the back-end has to serve multiple users simultaneously. So, even if the server is 100 times faster than your working computer, when there are 1000 (or 10K) clients connected, performance matters.

Having a powerful development workstation allows us developers to accomplish our tasks faster. For example, you can use powerful IDEs (like IntelliJ). Compiling code is also much faster.

If the developers of IntelliJ had used slower computers for development, this IDE would be much faster now. :D

anpos231

If the developers of IntelliJ had used slower computers for development, this IDE would be much faster now. :D

Reminds me of Dev-C++. That thing had auto-completion faster than anything else, really.

Kasey Speakman

There is some merit in what you are saying, but it only makes sense to a point. For example, should you dev without any screen -- instead relying on a screen reader -- to make sure that you produce accessible apps? Assuming you do that, how are you going to know if the app looks as you intended when someone does have a monitor?

Or if you produce code on low-powered devices, how do you know if it will scale up in capability with higher powered devices? (Or that it will even work anymore?)

Rather than focusing on constraining your dev machine artificially, it's probably better to test the end product on the environments (along with their constraints) you are targeting.

johnfound

You are right to some extent, and I can think of many other examples and counterexamples. But it all depends on what you as a developer really want. If you want to make a really accessible application, especially one targeted at blind people, and want to make it in the best possible way, you definitely should have a visually impaired programmer on the team.

You may have a great QA team that tests the program and reports it is slow, but if the developers think (subconsciously) that the program is fast enough, it will never be fixed. The development will slow down, and the developers will find a way to persuade everyone around them that it is impossible to make the program faster. The release date will come, and the program will be released slow.

So, if you can convince the developers that the program is slow in a way they really believe (not simply agree with), they will make it faster. If not, well, the next slow program will be released.

Kasey Speakman

Seems like there is a bigger problem with the organization if no one will listen. Low-powered dev machines won't help that.

johnfound

Oh, they will listen carefully. And they will agree. But only in words.

Rémi Lavedrine

I totally agree on the fact that

In addition, my experience clearly shows that programmers improve their software only until it works acceptably fast on their own computers.

I saw that when I was doing mobile applications. Whether it is on iOS or Android, developers tend to have the highest-end mobile available and develop on it. And when it works properly on the latest OS version on the latest high-end device, they tend not to optimize any further.

I would strongly recommend that developers have a low-end mobile device and put into the Definition of Done (I assume that you work using Agile methodologies) that the software must work properly on the low-end target device.

Phil Ashby

It's an interesting position, forcing people to think about performance in the target environment all the time, but for me this seems a rather simplistic approach, as the target environment often varies according to cost, especially for SaaS solutions. Video games, military systems, and satellites tend to have increasingly tight constraints, where it becomes important to design for performance earlier, and where professional development would use appropriate simulation and test environments. Even in our predominantly SaaS deployments, we try to create representative test environments to understand behaviour and pick up opportunities to reduce our costs or improve user experience.

johnfound

but for me this seems a rather simplistic approach

And it is. But it works for me.

Notice that I use only tools that perform fast enough not to slow down my working process.

For example, my typical build of approximately 500 Kloc of source code takes something like 3-4 seconds.

My IDE runs instantly, and my keyboard repeat rate is set to 50 characters per second (with a 250 ms initial delay) to allow faster navigation in the source.
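For reference, that repeat rate is a one-line setting on X11; a tiny sketch, assuming an X session with the xset utility available:

```python
# Sketch: set X11 keyboard auto-repeat to a 250 ms initial delay and
# 50 repeats per second (assumes the xset utility is present).
import subprocess

subprocess.run(["xset", "r", "rate", "250", "50"], check=True)
```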

The same goes for the version control system and every other tool I use. The only poor performers are the graphics editors: Inkscape and GIMP are slow as hell. But well, I am looking for something better and will find it some day.

Tony Orozco

...programmers improve their software only until it works acceptably fast on their own computers.

Dude, I'm with you on almost everything. A lot of people on my team, when doing UAT or replicating bugs, always come back with the old "it works on my machine". But how could you get a full dev environment (IDEs and other tools) to work on a machine like that? How do you test with a Chrome window with devtools open? You would be out of memory in an instant.

johnfound • Edited

I am using Chromium with devtools open, and actually it works pretty well with 4 GB of RAM.

As I already said: we are the experts. We need to be able to set up our development tools to work properly on any computer.

If the IDE needs too many resources, we have to press the IDE developers to make it less resource hungry. After all, we can create our own IDEs and stop using bloated software. There were powerful IDEs when computers ran at 200 MHz. Modern IDEs are not more powerful. They are simply slower.