TDD is the only way to create code that is maintainable and extensible. Unless, of course, what you are making is utterly simple.
Writing code is a science, not an art. The problem is that humans have spent less than 60 years writing code, and we do not completely have the hang of it yet, which is why I hold point 1.
And when some wise guys at Cambridge said "hey, we've invented a replacement for COBOL called CPL", everyone said "We don't need yet another programming language!"
(CPL -> BCPL -> C -> C++/ObjectiveC -> Java/C#/Swift/Kotlin/Go/Scala/...)
So, I've added another of my unpopular opinions:
We will always need Yet Another Programming Language
Because as Adrián said, we're pretty new at this, and so we need better tools. We're still in the software stone age.
Thank you for the quotation. I have the same opinion. We don't need too many more programming languages and frameworks.
The same thing happens with programming languages as with mobile phones. The only reason new ones are created is to make the old ones obsolete and try to take control of the market.
Can someone really tell me why Ruby was needed at the moment of its creation? What does it provide that could not have been added to existing languages?
The principle of "Don't repeat yourself" that science respects is not followed most of the time.
It seems that the theory behind software development doesn't matter anymore, and there is a global hype around languages like JavaScript, which is dangerously becoming the cornerstone of apps without being very mature as a language or particularly optimal for execution.
It is as if some new tech let us recycle garbage into a new material, "javascriptolite", and there were a boom in sales of cars with a "futuristic" look, low weight and a hammer toolkit to shape them yourself into whatever shape you like.
Of course many skilled people will build a new market out of javascriptolite. But some day people will start reading physics and mechanics again.
Or would somebody build a spaceship from that material?
I tend to agree that TDD is a bit excessive, but I strongly believe in a BDD approach, which is very common in Domain-Driven Design. Writing failing test cases that describe what use cases, policies and rules a system should have is much more feasible at the start of a project than knowing whether "computeXYCoordinates()" returns false :)
Writing code is a mixture of both!
It's science!
Computer "SCIENCE" is theory. The way that mathematics, physics and information work together to "determine how many users this application will be able support under peak hours" is a bit of physics, math, and probability all in one!
A very common feature in science is that it's very hard (and sometimes impossible) to prove that something is correct, so we often deem things correct enough if we can't prove it to be false.
That's unit testing :)
Computer "programming/coding" is the practical application of such theory. It's also what leans more in the direction of....
Art!
There are 100s of ways of writing code to perform the same observable behaviour. It takes skill, practice, patience and empathy to create code that is timeless, well-designed and readable for future developers.
That's an unfortunate accident of the English language. It's not a problem elsewhere (mann/frau, uomo/donna, etc). Ask a computer and it would say "woman contains man so the former is more generic".
I think this isn't a fight we should be focusing on; there are much bigger battles to fight for gender equality. This kind of petty and surgical word-picking just brings a bad name to the whole issue, which so many feminists fail to understand.
I think the truth exists somewhere in the middle. I did a coding dojo with a brilliant manager a few years back on TDD, and learned a lot about the value of unit tests.
The first advantage is that it gives you a usable API (API is UX). It's all too easy to get tunnel vision and write an unusable API because you've never attempted to use it. The "doesn't compile" step in TDD makes sure that you are trying to use the thing before making the thing.
The other advantage is that it prevents you from solving problems that don't exist. The specific example we had was roman numerals. At the start of the session I felt a bit of dread: roman numerals are, superficially, pretty hard to parse. Working through "I" to "III" to "MCVIII" resulted in a complete solution that was far simpler than the one I conjured up before we started coding.
But, you'd never get anything done if you followed TDD strictly.
I think the middle ground is writing your code while visualizing the test. This ensures that you are writing units (which are easier to use and obviously easier to unit test).
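To make the roman numerals point concrete, here is a rough TypeScript sketch (my own reconstruction, not the dojo's actual code) of where test-driving "I" -> "III" -> "MCVIII" tends to land: a flat value table plus a single scan, far simpler than the grammar-style parser you might design up front.

```typescript
// A flat value table plus one scan; assumes well-formed input.
const VALUES: Record<string, number> = { I: 1, V: 5, X: 10, L: 50, C: 100, D: 500, M: 1000 };

function romanToInt(roman: string): number {
  let total = 0;
  for (let i = 0; i < roman.length; i++) {
    const current = VALUES[roman[i]];
    const next = VALUES[roman[i + 1]] ?? 0;
    // A smaller symbol before a larger one ("IV", "CM") subtracts instead of adds.
    total += current < next ? -current : current;
  }
  return total;
}

// Roughly the tests that would have driven it, in the order described above:
console.assert(romanToInt("I") === 1);
console.assert(romanToInt("III") === 3);
console.assert(romanToInt("MCVIII") === 1108);
```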
Couldn't agree more. TDD is workable when you're well staffed and have some discretion over deadlines. Otherwise it's an incredibly difficult sell from a business value standpoint, unfortunately. Especially if you're in an organization where certain delivery timelines are expected regularly
Most of the cost of software is after the initial build. TDD helps keep it decoupled so it's easier and cheaper to maintain in the long term. TDD software probably has better TCO.
I don't think you have to let software/coding consume your life to be good at it. It's ok if you don't code outside of work. It's ok to have a life outside of this.
I totally disagree. Being a well-rounded human with other hobbies and strong soft skills is better than a dude who codes 24/7, especially when that dude gets burnt out.
Objectively speaking, the captain autismo who codes 24/7 will always write better code than someone who only does it 9 to 5. We're not really arguing which is best for the human.
Yes, because the dude that codes 24/7 might be going into another field altogether in 3 years, while somebody that takes it slower might be in the game longer.
In my experience (more than 10 years), that is wishful thinking.
The people who code 24/7 on personal or open source things love doing this to the point of near obsession, and I know some who have been living that way for more than 15 years (yeah, I met them when we were teenagers and they lived glued to computer science books). At that point you just have to accept that you will never be nearly as good as them, and then be glad to be able to learn from them.
Of course you can, do not let anyone tell you otherwise. Most of the time when people bash PHP it's because the scenarios where it is the ideal language to use are very narrow.
Really, PHP as a language is so restricting that there are a lot of things you simply cannot and should not code in it. Additionally, the whole design for the typical use of the language - a collection of scripts that get evaluated from start to end for every web request, triggered by access through a web server - is so bad that it has led to countless incredibly misguided design decisions and massive numbers of hacks and other issues.
As an example, there's WordPress having all the modules deployed under the public folder, where reaching them gets you direct "access" to the code, so all you need to find is one ignorant developer's one bad module and scan the world for all installations of it to hack a big pile of websites. This is the legacy PHP has given us.
Add to that the fact that the language has countless really bad design choices that simply waste your brain power on useless things - does the function have underscores, does it take haystack before needle, and other such questions constantly waste your time when they could be standardized - and the language community simply refuses to fix their countless past mistakes, keeping "backwards compatibility" as some sort of gold standard that must never be sacrificed.
In short: Yes, you can produce code that is better than WordPress in PHP, but you can never be a good programmer while using PHP - you waste too much of your time on things other than producing good code.
In order to help junior devs grow you have to let them struggle. You can't give them all the answers, they have to spend time in the trenches finding the answers themselves with a little guidance.
I've been in the field of coding in general for around 4 years. Even though I know I have worked really hard, I've never done anything professional and I fear I will do horrible with an actual job.
Never done anything besides experimenting and personal projects. And I constantly feel like a Junior dev that has no clue what they're doing.
Especially since I've been focusing on the security aspects of things, I feel like I have barely scratched the surface.
First off, kudos to you for focusing on the security aspects of things! That puts you ahead of a lot of others.
I have been coding professionally for 6 years, and let me tell you, I have plenty of days still where I have no clue what I am doing!
The tech world moves so fast and evolves so quickly that even those who have been in the field for years have to constantly learn new things. Because we are all constantly learning, we are all juniors or newbies at one thing or another. The key is you have to become OK with being uncomfortable and know that it's just part of being a dev.
I also bet that because you have done lots of experimenting and personal projects you are probably pretty self-sufficient and scrappy, which are both GREAT characteristics to have as a dev. Don't be intimidated by the word "professional"; give it a shot and apply for a job! If you want some more motivation, read this post my coworker wrote.
Something I do when mentoring a junior developer is to tell them that if they get stuck on something, they should try and solve it by themselves for fifteen minutes (adjust the duration depending on the situation) before they ask for help. But if they’re still stuck on the same thing after these fifteen minutes, they have to ask for help.
I found it has many benefits. First, it makes it clear that it's OK to ask for help. Second, it strikes a balance between them interrupting you too often (which is frustrating to you) and not making much progress (which is frustrating to them, which is worse). Often, when they finally come for help, even though they'll feel that they were just stuck for 15 minutes, in reality they will have come close to the solution, and you can show them how they almost solved it.
Apple's Macs are way overrated for software development, not worth the money (at least for me) given the hardware specs of their computers. I feel like I only have to use one because they force developers to compile iOS apps on a Mac. Some people say they like its terminal because it's better, and they are correct: out of the box its terminal is very good, but you could set up your terminal on Windows or Linux to work like the Mac one, saving you hundreds of dollars and getting much better hardware.
I don't understand how Apple could've thought it a good idea to remove the escape key and not include at least one USB-A port.
I've used Macs for 10 years because I can't stand Windows, and I need some applications that aren't available for Linux. Pretty sure my next mac will be a hackintosh.
I like using them because I find them at a sweet spot between the freedom of Linux and the maintenance Windows needs. But I can totally see your point.
I've got a 2015 MacBook at home that I love developing with. But I wouldn't want to buy a current model, not with all the keyboard issues and lack of anything-but-USB-C ports. Year after year, Apple feels more like a luxury/lifestyle brand than a technology company.
Kudos to the ternary operator! I greatly appreciate your compactness from time to time! And don’t let those bullies tell you you’re hard to read. You are fine just the way you are. W?t:f.
I don't mind ternary operators for simple cases, but once someone starts putting multiple function calls in each part of the operator, it becomes difficult to debug.
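For illustration, a small TypeScript sketch of the difference (the type and helper names are made up): a one-condition ternary reads fine, while nesting calls inside each branch hides which path actually ran.

```typescript
type User = { member: boolean; trial: boolean; basePrice: number };

const applyDiscount = (base: number): number => base * 0.9;

// Fine: one condition, two plain values.
const label = (count: number): string => (count === 1 ? "item" : "items");

// Harder to debug: nested ternaries with a call in each branch hide which path ran.
const price = (u: User): number =>
  u.member ? applyDiscount(u.basePrice) : u.trial ? 0 : u.basePrice;

console.log(label(2), price({ member: true, trial: false, basePrice: 100 })); // items 90
```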
What people usually feel is not I.S., but they are simply realizing how little they know.
Which is totally fine and also important.
You can be a non-master in a field and still get paid to do it.
It doesn't mean your boss/followers/mom/dad thinks you're much better than you actually are.
I just hope folks don't take I.S. as an excuse to think they are good at something while they're not.
I wish my code was half as good as my expectations for how my code should be. My lattes are better than my code, so maybe I should make latte art instead of ascii art
I agree on some level with this sentiment. I can't speak for others, but for myself I often struggle with recognizing whether I'm suffering from Impostor Syndrome or if I'm just recognizing how little I actually know about something the more I learn about it. The more I learn, the more I realize how little I actually know and how much more there is to learn. I suppose this is the Dunning-Kruger effect in action!
A person thinking they have impostor syndrome to feel good about themselves by definition can't have impostor syndrome 😅
It is a debilitating trench from which climbing out is very difficult. All the people I know that I consider to truly have impostor syndrome are excellent at what they do and what they understand; their brains are just not able to accept their victories.
Yeah, I kind of wish that hadn’t been such a big part of my formal education.
I only did some CS and so much of it revolved around inheritance in Java that I found myself looking for opportunities to use inheritance in the wrong places.
Agile and Scrum are dumb. Daily stand-ups, planning poker, scrum masters and all that other stuff are a waste of time, money and office space. So far I have yet to see this pseudo-process help any team that isn't good in the first place get better and perform better.
Also Agile is the worst misnomer in the software dev world. There's nothing agile about Agile.
What I "love" is when you're in a retrospective and you're asked for ideas that would make things better, but when you state your idea you're shouted down because "that's not agile".
Also, the idea that the amount of effort required is equal amongst developers is nonsense.
One of the first things that goes out of the window at most companies that adopt a Scrum way of working is the agile part of it. But there are companies that do adhere to the agile manifesto.
Real Scrum Masters (not a person who used to have the title of project manager) serve the purpose of helping the team organize who does what and taking care of any lack of clarity on the stories.
Scrum is the name that Agile receives when applied to software development. Agile is getting feedback as quickly as possible about the decisions you make. I feel that if anyone works using the alternative, waterfall, then that person is the worst engineer they can be.
From my experience a good team keeps itself in sync naturally. There are tickets/issues, Slack, email, old-fashioned p2p talking. There haven't been many stand-ups so far where I learned something interesting or got synced up. Usually it's just a reiteration of the stuff that I already know, or something I'll get to know later during the day anyway. People just say what they did or are going to do. If it's interesting to me, I already know it; otherwise I don't need to hear it.
I think a team which lacks clarity in stories doesn't need a scrum master, but rather needs to learn more about the product they are building and talk more to their teammates. You don't need an extra freeloader on the team to help with that.
Agile and waterfall are not the only alternatives. I don't have a name for it, but another approach would be a "natural organic team approach" where everyone is just getting their own and common shit done, and where the communication and planning happen naturally. In the right setting it just works. I've worked in a company, believe it or not, where we had about 40 people working on the product and there was not a single meeting dedicated to project management. We did have meetings, but they were extremely rare, like one every 3 months, usually to announce something serious. Like a new future project, or that we're moving to a new building, but not to decide who works on which task. And in that company I and the whole company were amazingly productive, much more productive than in any other agile or process-less company I worked for before or after.
And in that company I and the whole company were amazingly productive, much more productive than in any other agile or process-less company I worked for before or after.
That is the point. You want a reproducible system to make any team productive, not leave it to chance.
IMO introducing this type of agile process IS leaving it to chance, as it has no influence on the result. Something else has to be done to improve things.
They're big business now and command enormous groups and conferences. No matter their merit, they're not going anywhere because people out there sell agile to companies and teams for a living.
By middleware frameworks I mean things like React, Vue, etc, that are a javascript/front-end interface for the backend; they definitely have a lot of great uses but I think there's starting to be this idea that you have to have one when most sites don't, actually.
I would say: good points. So it's more about the ecosystem than about the language itself. The above is probably also true of e.g. TypeScript, isn't it?
Because TypeScript can leverage most of what JavaScript offers, it has had a huge advantage over other languages. One minor difference is that it needs a bit more tooling, as it doesn't run natively and must be compiled to JS.
I've been coding for over 20 years now! (WOAH, do I feel old)
I've touched just about every resource imaginable under the Sun (too bad they were bought out by Oracle)
1) MVC is a terrible design pattern. The separation of concerns is absolutely backwards. It separates directly related items that would most likely be worked on together, simply because one is visual and the other is logical. Instead, we should go back to the time when code was organized by access and usage. The logical code and the visual template for a given URL should live in the same folder, not be separated.
2) These modern "URL routers" are slow, bulky, unnecessary, and don't scale well on hardware or to larger teams! They cause a "too many cooks in the kitchen" problem.
MVC isn't a horrible design pattern. The problem is that people try to implement it in things like PHP (where it can't actually be done). But in my experience, while everything isn't perfect, MVC can increase your development speed and keep things separated and clear in your head. At least it has for me.
And yes, modern URL routers usually use regular expressions and fancy stuff to make URLs pretty. While that does make them slower, a lot of these sites aren't going for performance, they're going for development speed, and I have both built and used "URL routers".
Their goal is to implement something in the language the developer is familiar in, making it easier for them to write code.
If you were expecting 10,000+ requests a second, then yeah, you wouldn't want a "URL router".
Both of those things above are directly related for me. There are better design patterns out there that are simpler, easier to implement, easier to understand, and significantly more performant. Yes, all of these, all at the same time.
I get why these tools work for others... but the whole point was "unpopular opinions", which mine is. ;) And it is based purely on 20+ years experience as a software engineer developing software at pretty much any scale (microcontrollers to fully distributed server clusters across multiple datacenters)
As time permits, I've been documenting all of my notes on this, but it is a very lengthy process. It essentially comes down to attempting to create a zero-boilerplate framework. Pretty much just create a new file, put the text "hello world" in it, nothing else, and it works. Need more? Simple enough to add more. URL routers and MVC frameworks add additional and often unnecessary boilerplate to get the same job done.
I truly believe our ultimate jobs as software engineers should be to simplify tasks as much as possible, not add more barriers to entry with frameworks which get in the way of simple tasks.
All design patterns are terrible.... for problems they weren't meant to solve. I think a more general issue here is people; we learn a new thing and want to apply it everywhere.
I recently had to fix a catastrophe of an Android application, on a tight schedule, with little Android programming experience, and a rusty Java, and I agree: what’s the problem with having to instantiate an anonymous class from an interface to implement an event listener, when you just have to type the first three characters of the listener method to have the IDE write the whole thing? And when you read the code, your brain will just skip from setOnClickListener to onClick anyway.
In the end, the help you can get from the IDE because you’re using a proper language like Java trumps the verbosity.
And that was in Android Studio: it’s a proper IDE, but it’s not that good.
I find the worst managers are those who used to code, but they weren't very good and are still bitter about it.
This is what I kind of wanted to say with "short coding history": usually someone who is not very good at coding tries to switch roles pretty soon.
Thank you for saying that. React is a terrific tool with terrific programmers around it but I feel everything about it is over hyped.
The counter argument for my previous statement: if it generates this much enthusiasm from its users, they have definitely done something right.
I can't just feel the groove. To each their own, when I went through Vue's docs I knew it was the tool for me, when I went through React docs I wanted to close the tab and go do something else :D
What bothers me, though, is how some people (definitely not the fault of the core developers) market it as the one true ring that will give you invisibility and super powers. The counter argument: I'm this close to running naked in the street shouting for the second coming in the form of Rust and WebAssembly, so I understand the supercharging effect that tools that fit in your brain have when you discover them ;)
There's also the marketing aspect to consider: once a large enough pool of companies/people adopt a tool, a lot of others follow suit just because it's popular, and if the tool is good enough, it sticks.
1) That unit tests are believed to be necessary to create good software is a damning indictment of both industry and academia. Decades have been spent researching things like formal methods and static analysis and unit tests are the best we can do?
2) Having a minimum code coverage requirement leads to a bunch of tests that do little more than test the VM/runtime, not the code that's supposed to be being tested.
3) Having unit tests is nowhere near as useful for refactoring as people claim.
4) Unit testing became fashionable only because people started using Ruby on Rails and needed to make up for the lack of static types and billable hours (1/4 of the time compared to Java meant 1/4 of the pay).
To a very large extent I agree with this. Unit tests are a horrible replacement for a lack of static typing. That doesn't mean they aren't useful, but if you're using unit tests to prevent breakages during refactoring, do yourself a favor and just learn a freaking statically typed language.
That is, learn literally anything other than JavaScript.
You should only unit test units which are worthwhile to test. No need to test setters and getters unless they contain business logic. Requiring a minimum coverage is bad. But aiming for a minimum isn't.
I didn't say unit testing didn't exist. I said it wasn't fashionable. No one in 2005 would have been accused of not being a professional if they didn't do unit testing.
I was there too. I'm so old I remember when unit tests would have been called white box tests and looked at as an inferior form of test as you can see the code.
I can even remember being told how in the future all we would need to write is the formal spec and the computer would work out what the executable code should be. Needless to say that future never arrived.
There is no justification in using Yoda Conditions in PHP or JavaScript.
In those languages you should always use the === operator anyway, so if you can train yourself to always use ===, then you don't need a hard-to-read safeguard against writing = instead of == in an if statement. And if you can't train yourself to always use ===, but you can train yourself to write Yoda Conditions, then you're weird.
Also, Yoda Conditions for any other operator (!==, <, >) are evil.
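For illustration, a small TypeScript/JavaScript sketch of the trade-off (variable names are mine):

```typescript
let status = "ok";

// Yoda condition: the constant on the left guards against an accidental assignment
// ("ok" = status is rejected outright), but it reads backwards.
if ("ok" === status) {
  console.log("so certain, you are");
}

// Plain order with strict equality: once === is a habit, this is just as safe and easier to read.
if (status === "ok") {
  console.log("readable and still strict");
}
```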
For everything that goes beyond one code base, I'm in. DRY at module, service or system level has a lot of serious disadvantages, the biggest of which is creating dependencies.
Ligatures are weird and don't make code easier to read.
You don't need to use bleeding edge/new shiny code features to have a good time. Especially for new operators when the existing method wasn't even difficult or time consuming.
For languages with braces, while they can be optional for one line if-statements etc, please don't...
SASS isn't the solution to CSS problems. Sure it can do some different stuff but just switching CSS to SASS doesn't solve poorly written/structured CSS.
Ligatures are weird and don't make code easier to read.
Intellectually, they're just another glyph; no different than any other letter. I can't read Russian, that doesn't make it unreadable. Everything is hard to read until you learn to read it.
Emotionally, yeah, I don't use them either. Hipster nonsense. ;-)
True, they are just another glyph, but even if I were used to reading them, they wouldn't make the code any easier to read - they are just another way of representing something that wasn't even a problem.
It is kinda like a solution in search of a problem.
1) Object Oriented Programming is overrated and often not the most fitting choice.
2) Code does not matter, solving problems does.
3) We all love our tools (IDEs, linters...), but the fact that we keep searching for new ones, or for ways to make them better, is proof that in the end they all suck one way or another.
4) Git is not that great a tool. We use it mostly because it's the biggest at doing what it does.
5) JavaScript is ultimately misunderstood: it's not an OOP language, the core of its object model is the prototype, and a lot of people try to make it otherwise.
6) Am I being too much?
Anyway, it's not written in stone and only represents the feelings of my present self (I will be happy to discuss any point and even change my mind).
1) Depends a lot on what your code aims for. Is it an MVP that will be thrown away after the startup gets funding or closes for lack of funding? Yeah, then OOP or not doesn't matter.
2) Solving problems with the most optimal solution possible at the minimum cost possible. Taking into account that 95% (or more) of the cost of software is in the maintenance part (add this or that, change X and fix Y), and that OOP is the easiest way for most humans to work, this also clashes with point 1.
3) That is more a sign of how little we know about the art of solving complex abstract problems.
4) Totally agree. And I would add that most of the problems it solves were introduced thanks to it.
5) What is an OOP language? A language that uses objects. What is an object? The representation in code of an abstract entity from a problem we are solving. Therefore, for me, there is almost no language that isn't OOP. Most people call a language OOP because it uses classes, but I find that way too simplistic.
6) Oh, no! please continue.
1) I totally agree. What I was trying to say (and it's not only OOP in fact, but a lot of stuff) is that we should use the right tool for the right job and not the shiniest tool just because everyone loves it.
2) I was trying to express that the beauty of your code, and the absolute awesomeness we can find in it, is not as relevant as the problem you seek to solve.
3) That's an interesting thought, I'll dwell on it someday.
4) Some other tools (like pijul) deserve a look, to broaden the VCS landscape a little.
5) I tend to believe that OOP is a paradigm revolving around objects. It's not just that objects exist in it, but that everything is done to promote them. For instance, Java urges you to manage things through objects. Objects become, de facto, the "orientation" taken by the language. Other paradigms, like functional or ECS, do not focus on objects as a "core" concept of programming. In those paradigms, objects are just a convenient way to pack data together rather than the central gear. At least that's the way I see it.
5) JavaScript is an OOP language. Its object system comes from Self. Classes and objects aren't the point of object-oriented languages. The key idea (at least according to Alan Kay, who we can probably take as an authority on the subject) was that you could summon mathematical structures into existence with almost nothing if you could orchestrate actors, each implementing their own language. Classes and methods on them were the particular implementation they arrived at for Smalltalk as a realization of this.
It seems that I effectively used the wrong words here. JavaScript indeed uses objects; its object pattern is prototyping. But I still think that JavaScript is not "in essence" an OOP language, but rather a language offering it: the language does not favor the use of objects, and "this" does not refer to an object the same way it does in other languages (but to the prototype it's currently called in), which does not make things easy.
One could object that functions are a kind of "object" themselves.
I realize though that my knowledge of this particular subject is not that strong, and so neither is my position on it. I'll check out the work of Alan Kay that you mentioned earlier. Thanks a lot.
Clarification of my mention of Alan Kay: he invented object oriented programming along with the rest of the Smalltalk group at Xerox PARC.
this in JavaScript behaves the way it does in any other prototype-based language (as I mentioned before, see Self). Self and its lineage were part of an attempt to make an even purer object-oriented language than Smalltalk. In Smalltalk you had objects... but you also had classes, which were somehow different. They were objects, but were part of a totally different inheritance hierarchy. Self asked if the classes were really necessary, or if they were baggage carried over from earlier programming experience. The answer at this point is quite clear: classes are unnecessary, but it's really hard to get people past the baggage and comfortable with conjuring objects directly.
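A minimal JavaScript/TypeScript sketch of that "conjuring objects directly" idea, using only Object.create and delegation, with no class keyword anywhere (the names are illustrative):

```typescript
// Conjuring objects directly, Self-style: no class declaration anywhere.
const counterProto = {
  count: 0,
  increment() {
    this.count += 1;
    return this.count;
  },
};

// New objects simply delegate to an existing one instead of being instantiated from a class.
const a = Object.create(counterProto);
const b = Object.create(counterProto);

a.increment();
a.increment();
b.increment();

console.log(a.count, b.count); // 2 1 (each object gets its own "count" on first write)
```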
For completeness, there's a third branch of object oriented programming represented by Common Lisp's CLOS based on structs and generic functions with multiple dispatch. It turns out to be an even more powerful approach, and this isn't a thing in that system at all. That branch, if you take it seriously and work out the mathematics, leads to Stepanov's work on generic programming and the C++ Standard Template Library.
Let me explain more: clean code can be fast, and it is good to have code that is easily understandable, but my point is that you could have messier code that is faster and fix its readability issues with clear, well thought out comments.
So just because a piece of code is "messy" does not mean your cleaner code is faster.
Generally, it would be good to explain your statement a bit.
I would say this is not always true as good optimizing compilers can turn clean code (whatever exactly you mean by that) into very fast code.
But it definitely can be true in certain situations.
However, the question is: does it matter? My first goal is always to have maintainable and understandable code. I optimize only when I need to. This might result in less readable code. But I only do it if there is a serious performance problem or the requirements are not met.
1: True. I don't know about America or Europe, but Ukrainian/Russian programming interviews are often a clown fiesta. The interviewer can ask you to build a binary search tree and count all the methods of the Object class in Java while in reality you are going to code HTML templates and write CSS lol.
Software/code is a liability, solutions to problems and your bank of knowledge are the real assets.
Every year you or your employer spends money maintaining an aging codebase. Bugs, vulnerabilities, support tickets. Strictly (and realistically) speaking, code would be in the "liability" column in an accounting book.
On the other hand, you or your employer has a large amount of people who are subject matter experts on your product. If you put every one of those people into a room, and magically herded those cats, you could probably come up with a better product. They know why a specific quirk exists, they know why one piece of code has to be horribly complex.
I've learned two things from this. Invest in people, not code. I hate my code, but I love what it does.
Any language using curly brackets for code blocks. Simply because with a new line the left and right curly brackets align, oh the beauty of alignment!
But your opinion must be even more unpopular then: you prefer a new line in one language and not in the other.
Git branches are only useful on public repos where you have contributors from outside the team. Using them in any other scenario makes you a bad engineer by delaying integration and therefore increasing the cost of production.
I really dislike JS and how it's spreading everywhere. Everyone seems to be building stuff on top of Node or Electron, leading to bloated, RAM-consuming programs that do simple things that don't justify the resource usage.
I believe there's also the factor of a high influx of bad devs, just like what used to happen with PHP some years ago.
Squarespace is good enough for most small business websites and it doesn't need maintenance. That being said, it is unlikely someone without web savvy will be able to make a good website with Squarespace.
PHP is a good enough tool for most sites.
jQuery is still an alright option in 2019.
Learning a framework before learning the underlying language is fine.
Most websites never need to scale.
Project management tools are little more than security blankets and don't actually solve project management problems.
jQuery's API really lends itself to developer productivity. In the past year I've moved away from using jQuery on small projects because I don't want to load the extra kilobytes, but I don't think I would have made it through my first year of programming without being able to leverage how simple jQuery makes interactivity.
I hope that more developers recognize how much power it gives beginner developers before they shun the library in favor of tools like React and Vue. Both of those tools come with a fair share of SEO problems, and client-side rendering can kill performance on slow connections and inexpensive phones. jQuery largely doesn't have these problems.
Maybe applied with formal methods like TLA+ it could become one (I don't know enough about that) but the very fact that I can write a program with a few lines of code, run it twice, and have different unreproducible results invalidates the definition of "exact science".
Well,
Back at my UNI, studying CS, I was under the "exact sciences and technology" department.
Make of it what you want.
The fact that software development is so often referred to as "art" applies mostly (as it should) to UI (in my humble opinion, for marketing purposes). UI makes up only 1%-10% of your typical project's codebase.
Generalizing such references leads to phenomena such as "terminology inflation", the "tech bubble" and other characterizations most relevant to a marketing and/or financial context rather than to software development, which I'm pretty sure most of us want nothing to do with.
Now if you can truly get different and/or unpredictable outcomes from the same application/script, then please, my good sir, do go public so the world can witness your awesomeness.
Back at my UNI, studying CS, I was under the "exact sciences and technology" department.
I don't think computer science (which is indeed an exact science) and programming are the same thing, that's why I disagreed. If you had written "CS is an exact science" I wouldn't have replied. Hope you can understand my point.
Now if you can truly get different and/or unpredictable outcomes from the same application/script, then please, my good sir, do go public so the world can witness your awesomeness.
I was referring to non-deterministic concurrency; it's not hard to write a program that has different outcomes.
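For example, a tiny TypeScript/Node sketch (entirely illustrative) where two concurrent writers race on shared state; running the same program twice can print different results:

```typescript
// Two writers racing to update shared state; the interleaving decides the result.
async function flaky(): Promise<number> {
  let shared = 0;
  const writerA = async () => {
    await new Promise<void>((resolve) => setTimeout(resolve, Math.random() * 10));
    shared = 1;
  };
  const writerB = async () => {
    await new Promise<void>((resolve) => setTimeout(resolve, Math.random() * 10));
    shared = 2;
  };
  await Promise.all([writerA(), writerB()]);
  return shared; // sometimes 1, sometimes 2
}

flaky().then(console.log);
flaky().then(console.log); // same code, possibly a different answer
```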
Software is the product of a programming language applied to specific requirements.
Programming languages can be applied artistically (as any means of expression can be perceived in an artistic context), should they though?
What is your primary goal in writing software: self-expression, or building the next killer app?
If it's the latter, I'd stick with what I know works consistently and dependably and leave the artistic pondering to my CSS framework of choice and the marketing dpt.
Now that I think of it, even natural languages are not art; what they really are is the sum of their rule set (spelling, grammar, syntax). Their subsequent use cases aren't necessarily part of their nature (French is the exception that proves the rule; according to a quote from The Matrix, "Cursing in French is like wiping your ass with silk").
For a final point I'll just say this: If programming language is the means of artistic expression, explain Perl.
My users' experience is far more important than my development experience. The industry has forgotten. Evolution of apps and the web has stalled while ever changing development processes devour teams. New ways of thinking about software languish under the complete rejection of risk from first user interview to deployment.
Software used to have a personality that matched the platform it ran on. Mac apps felt like Mac apps, Windows apps felt like Windows apps, iOS and Android apps felt like they were at home there. My unpopular opinion is web development, and its bastard child "cross platform web apps" have killed the personality behind software when the majority of development went into building for a browser instead of a platform.
2 - Price, I can have a couple of powerful dedicated servers for 150€ per month and make a solid database setup with replication and backups in there that can handle thousands of connections per minute while the same on AWS would cost me around 900€ per month.
3 - Again, cost. The recovery time after a crash and the ability to quickly move everything (in a couple of minutes) to another machine are priceless.
1. Windows is the best environment for developers to work in.
Linux isn't stable enough. I always end up spending a not-insignificant amount of my time figuring out why my graphics drivers stopped working or some application isn't launching, or how to resolve the random new dependency on a library my distro isn't providing, even after spending a massive amount of time getting a distro running in the first place and fixing all the compatibility issues with my reasonably standard hardware.
Mac often breaks standards that have been around for eons without a good reason, and doesn't provide you with the basic means to make your life comfortable, and Apple does not provide you with a sensible repair service that would be required for a machine you need to work with. I cannot afford my work machine being out for repairs at an unknown facility for weeks, so I pay for on-site support packages from e.g. Lenovo and Dell.
2. Popularity is a bad sign.
If something is too popular, you will likely want to evaluate it very carefully before taking it into use. PHP was popular, for the completely wrong reasons. Ubuntu is popular, yet for me it's one of the distros with the worst success rate of getting a working install just by launching the install image, and being Debian based means APT which is the worst option out there for a package manager. Java is very popular, because the wrong people are making the decisions.
Typically how I see something being "the most popular x" is that it means it's been selected by people who don't know what they're doing and you don't want to repeat their mistakes. Research for yourself, maybe you'll end up with the same choice, but probably not.
3. Paid solutions are (often) better than free solutions.
People often flock to a "free" option for something. "Free operating system" means nobody is motivated to build a coherent experience that (mostly) just works out of the box. "Free editor" means sacrificing your performance to use Notepad++ instead of JetBrains IDEs. "Free database/monitoring solution/similar" means you need to figure out how to host it yourself - this is not in your expertise and if you can afford it, it's better to buy a paid service and focus on what is. I just don't have the time to use a "free" solution if a good paid alternative exists.
I pay for Windows, JetBrains IDEs, Photoshop, Slack, virus scanners, password managers, and many other things where free options would exist.
4. Git sucks.
Git is one of these examples of popularity not being a good sign. It's popular because people don't think, because "everyone uses Git". Yes, Git has a few decent ideas, but nothing unique and actually worth a damn.
What you would want from your version control system is that it follows the DVCS workflow for sure, that it works just as well for all your developers regardless of their platform of choice, and that it works with you when you want to extend things.
Mercurial does all these things incredibly well, there's a few other options out there that do a pretty good job at it as well, but Git really is not among them.
Git is headed by a kernel developer, a person who has no understanding of a "human interface", and doesn't care about other people's problems, and it shows. If it runs on Linux, that's all that is required. The hook system is near impossible to get to work on Windows properly without some incredibly bad and broken hacks (pre-commit somehow achieves a fairly functional system but I don't yet know how). It is not particularly extensible, everything is based on a really broken implementation of some Unix philosophies of magic filenames and putting binaries in specific locations. Git's commands are nearly incomprehensible, and there's a lot of situations where you just have no idea if your changes are going to be deleted or what will happen.
Due to Git's popularity, the only decent Mercurial hosting, BitBucket, is now being terminated, and the world will become even more dominated by Git, which is clearly the worse option of the two, and there are other options out there that are probably even better for your use case.
5. If you don't use a GUI for your version control, you're doing it wrong.
Command line utilities have their place, for sure. A version control system without a powerful CLI is not worth using. However, when you are actually working on your code, you want a powerful GUI that is showing you which changes you made, gives you quick access to full visual side-by-side diffs of the files, and allows you to spot your mistakes before you commit them.
If you do not do this, you are doing it wrong, plain and simple. I call it "blind commits" when people just do something like git add . && git commit -m "Latest work" (and often add a git push -f because that's how they roll, then make an alias of the whole thing for "efficiency" and are spreading it proudly with their colleagues). You should always check through your changes, you are not the programming god you thought you were, you make mistakes, you forget to finish some changes, you left that debug statement in, etc. - check it before you commit it. CLI diffs just don't give you the full picture in a clear enough manner for them to be truly useful.
6. Web and application frameworks are overrated.
Microframeworks and overall DIY solutions can often get you where you need to be faster, with fewer dependencies, and with a better end result. You should always pick the smallest base to build on instead of starting from Django, Electron, or another such monstrous framework.
7. RDBMSes are almost always the wrong choice.
The benefits of a relational database are basically about consistency guarantees and such, but they are impossible to implement in a manner that scales, so instead you end up with a restricting database with tons of behavior that does not help you make your software better.
8. Scrum and other processes are overrated.
You don't need a strict process of any kind if you have a competent team and good leadership. Fire the bad actors in your team, then hire better ones, and you don't need to implement Scrum, SAFe, or other such things. If your team requires strict adherence to a process to deliver, maybe you need to think of working somewhere else.
9. Not all people should be tolerated in a team.
If your teammate is wasting everyone else's time because they can't understand the documentation, it might be that your documentation is bad, or it might be that this team member is going to slow everyone down and produce little or no benefits to your team. Figure out which, and fire them if necessary. There's also other ways people are under-performing, or causing other problems with your team - get rid of them and give the rest of your team a high performing highly motivated environment to work in.
10. Your personal preferences don't matter.
You might like { on the same line with the function body, you might like it on the next line. You might like semicolons in your JavaScript, you might hate them. I know you really love to align your data structures neatly, who doesn't like to see
foo={"key":"value","long key":"another value"}
None of this matters, it's not worth your time to focus on these things. Use a code formatter whenever possible, gofmt, black, prettier, clang-format or whatever works for you. If your language does not have one yet, think of building one. If you're designing your language, make sure it has one standard and make a formatter for it. Better yet, make it refuse to run code that isn't formatted - problem solved forever.
Code formatting is a question people spend way too much time fighting about, with too little gains. Pick one way to format things automatically, enforce it via continuous integration and pre-commit hooks (e.g. pre-commit.com or in a better system like Mercurial you can do it more easily).
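As a small illustration, a formatter such as prettier with default settings would turn the hand-compressed line above into roughly the following (declared as a const here so it type-checks), and then nobody has to argue about it again:

```typescript
// Before (argued about in review): foo={"key":"value","long key":"another value"}
// After prettier with default settings, roughly:
const foo = { key: "value", "long key": "another value" };
console.log(foo);
```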
11. You should think hard before you start a project.
Want to start a project? That's great! Now stop and think about it. How long do you think it will really take? Ok, now double that, because there's always complications. Do you really want to commit to that amount of work, just to start with?
Just want to try and learn something new? Great, that's a fine option too. But are you really learning, or just copy & pasting? Did you come up with a unique idea that you want to build, or are you going to just follow a tutorial? Also, are you sure you're learning the right thing? Is this something worth spending your time on?
Are you hoping to make some money with it, sell it maybe? Do you have a plan on how to do that? Do you understand marketing? Do you know how to deal with accounting? If not - start with these questions first, and only when you understand how to actually launch something successfully, start coding for it.
In many more serious cases, you also want to think about the design in advance - is concurrency a specific issue you need to deal with? How are users going to interact with it? What platforms do you need to support? Other such questions should be considered, only then can you make the right choices for your project.
There's reasons you might want to still start the project, but starting a project and not finishing it is a good way to demotivate yourself from starting another project in the future.
12. Customization is bad.
If you customize your shell and use a lot of aliases (I draw the line at like alias ls="ls -lhaF" type of mostly insignificant default options, alias rm="rm -i" or rm -f is too much though), you need to put extra effort into thinking about how to help your colleague when they need help, or when using a shell on your servers, or other environments. Even distributing your own configuration to your computers in a reproducible manner that survives reinstalls, buying new computers, and so on is a lot of work that is often just not worth it.
If you customize your IDE to work differently from your colleagues with a bazillion plugins and such, you will be confused when trying to help each other.
If you spend a lot of time and effort personalizing your desktop or application preferences, it's gone sooner or later anyway, or you'll end up changing your mind and tweaking the personalization instead of spending time on useful things.
This is where I waver a bit myself though - because IF we had good ways to reproducibly distribute your configuration for all your applications, for your shells, and so on, then it might be a bit less of a hassle and more worth it. However, right now it's not exactly easy. There's Nix but it doesn't take care of everything.
We need to build more standardization on where software configuration lives on all OSes, and how the personalization can be securely and privately synced to a safe storage, while allowing us to share certain customizations with other people. If we achieve this, then the time would be much better spent.
But you should still not customize your shell and use excessive aliases 😄
13. You should host the things you know, but nothing else.
Not quite sure how popular this is, but it sure isn't being followed by a lot of people. Don't host your database, let someone else worry about that. Don't host your metrics collecting software, it's not your area of expertise. Host your own load balancer, host your own application containers, and anything else like that that YOU directly work with to implement your software.
Preferring not to use a debugger, but rather writing tests (TDD) to discover and fix bugs.
That one hurts...
Writing tests is meant to help regression cases and in TDD, shape the application code. Debuggers aren't meant to be replaced by tests, they help you narrow the problem down so you easily see the code and conditions of a bug. This should (but maybe it hasn't in your experience) help you get to the source of the problem faster.
Okay, granted I'm talking more about a project on which I have full control and knowledge. On other projects, especially where I'm unfamiliar and/or there are less tests, I'll use a debugger more.
Also true that in some cases I do get out a debugger, but increasingly over the years I tend to already have a test for the code in question, and often it's simply obvious what's wrong (the test has failed on an assert), so I just fix the code because I already know what's wrong.
The time I do pull out the debugger is when either a) I don't have a test (I should write one then - so less likely) or b) it's making no sense at all and my mental model and tests don't match the code.
I'm going to say that in about the past 18 months I've used a debugger three times.
Stats on project I'm working on (alone):
288000 lines
1800 files
1367 unit tests
7000 test assertions
Hence: I prefer not to use a debugger, but rather write tests.
Bonus: Once you fix a bug by first writing a test that fails, you know when you've fixed it, you know you'll know if you ever break it again, and you have one more test.
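A hedged TypeScript sketch of that flow, with a made-up slugify bug as the example:

```typescript
// Hypothetical bug report: slugify("Hello  World") returns "hello--world".
// Step 1: write the test that reproduces it; it fails against the old code.
// Step 2: fix the code until the test passes.
// Step 3: keep the test, so the bug can never quietly come back.
function slugify(title: string): string {
  // The fix: collapse runs of whitespace instead of replacing each space separately.
  return title.trim().toLowerCase().replace(/\s+/g, "-");
}

console.assert(slugify("Hello  World") === "hello-world");
```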
Approximately 65000 lines of tests, so about 20% is test code.
The programming language is called XXXXX (*)
(*) Name redacted because it's new, unreleased and in development. I doubt it will see the public eye for a few years or more (if ever). It is however written in itself, because "dog fooding" (and heavy testing) is the only way to hope to succeed at such insanity.
FYI - The counts are pretty inaccurate, I didn't use sloc tools, just find/xargs/wc. I'd say there's about 15% of blank lines/comments/junk at a guess.
I guess we can add one more unpopular software opinion:
Always sharing, always learning.
I had a colleague who was putting the curly bracket on the same line, for compactness, but then was starting the code block with an empty line, for readability.
I used to like my brackets to match, but lint disagrees and so do most programmers. I have my principles, and if you don't like them, well ... I have others. Life is full of disappointments and sometimes it's easier just to go with the flow.
I once interviewed at Oracle Cambridge; the a*-wipes think private functions should not be tested because you will only ever call the public functions.
My answer: Well, that doesn't mean we don't modify private functions. Use whatever is at your disposal to test them: in Ruby use the send method, in C# the InternalsVisibleToAttribute, in Java reflection. Do what is required to test them.
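For illustration, here is roughly what that escape-hatch approach can look like in TypeScript (not one of the languages named above, but the same idea). The class, its private normalize method, and the test are all hypothetical, and Node's built-in assert stands in for a real test runner.

```typescript
import assert from "node:assert";

class UserDirectory {
  private readonly users = new Set<string>();

  add(name: string): void {
    this.users.add(this.normalize(name));
  }
  has(name: string): boolean {
    return this.users.has(this.normalize(name));
  }
  // Private, but it carries real logic worth pinning down directly.
  private normalize(name: string): string {
    return name.trim().toLowerCase();
  }
}

const dir = new UserDirectory();
dir.add("  Alice ");
assert.ok(dir.has("alice"));

// TypeScript's `private` is compile-time only, so a test can still reach it
// through an any-cast -- the same spirit as Ruby's send or Java reflection.
assert.strictEqual((dir as any).normalize("  Bob "), "bob");
```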
I hate Javascript with a passion: it's an utter Frankenstein of a language ill-suited for everything, and used only because it's a fad. I refuse to learn it as a platform (I can read it) or work in it. I firmly believe any project built in Javascript would be a hundredfold better in a different language.
Classes are a poor mechanism to define types, a poor mechanism to define namespaces, and a poor mechanism to define modules.
Design patterns are mathematical models glimpsed darkly by those who aren't used to formal thought.
The practice of programming you adopt is driven more by local constraints and possibilities than by universals. All the various components that can contribute to a particular practice of programming (test driven development, various parts of agile, literate programming) are suggestions that may be useful in crafting the practice in a particular project or environment. The benefit comes from knowing what ends are trying to be achieved, engaging in the discipline seriously to try to get that end, and building expertise to know what parts become important under what conditions and constraints.
Engineering is being able to stand up in court and declare as an expert that something was reasonable practice and no liability can apply to it failing...and make it stick.
Even as a TypeScript obsessive and advocate, I have to admit it does nothing to replace 99% of your tests.
I want the TypeScript hype to burn brighter than ever, it should definitely be the only/default way to write JavaScript by hand, but please test your code.
You should aim for full coverage of module entry points by integration tests.
You should aim for a separate full coverage of implementation by unit tests.
Use Stryker.
Cast to any to shove invalid data into the module at the boundary.
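A minimal sketch of that last point, assuming a hypothetical parsePrice module function and Node's built-in assert in place of a real test runner:

```typescript
import assert from "node:assert";

// Hypothetical module function: typed to take a string, but data arriving
// from JSON or a form could be anything at runtime.
function parsePrice(input: string): number {
  if (typeof input !== "string" || !/^\d+(\.\d{1,2})?$/.test(input)) {
    throw new Error(`invalid price: ${String(input)}`);
  }
  return Number(input);
}

// The happy path stays inside the type system.
assert.strictEqual(parsePrice("19.99"), 19.99);

// At the boundary, cast to any so the compiler lets invalid data through,
// then assert that the runtime guard still rejects it.
assert.throws(() => parsePrice(123 as any));
assert.throws(() => parsePrice(undefined as any));
```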
TDD only works correctly if you have an exact specification of the solution. It is not good in an agile context where everything is unclear or unknown at the beginning.
The NPM ecosystem is broken because of the massive dependency hell and breaking changes every six months.
Just because Async code is good on the client side (browser) doesn't mean it's the right approach for the server side. Even the DOM manipulation is synchronous, for good reasons. Async is like a hammer you want to apply to everything, even if it doesn't make sense in every case. People should begin to distinguish more carefully when, where and for what Async IO is better or worse suited.
People shouldn't use JPA; just use proper JDBC/SQL. Faster, simpler and more maintainable. JPA just adds an extra layer of complexity, and the project's velocity will decrease. And the ratio gets worse for more complex entity graphs.
I’ll admit I’ve always passionately disliked Javascript. I also have a strong preference for statically typed languages over dynamic, but always thought Javascript was the most unlikeable of the lot. Perhaps it is that I initially learnt it by osmosis, trying to get things working under both Netscape and early Internet Explorer browsers at the same time. Worrying if I put my curly brace on the same line or the following line, or if I forgot a semi-colon at some obscure, critical location. That said, ES6 has improved things a lot.
But I’ve always been hesitant about dynamic languages in general, preferring statically typed ones for any codebase of significant size. The 2000's had been a bit discouraging for me in this regard. First everybody was loving Ruby, then it seemed that Python had overtaken it in attention. And then Node.js came along and I really felt like Bromden with a pillow, looking for a way to smother the insanity.
(As an aside, I do quite like Ruby as a language, and writing Coffeescript - the wonderful lipstick for the pig that is javascript - has given me an appreciation for the indentation feature a’la Python.)
The number one problem to my mind with using dynamic languages for a large codebase is maintainability. Refactoring a compiled, statically typed language is so much easier. There are so many potential errors that the act of compiling and type checking eliminates for you that you don’t catch with dynamic languages. The focus on huge amounts of unit tests has always seemed like compensation for this. Add another parameter to a function, try to compile, and you find all the places in the code you need to fix. Maintaining a sizeable amount of code, or code you haven’t looked at for a while, is so much easier.
I think the pendulum of opinion has started to swing back in my direction in the last five years. Running stuff in a web-browser is now a serious concern, and good hackers are working on ways to portably do good programming in anything but Javascript. From Webassembly to Elm, PureScript, Typescript and others.
Newer languages like Go and Rust are taking the features many have lauded over the last decade as the advantages of dynamic languages (things like good package management and less boilerplate) and baking them into static languages. So my outlook is getting rosier.
Learning different languages, frameworks and methodologies for solving coding problems helps you write better code, identify bad habits, and solve problems with your primary tools.
But how do you know if the app is going to be big when you start? That's why people usually suggest to start with a monolith (or with a serverless architecture more and more lately)
Seriously speaking: I think you can/should do microservices only when you have a big organization or if you have very specific requirements.
Because microservices change everything: how you deploy, who is responsible for what, team boundaries, monitoring and observability tools, feature isolation, how you test and so on.
If you don't have vertical teams responsible for a few independent microservices, you run the risk of building a distributed monolith.
Well, that's why this opinion is unpopular.
I have seen many big applications (like ERPs) which are very slow and need very expensive resources for very small results. If their authors (who are pretty big companies) approached microservices, the performance of the applications would be much better. Admittedly this needs a very good and clear software architecture and a bigger team (a team which usually already exists, or can easily be extended).
I guess an important point is choosing the right technologies, because implementing microservices in Go, for example, is much easier than on a LAMP stack. I know that adopting the newest technologies can sometimes be discouraging, but with the right tools microservices can be easier than they seem, and not only for the biggest companies.
WASM should kill JS.
Furthermore, we should break outdated browsers.
There is no excuse. You say a massive 2% of your visitors use those?
Well, they can install a normal browser. And these days they can do it without having admin privileges. (Which they probably do have, contrary to best practices.)
If you are Facebook-scale, you can ship different bundles to different browsers, and seamlessly degrade/hide features in the UI. The 2% represent so many people that they can pay for myriads of full-time developers for just that aspect.
But if you are a small startup and you invest too heavily into backwards-compatibility? All of your users will suffer.
"Use the right tool for the job" is horrible advice. No one deliberately uses the wrong thing, willingly. Sometimes you are forced to, but even then you KNOW it's the wrong thing.
It something people say because they heard it, it's obviously not falsifiable, and it makes them sound wise since no one is going to disagree.
It's almost always the right tool at the point it was selected. All tools in software will look wrong (obsolete) after 5 years. The problem is, people judge the decisions taken at the inception of a project/task well after all the initial conditions have changed.
That the majority of programming problems - especially smaller ones - can be defined unambiguously in English. Not as concisely as computer code, of course, but if you go down that rabbit hole you end up in a land of impenetrable formulae.
That if you and I can understand unambiguous English there's no good reason why a computer can't. This is 2019, after all, not 1965. We've spent all those years making computers cleverer (my phone can not only hear but understand me when I speak to it), but when faced with programming problems, instead of getting them to use our language we force ourselves to become machines and work with ever more complex computer languages. Which only a small minority can do, so maybe maintaining exclusivity has something to do with it.
That when computers speak the language of their users, software is a whole lot more accessible (skin in the game) and reliable (more eyeballs, fewer bugs).
React was not developed to be used as a SPA library, and because of this most of the issues it has in a SPA will never be fixed. The ever-growing community around React is both helpful and harmful. I'm not going to hide the fact that I'm an Angular developer, fan and advocate, but if you take a look at Vue, and how it's being developed, you see a clear difference between a community-driven framework for SPAs and a library that is really just for showing information. That's not even something new, but we keep wanting to make it into something it is not.
Too many developers don't go outside their comfort zone to find the right tools for the job or learn new skills. They'd rather stick with the same old stack with a strong belief it will solve anything, even if the project ends up unnecessarily complicated.
Adding to the multiple Gems (non Ruby ones, of course 😛) that I discovered in the comments here,
At least in India, recruiters should stop blindly copying terminology such as Google's Site Reliability Engineers (SREs) or DevOps unless their companies are of that scale or are aiming to get there. Pedantically, it's more of a problem-solving mindset than just a plaintext job description.
While I agree there's no substitute for work experience, no one should mistake more years or exposure to multiple tools for a great employee. Sometimes determined people with a little less of both can be better if you give them a chance.
Containers, orchestration & cloud computing are great but they're not a silver bullet to solve all problems magically, given the level of effort & complexity going behind it when used on a large scale. Apps work perfectly fine even when setup & served in the standard way. Speaking of which, don't fall into the trap of implementing it (or any other tech) just because everyone else is.
The world is object oriented and so are human languages. Objects (such as my car's fuel tank, unfortunately) are mutable. Functional programming was invented to drive me insane.
1) maintainability is way more important than your design ideology
2) we spend way too much time talking about how to build things and not enough time just building things
I've heard the argument that "the only thing that matters is what tools this team thinks are best to solve their problems", meaning, moral considerations should not be a part of technical discussions. I think it's a cousin to "I'm talking businesses, not ethics, don't bother me about it".
Did you know that throughout the entire Sefer Beresheet (Book of Genesis), the word emunah (loosely, "faith") refers only to one thing: honesty in business dealings?
So I think that business considerations, including technical considerations, are moral considerations as well.
Will this technology help us to serve people better? Will it exclude people (users or beginning developers)? What about the maintainers of this piece of tech, do their values undermine our own? What will adopting this technology lead to in our industry? In the broader culture?
Thanks for the explanation and in theory yes, it's worth it and it's worth getting the answer to those questions. The thing is that various people have various levels of ethics and morals (a simple example: employees at giant companies that do good and bad things).
Will this technology help us to serve people better? Will it exclude people (users or beginning developers)?
What about the maintainers of this piece of tech, do their values undermine our own?
This can be hard to measure for every piece of tech...
What will adopting this technology lead to in our industry? In the broader culture?
Typescript is only "mostly" typesafe: you're better off just using plain old javascript and understanding its limitations than using that leaky abstraction.
Live technical interviews are useless.
I prefer to do a short exercise that the applicant can solve in his own time and then discuss the implementation
It's entirely possible that you're right, the reason why devs are not switching "en masse", in my opinion, is Windows, not the lack of superior hardware alternatives.
TDD is a meme and you shouldn't be writing tests before you even know what your modules will look like.
Writing code is an art form. It takes intuition and freeform thought to structure things.
_ is the only way to create code that is maintainable and extensible
That's what they said about COBOL.
Writing code is hardly science, any more than writing fiction is. We are craftsmen and -women, not scientists.
The word Craftsmen has nothing to do with gender; it means women as well. Craftswomen is not even a word in English.
That's an unfortunate accident of the English language. It's not a problem elsewhere (mann/frau, uomo/donna, etc). Ask a computer and it would say "woman contains man so the former is more generic".
Yep.
😂 In a fit of frustration on Friday afternoon, because I couldn't get specs to cooperate, I put this message in Slack. I'm kidding of course, mostly....
hahahaha
TDD has its place, in some scenarios it's the only sensible option, but its benefits in general are massively overstated.
Writing code is more like artisanship. I feel I've got more in common with someone who builds bespoke wooden furniture than a painter.
I think the truth exists somewhere in the middle. I did a coding dojo with a brilliant manager a few years back on TDD, and learned a lot about the value of unit tests.
The first advantage is that it gives you a usable API (API is UX). It's all too easy to get tunnel vision and write an unusable API because you've never attempted to use it. The "doesn't compile" step in TDD makes sure that you are trying to use the thing before making the thing.
The other advantage is that it prevents you from solving problems that don't exist. The specific example we had was roman numerals. At the start of the session I felt a bit of dread: roman numerals are, superficially, pretty hard to parse. Working through "I" to "III" to "MCVIII" resulted in a complete solution that was far simpler than the one I conjured up before we started coding.
But, you'd never get anything done if you followed TDD strictly.
I think the middle ground is writing your code while visualizing the test. This ensures that you are writing units (which are easier to use and obviously easier to unit test).
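To make the roman numerals example above concrete, here is a sketch of the kind of greedy solution such a session tends to converge on (my reconstruction, not the dojo's actual code), with the tests growing one case at a time:

```typescript
import assert from "node:assert";

// Greedy conversion: repeatedly emit the largest symbol that still fits.
const SYMBOLS: Array<[number, string]> = [
  [1000, "M"], [900, "CM"], [500, "D"], [400, "CD"],
  [100, "C"], [90, "XC"], [50, "L"], [40, "XL"],
  [10, "X"], [9, "IX"], [5, "V"], [4, "IV"], [1, "I"],
];

function toRoman(n: number): string {
  let out = "";
  for (const [value, symbol] of SYMBOLS) {
    while (n >= value) {
      out += symbol;
      n -= value;
    }
  }
  return out;
}

// The test list grows one case at a time, as in the dojo.
assert.strictEqual(toRoman(1), "I");
assert.strictEqual(toRoman(3), "III");
assert.strictEqual(toRoman(1108), "MCVIII");
```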
Couldn't agree more. TDD is workable when you're well staffed and have some discretion over deadlines. Otherwise it's an incredibly difficult sell from a business value standpoint, unfortunately. Especially if you're in an organization where certain delivery timelines are expected regularly
Most of the cost of software is after the initial build. TDD helps keep it decoupled so it's easier and cheaper to maintain in the long term. TDD software probably has better TCO.
Write the code. Does it work? Write a test to prove it. Write more tests as you think of ways to break it.
Found a bug? Fix it. Write a test to prove it was fixed.
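A tiny sketch of that loop, with a hypothetical slugify function and Node's built-in assert standing in for a test runner; the last assertion is the regression test added after a (made-up) double-space bug was found and fixed:

```typescript
import assert from "node:assert";

// Hypothetical example: slugify() used to turn "Hello   World" into "hello---world".
// The fix collapses runs of whitespace; the regression test pins that down.
function slugify(title: string): string {
  return title.trim().toLowerCase().replace(/\s+/g, "-");
}

// Tests written after the code worked.
assert.strictEqual(slugify("Hello World"), "hello-world");

// Regression test added when the bug was found and fixed.
assert.strictEqual(slugify("Hello   World"), "hello-world");
```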
I don't think you have to let software/coding consume your life to be good at it. It's ok if you don't code outside of work. It's ok to have a life outside of this.
Caveat: you need to make peace with the fact that you will never be as good or experienced as the guy that codes in his free time for fun.
I totally disagree: being a well-rounded human with other hobbies and strong soft skills is better than a dude who codes 24/7, especially when that dude gets burnt out.
Objectively speaking the captain autismo who codes 24/7 will always write code better than someone who only does it 9 to 5. We're not really arguing which is best for the human.
No the half asleep burned out Red Bull fueled constant coder is going to write sloppy code and insist his coworkers are just too dumb to read it.
@sergio - hey, that "captain autismo" comment seems pretty inappropriate. You might want to reconsider that.
True, very true.
Yes, because the dude that codes 24/7 might in 3 years be going into another field altogether, while somebody who takes it slower might be in the game longer.
In my experience, more than 10 years, that is wishful thinking.
The people who code 24/7 on personal or open source things love doing this to the point of near obsession. I know some who have been living that way for more than 15 years (yeah, I met them when we were teenagers and they lived glued to computer science books), and at that point you just have to accept that you will never be nearly as good as them, and then be glad to be able to learn from them.
Yeah you're probably right :)
You can write good quality code in PHP :)
I love PHP. It's the thing I've spent the most time with so it's what I'm most comfortable with.
I'm starting to learn React and playing around with Python a little bit at work, but PHP is what I've always used and always want to come back to.
Exactly, that was my point. you can write clean code in any language ;)
Of course you can, do not let anyone tell you otherwise. Most of the time when people bash PHP it's because the scenarios where it is the ideal language to use are very narrow.
Or because they haven't used it recently. PHP 7 really turned my opinion around on it. As did Laravel
Does it count if I use a PHP framework? I mainly use Laravel :D
Laravel is awesome.
Me too ;)
Really, PHP as a language is so restricting that there are a lot of things you simply cannot and should not code in it. Additionally, the whole design for the typical use of the language - a collection of scripts that get evaluated from start to end for every web request, triggered by access through a web server - is so bad that it has led to countless incredibly misguided design decisions and massive numbers of hacks and other issues.
As an example, there's WordPress having all the modules deployed under the public folder and then access to them gets you direct "access" to the code, so all you need to find is one ignorant developer's one bad module and scan the world for all installations of it to hack a big pile of websites. This is the legacy PHP has given us.
Add to that the fact that the language has countless really bad design choices, that simply waste your brain power in useless things - does the function have underscores, does it take haystack before needle, and other such questions constantly waste your time when they could be standardized, and the language community simply refuses to fix their countless past mistakes keeping "backwards compatibility" as some sort of gold standard that must never be sacrificed.
In short: Yes, you can produce code that is better than WordPress in PHP, but you can never be a good programmer while using PHP - you waste too much of your time on things other than producing good code.
Everyone really should read this if they haven't eev.ee/blog/2012/04/09/php-a-fract...
In order to help junior devs grow you have to let them struggle. You can't give them all the answers, they have to spend time in the trenches finding the answers themselves with a little guidance.
I often recognized that Juniors “just” lack confidence, not knowledge.
They know the answer but are not sure if it’s the right one.
One step away from being a Junior is to just do it. Better ask for forgiveness than for permission.
I've been in the field of coding in general for around 4 years. Even though I know I have worked really hard, I've never done anything professional and I fear I will do horrible with an actual job.
Never done anything besides experimenting and personal projects. And I constantly feel like a Junior dev that has no clue what they're doing.
Especially since I've been focusing in the security aspects of things, I feel like I have barely touched the surface of things.
First off, kudos to you for focusing on the security aspects of things! That puts you ahead of a lot of others.
I have been coding professionally for 6 years, and let me tell you, I have plenty of days still where I have no clue what I am doing!
The tech world moves so fast and evolves so quickly that even those who have been in the field for years have to constantly learn new things. Because we are all constantly learning, we are all juniors or newbies at one thing or another. The key is you have to become OK with being uncomfortable and know that it's just part of being a dev.
I also bet because you have done lots of experimenting and personal projects you are probably pretty self sufficient and scrappy which are both GREAT characteristics to have as a dev. Don't be intimidated by the word "professional", give it a shot and apply for a job! If you want some more motivation read this post my coworker wrote.
I Am Not A Real Programmer
Andy George ・ Jan 25 ・ 3 min read
Thanks for the wonderful post. You write great articles BTW.
There are a lot of talented people on here.
It's nice seeing everyone trying to get together.
"Better ask for forgiveness than for permission.", nicely put love it!
Something I do when mentoring a junior developer is to tell them that if they get stuck on something, they should try and solve it by themselves for fifteen minutes (adjust the duration depending on the situation) before they ask for help. But if they’re still stuck on the same thing after these fifteen minutes, they have to ask for help.
I found it has many benefits. First, it makes it clear that it’s OK to ask for help. Second, it sets a balance between them interrupting you too often (which is frustrating to you) and not making much progress (which is frustrating to them, which is worse). Often, when they finally come for help, even though they’ll feel that they were just stuck for 15 minutes, in reality they will have come close to the solution, and you can show them how they almost solved it.
Maybe too unpopular but here it is:
Apple's Macs are way overrated for software development and not worth the money (at least for me) given the hardware specs of their computers. I feel like I only have to use one because they force developers to compile iOS apps on a Mac. Some people say they like its terminal because it's better, and they are correct that out of the box its terminal is very good, but you could set up your terminal on Windows or Linux to work like the Mac one, saving you hundreds of dollars and getting much better hardware.
I don't understand how Apple could've thought it a good idea to remove the escape key and ship without at least one USB-A port.
I've used Macs for 10 years because I can't stand Windows, and I need some applications that aren't available for Linux. Pretty sure my next mac will be a hackintosh.
I like using them because I find them to be a sweet spot between the freedom of Linux and the maintenance Windows needs. But I can totally see your point.
This probably counts as another unpopular opinion, but Windows (10 with WSL and an X Server installed) isn't that bad.
Exactly this, the reason I am using Mac right now, and boy it feels good!
I've got a 2015 MacBook at home that I love developing with. But I wouldn't want to buy a current model, not with all the keyboard issues and lack of anything-but-USB-C ports. Year after year, Apple feels more like a luxury/lifestyle brand than a technology company.
Kudos to the ternary operator! I greatly appreciate your compactness from time to time! And don’t let those bullies tell you you’re hard to read. You are fine just the way you are. W?t:f.
Key phrase: time-to-time
Don't mind ternary operators for simple cases but once someone starts putting multiple function calls in each part of the operator, it makes it difficult to debug.
Very much this. If you have more than one ternary operation in a statement, you need to not use any.
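For example (hypothetical grading logic, in TypeScript): a single ternary reads fine, while the nested version gives you nowhere to set a breakpoint or log an intermediate value.

```typescript
// One condition, one expression: fine.
const count = 3;
const label = count === 1 ? "item" : "items";

// Nested ternaries are hard to step through in a debugger:
const grade1 = (score: number): string =>
  score >= 90 ? "A" : score >= 75 ? "B" : score >= 60 ? "C" : "F";

// The same logic as plain branches is easier to read and to debug:
const grade2 = (score: number): string => {
  if (score >= 90) return "A";
  if (score >= 75) return "B";
  if (score >= 60) return "C";
  return "F";
};

console.log(label, grade1(82), grade2(82)); // "items B B"
```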
The main argument against it seems to be "junior devs won't understand it". I think the answer to that is that they should learn the language.
Yeah sometimes a deep dive into the language will clear up most problems you have as a Jr.
lol
Not sure if unpopular but:
The term "Impostor Syndrome" is way too overused.
What people usually feel is not I.S., but they are simply realizing how little they know.
Which is totally fine and also important.
You can be a non-master in a field and still get paid to do it.
It doesn't mean your boss/followers/mom/dad thinks you're much better than you actually are.
I just hope folks don't take I.S. as an excuse to think they are good at something when they're not.
I agree on some level with this sentiment. I can't speak for others, but for myself I often struggle with recognizing whether I'm suffering from Impostor Syndrome or if I'm just recognizing how little I actually know about something the more I learn about it. The more I learn, the more I realize how little I actually know and how much more there is to learn. I suppose this is the Dunning-Kruger effect in action!
Ouch no responses, you hit a nerve :D
Not sure, maybe the post simply has too many replies. Or maybe it‘s not an unpopular opinion, therefore it‘s not a good reply 😬
A person thinking they have impostor syndrome to feel good about themselves by definition can't have impostor syndrome 😅
It is a debilitating trench from which the climb out is very difficult. All the people I know that I consider to truly have impostor syndrome are excellent at what they do and what they understand, their brains are just not able to accept their victories.
Inheritance is 99.9% of the time (e.g. always...) the wrong decision.
Inheritance, after-all, is one of the strongest forms of coupling you can introduce into your code ;)
There's usually a way to use composition to achieve the same thing.
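A small TypeScript sketch of that trade-off, with made-up Report/Exporter names: the composed version depends only on the one capability it needs and can be handed a fake in tests.

```typescript
// Inheritance couples the report to everything Exporter does, forever.
class Exporter {
  export(data: string): string {
    return data.toUpperCase();
  }
}
class ReportViaInheritance extends Exporter {}

// Composition: the report depends only on the capability it needs,
// and the collaborator can be swapped out or faked in a test.
interface Exports {
  export(data: string): string;
}

class Report {
  constructor(private readonly exporter: Exports) {}
  publish(data: string): string {
    return this.exporter.export(data);
  }
}

const report = new Report(new Exporter());
console.log(report.publish("quarterly numbers"));                 // QUARTERLY NUMBERS
console.log(new ReportViaInheritance().export("same result"));    // SAME RESULT
```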
Yeah, I kind of wish that hadn’t been such a big part of my formal education.
I only did some CS and so much of it revolved around inheritance in Java that I found myself looking for opportunities to use inheritance in the wrong places.
I feel you there. Been there done that 😋
James, didn't you do a Composition over inheritance post?
I don't think so... but it's something I've had in my head to do someday!
MPJ has a video on it. Very good imo 😀
youtu.be/wfMtDGfHWpA
Agreed.
Composition is 99% of the time a better choice :D
The Template pattern happens a lot more than just 0.1%
Agile and Scrum are dumb. Daily stand-ups, planning poker, scrum masters and all that other stuff is a waste of time, money and office space. So far I'm yet to see this pseudo process help any team that is not good in the first place to get and perform better.
Also Agile is the worst misnomer in the software dev world. There's nothing agile about Agile.
What I "love" is when you're in a retrospective and you're asked for ideas that would make things better, but when you state your idea you're shouted down because "that's not agile".
Also, the idea that the amount of effort required is equal amongst developers is nonsense.
What Agile are you referring to? The one from the Agile Manifesto? Or the one most companies claim to practice?
Since I'm speaking from experience, I'm guessing it's the latter. But who cares about some unicorn in the sky if all we get is the real thing?
One of the first things which goes out of the windows with most companies which adopt a Scrum way of working is the agile part of it. But there are companies which do adhere to the agile manifesto.
Daily stand-ups serve the purpose of putting the team on sync. If your team is 3 people in size, then they are ridiculous.
Planning Poker, if done correctly serves a very important purpose as I explain here:
Explaining Scrum story points
Adrián Norte
Real Scrum Masters (not a person who used to have the title of project manager) serve the purpose of helping the team organize who does what and taking care of any lack of clarity on the stories.
Scrum is the name that Agile receives when applied to software development. Agile is getting feedback as quickly as possible about the decisions you make. I feel that if anyone works using the alternative, waterfall, then that person is the worst engineer they can be.
From my experience a good team keeps themselves in sync naturally. There are tickets/issues, Slack, email, old fashioned p2p talking. There weren't many stand-ups so far, where I learned something interesting or got synced up. Usually it's just a reiteration of the stuff that I already know or something I'll get to know later during the day anyway. People just say what they did or going to do. If it's interesting for me, I already know it, otherwise I don't need to hear it.
I think the team which lacks clarity in stories doesn't need a scrum master, but rather learn more about the product they are building and talk more to their teammates. You don't need an extra freeloader on the team to help with that.
Agile and waterfall are not the only alternatives. I don't have a name for it, but another approach would be a "natural organic team approach" where everyone just gets their own and the common work done, and where communication and planning happen naturally. In the right setting it just works. I've worked at a company, believe it or not, where we had about 40 people working on the product and there was not a single meeting dedicated to project management. We did have meetings, but they were extremely rare, like one in 3 months, usually to announce something serious - a new future project, or that we were moving to a new building - but never to decide who works on which task. And in that company I, and the whole team, were amazingly productive, much more productive than in any other agile or process-less company I worked for before or after.
That is the point. You want a reproducible system to make any team productive, not leave it to chance.
IMO introducing this type of agile process IS leaving it to chance, as it has no influence on the result. Something else has to be done to improve things.
They're big business now and command enormous groups and conferences. No matter their merit, they're not going anywhere because people out there sell agile to companies and teams for a living.
Hallelujah for number 2. Tell it to Silicon Valley VCs 😂
What do you mean by middleware frameworks?
I wish they'd listen to me!
By middleware frameworks I mean things like React, Vue, etc., that are a javascript/front-end interface for the backend; they definitely have a lot of great uses, but I think there's starting to be this idea that you have to have one, when most sites actually don't need one.
:-(
Yeah I agree! I just wrote this comment about classic web apps with server rendered templates:
I see them often ignored in the context of new web apps.
It's like people don't even try anymore, they go straight to SPAs even if they don't really need to...
Even some articles about web development in general tend to set SPAs as a given and "older" MPAs as a deprecated alternative.
I think it goes in the same direction as yours...
Damn, I agree with both big time!
JavaScript is the greatest language of all time!
Have you tried any others? 😎
No, seriously, I’d like to know why you think so. Maybe you could explain why.
I started with basic on the C64. Then Amiga Basic. QBASIC. Cobol. C. C++. Perl. JavaScript. ASP. Visual Basic. VB.net C#. PHP.
Took a Haskell course but haven't done anything with it.
So I guess... Yes.
I’ve used a couple of those languages as well and it is obviously a very personal opinion.
So again:
I am honestly interested.
The community. Npmjs, etc. JavaScript has the largest community.
don't need any tools to get started. Can run in just a browser, which everyone has.
easy to share your programs. Send people a URL and they can see your code run.
runs everywhere. runs in the most popular app that everyone has, the web browser. Runs server side. Can be used to build mobile apps and more.
flexible. You want OOP or FP. You can program however you like.
backwards compatibility. The code you write today will still run in years to come.
And probably more I can't think of right now.
Cheers!
Thanks!
I would say: good points. So it’s more about the ecosystem than about the language itself. The above is probably also true of e.g. TypeScript, isn’t it?
Because TypeScript can leverage most of what JavaScript offers, it has a huge advantage over other languages. One minor difference is that it needs a bit more tooling, as it doesn't run natively and must be compiled to JS.
Good points dude. Also, you can use it to make art with P5.js, and ML with TensorFlow.js :) that's awesome!
It's really fun to use it!
For the backend? 100% agreed (and I have tried many others). After all this experience, Node is still my number 1 choice.
see: Blub paradox
1) MVC is a terrible design pattern. The separation of concerns is absolutely backwards. It separates directly related items that would most likely be worked on together, simply because one is visual and the other is logical. Instead, we should go back to the time when code was organized by access and usage. The logical code and the visual template for a given URL should exist in the same folder, not be separated.
2) these modern "URL routers" are very slow, bulky, unnecessary, don't scale well in hardware, or scale to larger teams! It causes a "too many cooks in the kitchen" problem
MVC isn't a horrible design pattern. The trouble is that people try to implement it in things like PHP (where it can't actually be done). But in my experience, while everything isn't perfect, MVC can increase your development speed and keep things separated and clear in your head. At least it has for me.
And yes, the modern URL routers usually use regular expressions and fancy stuff to make URLs pretty. While it does make it slower, a lot of these sites aren't going for performance, they're going for development speed, and I have both built and used "URL routers".
Their goal is to implement something in the language the developer is familiar in, making it easier for them to write code.
If you were expecting 10,000+ requests a second, then yeah, you wouldn't want a "URL router".
Both of those things above are directly related for me. There are better design patterns out there that are simpler, easier to implement, easier to understand, and significantly more performant. Yes, all of these, all at the same time.
I get why these tools work for others... but the whole point was "unpopular opinions", which mine is. ;) And it is based purely on 20+ years experience as a software engineer developing software at pretty much any scale (microcontrollers to fully distributed server clusters across multiple datacenters)
As time permits, I've been documenting all of my notes on this, but it is a very lengthy process. But it essentially comes down to attempting to create a zero-boilerplate-framework. Pretty much just create a new file, have the text "hello world" in it, nothing else, and it works. Need more? Simple enough to add more. URL Routers and MVC frameworks add additional and often unnecessary extra boilerplate to get the same job done.
I truly believe our ultimate jobs as software engineers should be to simplify tasks as much as possible, not add more barriers to entry with frameworks which get in the way of simple tasks.
Hmm, that's very interesting. How would you go about making a framework that simple, but flexible at the same time?
If you're working a project like this, I'd love to see it.
True, but the best routers should compile those regexps to have near zero overhead ✌🏾
Oh? You can compile regular expressions ... ?
Sure, for example:
Basically you turn the regexp into a cached internal object that has very little overhead during the pattern matching.
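Something like this, in TypeScript terms (hypothetical route pattern): the expression object is built once and reused, instead of being rebuilt for every request.

```typescript
// Built once, reused on every call: the pattern is parsed a single time.
const USER_ROUTE = /^\/users\/(\d+)$/;

function matchUserRoute(path: string): number | null {
  const m = USER_ROUTE.exec(path);
  return m ? Number(m[1]) : null;
}

console.log(matchUserRoute("/users/42")); // 42
console.log(matchUserRoute("/about"));    // null

// The slow version would rebuild the object inside the hot path instead:
// const m = new RegExp("^/users/(\\d+)$").exec(path);
```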
That's neat. Thanks! I didn't know you could do that. :D
It's very useful if you have to match a regexp in a loop for example. You only construct it once :)
All design patterns are terrible.... for problems they weren't meant to solve. I think a more general issue here is people; we learn a new thing and want to apply it everywhere.
The verbosity of Java doesn’t hurt a bit.
...if you have a proper IDE.
I recently had to fix a catastrophe of an Android application, on a tight schedule, with little Android programming experience and a rusty Java, and I agree: what’s the problem with having to instantiate an anonymous class from an interface to implement an event listener, when you just have to type the first three characters of the listener method to have the IDE write the whole thing? And when you read the code, your brain will just skip from setOnClickListener to onClick anyway. In the end, the help you can get from the IDE because you’re using a proper language like Java trumps the verbosity.
And that was in Android Studio: it’s a proper IDE, but it’s not that good.
Oh, I‘ve got another one: Eclipse works fine for me. :-)
😱...🤗
Right.
A left curly bracket that marks the start of a code block should be placed on a new line.
Only in Python.
This is the best answer
triggered imgur.com/gallery/zC0k1Qp
heresy!
Why?
I just find it utterly ugly but it was more of a joke.
That's a brave one!
fite me
You don't have that privilege in Go. 😂
😥
curly brackets are syntax trash, they are for the compiler...
humans use indentation
Languages that rely on indentation scare the bejeezus out of me! 😥
Managers in software with short or no coding history are the worst.
Some of my best managers were non-coders.
I find the worst managers are those who used to code, but they weren't very good and are still bitter about it.
This is what I kind of wanted to say with "short coding history", because usually someone who is not very good at coding tries to switch roles pretty soon.
Not sure this is unpopular, but a good one nonetheless
Tabs over spaces
My unpopular opinion is that this really doesn't matter at all and you should have something that formats your code for you.
Underrated comment
I use tabs for indentation, and spaces for alignment. This way, I can switch tab size and still have everything align as it should.
SO YOU’RE THAT GUY.
And so began the war :)
File size is smaller too!
React is massively overused, misused, and over-promoted
Thank you for saying that. React is a terrific tool with terrific programmers around it but I feel everything about it is over hyped.
The counter argument for my previous statement: if it generates this much enthusiasm from its users, they have definitely done something right.
I can't just feel the groove. To each their own, when I went through Vue's docs I knew it was the tool for me, when I went through React docs I wanted to close the tab and go do something else :D
What bothers me, though, is how some people (definitely not the fault of the core developers) market it as the one true ring that will give you invisibility and super powers. The counter argument: I'm this close to running naked in the street shouting for the second coming in the form of Rust and WebAssembly, so I understand the supercharging effect of discovering tools that fit in your brain ;)
There's also the marketing aspect to consider: once a large enough pool of companies/people adopt a tool, a lot of others follow suit just because it's popular, and if the tool is good enough, it sticks.
Nooooo, React is great!
And so is the hydraulic press I bought at Christmas to deal with that bag of walnuts.
Preach!
1) That unit tests are believed to be necessary to create good software is a damning indictment of both industry and academia. Decades have been spent researching things like formal methods and static analysis and unit tests are the best we can do?
2) Having a minimum code coverage requirement leads to a bunch of tests that do little more than test the VM/runtime, not the code that's supposed to be being tested.
3) Having unit tests is nowhere near as useful for refactoring as people claim.
4) Unit testing became fashionable only because people started using Ruby on Rails and they needed to make up for the lack of static types and billable hours (1/4 of the time compared to java meant a 1/4 of the amount of pay).
To a very large extent I agree with this. Unit tests are a horrible replacement for a lack of static typing. That doesn't mean they aren't useful, but if you're using unit tests to prevent breakages during refactoring, do yourself a favor and just learn a freaking statically typed language.
That is, learn literally anything other than JavaScript.
You should only unit test units which are worthwhile to test. No need to test setters and getters unless they contain business logic. Requiring a minimum coverage is bad. But aiming for a minimum isn't.
You're probably doing tests wrong, or I'm doing something you wouldn't call tests. That's the only logical explanation I can think of.
Out of interest, what language(s) are you using?
PHP
Unit testing was in wide use LONG before Rails.
I didn't say unit testing didn't exist. I said it wasn't fashionable. No one in 2005 would have been accused of not being a professional if they didn't do unit testing.
Having been there, I disagree, and submit that our contexts are probably different.
I was there too. I'm so old I remember when unit tests would have been called white box tests and looked at as an inferior form of test as you can see the code.
I can even remember being told how in the future all we would need to write is the formal spec and the computer would work out what the executable code should be. Needless to say that future never arrived.
There is no justification for using Yoda Conditions in PHP or JavaScript.
In those languages, you should always use the === operator anyway, so if you can train yourself to always use ===, then you don’t need a hard-to-read safeguard against writing = instead of == in an if statement. And if you can’t train yourself to always use ===, but you can train yourself to write Yoda Conditions, then you’re weird. Also, Yoda Conditions for any other operator (!==, <, >) are evil.
I'd expand that to a lot of other languages, even C-based ones. Any compiler worth its salt can warn on "assignment in a boolean context" these days.
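A small illustration of the point, in TypeScript:

```typescript
const role: string = "admin";

// Yoda style: guards against accidental assignment, but reads backwards.
if ("admin" === role) {
  // ...
}

// With strict equality, and a linter or compiler flagging assignments in
// conditions, the natural order is just as safe and easier to read.
if (role === "admin") {
  // ...
}

// The bug the Yoda style was invented to catch; with a const this line
// does not even compile:
// if (role = "admin") { ... }
```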
Type coercion is fine if you know what you're doing.
Type coercion is fine.
Implicit type coercion is the worst feature of all.
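A quick illustration of the difference, in TypeScript, with a value typed as any as it might arrive from a form or JSON:

```typescript
// With a value typed as any, the engine coerces silently and the compiler cannot help.
const fromForm: any = "5";
console.log(fromForm - 2);   // 3    ("5" coerced to a number)
console.log(fromForm + 2);   // "52" (2 coerced to a string)
console.log(fromForm == 5);  // true (loose equality coerces)

// Making the conversion explicit keeps the surprise out of the code.
const n = Number(fromForm);
console.log(n - 2);          // 3
console.log(n === 5);        // true
```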
DRY does more harm than good.
(Yes, that is deliberately bait-y, there's a lot more nuance to it in reality.)
For everything that goes beyond one code base, I’m in. DRY at module, service or system level has a lot of serious disadvantages, the biggest of them is creating dependencies.
100% dependencies are the worst!
This is a safe space for being a bit bait-y as long as everyone is self aware enough. 😁
I'm kind of hoping at least one of these isn't unpopular ...
I still haven't seen Entity Framework add value
I'd say the same thing about Hibernate/JPA on the java side. If anything, they end up causing more problems than they solve.
Not unpopular, at least with me.
I also like VS (best IDE to my mind) and WPF (way better than WinForms were).
Ligatures are weird and don't make code easier to read.
You don't need to use bleeding edge/new shiny code features to have a good time. Especially for new operators when the existing method wasn't even difficult or time consuming.
For languages with braces: while they can be optional for one-line if statements and the like, please don't skip them (see the sketch below).
SASS isn't the solution to CSS problems. Sure it can do some different stuff but just switching CSS to SASS doesn't solve poorly written/structured CSS.
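The sketch promised above, for the braces point: without braces only the first statement is guarded, and the indentation quietly lies about it.

```typescript
let isAdmin = false;
const looksLoggedIn = true;

// Without braces, only the first statement is conditional; the second always runs.
if (looksLoggedIn)
  console.log("welcome back");
  isAdmin = true; // the indentation lies: this line is unconditional

// With braces, the scope of the condition is explicit.
if (looksLoggedIn) {
  console.log("welcome back");
  isAdmin = true;
}

console.log(isAdmin); // true either way here, but only the braced version says what it means
```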
Intellectually, they're just another glyph; no different than any other letter. I can't read Russian, that doesn't make it unreadable. Everything is hard to read until you learn to read it.
Emotionally, yeah, I don't use them either. Hipster nonsense. ;-)
True, they are just another glyph, but even if I were used to reading them, they wouldn't make code any easier; they are just another way of representing something that wasn't even a problem.
It is kinda like a solution in search of a problem.
1) Object Oriented Programming is overrated and often not the most fitting choice.
2) Code does not matter, solving problems does.
3) We all love our tools (IDE, linter...), but the fact that we keep looking for new ones, or for ways to make them better, is proof that in the end they all suck one way or another.
4) Git is not that great a tool. We use it mostly because it's the biggest at doing what it does.
5) JavaScript is ultimately misunderstood: it's not an OOP language, the core of its object model is the prototype, and a lot of people try to make it otherwise.
6) Am I being too much?
Anyway, it's not written in stone and only represents the feelings of my present self (I'll be happy to discuss any point and even change my mind).
1) Depends a lot on what your code aims for. Is it an MVP that will be erased after the startup gets funding, or closes for lack of funding? Yeah, then OOP or not doesn't matter.
2) Solving problems with the most optimal solution possible at the minimum cost possible. Taking into account that 95% (or more) of the cost of software is in the maintenance part (add this or that, change X and fix Y), and that OOP is the easiest way for most humans to work, this also clashes with point 1.
3) Is more a sign of how little we know about the art of solving complex abstract problems.
4) totally agree. And I would add that most of the problems it solves were introduced thanks to it.
5) What is an OOP language? A language that uses objects. What is an object? The representation in code of an abstract entity from a problem we are solving. Therefore, for me, there is almost no language that isn't OOP. Most people say a language is OOP because it uses classes, but I find that way too simplistic.
6) Oh, no! please continue.
Words to live by.
1) I totally agree. What I was trying to say (and it's not only OOP in fact, but a lot of stuff) is that we should use the right tool for the right job and not the shiniest tool just because everyone loves it.
2) I was trying to express that the beauty of your code, and the absolute awesomeness we can find in it, is not as relevant as the problem you seek to solve.
3) That's an interesting thought, I'll dwell on it someday.
4) Some other tools (like pijul) deserve a look, to broaden the VCS landscape a little.
5) I tend to believe that OOP is a paradigm revolving around objects. It's not just that objects exist in it, but that everything is done to promote them. For instance, Java urges you to manage things through objects. Objects become de facto the "orientation" taken by the language. Other paradigms, like functional or ECS, do not focus on objects as a "core" concept of programming. As such, objects are just a convenient way to pack data together in those paradigms, rather than being the central gear. At least that's the way I see it.
But my vision may be biased or even utterly wrong.
6) Ok...
7) Top 10 are overrated, top 7 is the way to go.
5) JavaScript is an OOP language. Its object system comes from Self. Classes and objects aren't the point of object oriented languages. The key idea (at least according to Alan Kay, who we can probably take as an authority on the subject) was that you could summon mathematical structures into existence with almost nothing if you can orchestrate actors, each implementing their own language. Classes and methods on them were the particular implementation they arrived at for Smalltalk.
It seems that I effectively used the wrong words here. JavaScript indeed uses objects; the object pattern is prototyping. But I still think that JavaScript is not "in essence" an OOP language but a language that allows it: the language does not favour the use of objects, and "this" does not refer to an object the same way as in other languages (but to the prototype it's currently called on), which does not make things easy.
One could object that functions are kind of "objects" themselves.
I realize though that my knowledge on this particular subject is not that strong, so is my position on it. I'll check the work of Alan Kay that you mention earlier. Thanks a lot.
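As a side note for anyone following along, a minimal sketch of what prototype-based objects look like without any class syntax; the animal/rabbit names are made up:

```typescript
// No class declaration: one object simply delegates to another.
const animal = {
  describe(this: { name: string }): string {
    return `I am ${this.name}`;
  },
};

// rabbit gets animal as its prototype; property lookups fall through to it.
const rabbit = Object.create(animal) as { name: string; describe(): string };
rabbit.name = "rabbit";

console.log(rabbit.describe());                        // "I am rabbit"
console.log(Object.getPrototypeOf(rabbit) === animal); // true
```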
Clarification of my mention of Alan Kay: he invented object oriented programming along with the rest of the Smalltalk group at Xerox PARC.
"this" in JavaScript behaves the way it does in any other prototype-based language (as I mentioned before, see Self). Self and its lineage were part of an attempt to make an even purer object oriented language than Smalltalk. In Smalltalk you had objects... but you also had classes, which were somehow different. They were objects, but were part of a totally different inheritance hierarchy. Self asked if the classes were really necessary, or if they were baggage carried over from earlier programming experience. The answer at this point is quite clear. Classes are unnecessary, but it's really hard to get people past the baggage and comfortable with conjuring objects directly.
For completeness, there's a third branch of object oriented programming represented by Common Lisp's CLOS, based on structs and generic functions with multiple dispatch. It turns out to be an even more powerful approach, and "this" isn't a thing in that system at all. That branch, if you take it seriously and work out the mathematics, leads to Stepanov's work on generic programming and the C++ Standard Template Library.
Clean code is not fast code!!!!
Let me explain more: clean code can be fast, and it is good to have code that is easily understandable, but my point is that you could have messier code that is faster and fix its readability issues with clear, well-thought-out comments.
So just because a piece of code is "messy" does not mean your cleaner code is faster.
Speaking from the interpreter side of things
Generally, it would be good to explain your statement a bit.
I would say this is not always true as good optimizing compilers can turn clean code (whatever exactly you mean by that) into very fast code.
But it definitely can be true in certain situations.
However, the question is: does it matter? My first goal is always to have maintainable and understandable code. I optimize only when I need to. This might result in less readable code. But I only do it if there is a serious performance problem or the requirements are not met.
You're not ALWAYS going to maintain your code. The better the code you write, the less the next person has to struggle.
JavaScript is a terrible programming language.
I remember when that was a popular opinion.
Maybe not unpopular, but programming interviews are a joke and hardly correlate to actually hiring good people
Big companies are important to our ecosystem and help by giving stable jobs, good pay, and reasonable benefits
1: true. I don't know about America or Europe, but Ukrainian/Russian programming interviews are often a clown fiesta. An interviewer can ask you to build a binary search tree and count all the methods of the Object class in Java, while in reality you are going to code HTML templates and write CSS lol.
Software/code is a liability, solutions to problems and your bank of knowledge are the real assets.
Every year you or your employer spends money maintaining an aging codebase. Bugs, vulnerabilities, support tickets. Strictly (and realistically) speaking, code would be in the "liability" column in an accounting book.
On the other hand, you or your employer has a large amount of people who are subject matter experts on your product. If you put every one of those people into a room, and magically herded those cats, you could probably come up with a better product. They know why a specific quirk exists, they know why one piece of code has to be horribly complex.
I've learned two things from this. Invest in people, not code. I hate my code, but I love what it does.
Any language using curly brackets for code blocks. Simply because with a new line the left and right curly brackets align, oh the beauty of alignment!
But your opinion must be even more unpopular then: preferring the brace on a new line in one language but not in another.
Git branches are only useful on public repos where you have contributors from outside the team. Using them in any other scenario makes you a bad engineer by delaying integration and therefore increasing the cost of production.
Without branches, how do you work in team?
davefarley.net/?p=247
He explains it better than me.
I really dislike JS and how it's spreading everywhere. Everyone seems to be building stuff on top of Node or Electron, leading to bloated, RAM-consuming programs that do simple things that don't justify the resource usage.
I believe there's also the factor of a high influx of bad devs, just like what used to happen with PHP some years ago.
I would use the IRC gateway for Slack if I could. I don't need anything better for chats. I can chat without GIFs.
I actually try not to use Electron apps, and look for native alternatives.
Squarespace is good enough for most small business websites and it doesn't need maintenance. That being said, it is unlikely someone without web savvy will be able to make a good website with Squarespace.
PHP is a good enough tool for most sites.
jQuery is still an alright option in 2019.
Learning a framework before learning the underlying language is fine.
Most websites never need to scale.
Project management tools are little more than security blankets and don't actually solve project management problems.
Thanks for some love for jQuery. Not every website is an SPA; it's okay to use jQuery and its tons of great plugins for simple JavaScript tasks.
jQuery's API really lends itself to developer productivity. In the past year I've moved away from using jQuery on small projects because I don't want to load the extra kilobytes, but I don't think I would have made it through my first year of programming without being able to leverage how simple jQuery makes interactivity.
I hope that more developers recognize how much power it gives beginner developers before they shun the library in favor of tools like React and Vue. Both of those tools come with a fair share of SEO problems, and client-side rendering can kill performance on slow connections and inexpensive phones. jQuery largely doesn't have these problems.
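A rough sketch of that ergonomics argument (TypeScript, assuming jQuery and its typings are installed; the selectors and the "hidden" class are made up for illustration):

import $ from "jquery";

// jQuery: one line to wire up the interactivity a beginner actually wants.
$("#menu-toggle").on("click", () => $("#menu").slideToggle());

// The vanilla equivalent is not hard, but it is noticeably noisier
// for someone in their first year of programming.
document.querySelector("#menu-toggle")?.addEventListener("click", () => {
  document.querySelector("#menu")?.classList.toggle("hidden");
});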
Programming is not an art.
Programming is an exact science!
Definitely not an exact science imho :D
Maybe applied with formal methods like TLA+ it could become one (I don't know enough about that) but the very fact that I can write a program with a few lines of code, run it twice, and have different unreproducible results invalidates the definition of "exact science".
Doesn't it?
Well,
Back at my UNI, studying CS, I was under the "exact sciences and technology" department.
Make of it what you want.
The fact that software development is so often referred to as "art" applies mostly (as it should) to UI (in my humble opinion, for marketing purposes). UI makes up maybe 1%-10% of a typical project's codebase.
Generalizing such references leads to phenomena such as "terminology inflation", the "tech bubble" and other characterizations most relevant to a marketing and/or financial context rather than software development, which I'm pretty sure most of us want nothing to do with.
Now, if you can truly get different and/or unpredictable outcomes from the same application/script, then please, my good sir, do go public so the world can witness your awesomeness.
Hi Michael,
I don't think computer science (which is indeed an exact science) and programming are the same thing, that's why I disagreed. If you had written "CS is an exact science" I wouldn't have replied. Hope you can understand my point.
I was referring to non deterministic concurrency, it's not hard to write a program that has different outcomes.
You can hardly classify a bug as the rule, or draw conclusions from exceptions for that matter.
I can't see how someone can program without practicing cs (knowingly or otherwise, good or bad is another issue).
I can't see how error intolerant systems (eg: medical equipment) can be built upon anything other than an exact science.
Meme reads: "Change my mind" not "Change YOUR mind" :P
First and foremost: (EDIT: I'm deleting this, but Varnish thinks otherwise...)
Software is the use of language.
Language is an art.
Software is the product of a programming language applied to specific requirements.
Programming languages can be applied artistically (as any means of expression can be perceived in an artistic context), should they though?
What is your primary goal of writing software, self expression? or building the next killer app?
If it's the latter, I'd stick with what I know works consistently and dependably and leave the artistic pondering for my CSS framework of choice and the marketing dept.
Now that I think of it, even natural languages are not art; what they really are is the sum of their rule set (spelling, grammar, syntax). Their subsequent use isn't necessarily part of their nature (French is the exception that proves the rule; according to a quote from The Matrix, "Cursing in French is like wiping your ass with silk").
For a final point I'll just say this: If programming language is the means of artistic expression, explain Perl.
Peace!
Also, while i'm at it...
Emacs is to editors the next best thing since sliced bread!
Your software skills do NOT matter; how you promote those skills matters a lot more.
You can replace the word "software" with "business" as well.
My users' experience is far more important than my development experience. The industry has forgotten. Evolution of apps and the web has stalled while ever changing development processes devour teams. New ways of thinking about software languish under the complete rejection of risk from first user interview to deployment.
Software used to have a personality that matched the platform it ran on. Mac apps felt like Mac apps, Windows apps felt like Windows apps, iOS and Android apps felt like they were at home there. My unpopular opinion is web development, and its bastard child "cross platform web apps" have killed the personality behind software when the majority of development went into building for a browser instead of a platform.
Dependency injection frameworks should be avoided.
I wholeheartedly disagree with this statement. Well done! 😉
I'll bite :)
Just the frameworks, or DI as a whole?
Just the frameworks. I think I should write about this as soon as I find the time.
Server-rendered templates now, server-rendered templates forever!
I see them often ignored in the context of new web apps.
It's like people don't even try anymore, they go straight to SPAs even if they don't really need to...
Even some articles about web development in general tend to set SPAs as a given and "older" MPAs as a deprecated alternative.
jQuery is fabulous
1 - Totally agree.
2 - Price, I can have a couple of powerful dedicated servers for 150€ per month and make a solid database setup with replication and backups in there that can handle thousands of connections per minute while the same on AWS would cost me around 900€ per month.
3 - Again cost. The recovery time on a crash and the ability to quickly move (a couple of minutes) everything to another machine are priceless.
Accessibility is overrated while internationalisation is underrated.
Translation of docs is a waste of time: English is still a requirement for developers so we shouldn't give any fake impressions to beginners.
I've got a few.
1. Windows is the best environment for developers to work in.
Linux isn't stable enough. I always end up spending a not-insignificant amount of my time figuring out why my graphics drivers stopped working, why some application isn't launching, or how to resolve a random new dependency on a library my distro isn't providing, and that's after spending a massive amount of time just getting a distro running and fixing all the compatibility issues with my reasonably standard hardware.
Mac often breaks standards that have been around for eons without a good reason, and doesn't provide you with the basic means to make your life comfortable, and Apple does not provide you with a sensible repair service that would be required for a machine you need to work with. I cannot afford my work machine being out for repairs at an unknown facility for weeks, so I pay for on-site support packages from e.g. Lenovo and Dell.
2. Popularity is a bad sign.
If something is too popular, you will likely want to evaluate it very carefully before taking it into use. PHP was popular, for the completely wrong reasons. Ubuntu is popular, yet for me it's one of the distros with the worst success rate of getting a working install just by launching the install image, and being Debian based means APT which is the worst option out there for a package manager. Java is very popular, because the wrong people are making the decisions.
Typically how I see something being "the most popular x" is that it means it's been selected by people who don't know what they're doing and you don't want to repeat their mistakes. Research for yourself, maybe you'll end up with the same choice, but probably not.
3. Paid solutions are (often) better than free solutions.
People often flock to a "free" option for something. "Free operating system" means nobody is motivated to build a coherent experience that (mostly) just works out of the box. "Free editor" means sacrificing your performance to use Notepad++ instead of JetBrains IDEs. "Free database/monitoring solution/similar" means you need to figure out how to host it yourself - this is not in your expertise and if you can afford it, it's better to buy a paid service and focus on what is. I just don't have the time to use a "free" solution if a good paid alternative exists.
I pay for Windows, JetBrains IDEs, Photoshop, Slack, virus scanners, password managers, and many other things where free options would exist.
4. Git sucks.
Git is one of these examples of popularity not being a good sign. It's popular because people don't think, because "everyone uses Git". Yes, Git has a few decent ideas, but nothing unique and actually worth a damn.
What you would want from your version control system is that it follows the DVCS workflow for sure, that it works just as well for all your developers regardless of their platform of choice, and that it works with you when you want to extend things.
Mercurial does all these things incredibly well, there's a few other options out there that do a pretty good job at it as well, but Git really is not among them.
Git is headed by a kernel developer, a person who has no understanding of a "human interface", and doesn't care about other people's problems, and it shows. If it runs on Linux, that's all that is required. The hook system is near impossible to get to work on Windows properly without some incredibly bad and broken hacks (pre-commit somehow achieves a fairly functional system but I don't yet know how). It is not particularly extensible, everything is based on a really broken implementation of some Unix philosophies of magic filenames and putting binaries in specific locations. Git's commands are nearly incomprehensible, and there's a lot of situations where you just have no idea if your changes are going to be deleted or what will happen.
Due to Git's popularity, the only decent Mercurial hosting, BitBucket, is now being terminated, and the world will become even more dominated by Git, which is clearly the worse option of the two, and there are other options out there that are probably even better for your use case.
5. If you don't use a GUI for your version control, you're doing it wrong.
Command line utilities have their place, for sure. A version control system without a powerful CLI is not worth using. However, when you are actually working on your code, you want a powerful GUI that is showing you which changes you made, gives you quick access to full visual side-by-side diffs of the files, and allows you to spot your mistakes before you commit them.
If you do not do this, you are doing it wrong, plain and simple. I call it "blind commits" when people just do something like git add . && git commit -m "Latest work" (and often add a git push -f because that's how they roll, then make an alias of the whole thing for "efficiency" and spread it proudly among their colleagues). You should always check through your changes; you are not the programming god you thought you were, you make mistakes, you forget to finish some changes, you left that debug statement in, etc. Check it before you commit it. CLI diffs just don't give you the full picture in a clear enough manner for them to be truly useful.
5. Web and application frameworks are overrated.
Microframeworks and overall DIY solutions can often get you where you need to be faster, with fewer dependencies and a better end result. You should always pick the smallest base to build on instead of starting from Django, Electron, or some other monstrous framework.
6. RDBMSes are almost always the wrong choice.
The benefits of a relational database are basically about consistency guarantees and such, but they are impossible to implement in a manner that scales, so instead you end up with a restricting database with tons of behavior that does not help you make your software better.
7. Scrum and other processes are overrated.
You don't need a strict process of any kind if you have a competent team and good leadership. Fire the bad actors in your team, then hire better ones, and you don't need to implement Scrum, SAFe, or other such things. If your team requires strict adherence to a process to deliver, maybe you need to think of working somewhere else.
8. Not all people should be tolerated in a team.
If your teammate is wasting everyone else's time because they can't understand the documentation, it might be that your documentation is bad, or it might be that this team member is going to slow everyone down and produce little or no benefits to your team. Figure out which, and fire them if necessary. There's also other ways people are under-performing, or causing other problems with your team - get rid of them and give the rest of your team a high performing highly motivated environment to work in.
9. Your personal preferences don't matter.
You might like { on the same line as the function body, you might like it on the next line. You might like semicolons in your JavaScript, you might hate them. I know you really love to align your data structures neatly (who doesn't like to see that?). None of this matters; it's not worth your time to focus on these things. Use a code formatter whenever possible: gofmt, black, prettier, clang-format or whatever works for you. If your language does not have one yet, think of building one. If you're designing your language, make sure it has one standard and make a formatter for it. Better yet, make it refuse to run code that isn't formatted: problem solved forever.
Code formatting is a question people spend way too much time fighting about, for too little gain. Pick one way to format things automatically and enforce it via continuous integration and pre-commit hooks (e.g. pre-commit.com, or in a better system like Mercurial you can do it more easily).
10. You should think hard before you start a project.
Want to start a project? That's great! Now stop and think about it. How long do you think it will really take? Ok, now double that, because there's always complications. Do you really want to commit to that amount of work, just to start with?
Just want to try and learn something new? Great, that's a fine option too. But are you really learning, or just copy & pasting? Did you come up with a unique idea that you want to build, or are you going to just follow a tutorial? Also, are you sure you're learning the right thing? Is this something worth spending your time on?
Are you hoping to make some money with it, sell it maybe? Do you have a plan on how to do that? Do you understand marketing? Do you know how to deal with accounting? If not - start with these questions first, and only when you understand how to actually launch something successfully, start coding for it.
In many more serious cases, you also want to think about the design in advance - is concurrency a specific issue you need to deal with? How are users going to interact with it? What platforms do you need to support? Other such questions should be considered, only then can you make the right choices for your project.
There are reasons you might still want to start the project, but starting a project and not finishing it is a good way to demotivate yourself from starting another one in the future.
11. Customization is bad.
If you customize your shell and use a lot of aliases (I draw the line at something like alias ls="ls -lhaF", i.e. mostly insignificant default options; alias rm="rm -i" or rm -f is too much though), you need to put extra effort into thinking about how to help your colleague when they need help, or when using a shell on your servers, or in other environments. Even distributing your own configuration to your computers in a reproducible manner that survives reinstalls, buying new computers, and so on is a lot of work that is often just not worth it.
If you customize your IDE to work differently from your colleagues' with a bazillion plugins and such, you will be confused when trying to help each other.
If you spend a lot of time and effort personalizing your desktop or application preferences, it's gone sooner or later anyway, or you'll end up changing your mind and tweaking the personalization instead of spending time on useful things.
This is where I waver a bit myself though - because IF we had good ways to reproducibly distribute your configuration for all your applications, for your shells, and so on, then it might be a bit less of a hassle and more worth it. However, right now it's not exactly easy. There's Nix but it doesn't take care of everything.
We need to build more standardization on where software configuration lives on all OSes, and how the personalization can be securely and privately synced to a safe storage, while allowing us to share certain customizations with other people. If we achieve this, then the time would be much better spent.
But you should still not customize your shell and use excessive aliases 😄
12. You should host the things you know, but nothing else.
Not quite sure how popular this is, but it sure isn't being followed by a lot of people. Don't host your database, let someone else worry about that. Don't host your metrics collecting software, it's not your area of expertise. Host your own load balancer, host your own application containers, and anything else like that that YOU directly work with to implement your software.
You take time to build a ci/cd pipeline - sweet!
You took time to get your build green - sweet!
You did not take time to make your build faster and now it's taking up 6 GB of RAM per run and 2.5 hrs to build - not so sweet!
Given the professional jobs I've had these past 15 years or so, these are my "not popular opinions" among colleagues:
Can you guess the particular part of the Software Industry I'm in?
That one hurts...
Writing tests is meant to help regression cases and in TDD, shape the application code. Debuggers aren't meant to be replaced by tests, they help you narrow the problem down so you easily see the code and conditions of a bug. This should (but maybe it hasn't in your experience) help you get to the source of the problem faster.
Okay, granted I'm talking more about a project on which I have full control and knowledge. On other projects, especially where I'm unfamiliar and/or there are less tests, I'll use a debugger more.
Also true that in some cases I do get out a debugger, but increasingly over the years I tend to already have a test for the code in question, and often it's simply obvious what's wrong (the test has failed on an assert), so I just fix the code because I already know what's wrong.
The time I do pull out the debugger is when either a) I don't have a test (I should write one then - so less likely) or b) it's making no sense at all and my mental model and tests don't match the code.
I'm going to say that in about the past 18 months I've used a debugger three times.
Stats on project I'm working on (alone):
Hence: I prefer not to use a debugger, but rather write tests.
Bonus: Once you fix a bug by first writing a test that fails, you know when you've fixed it, you know you'll know if you ever break it again, and you have one more test.
Just out of curiosity now, what is the lines/files of code split between tests and application code?
And also curious, what programming language?
(*) Name redacted because it's new, unreleased and in development. I doubt it will see the public eye for a few years or more (if ever). It is however written in itself, because "Dog Fooding" (and heavy testing) is the only way to hope to succeed at such insanity.
FYI - The counts are pretty inaccurate, I didn't use sloc tools, just find/xargs/wc. I'd say there's about 15% of blank lines/comments/junk at a guess.
I guess we can add one more unpopular software opinion:
We do need Yet Another Programming Language!
I had a colleague who was putting the curly bracket on the same line, for compactness, but then was starting the code block with an empty line, for readability.
I used to like my brackets to match, but lint disagrees and so do most programmers. I have my principles, and if you don't like them, well ... I have others. Life is full of disappointments and sometimes it's easier just to go with the flow.
Asynchronous concurrency is a framework in disguise.
All mainstream concurrency paradigms suck.
😂
😸
I once interviewed at Oracle Cambridge; the a*-wipes think private functions should not be tested because you will only ever call the public functions.
My answer: Well, that doesn't mean we don't modify private functions. Use whatever is at your disposal to test them: for Ruby use the send method, for C# use InternalsVisibleToAttribute, for Java use reflection. Do what is required to test them.
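For what it's worth, TypeScript has an equivalent escape hatch, since "private" is only enforced at compile time. A minimal sketch of my own (the class and the flat discount are made up):

class PriceCalculator {
  // Private implementation detail we still want to pin down with a test.
  private applyDiscount(price: number): number {
    return price - 10; // knock a flat 10 off
  }

  quote(price: number): number {
    return this.applyDiscount(price) + 2; // plus a flat handling fee
  }
}

// In a test, bypass the access modifier the same way you would use
// Ruby's send or Java reflection: cast to any (or index by string key).
const calc = new PriceCalculator();
const discounted = (calc as any).applyDiscount(100);
console.assert(discounted === 90, "private discount logic should knock 10 off");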
I hate JavaScript with a passion: it's an utter Frankenstein of a language, ill-suited for everything, and used only because it's a fad. I refuse to learn it as a platform (I can read it) or work in it. I firmly believe any project built in JavaScript would be a hundredfold better in a different language.
The majority of the time it doesn't matter what programming language and tech stack you are using. The product you are building is what's important.
Classes are a poor mechanism to define types, a poor mechanism to define namespaces, and a poor mechanism to define modules.
Design patterns are mathematical models glimpsed darkly by those who aren't used to formal thought.
The practice of programming you adopt is driven more by local constraints and possibilities than by universals. All the various components that can contribute to a particular practice of programming (test driven development, various parts of agile, literate programming) are suggestions that may be useful in crafting the practice in a particular project or environment. The benefit comes from knowing what ends are trying to be achieved, engaging in the discipline seriously to try to get that end, and building expertise to know what parts become important under what conditions and constraints.
Engineering is being able to stand up in court and declare as an expert that something was reasonable practice and no liability can apply to it failing...and make it stick.
Even as a TypeScript obsessive and advocate, I have to admit it does nothing to replace 99% of your tests.
I want the TypeScript hype to burn brighter than ever, it should definitely be the only/default way to write JavaScript by hand, but please test your code.
You should aim for full coverage of module entry points by integration tests.
You should aim for a separate full coverage of implementation by unit tests.
Use Stryker.
Cast to any to shove invalid data into the module at the boundary.
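Something like this is what I mean (a hypothetical module entry point; the names are mine, not a real API):

// The types say "string", but real callers (JSON payloads, form input,
// plain JavaScript) are under no obligation to respect that.
export function formatUserName(name: string): string {
  if (typeof name !== "string") throw new TypeError("name must be a string");
  return name.trim().toUpperCase();
}

// In a test, deliberately defeat the compiler to exercise the runtime guard;
// the type checker alone would never let this call through.
const invalid = 42 as any;
let threw = false;
try {
  formatUserName(invalid);
} catch {
  threw = true;
}
console.assert(threw, "invalid input should be rejected at the boundary");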
Unpopular software opinions:
TDD only works correctly if you have an exact specification of the solution. It is not good in an agile context where everything is unclear or unknown at the beginning.
The NPM ecosystem is broken because of the massive dependency hell and breaking changes every six months.
Just because Async code is good on the client side (browser) doesn't mean it's the right approach for the server side. Even the DOM manipulation is synchronous, for good reasons. Async is like a hammer you want to apply to everything, even if it doesn't make sense in every case. People should begin to distinguish more carefully when, where and for what Async IO is better or worse suited.
People shouldn't use JPA and should just use proper JDBC/SQL. Faster, simpler and more maintainable. JPA just gives you an extra layer of complexity and the project's velocity will decrease. And it only gets worse for more complex entity graphs.
Gotcha, thanks for the response! Looks like a good solution, I agree no plugins is preferable to plugins. Do you know of any Linux alternatives?
I also find myself using session restoration regularly - but that's gonna vary pretty widely by individual I think.
Agile & Scrum are total nonsense. They're counter-productive in the end. Speaking from my own experience.
No, they can be great sometimes. Speaking from my own experience.
I won't argue with you, b/c we're all speaking from our own personal experience. I have just shared my own opinion on this topic.
Yes
I’ll admit I’ve always passionately disliked Javascript. I also have a strong preference for statically typed languages over dynamic, but always thought Javascript was the most unlikeable of the lot. Perhaps it is that I initially learnt it by osmosis, trying to get things working under both Netscape and early Internet Explorer browsers at the same time. Worrying if I put my curly brace on the same line or the following line, or if I forgot a semi-colon at some obscure, critical location. That said, ES6 has improved things a lot.
But I've always been hesitant about dynamic languages in general, preferring statically typed ones for any codebase of significant size. The 2000s were a bit discouraging for me in this regard. First everybody was loving Ruby, then it seemed that Python had overtaken it in attention. And then Node.js came along and I really felt like Bromden with a pillow, looking for a way to smother the insanity.
(As an aside, I do quite like Ruby as a language, and writing Coffeescript - the wonderful lipstick for the pig that is javascript - has given me an appreciation for the indentation feature a’la Python.)
The number one problem to my mind with using dynamic languages for a large codebase is maintainability. Refactoring a compiled, statically typed language is so much easier. There are so many potential errors that the act of compiling and type checking eliminates for you that you don't get with dynamic languages; the focus on huge amounts of unit tests always seemed like compensation for this. Add another parameter to a function, try to compile, and you find all the places in the code you need to fix. Maintaining a sizeable amount of code, or code you haven't looked at for a while, is so much easier.
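A tiny made-up example of the kind of refactoring help I mean (TypeScript; the function and call sites are hypothetical):

// Before: sendInvoice(customerId: string). After adding a parameter,
// every call site that was not updated becomes a compile-time error
// instead of a runtime surprise buried in a code path you forgot about.
function sendInvoice(customerId: string, currency: string): void {
  console.log(`Invoicing ${customerId} in ${currency}`);
}

sendInvoice("c-42", "EUR");  // fine
// sendInvoice("c-43");      // error TS2554: Expected 2 arguments, but got 1.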
I think the pendulum of opinion has started to swing back in my direction in the last five years. Running stuff in a web-browser is now a serious concern, and good hackers are working on ways to portably do good programming in anything but Javascript. From Webassembly to Elm, PureScript, Typescript and others.
Newer languages like Go and Rust are taking the nice features many have lauded over the last decade as the advantages of dynamic languages (things like good package management and less boilerplate) and baking them into static languages. So my outlook is getting rosier.
Learning different languages, frameworks and methodologies for solving coding problems helps you write better code, identify bad habits, and solve problems with your primary tools.
That's not unpopular though ahhaha :P
I've been thinking about this lately, since there have been a lot of articles coming up about Full Stack Devs and the merits of specializing.
I think specializing is great, but I also think it's great to know what's going on over the other side of the fence as well. :)
We need maybe a tenth of the software that's being written, and same for the jobs that are writing it.
It's time for a next generation Web framework to learn from and supplant Angular and React.
My unpopular opinion is to use microservices instead of big monolithic applications.
In the sense that you think apps should start as microservices?
Some of the big applications, yes.
But how do you know if the app is going to be big when you start? That's why people usually suggest to start with a monolith (or with a serverless architecture more and more lately)
Seriously speaking: I think you can/should do microservices only when you have a big organization or if you have very specific requirements.
Because microservices change everything: how you deploy, who is responsible for what, team boundaries, monitoring and observability tools, feature isolation, how you test and so on.
If you don't have vertical teams responsible for a few independent microservices, you run the risk of running a distributed monolith.
What do you think about that risk?
Well, that's why this opinion is unpopular.
I've seen many big applications (like ERPs) which are very slow and need very expensive resources for very small results. If their authors (who are pretty big companies) approached microservices, the performance of the applications would be much better. Indeed, this needs a very good and clear software architecture and a bigger team (a team which usually exists, or can easily be extended).
I guess an important point is choosing technologies, because implementing microservices in, for example, Go is much easier than on a LAMP stack. So, I know that adopting the newest technologies can sometimes be discouraging, but with the right tools microservices can be easier than they seem, and not only for the biggest companies.
WASM should kill JS.
Furthermore, we should break outdated browsers.
There is no excuse. You say a massive 2% of your visitors use those?
Well, they can install a normal browser. And these days they can do it without having admin privileges. (Which they probably do have, contrary to best practices.)
If you are Facebook-scale, you can ship different bundles to different browsers, and seamlessly degrade/hide features in the UI. The 2% represent so many people that they can pay for myriads of full-time developers for just that aspect.
But if you are a small startup and you invest too heavily into backwards-compatibility? All of your users will suffer.
"Use the right tool for the job" is horrible advice. No one deliberately uses the wrong thing, willingly. Sometimes you are forced to, but even then you KNOW it's the wrong thing.
It's something people say because they heard it; it's obviously not falsifiable, and it makes them sound wise since no one is going to disagree.
It's almost always the right tool at the point it was selected. All tools in software will look wrong (obsolete) after 5 years. The problem is, people judge decisions taken at the inception of a project/task well after all the initial conditions have changed.
My 2¢.
React was not developed to be used as an SPA library, and because of this most of the issues it has in an SPA will never be fixed. The ever-growing community around React is both helpful and harmful. I'm not going to hide the fact that I'm an Angular developer, fan and advocate, but if you take a look at Vue and how it's being developed, you see a clear difference between a community-driven framework for SPAs and a library that is really just for showing information. That's not even something new, but the fact that we keep wanting to make it into something it isn't is.
Too many developers don't go outside their comfort zone to find the right tools for the job or learn new skills. They'd rather stick with the same old stack with a strong belief it will solve anything, even if the project ends up unnecessarily complicated.
Adding to the multiple Gems (non Ruby ones, of course 😛) that I discovered in the comments here,
At least in India, recruiters should stop blindly copying terminology such as Site Reliability Engineer (SRE) from Google or DevOps unless their companies are at that scale or aiming to get there. Pedantically, it's more of a mindset for solving problems than just a plaintext job description.
While I agree there's no substitute for work experience, no one should mistake more years or exposure to more tools for a great employee. Sometimes determined people with a little less of both can be better if you give them a chance.
Containers, orchestration & cloud computing are great, but they're not a silver bullet that solves all problems magically, given the level of effort & complexity behind them when used at scale. Apps work perfectly fine even when set up & served in the standard way. Speaking of which, don't fall into the trap of implementing them (or any other tech) just because everyone else is.
I'm the same - some languages are fine with it on new lines, some are strange.
Most importantly though is that you are consistent. Don't be like PHP PSR-1 and have classes and functions one way but if/for/while etc another.
Classes were a mistake
Classes only need to be trimmed to a Bonsai. Then they are quite good again.
🔥 Hot Take Warning 🔥
Almost none of it matters. vim vs. emacs? Yawwwn!
Everything is good enough.
"x is slow!" Oh, because it runs in 100ms instead of 50ms? That must be tough.
You can make Java attractive
It’s definitely sexy, especially with a dark theme. 😎
Yes, If you are blind and type with your voice it might be...
Arguing about programming languages and tooling is actually a good thing, that means we care about our craft.
Someone once said, "you don't master coding, you get used to it."
This opinion I barely hear.
The world is object oriented and so are human languages. Objects (such as my car's fuel tank, unfortunately) are mutable. Functional programming was invented to drive me insane.
Redux is easy
I'll bite. jQuery is still highly relevant and useful.
Don’t declare an empty constructor in your domain model unless it’s a valid use case.
“But my ORM needs”
I don’t care. I don’t care. I don’t care.
for loops over .reduce() methods
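The same computation both ways, for the sake of argument (a trivial TypeScript example of my own):

const prices = [3, 7, 12];

// .reduce(): compact, but the accumulator plumbing hides the intent.
const totalReduce = prices.reduce((sum, price) => sum + price, 0);

// for...of: a couple more lines, and the flow reads top to bottom.
let totalLoop = 0;
for (const price of prices) {
  totalLoop += price;
}

console.assert(totalReduce === totalLoop); // both 22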
Good code DOES need comments.
Nope, I don't need to deal with your inconsistent syntax. 😋
Architecture is necessary. Architects are not.
As an 'architect' I actually agree (when can I retire again?)
@see: Security, UX Design, ...database admin
Can I expand slightly - different viewpoints are necessary, departments to contain them are not.
CoffeeScript still better than JavaScript
Anything is better than JavaScript ;-)
1) maintainability is way more important than your design ideology
2) we spend way too much time talking about how to build things and not enough time just building things
Development time should be counted when comparing frameworks, modules, etc., not only the speed of the framework.
It's appropriate to talk about choice of tech stack in moral terms.
Interesting Benny, what do you mean?
I've heard the argument that "the only thing that matters is what tools this team thinks are best to solve their problems", meaning, moral considerations should not be a part of technical discussions. I think it's a cousin to "I'm talking businesses, not ethics, don't bother me about it".
Did you know that throughout the entire Sefer Beresheet (Book of Genesis), the word emunah (loosely, "faith") refers only to one thing: honesty in business dealings?
So I think that business considerations, including technical considerations, are moral considerations as well.
Will this technology help us to serve people better? Will it exclude people (users or beginning developers)? What about the maintainers of this piece of tech, do their values undermine our own? What will adopting this technology lead to in our industry? In the broader culture?
It's worth asking those questions.
Thanks for the explanation and in theory yes, it's worth it and it's worth getting the answer to those questions. The thing is that various people have various levels of ethics and morals (a simple example: employees at giant companies that do good and bad things).
This can be hard to measure for every piece of tech...
Great questions by the way!
Oh, c'mon. Such a pedant you are.
Bash is great. 😌
(Cmd/Ctrl) + S is better for saving than Vim's default :w.
hjkl is still a better way to move than the arrow keys.
Python and separately Machine Learning are terrible and I don't know why people are interested in either of them.
This is an interesting one, I'm always in a tmux session. What replaces that functionality?
I like JavaScript!
What's not to like?
All code is technical debt.
(Yeah I’m that dev who would rather reach for others work rather than roll his own.)
The labor practices driving most open source software development seem wildly unsustainable
What do you say about software that deletes itself?
Hyped architectures are a bloated construction and most times you don’t need to use them (and in fact it can be counterintuitive).
The less planning we do, and the more we value diversity, the more innovative (and profitable) products we deliver.
Rest doesn't fit modern architecture, and no one agrees how to do it properly!
Refactoring should be postponed
SOLID is overused. Not the ideas behind it, but the term itself.
Re: 1
Better code completion, search and code navigation are worth it.
Oh, man! You did open a can of worms here! =8-[= ~ = ]
TypeScript is only "mostly" type-safe: you're better off just using plain old JavaScript and understanding its limitations than using that leaky abstraction.
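A few of the well-known holes, as a minimal sketch (behaviour depends on compiler flags such as strict and noUncheckedIndexedAccess):

// 1. Assertions let you lie to the compiler with no runtime check at all.
const age = ("not a number" as unknown) as number; // compiles fine
console.log(age * 2); // NaN at runtime, no error anywhere

// 2. Anything crossing a JSON boundary is effectively untyped.
const user: { name: string } = JSON.parse('{"nam": "typo"}'); // still compiles
// user.name is undefined here; calling a method on it throws at runtime.

// 3. Indexed access is assumed to succeed unless noUncheckedIndexedAccess is on.
const xs: number[] = [];
const first: number = xs[0]; // typed as number, actually undefined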
Frameworks are silly. Just use libraries.
Loose and dynamic typing aids in writing fast code, but it is difficult to catch bugs early, uphold quality and ensure no runtime errors/crashes.
IDEs are bloat.
We should try to steer away from frameworks as much as possible.
Using non-open-software technology to code is just wrong.
C++ is the best language to learn how to code.
The DRY phenomenon and "code reuse" has done more harm than good; it's a giant foot gun that gets used in a development process WAY too early.
Inheritance is an anti-pattern
Just because I'm a lady doesn't mean I should know CSS or have more experience with CSS.
We should all learn HTTPS.
There's too much JS on the web.
Live technical interviews are useless.
I prefer to give a short exercise that the applicant can solve in their own time and then discuss the implementation.
Microsoft makes better hardware for developing code than Apple. I'm sorry, but the MacBook cannot compete with a Surface Book 2.
It's entirely possible that you're right, the reason why devs are not switching "en masse", in my opinion, is Windows, not the lack of superior hardware alternatives.
The Spring Framework shouldn't be used for everything in Java
When your hands itch for Go, grab Rust instead.
SQL Server's date functions are not intuitive, unlike Oracle's. Oracle can't build decent IDEs.
Programming is not a craft.
React is not better than Angular.
Angular is not better than React.
From my side
All front-end frameworks today are a waste of time to learn.
Two, specific to C#
Developers can’t be trusted to use “var” properly, and it leads to unreadable code in 99% of cases.
Private member variables should not start with an underscore!