
What is today's "goto"?

Jason Steinhauser on June 20, 2018

50 years ago, Edsger Dijkstra published a controversial (at the time) editorial entitled "Go To Statement Considered Harmful". In it, he stated what...
 

Object-relational mapping, aka the Vietnam of computer science.

Honestly, I don't think even goto is an excommunicable offense. It's a very simple, very powerful tool that's very easy to do terrible things with, and it's almost always better to use other control structures to manage execution flow in more readable terms, but there are exceptions to everything. Notably, breaking out of nested while loops is only easily accomplished with goto. And of course all loops reduce down to jmp instructions in assembly, so we're technically all guilty anyway.
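For languages without goto, the usual workarounds for a multi-level break can be sketched like this (a minimal Python example with hypothetical names; extracting the loops into a function lets `return` act as the jump target):

```python
def find_pair(matrix, target):
    """Return (row, col) of the first cell equal to target, else None."""
    for r, row in enumerate(matrix):
        for c, value in enumerate(row):
            if value == target:
                # return doubles as a multi-level break, replacing
                # the goto-out-of-nested-loops idiom
                return (r, c)
    return None

grid = [[1, 2, 3], [4, 5, 6]]
print(find_pair(grid, 5))  # (1, 1)
```

The alternative, a sentinel flag checked after each loop, works too but tends to be noisier, which is part of why nested-loop escapes are often cited as goto's last legitimate use.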

 

I'll admit that at first glance, I thought that the linked article was going to be hyperbolic. However, the parallels that it drew were quite interesting and it was a really insightful read. It definitely helped me solidify some thoughts I'd had about ORMs, specifically the issue of schema ownership. Thank you for the link!

To your other point about the utility of goto, I'd argue that multiple nested while loops are more than likely a code smell. I agree, everything does get boiled down to jmp instructions at the bare metal. However, I think that the big distinction with Dijkstra's statement is that we should strive to remove goto in high-level languages. They definitely have a place in low-level languages.

Finally, while I can pretty much agree with your stance on ORMs, I'm curious about alternatives to ORMs. Do you have some suggestions? I'm always interested in learning more. Do you think the RDBMS is more the issue with ORMs, and that we should use more of a document store à la MongoDB, or a graph DB such as Neo4j? And do you share similar feelings about GraphQL as about ORMs?

Thanks again for the comment and the article!

 

We had a thread about O/RMs and other data access patterns fairly recently so I'll just point you there rather than repeat myself :) But I love working with relational databases; they're the Swiss army knife of data storage solutions, where every other form of database I've encountered has been a specialized tool for working with specific architectural patterns: document databases for hierarchical information with few to no interconnections, BigTable derivatives for accumulation of flat records, and so on. They do well for the situation they're designed to operate in, and very poorly anywhere else.

I haven't explored GraphQL much at all. I like SQL and can do basically anything I need to with that, so until someone creates a database that uses GraphQL as a native query language instead (it's probably not too far off), I don't have a reason to use it. But it seems helpful for others, and doesn't have the fundamental impedance mismatch that makes O/RMs a problem.

I've played a bit with GraphQL and Django's ORM (meh) today and I'm quite sure everything is going to explode when queries become really complicated :D

 

I am so excited to read that you have no idea.

 
 

Bitcoin is a disaster and should never have become a thing

 

There are two good things, in my opinion, that have come from Bitcoin:

  • A way for people in economically starved countries to receive payment for goods and services in a non-fiat currency
  • The widespread concept of an immutable chain of trust

Despite those, I mostly agree with you - the way Bitcoin has been used, as a whole, has had a negative impact on the world economy and the world's power consumption.

 

The widespread concept of an immutable chain of trust

Blockchains have existed in many forms for years; see: git. The only reason for the proof of work is to prevent fraud in a decentralized system, but I'll get back to that.

A way for people in economically-starved countries to receive payment for goods and services in a non-fiat currency

This is a failure of established countries to provide access to modern payment services, imo.

This may be an unpopular opinion, but decentralized currency will never take off. Money as we know it was designed to be centralized. It's a feature, not a bug, to disconnect merchants from buyers and eliminate bartering. And the harm it has done to both the energy and hardware industries is absolutely horrific.
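The proof-of-work mechanism mentioned above can be sketched in a few lines (a toy Python illustration, not how any real network parameterizes it; the difficulty here is trivially low, whereas real networks crank it up until mining consumes the energy being criticized):

```python
import hashlib

def mine(data: str, difficulty: int = 2) -> int:
    """Toy proof-of-work: find a nonce whose SHA-256(data + nonce)
    hex digest starts with `difficulty` zeros. The work is finding
    the nonce; verifying it takes a single hash."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine("block payload")
digest = hashlib.sha256(f"block payload{nonce}".encode()).hexdigest()
print(nonce, digest[:8])
```

Each extra zero of difficulty multiplies the expected work by 16, which is what makes forging history expensive in a network with no central authority.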

Git and blockchain are very different -- it's the easiest thing in the world to rewrite history in git, an operation blockchains are expressly designed to prevent. I've got more fingers on one hand than I've heard well-considered use cases for a blockchain, but it is its own data structure.

I can't argue with you on those points, honestly. I hadn't thought about git as a chain of trust, and that's a failure on my part because... that's exactly what it is.

This may be an unpopular opinion but decentralized currency will never take off. Money as we know it was designed to be centralized. It's a feature, not a bug

As much as Gilfoyle and I would love to think that we could get to a universal/decentralized currency, or introduce bartering back into mainstream trading, there's no plausible path to that happening.

I always saw Git as a blockchain because it still uses a hash-verified chain of "blocks" to store data, and it also happens to be decentralizable. Bitcoin sends financial transactions, and git sends code. What do you see that makes them fundamentally different?

I also don't completely disagree with why Bitcoin was invented: to get away from government and to make transactions easier on a global scale. But I feel this is more of an issue with the banks that making a new currency cannot solve. The problem with money is that it really doesn't have any intrinsic value; it only has the value it does because we all use it and accept it from others. Furthermore, I wholeheartedly agree my bank should not charge me for minimum balances or inter-bank transfers.

Ah, you got me -- the commit hash does depend on history, so the major distinction is in usage (whether you care about modifying history), not in nature.
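The history-dependence both sides concede can be shown with a minimal hash-chain sketch in Python (hypothetical entry names; real git hashes whole commit objects, and real blockchains add the proof of work on top):

```python
import hashlib

def chain(entries):
    """Build a hash chain: each entry's hash covers its content plus
    the previous hash, so changing any earlier entry changes every
    hash after it -- the property git commits and blockchains share."""
    prev = ""
    hashes = []
    for entry in entries:
        prev = hashlib.sha256((prev + entry).encode()).hexdigest()
        hashes.append(prev)
    return hashes

original = chain(["commit A", "commit B", "commit C"])
tampered = chain(["commit A'", "commit B", "commit C"])
print(original[-1] != tampered[-1])  # True: tampering ripples forward
```

What the chain alone does not give you is any reason to prefer one history over another; that's the gap the consensus mechanism fills.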

git as a chain of trust [...] that's exactly what it is

I don't agree. Git can only be trusted as long as everybody is playing fair. Git can neither prevent DoS attacks (pushing thousands of useless commits) nor double spending (uhm... code duplication in this analogy???) as long as we're talking about a public repo. The only way to prevent attacks in git is by centralising it (restricting access, forcing a pull request workflow). Referencing earlier data by a pointer to its hash is the only parallel I see between git and blockchains. And that's obviously not what makes blockchains special.

And the harm it has done to both the energy and hardware industries is absolutely horrific.

What's the harm for an industry in selling millions of additional devices at higher prices? The hardware industry LOVES Bitcoin. But I totally agree (and I guess everybody working on blockchains does that, too) that the energy consumption needs to be cut down. Migrating from PoW to PoS or other consensus algorithms that don't rely on computing power must have top priority in blockchain research.

edit: For further clarification - it looks to me as if you think blockchains are a new persistence layer, something one should compare to filesystems, databases, and yes, maybe git. But blockchains are the worst persistence layer, no question about it. Nobody with a sane mind would replace the persistence layer in an existing app with a blockchain.

What makes blockchains great is the possibility of building apps that were not possible with common persistence layers - decentralised apps, whose functionality is so beneficial that it's okay to base them on the technically worst persistence layer that ever existed, blockchains. (To put it in melodramatic terms.)

Blockchain tech can only be trusted if everyone is playing fair also. There are 51% attacks that have been successful.

Miners are also negatively impacting astronomy and other academic research. The cost and wait time for GPU acquisition definitely is good for hardware vendors, but bad for a broad swath of GPU consumers.

Additionally, Dapps are not really a new thing. BitTorrent, Kazaa, etc., have existed for nearly two decades.

Well, 51% means more than half of the users. That's different from "everyone". Git really needs everyone (100%) to play fair. And while BitTorrent, Kazaa, etc. shared data, Dapps also share logic (code).

 

Using field injection instead of constructor injection. Well, on the harmful scale it is not quite on goto's level.

Violating the "single level of abstraction" principle within modules.

Recklessly introducing third-party libraries. Or the other extreme: the infamous "Not invented here" syndrome. Which in some sense is a form of procrastination: "Hey boss, before I write the app, I've first got to write my own crypto/auth/orm/di/you name it..."

 

Using field injection instead of constructor injection.

SO MUCH THIS. It gets so much more difficult to reason about your program.

Recklessly introducing third-party libraries. Or the other extreme: the infamous "Not invented here" syndrome.

Unfortunately, I've been bitten on both of these in the past. Do you know how many times I've had to write BLAS (basic linear algebra subprograms) libraries because "we think we have a slightly better way to do these"? Thankfully, I didn't have to deal with frivolous 3rd party libraries like LeftPad, but I have used React or some React components when they were definitely not necessary.
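A minimal Python sketch of the constructor-vs-field distinction (class and method names are hypothetical, and Python stands in for the DI-framework languages where this debate usually lives):

```python
class Mailer:
    def send(self, msg: str) -> str:
        return f"sent: {msg}"

class ReportServiceCtor:
    """Constructor injection: the dependency appears in the signature,
    so the object can never exist in a half-wired state and readers
    see at a glance what it needs."""
    def __init__(self, mailer: Mailer):
        self.mailer = mailer

class ReportServiceField:
    """Field injection: something (a framework, a test) must set
    .mailer after construction; until then the object is live but
    broken, and the dependency is invisible from the outside."""
    def __init__(self):
        self.mailer = None

ok = ReportServiceCtor(Mailer()).mailer.send("report")
broken = ReportServiceField()  # .mailer is still None at this point
print(ok, broken.mailer is None)
```

The window where `broken` exists but cannot work is the "harder to reason about" part: every method has to assume its dependencies may not be there yet.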

 

It gets so much more difficult to reason about your program.

Could you elaborate on that, because I've never seen field versus constructor injection have any effect except for PO(J)O unit testing and the amount of code.

 

null. The concept of "there is no value" is necessary, but the implementation of null as a pointer to nowhere is very problematic in most languages.

The problem is that nearly anything can be null, and in most languages there is no way to restrict a type declaration to a non-null value. A null guard clause is the quintessential example of a DRY violation.

And also if you are given a null value, there is no way to determine its declaring type.

In conclusion, null is the original showcase of what accidental complexity looks like. For alternatives, see the Null Object Pattern for OOP, and Maybe/Option for FP. C# also has the concept of Nullable for value types, and it is supposedly going to be applicable to reference types in a future release.
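The guard-clause repetition and the Null Object alternative mentioned above can be contrasted in a short Python sketch (hypothetical names; Python's `None` plays the role of null here):

```python
from typing import Optional

# Guard-clause style: every consumer repeats the None check --
# the DRY violation described above.
def greet(user: Optional[str]) -> str:
    if user is None:
        return "Hello, guest"
    return f"Hello, {user}"

# Null Object Pattern: a real object stands in for "no value",
# so callers never branch on None at all.
class User:
    def __init__(self, name: str):
        self.name = name

class GuestUser(User):
    """The null object: safe to call anywhere a User is expected."""
    def __init__(self):
        super().__init__("guest")

def greet_obj(user: User) -> str:
    return f"Hello, {user.name}"  # no guard needed

print(greet(None), greet_obj(GuestUser()))
```

Maybe/Option types push the same idea into the type system: the compiler, rather than convention, forces the "no value" case to be handled exactly once.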

 

I agree that nothingness needs to have some sort of representation (even typed nothingness), but null is definitely not the way to do it. Tony Hoare introduced the concept in ALGOL W in 1965 because it was easy, but he's since called it his billion-dollar mistake. I think that may be an underestimation, honestly. I'm definitely looking forward to C# adopting Nullable for reference types in the near future, and I definitely lean towards Maybe/Option. Thanks for the comment!
