
Josh

Posted on

How to grasp language structures and computer science concepts that you don't understand intuitively?

I want to describe a couple of problems that have been bothering me. For context, I have been a software engineer for about six years.

  1. Whenever I read code, for example in Swift, some programmers lean heavily on the language's features, like closures and protocols, and break modules down so finely that things are accessed in ways that seem unnecessarily complex. I am unable to read that sort of code and understand what's going on. I end up having to go back to those language concepts and then return to the code, and even then I find it hard. (There's a made-up sketch of the kind of code I mean after this list.)


  2. Also, whenever I try to reproduce a data structure or algorithm, I can't work out how to apply that kind of concept in my programming more generally. I can't seem to grok it intuitively, so I end up learning it by rote, which doesn't help me apply the concepts elsewhere. I can't seem to reinvent that kind of thing.

    These things make me feel like I've hit a ceiling in my programming skill and can't get past it. My programs feel too simplistic: I grasp the idea and the requirements, but I end up coding them quite badly, with long modules and unnecessary logic, even though they work. I can't seem to bring them up to the standard of quality I'd like, if you know what I mean.
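
To make point 1 concrete, here is roughly the shape of code I mean. This is a made-up sketch of my own, not code from any real project: tiny protocols, a generic glue type, and completion closures nested inside completion closures.

```swift
import Foundation

// Two tiny protocols, each wrapping a single responsibility.
protocol DataFetching {
    func fetch(completion: @escaping (Result<Data, Error>) -> Void)
}

protocol DataTransforming {
    func transform(_ data: Data) -> String
}

// A generic type whose only job is to glue the two protocols together.
struct Pipeline<Fetcher: DataFetching, Transformer: DataTransforming> {
    let fetcher: Fetcher
    let transformer: Transformer

    // The real work happens inside a closure passed to another closure,
    // which is roughly where I start losing the thread.
    func run(onResult: @escaping (String?) -> Void) {
        fetcher.fetch { result in
            switch result {
            case .success(let data):
                onResult(self.transformer.transform(data))
            case .failure:
                onResult(nil)
            }
        }
    }
}

// Even a trivial use needs two conforming types before anything runs.
struct StubFetcher: DataFetching {
    func fetch(completion: @escaping (Result<Data, Error>) -> Void) {
        completion(.success(Data("hello".utf8)))
    }
}

struct UppercaseTransformer: DataTransforming {
    func transform(_ data: Data) -> String {
        String(decoding: data, as: UTF8.self).uppercased()
    }
}

Pipeline(fetcher: StubFetcher(), transformer: UppercaseTransformer())
    .run { print($0 ?? "failed") }   // prints "HELLO"
```

Each individual piece is simple, but following how they all fit together takes me far longer than it feels like it should.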

Can someone advise me on what I can do to improve? Or am I done with programming?

Top comments (1)

Austin S. Hemmelgarn (ahferroin7)

Consider looking into learning some higher mathematics, specifically stuff like set theory and graph theory.

Quite often, getting a different perspective on such things can greatly aid in understanding them, and while many programmers may not admit it, there's a lot of higher math involved in coding. Graph theory especially, as it can be used to explain quite a bit about the vast majority of compound data structures (trees, linked lists, sequence types, and so on) as well as the operations done on them.
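
As a rough illustration (a toy sketch of my own, not anything standard), here's how both a linked list and a tree fall out of the same adjacency-list idea, in Swift since that's what the question mentions:

```swift
// A bare-bones adjacency-list graph over Int node IDs (the names are my own).
struct Graph {
    private(set) var adjacency: [Int: [Int]] = [:]

    mutating func addEdge(from source: Int, to destination: Int) {
        adjacency[source, default: []].append(destination)
        // Make sure the destination shows up as a node even if it has no outgoing edges.
        if adjacency[destination] == nil { adjacency[destination] = [] }
    }
}

// A singly linked list is just a graph where each node has at most one outgoing edge.
var list = Graph()
list.addEdge(from: 1, to: 2)
list.addEdge(from: 2, to: 3)

// A binary tree is a graph where each node has at most two outgoing edges and there are no cycles.
var tree = Graph()
tree.addEdge(from: 1, to: 2)   // left child of 1
tree.addEdge(from: 1, to: 3)   // right child of 1

print(list.adjacency)   // e.g. [1: [2], 2: [3], 3: []] (key order varies)
print(tree.adjacency)   // e.g. [1: [2, 3], 2: [], 3: []]
```

Once you look at them that way, traversal, search, and so on stop being per-structure tricks and become a small set of graph algorithms applied under different constraints.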

You might also consider learning about some of the various models of computation. There are quite a few things I never properly understood until I started looking into the actual theory behind them. Big ones I'd suggest looking into are:

  • Finite state machines: These are used a lot, especially in low-level embedded software design. They're also one of the easiest models to understand, as they can trivially model a wide variety of things in modern society that aren't computers (an elevator, for example; there's a small sketch after this list).
  • Pushdown automatons: These are also used a lot, especially in parsers. They're also pretty easy to understand once you look at them a bit.
  • Lambda calculus: This forms the basis for pretty much all functional programming environments, and to a lesser extent a lot of other programming languages. It takes some effort for most people to understand, but it's worth it in many cases. The Wikipedia page on it does a decent job of explaining the basics.
  • Cellular automatons: These aren't used as much as the other models I've listed, but are still useful to know because they're one of the simplest concurrent models of computation. Conway's Game of Life is a rather famous example of a 2D cellular automaton.
  • Actor model: Probably the most useful model after lambda calculus, the actor model is how a significant percentage of distributed and concurrent systems are structured, and it also provides a basis for modeling threading, multiprocessing, and many other things.
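
To make the finite state machine point concrete, here's a minimal elevator sketch in Swift (the states and events are my own simplification, not a real elevator controller):

```swift
// A toy finite state machine for an elevator.
enum ElevatorState {
    case idle, doorsOpen, moving
}

enum ElevatorEvent {
    case callButtonPressed, arrivedAtFloor, doorTimerExpired
}

struct Elevator {
    private(set) var state: ElevatorState = .idle

    // The whole behaviour is one transition table: the current state plus an
    // event determines the next state; unlisted combinations change nothing.
    mutating func handle(_ event: ElevatorEvent) {
        switch (state, event) {
        case (.idle, .callButtonPressed):
            state = .moving
        case (.moving, .arrivedAtFloor):
            state = .doorsOpen
        case (.doorsOpen, .doorTimerExpired):
            state = .idle
        default:
            break
        }
    }
}

var elevator = Elevator()
elevator.handle(.callButtonPressed)   // state is now .moving
elevator.handle(.arrivedAtFloor)      // state is now .doorsOpen
elevator.handle(.doorTimerExpired)    // state is now .idle
```

Laying the behaviour out as one transition table like this is the whole appeal: the set of states is closed, and the effect of every event is visible in one place.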