Advent of Code is an annual series of coding puzzles.
We're going to deal with AoC 2024 Day 2: Red-Nosed Reports, Part 2.
The straightforward solution is a sublist iteration algorithm.
On some inputs - which we'll see later - this approach is quite slow:
Disclaimer: It wasn't ever supposed to be run on this input. There's nothing wrong with using the sublist iteration algorithm for the AoC inputs.
Tenet is a Hollywood movie about super-determinism.
Can this Hollywood movie help us?
The puzzle
The puzzle is quite simple: you want to tell whether a list is strictly monotonic, with two caveats:
The first caveat is that it can be either ascending or descending, and the adjacent differences must be within the [1, 3] range (in absolute value).
The second caveat is that you can remove a single element:
So, naively we just want to iterate all of these "dampened sublists":
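A minimal sketch of this naive approach (function names are mine) could look like:

```python
def is_safe(xs: list[int]) -> bool:
    # Safe = strictly monotonic with adjacent differences within [1, 3]
    diffs = [a - b for a, b in zip(xs, xs[1:])]
    return all(1 <= d <= 3 for d in diffs) or all(-3 <= d <= -1 for d in diffs)

def dampened_safe_naive(xs: list[int]) -> bool:
    # Try the list itself, then every "dampened sublist" with one element removed
    return is_safe(xs) or any(is_safe(xs[:i] + xs[i + 1:]) for i in range(len(xs)))
```

On the AoC sample reports this marks 4 of the 6 lines safe, as the puzzle statement expects.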
And this works for normal inputs.
The problem
But our input isn't always that easy...
Our actual input:
- has a lot of lines,
- and lines are really long.
Due to its sheer size, it takes a LOT OF TIME for the computer to chug through that huge input:
The core of the issue is that the code is doing the same computations repeatedly: each of the roughly N dampened sublists gets re-scanned from scratch, so a single line costs O(N²):
Can you think of a better way?
No idea yet?
The idea is: We split into two teams: Red team and Blue team, and we invert the Blue Team.
Red Team is going to get a briefing from Blue Team on how to win the battle, then Red Team is going to use this information to win.
How does the Blue Team know at the beginning how to win?
Because Blue Team starts the mission from the end of the battle.
Joint Pass solution
This forwards/backwards might sound cryptic, so let's start with a well-known thing: rolling sums.
In our problem, we don't want sum, but min and max instead:
But if we can roll forwards, then we can roll backwards too:
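As a quick sketch (helper names are mine, not the final code), rolling forwards and backwards with min or max looks like this:

```python
def prefix_roll(xs, f):
    # out[i] = f(xs[0], ..., xs[i]), e.g. a rolling min/max from the left
    out = [xs[0]]
    for x in xs[1:]:
        out.append(f(out[-1], x))
    return out

def suffix_roll(xs, f):
    # Same thing, rolled from the right
    return prefix_roll(xs[::-1], f)[::-1]
```

For example, `prefix_roll([3, 1, 2], min)` gives `[3, 1, 1]`, while `suffix_roll([3, 1, 2], min)` gives `[1, 1, 2]`.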
How is this useful? Let's think of what happens if we concatenate them:
Instead of deconstructing the list into sublists, we construct them from prefix / suffix pairs.
The bonus: The new min and max after the concat can be computed in O(1)!
Don't worry about the edge cases! All cases can be handled by a stupid trick instead (TL;DR represent absence with None, and filter them out):
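A sketch of that trick (the tuple layout here is my assumption, mirroring the code further below): the only genuinely new difference after concatenation is the one at the seam, and absent values are None and get filtered out.

```python
def merge_min_max(p_tail, p_min, p_max, s_head, s_min, s_max):
    # The one new difference sits at the seam between prefix and suffix;
    # None marks "absent" (empty prefix/suffix, no seam) and is filtered out.
    seam = p_tail - s_head if p_tail is not None and s_head is not None else None
    ds = [d for d in (seam, p_min, p_max, s_min, s_max) if d is not None]
    return (min(ds), max(ds)) if ds else (None, None)
```

The whole merge touches a constant number of values, hence O(1).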
Ok, so we can create arbitrary lists, but we need really specific ones.
We must construct lists by "removing K consecutive elements".
We can achieve that - once again - with a dumb trick (let's say K=2 because it is more visual that way):
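In list-comprehension form (my own illustration), "remove K consecutive elements" is just "prefix of length i plus suffix starting at i + K":

```python
xs = [10, 20, 30, 40, 50]
K = 2
# Each sublist is the original with K consecutive elements removed
subs = [xs[:i] + xs[i + K:] for i in range(len(xs) - K + 1)]
# → [[30, 40, 50], [10, 40, 50], [10, 20, 50], [10, 20, 30]]
```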
And that's the general idea of it.
Let's see the algorithm!
Joint Pass - Preamble
Remember: We want N-K long sublists.
In the preamble, let's get the basics out of the way:
- An empty or 1-element list is trivially safe.
- In all other cases we'd need the differences.
Joint Pass - Red Team
Now comes our forward roll with the Red Team:
Joint Pass - Blue Team
Now comes our backward roll with the Blue Team:
Joint Pass - Briefing and Winning the battle
Now we essentially merge together the prefix-suffix pairs and query the first "safe" one:
Yes, it can look dense, but that's not because it is complex; I might just be explaining it improperly. Here are some examples:
Essentially, just think of how you would merge the two tuples' mins and maxes, and how you would compute the difference at the seam. And keep in mind that things can become None, so you have to filter those out.
Joint Pass algorithm
The code can look something like this:
#!/usr/bin/env python3

def dampened_safe(xs: list[int], K: int) -> bool:
    # 0. Edge case
    if len(xs) - K < 2:
        return True
    # 1. Generate differences
    diffs = [x0 - x1 for x0, x1 in zip(xs, xs[1:])]
    # 2. Generate prefixes (Red Team)
    prefixes = list(zip([None] + xs,
                        [None, None] + prefix_rolling_sum(diffs, min),
                        [None, None] + prefix_rolling_sum(diffs, max)))
    # 3. Generate suffixes (Blue Team)
    suffixes = list(zip(xs + [None],
                        suffix_rolling_sum(diffs, min) + [None, None],
                        suffix_rolling_sum(diffs, max) + [None, None]))
    # 4. Drop K suffixes, zip the prefix-suffix pairs and find the first "safe" pair
    for p, s in zip(prefixes, suffixes[K:]):
        ptail, pmin, pmax = p
        shead, smin, smax = s
        d = ptail - shead if ptail is not None and shead is not None else None
        ds = [x for x in [d, pmin, pmax, smin, smax] if x is not None]
        if all(1 <= x <= 3 for x in ds) or all(-3 <= x <= -1 for x in ds):
            return True
    return False

def prefix_rolling_sum(xs, f):
    if not xs:
        raise ValueError("_|_")
    out = [xs[0]]
    acc = xs[0]
    for x in xs[1:]:
        acc = f(acc, x)
        out.append(acc)
    return out

def suffix_rolling_sum(xs, f):
    if not xs:
        raise ValueError("_|_")
    out = [xs[-1]]
    acc = xs[-1]
    for x in reversed(xs[:-1]):
        acc = f(x, acc)
        out.append(acc)
    return list(reversed(out))

def main() -> None:
    with open("input-xxl.txt") as f:
        lines: list[str] = f.read().strip().splitlines()
    cnt = 0
    for line in lines:
        xs = [int(n) for n in line.split()]
        if dampened_safe(xs, 1):
            cnt += 1
    print(cnt)

##############################################################################

if __name__ == "__main__":
    main()
I hand-rolled the loops and didn't use itertools, to keep everything explicit.
It is faster than the straightforward algorithm, so it can be considered practically fast:
Wait... 2 ENTIRE SECONDS?!?!?!?!?!?!?!
That isn't fast :(
This is why I said that it is practically fast.
We made the conscious decision to toy around with a bunch of auxiliary lists and nonsensical Python list manipulations, and that comes at the cost of a 2-second runtime.
We don't want this overhead, so let's introduce a new constraint:
Only O(1) additional space allowed.
Separate Pass algorithm
Due to the additional O(1) space constraint we can wave goodbye to precomputed mins and maxes.
Can we do something else, but still keep the Red - Blue teamwork alive?
In the Joint-Pass solution we asked whether the list: is EITHER ascending OR descending.
Maybe our question wasn't specific enough?!
What if we asked instead ONLY whether the list is ascending (within bounds [1, 3]) or its opposite?
I.e., instead of answering in a single Joint Pass, we'll ask about ascending in one pass and about descending in a different pass.
The answer is monotonic and easily storable as simple ints:
Please DON'T FORGET: in the image above we are testing descending safety, but it works the same way for ascending; you simply change the predicate which tests consecutive pairs.
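Concretely, those two ints could be computed like this (a sketch; P is the length of the longest safe prefix, S of the longest safe suffix, and the predicate follows the article's a - b convention):

```python
def longest_prefix_suffix(xs, pred):
    N = len(xs)
    P = 1  # longest prefix (in elements) where pred holds for adjacent pairs
    while P < N and pred(xs[P - 1], xs[P]):
        P += 1
    S = 1  # same thing, measured from the right
    while S < N and pred(xs[N - 1 - S], xs[N - S]):
        S += 1
    return P, S

desc = lambda a, b: (a - b) in range(1, 4)
# longest_prefix_suffix([9, 7, 4, 10, 3, 1], desc) → (3, 2)
```

Everything the algorithm asks afterwards is answered from just these two ints.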
Based on this we can ask some questions like:
But we can also answer the one billion dollar question:
And... this is also the algorithm: Just go over the questionnaire and win!
Of course you have to test the ascending case [1, 3] and descending case [-3, -1] separately, but that's still just O(N) overall:
#!/usr/bin/env python3

from typing import Callable

def dampened_safe(xs: list[int], K: int, pred: Callable[[int, int], bool]) -> bool:
    # 0. Edge cases
    N = len(xs)
    L = N - K
    if L < 2:
        return True
    # 1. Find P from the left
    P = 1
    while P < N and pred(xs[P - 1], xs[P]):
        P += 1
    # 2. Find S from the right
    S = 1
    while S < N and pred(xs[N - 1 - S], xs[N - S]):
        S += 1
    # 3. Check if prefix + suffix is not long enough
    if P + S < L:
        return False
    # 4. Check prefix only
    if L <= P:
        return True
    # 5. Check suffix only
    if L <= S:
        return True
    # 6. Check prefix-suffix pairs
    for p in range(1, P + 1):
        s = L - p
        if s <= S and pred(xs[p - 1], xs[N - s]):
            return True
    return False

def main() -> None:
    with open("input-xxl.txt") as f:
        lines: list[str] = f.read().strip().splitlines()
    ascending = range(1, 4)
    descending = range(-3, 0)
    cnt = 0
    for line in lines:
        xs = [int(n) for n in line.split()]
        if (dampened_safe(xs, 1, lambda a, b: (a - b) in ascending)
                or dampened_safe(xs, 1, lambda a, b: (a - b) in descending)):
            cnt += 1
    print(cnt)

##############################################################################

if __name__ == "__main__":
    main()
We saved some time by not allocating auxiliary lists and other garbage. We pay by having to analyze ascending and descending in separate passes, but that still gives O(N):
Of course the code doesn't exactly radiate clean code vibes, but we managed to pull it off inside the O(1)-space coffin, so huzzah.
This solves Part 2 where K = 1.
Of course, this could still be made faster (p doesn't necessarily have to start at 1, and so on), but we won't go that far... solving Part 2 in under a second, compared to 28 minutes, is good enough for government work.
Btw, this red-blue approach is just one from the many possible solutions.
There are faster ones, but this Red-Blue (be it separate or joint pass) has an interesting property.
But talking about that property will be a huge rabbit hole.
Everything below is part of that rabbit hole.
Clean Code
This Red-Blue has an interesting property...
I definitely don't mean that it is Clean Code, that's for sure! I mean, look at it: it is just trash on wheels.
But... who writes Clean Code?
There's a book called Clean Code by Robert C. Martin, aka "Uncle Bob".
So Uncle Bob writes Clean Code.
What is Clean Code?
That's a more complex question.
To answer that, we have to go down a little bit of a rabbit hole...
But I have to rip the band-aid off: We'll go into Clojure.
I don't want to, you don't want to, but we have to go in to make a point.
For simplicity's sake, consider a version which only checks descending, and which does not compress the information into ints but keeps the actual prefix/suffix lists.
It is a sort of dumbed down version of the Joint Pass.
Our Red-Blue team algorithm looks like this:
The merge is a constant-time operation on two tuples:
Now, if you align the columns a bit, you can see a pattern in what it does:
If you unroll the rolls into their definitions it is even more obvious that it is just testing the safety of sublists in a weird way:
But that's equivalent to simply checking the concatenations of prefixes and suffixes:
But that is literally just mapping isSafe over a list:
But the concats can be made up by a zipWith (++):
And you can undrop 1 from the blue suffixes, to get the real suffixes:
But in FP languages like Clojure, inits and tails functions produce prefix and suffix lists:
So in code this thing can be like:
Or in diagram:
Remember that you started with the Red-Blue which is O(N):
But your Clojure code:
- Is the same
- except it is O(N^2)
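For reference, the same O(N²) shape rendered in Python (my translation; the inits/tails helpers are hypothetical stand-ins for the FP library functions):

```python
def inits(xs):
    # All prefixes, shortest first: [], [x0], [x0, x1], ...
    return [xs[:i] for i in range(len(xs) + 1)]

def tails(xs):
    # All suffixes, longest first: [x0, ...], [x1, ...], ..., []
    return [xs[i:] for i in range(len(xs) + 1)]

def is_safe_desc(ys):
    # Descending only, as in this dumbed-down version
    return all(1 <= a - b <= 3 for a, b in zip(ys, ys[1:]))

def dampened_safe_desc(xs):
    # prefix ++ (suffix that skips one element) enumerates every drop-1 sublist;
    # each is_safe_desc check walks a whole sublist, hence O(N^2) overall
    return any(is_safe_desc(p + s) for p, s in zip(inits(xs), tails(xs)[1:]))
```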
Back to the whole black-belt Clean Coder topic...
I want to hammer home that you did tiny step-by-step rewrites, like:
Show that 42 is (1 + (1 + 40))
Proof:
42
= (2 + 40) // because 42 = 2 + 40
= ((1 + 1) + 40) // because 2 = 1 + 1
= (1 + (1 + 40)) // because (+) is associative
So 42 and (1 + (1 + 40)) are the same.
By equivalences, by tiny rewrite steps, little by little...
The Red-Blue Python code became Clean Code.
Why?
The O(N^2) Clojure you just wrote right now is the O(N^2) Clojure Uncle Bob wrote.
Both the Red-Blue and the Clojure find the answer using the same ancient method though: blindly guessing. It's just that the Clojure takes O(N) to see whether a single guess is safe, while the Red-Blue is able to tell in O(1).
Of course, if you don't want to blindly guess, you can solve it in a tight single loop, and if you constrain K to be 1, then you can model it with a finite state machine. The core idea is that when the next element is bad, then you can either:
- remove the bad element,
- or try removing the tail of the safe list which was accumulated up to that point.
These tight single loop variants are of course O(N) too, but - if implemented well - they can have a smaller constant factor than the Red-Blue.
I just personally think that the Red-Blue is funnier because it manages to be O(N) while still playing that 'try everything' stuff.
We only did descending... so in reality both algos test descending or ascending.
Uncle Bob's does it in a complex way:
Our Joint Pass algorithm simply tests min-max of differences, because I'm a simpleton.
But... there's the curious case of 1-long and 0-long N-1 sublists:
The specification says:
So, a report only counts as safe if both of the following are true:
The levels are either all increasing or all decreasing.
Any two adjacent levels differ by at least one and at most three.
Based on the spec, what should we do when the list is 0 or 1 elements long, i.e. when the list of differences is empty?
For example, Java uses:
boolean allMatch(Predicate<? super T> predicate)
API Note:
This method evaluates the universal quantification of the predicate over the elements of the stream (for all x P(x)). If the stream is empty, the quantification is said to be vacuously satisfied and is always true (regardless of P(x)).
Honest to God: I have zero idea. I went with the JavaDoc and the universal quantifier, so this is where the codes actually differ functionally.
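Python's all behaves exactly the same way, which is why the empty-differences case falls out as "safe" for free in both listings above:

```python
# An empty list of differences is vacuously "all in range":
print(all(1 <= d <= 3 for d in []))  # True
```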
But back to Uncle Bob....
Uncle Bob has an article titled: Why Clojure?, where he writes:
So, why Clojure? I’ve made a list. Are you ready for it? Here it is.
1. Economy of Expression.
If you are wondering where the rest of the list is, there isn’t any more.
I understand that the whole point of that article was to be concise, so the article itself was concise too, but I think that effort might've misrepresented things a tiny bit.
The list is, very theatrically and funnily, exactly 1 item long, but that leaves out certain other details.
I am not going to make an exhaustive enumeration, just mention a few, at the level of my own stupidity:
- Immutability
- Referential Transparency
- FP is about "What?", not "How?" or "Why?". This sounds fluffy and inconsequential at first, but it is about "What", so expect answers accordingly (here's (λx.x x)(λx.x x))
- Side effect handling is a bit more complicated
- Unfortunately, it is terribly easy to confuse data with code :( Because... Anyways here's a FizzBuzz in Clojure
- Turing had a computational model, and Church had a computational model. Turing was Church's doctoral student. It is not a competition, it is a hug
- You might have to use gotos
- Imperative is like Louisiana, where the sun shines on you. FP is like Sweden: the sun is still there, it will just feel different
- You compose expressions
- Subjective point: for this one I have no proof, only my limited personal experience. Sometimes it can feel like FP is our enemy. But our enemy might genuinely know and care more about us than our own team does, in its own harsh way, of course.
TL;DR There's no good or bad. We don't need to buy a FP Doink-It toy.
Here's a short and good example of cool non-FP code.
In this section we just did a huge unrefactoring chain to write Clean Code.
Curtsey
What prompted me to write this whole thing was a single word in that article: Scotland.
Uncle Bob writes:
But, Dammit, it’s dynamically typed!!!
OK, I get it.
You like static typing. Fine.
You use a nice statically typed language, and I’ll use Clojure.
And I’ll be in Scotland before ye.
I "benchmarked" that Scotland trip:
Yes, this was immature of me... but Uncle Bob said that Scotland thing, and it is very rude to sit down without being invited.
You know what is in Scotland?
Glasgow, a statically typed language with tea, lots of precious and delicious tea!
To take part in a tea party, you don't have to do anything other than shake that Language's hand.
You don't have to curtsey, because She is not your Monarch.
But no matter my donkey brain and poor programming...
If I met her, I'd have to, because She is my Head of State.
Oh good Lord No, we don't want to KISS Her Majesty the Queen.
We don't put an arm around, like Michelle Obama did:
Nonono, we don't talk about Her Majesty's ⊥:
I'd bow as a gentleman, and ladies would curtsey:
-o and -O2 do exactly what you expect. Yes -o is redundant, it is just there because curtsey is such a lovely word.
It might seem exhausting, but you can get used to it.
Sort of like what you'd have to do in a Clojure TDD/BDD framework:
Keep the unit tests always up-to-date, sort of like with that 754 which might be actually 386.
Shall we sit down for tea now?
But that's just one way to enjoy tea...
We can enjoy tea in all sorts ways!
For example, in the article there was some Clojure, like this:
(defn square [x] (* x x))
(println (take 25 (map square (range))))
Which could be this:
ghci> sq x = x * x
ghci> mconcat . take 25 . map (print . sq) $ [0..]
0
1
4
9
16
25
<I redacted the rest>
Or it could be like this:
ghci> mconcat [ print (x * x) | x <- take 25 [0..] ]
0
1
4
9
16
25
<I redacted the rest>
No, this isn't American prohibition, we actually do drink tea.
Of course, if you don't like tea... I snuck in...:
Perfectly Legal.
And if those don't work, here's a different one:
Yes, these for and to are just functions I'm defining on the fly, to prove that syntax is just syntax:
All of these are the same language, just in different styles. These snippets are not C# Reflection, nor Aspect-Oriented point-cuts, nor language extensions, nor Turtles All The Way Down like in Rust. They are literally just functions defined by me, aka random user 4329854. I.e.: there are no React Hooks, there's no Redux, there's no NgRx. That use is just the name I gave to a stupid function. This whole thing is just a cheap syntactical trick of mine, to show that syntax is a shallow thing.
I personally prefer Richard Bird's style. Below is a short snippet, just for educational purposes (i.e. Buy+Read his books, do not believe anything I say in this article, as I am not an expert.):
I like his code style not because it looks "nice" in the linter sense of the word, or because it plays well with agile, or because it supports an agenda. I like it because it does a very unique thing: proofs. It is not a syntax-level property of the code that it allows proofs. I.e.: syntax and semantics matter together.
If I could make up this many different codes in such a short time, imagine a 10-year-old codebase. Do you think it would be easier than a really ugly OOP codebase? I doubt it. You can do the same craziness in Clojure too.
I don't think that language has to do anything with "bad code". I am the living proof: I write bad code in all languages.
So... with that aside...
What do you think would happen if that exact Clojure source code from earlier went to Scotland on a honeymoon?
Note: There's nothing wrong with that Clojure code, I fed it to the Glasgow Haskell Compiler, also known as the Glorious Haskell Compiler made by the Exquisite Guild of Coreforgers and Stackframemongers.
And now onto the most important and profound difference between languages in the history of mankind:
$ ghci
GHCi, version 9.6.2: https://www.haskell.org/ghc/ :? for help
ghci> 1 + 1;;
<interactive>:1:1: error:
Parse error: module header, import declaration
or top-level declaration expected.
The above 1 + 1;; caused a parse error, because we don't actually put our little fingers up in the air.
The ;; habit, we think, actually comes from France and the French, from before the revolution:
$ ocaml
OCaml version 4.10.0
# 1 + 1;;
- : int = 2
Of course, French have their own opinions.
What matters is the last line Uncle Bob wrote:
And I’ll be in Scotland before ye.
In Summary:
Syntax is just syntax. What matters is how it connects to semantics and of course semantics itself.
I am not a representative sample of the Haskell community, nor the use of the language. I barely know the language. I am not exactly from Scotland.
My point is:
Use whatever language in whatever way it feels like home for YOU.
I just wanted to bring this whole Scotland thing up, because every person can be a Slim Shady lurking, he could be working at Burger King spitting on our onion rings.
And shoutout to all QAs and SDETs who are forced to witness our declarative/imperative wonderlands.
Becoming friends
There's no bad or good language.
I, for example, write bad code in all languages. So bad in fact, that I wouldn't call it code. I mean just look at my Pythons here for example. They are not Pythons. They even have secondary.
But, what would we say to Uncle Bob?
What would you say to Uncle Bob, considering that you - in this post - went forwards in time, backwards in time and saw the whole thing play out?
I mean, if you managed to get this far, then you just did a Red and a Blue pass, not with lists but with concepts.
You could say something like this to Uncle Bob: I'll see you in the beginning, friend!
I would say something like this to Uncle Bob: I understand that Americans have A-10 Warthogs, but the Scottish have Corporal Cruachan the Fourth.