Yuan Gao

Advent of Code 2020: Day 16 using csv and numpy in Python

Another quick one today

The Challenge Part 1

Link to challenge on Advent of Code 2020 website

The challenge is about figuring out the label for each column of data, knowing only a set of rules, one per label, that valid values must satisfy.

The first task, however, is much stripped down: find the invalid entries in the data, using the heuristic that an entry is invalid if its value doesn't satisfy any rule.

Importing data

The data is, again, csv-like, so I'll be using the Python csv library. The input looks a little different this time: there's some vertical structure to the document, rather than pure tabular data that can be imported in a few lines.

Rather than automatically detect the structure, I will hard-code line numbers to avoid wasting time here:

import csv
import re

data = open("input.txt").readlines()

rules_data = data[0:20]
my_ticket = next(csv.reader(data[22:23], quoting=csv.QUOTE_NONNUMERIC))
nearby_tickets = list(csv.reader(data[25:], quoting=csv.QUOTE_NONNUMERIC))
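For reference, the input follows the same shape as the worked example in the challenge (the real input has 20 rules, hence the hard-coded slices): a block of rules, a blank line, a "your ticket:" section, another blank line, and a "nearby tickets:" section.

class: 1-3 or 5-7
row: 6-11 or 33-44
seat: 13-40 or 45-50

your ticket:
7,1,14

nearby tickets:
7,3,47
40,4,50
55,2,20
38,6,12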

The csv.QUOTE_NONNUMERIC option tells the csv library to convert unquoted values to numbers, which saves us an extra conversion step.
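A tiny illustration of that behaviour (note the values come back as floats, which is fine for the range checks we'll do later):

import csv

# with QUOTE_NONNUMERIC, every unquoted field is parsed as a float
row = next(csv.reader(["7,1,14"], quoting=csv.QUOTE_NONNUMERIC))
print(row)  # [7.0, 1.0, 14.0]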

Parsing rules

The rules_data lines contain structured values that serve as rules. The example given is:

class: 1-3 or 5-7

This should be interpreted as a rule named class, which tests whether a given value is between 1 and 3, or between 5 and 7 (both inclusive). To parse these, a little regex is used, and each rule is turned into a function that implements its check:

rules_re = re.compile(r"^([a-z ]+): (\d+)-(\d+) or (\d+)-(\d+)$")
rules = []
for rule_str in rules_data:
    name, *numbers = rules_re.match(rule_str).groups()

    def make_rule(numbers):
        l1, h1, l2, h2 = map(int, numbers)
        return lambda val: (l1 <= val <= h1) or (l2 <= val <= h2)

    rules.append((name, make_rule(numbers)))
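As a quick sanity check, we can run the example rule from the challenge text through the same regex and closure (reusing rules_re and make_rule from the snippet above):

name, *numbers = rules_re.match("class: 1-3 or 5-7").groups()
print(name, numbers)  # class ['1', '3', '5', '7']

check = make_rule(numbers)
print(check(3), check(4), check(7))  # True False True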

A thing to note here is the closure def make_rule(). While it's possible to write the lambda directly inside the loop, that lambda would need access to variables which change on every iteration, and because lambdas look those variables up late, every rule would end up referencing the values from the final iteration. Therefore we need the values bound in a local scope, which is exactly what wrapping the lambda in the make_rule() call does.
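A minimal illustration of the late-binding pitfall the closure avoids:

funcs = []
for i in range(3):
    funcs.append(lambda: i)
print([f() for f in funcs])  # [2, 2, 2] -- every lambda sees the final i

# wrapping the lambda in a function call binds the current value instead
def make(i):
    return lambda: i

funcs = [make(i) for i in range(3)]
print([f() for f in funcs])  # [0, 1, 2]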

Finding the errors

With the rules structure defined, we can loop over the values in the tickets. There are three nested loops here: over each ticket, over each value in the ticket, and over each rule applied to the value; if no rule accepts a value, it's invalid.

errors = []
for ticket in nearby_tickets:

    for value in ticket:
        if not any(func(value) for _, func in rules):
            errors.append(value)

print("errors", sum(errors))

This yields the result the challenge is looking for.

The full code for this part is:

import csv
import re

data = open("input.txt").readlines()

rules_data = data[0:20]
my_ticket = next(csv.reader(data[22:23], quoting=csv.QUOTE_NONNUMERIC))
nearby_tickets = list(csv.reader(data[25:], quoting=csv.QUOTE_NONNUMERIC))

rules_re = re.compile(r"^([a-z ]+): (\d+)-(\d+) or (\d+)-(\d+)$")
rules = []
for rule_str in rules_data:
    name, *numbers = rules_re.match(rule_str).groups()

    def make_rule(numbers):
        l1, h1, l2, h2 = map(int, numbers)
        return lambda val: (l1 <= val <= h1) or (l2 <= val <= h2)

    rules.append((name, make_rule(numbers)))

errors = []
for ticket in nearby_tickets:

    for value in ticket:
        if not any(func(value) for _, func in rules):
            errors.append(value)

print("errors", sum(errors))

The Challenge Part 2

In the second part of the challenge, we're asked to figure out which rule goes with which column, by finding an assignment of rules to columns for which every value in each column satisfies its rule.

Finding valid tickets

First, we need to discard all the errored tickets detected in the previous step, so we modify the previous code to give us a list of valid tickets:

valid = []
for ticket in nearby_tickets:
    for value in ticket:
        if not any(func(value) for _, func in rules):
            break
    else:
        valid.append(ticket)
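A quick aside on the for/else construct used here: the else branch of a for loop runs only when the loop completes without hitting break, which makes it a neat way of saying "keep the ticket only if no value failed". A minimal sketch:

for value in [1, 2, 3]:
    if value > 10:
        break  # an invalid value would skip the else
else:
    print("all values passed")  # runs because the loop never broke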

The Naïve approach

The brute-force method is to recursively check every combination of rules until one is found for which the whole table is valid. Each level of recursion tries to find the next rule that is valid for the next column of the table. If it finds one that matches, it goes one level deeper; if it doesn't, it moves on to the next rule. In this way, by the 20th recursion (if it gets there), it will have assigned a valid rule to every column.


def check_next(rules, data):
    column, *the_rest = data

    for idx, (name, rule) in enumerate(rules):
        if all(rule(value) for value in column):
            new_rules = rules[:idx] + rules[idx+1:]

            if not new_rules:
                return [name]

            if rule_string := check_next(new_rules, the_rest):
                return [name] + rule_string
    return None

import numpy as np

# convert to a numpy array so we can transpose it and work column-by-column
valid_np = np.array(valid)
check_next(rules, valid_np.T)

Unfortunately this is REALLY slow. In the worst case there are 20! (roughly 2.4 × 10^18) orderings of rules to try, and even with pruning this calculation could take hours.

Can we do better?

Process of elimination

Let's explore: could it be that the data is structured so that we can find the columns through a process of elimination? The earlier phrasing of the question seems to suggest so. Let's see how many rules are valid for each column:

for column in valid_np.T:
    print(len([rule_name for rule_name, rule in rules if all(rule(value) for value in column)]))

Output

8
19
20
12
4
18
10
5
6
1
9
15
14
13
3
16
7
2
11
17

Bingo: one of these columns, the 10th, matches exactly one rule, which means we can eliminate both that column and that rule from the next step of the search. Another column has 2 matches, another 3, and so on, so the data is very likely solvable using a simple process of elimination.

Let's code that:

# tag each column with its original index (prepended as the first element), so we
# still know which original column a rule belongs to as elimination whittles the array down
columns = np.insert(valid_np.T, 0, np.arange(valid_np.shape[1]), axis=1)

def eliminate(rules, columns):

    if len(columns) == 1:
        return [(columns[0][0], rules[0][0])]

    for idx, (col_idx, *column) in enumerate(columns):
        valid_rules = [rule_name for rule_name, func in rules if all(func(value) for value in column)]

        if len(valid_rules) == 1:
            new_rules = [rule for rule in rules if rule[0] != valid_rules[0]]
            remaining_columns = np.vstack([columns[:idx], columns[idx+1:]])
            return [(col_idx, valid_rules[0])] + eliminate(new_rules, remaining_columns)

result = eliminate(rules, columns)

Output

[(9.0, 'wagon'),
 (17.0, 'arrival station'),
 (14.0, 'row'),
 (4.0, 'train'),
 (7.0, 'route'),
 (8.0, 'seat'),
 (16.0, 'type'),
 (0.0, 'zone'),
 (10.0, 'arrival platform'),
 (6.0, 'departure time'),
 (18.0, 'departure platform'),
 (3.0, 'departure location'),
 (13.0, 'departure date'),
 (12.0, 'departure track'),
 (11.0, 'departure station'),
 (15.0, 'arrival track'),
 (19.0, 'arrival location'),
 (5.0, 'duration'),
 (1.0, 'price'),
 (2.0, 'class')]

This code completed almost instantly (it didn't have to do any searching), and has matched every rule to a column! It relies on there always being some remaining column that matches exactly one remaining rule, which the counts above suggest is the case.

The challenge asks us to use this rule-to-column mapping and multiply together the six values on our own ticket whose column's rule name begins with "departure". We can leverage numpy's fancy indexing to pull these values out of my_ticket directly:

six_rules = [int(idx) for idx, rule_name in result if rule_name.startswith("departure")]
print("product", np.prod(np.array(my_ticket)[six_rules]))
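One optional tweak: because csv.QUOTE_NONNUMERIC parsed everything as floats, the product prints as a float; wrapping it in int() gives a clean integer to submit:

print("product", int(np.prod(np.array(my_ticket)[six_rules])))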

This directly produces the output. The full code for this part is:

import csv
import re
import numpy as np

data = open("input.txt").readlines()

rules_data = data[0:20]
my_ticket = next(csv.reader(data[22:23], quoting=csv.QUOTE_NONNUMERIC))
nearby_tickets = list(csv.reader(data[25:], quoting=csv.QUOTE_NONNUMERIC))

rules_re = re.compile(r"^([a-z ]+): (\d+)-(\d+) or (\d+)-(\d+)$")
rules = []
for rule_str in rules_data:
    name, *numbers = rules_re.match(rule_str).groups()

    def make_rule(numbers):
        l1, h1, l2, h2 = map(int, numbers)
        return lambda val: (l1 <= val <= h1) or (l2 <= val <= h2)

    rules.append((name, make_rule(numbers)))

valid = []
for ticket in nearby_tickets:

    for value in ticket:
        if not any(func(value) for _, func in rules):
            break
    else:
        valid.append(ticket)

valid_np = np.array(valid)
columns = np.insert(valid_np.T, 0, np.arange(valid_np.shape[1]), axis=1)

def eliminate(rules, columns):

    if len(columns) == 1:
        return [(columns[0][0], rules[0][0])]

    for idx, (col_idx, *column) in enumerate(columns):
        valid_rules = [rule_name for rule_name, func in rules if all(func(value) for value in column)]

        if len(valid_rules) == 1:
            new_rules = [rule for rule in rules if rule[0] != valid_rules[0]]
            remaining_columns = np.vstack([columns[:idx], columns[idx+1:]])
            return [(col_idx, valid_rules[0])] + eliminate(new_rules, remaining_columns)

result = eliminate(rules, columns)

six_rules = [int(idx) for idx, rule_name in result if rule_name.startswith("departure")]

print("product", np.prod(np.array(my_ticket)[six_rules]))

Onward!
