Advent of Code 2024, Day 11

I had to get some help, but learned something new about memoization and the importance of caching. Solutions and explanation within.

Large stones protruding from the ground.
Photo by Ed Phillips on Unsplash

Part of the AoC 2024 series

Array manipulation at its best! And memoization!

adventofcode2024/day11 at main · javorszky/adventofcode2024

Spoilers

Part 1

Array expansion. I've seen this before. One of the issues I had a while back is that I over-complicated some solutions: I used custom structs where a linked list, a ring list, or just an array would have sufficed. Remembering all the grief those custom data structures caused me, this time I opted for primitives.

In this case the primitive is a number that gets replaced by either one or two numbers. There's nothing special here, I can just use a u64 for this. I probably don't even need the full 64 bits, because a number either splits, or gets multiplied until it ends up with an even number of digits a cycle or two later and splits anyway.
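Spelled out, the naive Part 1 loop is roughly the sketch below. This is my reconstruction rather than the exact code in the repo: it assumes the input has already been parsed into a Vec<u64>, that Part 1 asks for 25 blinks, and the function names are made up for illustration.

fn blink(stones: &[u64]) -> Vec<u64> {
    let mut next = Vec::with_capacity(stones.len() * 2);
    for &stone in stones {
        if stone == 0 {
            next.push(1); // a 0 becomes a 1
        } else {
            let digits = stone.to_string();
            if digits.len() % 2 == 0 {
                // an even number of digits splits into two stones, eg 2024 -> 20, 24
                let (left, right) = digits.split_at(digits.len() / 2);
                next.push(left.parse().unwrap());
                next.push(right.parse().unwrap());
            } else {
                next.push(stone * 2024); // everything else gets multiplied by 2024
            }
        }
    }
    next
}

fn solve_1(mut stones: Vec<u64>) -> usize {
    for _ in 0..25 { // 25 blinks for part 1
        stones = blink(&stones);
    }
    stones.len()
}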

Thankfully for part 1 the naive solution of just cycling over each element of a vec is fast enough. On to part 2!

Part 2

Using the same naive implementation will not work. Time to memoize! Each number always produces the same output, so I can precompute the result of a number a bunch of steps forward and keep reusing it.

There's about 12 hours between me writing the previous paragraph and this one. I tried memoization on top of the naive solution, but that didn't pan out. The code became incredibly complex, and I still wanted to collect everything into a vec and count the elements, which would have grossly exceeded all the memory and storage space I have ever had access to.

I got stuck, so I watched someone else's solution. This isn't technically my solution, but it does use memoization with a different data structure, and there's a learning experience here, so onwards I go.

The gist of it is that we only care about the total number of stones after all the blinks. Their order doesn't matter. Also, each distinct number spawns its own tree, so there's no intermingling between the numbers.

Previously I thought I could store how many elements a number spawns in 1, 2, 3, ... 10 blinks, and then use that to calculate 75, like 7 * 10 + 5. But the result would still depend on which elements you end up with after each chunk, so it wouldn't really help.

The solution was a recursive function that looks roughly like this:

fn count(number: u64, steps_left: usize) -> u64 {
    if steps_left == 0 { return 1; } // we are at the end

    if number == 0 { return count(1, steps_left - 1); } // if 0, becomes 1

    let digits = number.to_string();
    if digits.len() % 2 == 0 {
        // even digit numbers get split, eg 2024 -> 20, 24
        let (left, right) = digits.split_at(digits.len() / 2);
        return count(left.parse().unwrap(), steps_left - 1)
             + count(right.parse().unwrap(), steps_left - 1);
    }

    count(number * 2024, steps_left - 1) // otherwise *2024
}

Straight up running that without anything else will still exceed the available time I have on Earth, so we need to add caching! Or memoization.

Because this is a recursive function, I can't create the thing that stores results inside the function itself, because it would be recreated as empty on every call. Not ideal. We can, however, create it outside and pass in a mutable reference to it! Hence:

use std::collections::HashMap;

fn solve_2() -> u64 {
    let numbers = parse(input).into_vec_of_numbers(); // pseudocode

    let mut sum = 0; // where we keep count
    let mut memo: HashMap<(u64, usize), u64> = HashMap::new(); // the cache

    for n in numbers {
        sum += count(n, 75, &mut memo);
    }

    sum // don't forget to actually return the total
}

fn count(number: u64, steps: usize, memo: &mut HashMap<(u64, usize), u64>) -> u64 {
    if let Some(&cached) = memo.get(&(number, steps)) {
        return cached; // we've already solved this exact (number, steps) pair
    }

    if steps == 0 { return 1; } // no need to cache this

    let digits = number.to_string();

    let res: u64 = if number == 0 {
        count(1, steps - 1, memo)
    } else if digits.len() % 2 == 0 {
        // even digit numbers get split, eg 2024 -> 20, 24
        let (left, right) = digits.split_at(digits.len() / 2);
        count(left.parse().unwrap(), steps - 1, memo)
            + count(right.parse().unwrap(), steps - 1, memo)
    } else {
        count(number * 2024, steps - 1, memo) // otherwise *2024
    };

    memo.insert((number, steps), res); // important that we use steps, rather than steps-1!

    res
}

With this result storage the calculation is practically instant. For additional fun, the total size of the map is 118,046 distinct elements. That's significantly smaller than I thought it would be.
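As a quick sanity check — assuming I remember the example from the puzzle description correctly — the sample line 125 17 should come out to 55312 stones after 25 blinks:

#[test]
fn sample_line_after_25_blinks() {
    // example numbers and expected total are from the puzzle page, from memory
    let mut memo: HashMap<(u64, usize), u64> = HashMap::new();
    let mut total = 0;
    for n in [125u64, 17] {
        total += count(n, 25, &mut memo);
    }
    assert_eq!(total, 55312);
}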

Why does this work?

I've been thinking about why I missed this. I correctly identified a bunch of properties of the problem and the tree:

  • the order doesn't matter
  • we can calculate things in isolation
  • each number spawns the exact same tree wherever it sits in the larger tree; the only difference is how much of that tree we need

But I didn't think to memoize it this way, because we'd still need to reach the end of the tree at some point, and everything slows down somewhere around blink 35, depending on how much metal you're throwing at the problem.

The complexity comes from the splits. Even if only every 3rd blink splits a stone on average, that's 25 doublings over 75 blinks, so we're still left with 2^25 = 33,554,432 or so numbers per branch. A split every 2nd blink would get us around 2^38 = 274,877,906,944.

The recursive function is thankfully depth first, which works to our advantage. Once it reaches the very end of the first branches of the first number, it starts saving results. Any new branch traversal will find some results already cached, at which point entire subtrees can be skipped and their counts reused. The more this function runs, the larger the subtrees for which we already know the results.
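If you want to see the pruning in action, one option — my own addition, not something from the repo or the video — is to bolt a pair of counters onto the same recursion: every cache hit is an entire subtree we never have to walk, and the total number of calls stays on the same order as the size of the cache instead of exploding.

fn count_instrumented(
    number: u64,
    steps: usize,
    memo: &mut HashMap<(u64, usize), u64>,
    hits: &mut u64,
    misses: &mut u64,
) -> u64 {
    if let Some(&cached) = memo.get(&(number, steps)) {
        *hits += 1; // every hit is an entire subtree we never have to walk
        return cached;
    }
    *misses += 1; // a miss means we actually have to do the work below

    if steps == 0 {
        return 1;
    }

    let digits = number.to_string();
    let res: u64 = if number == 0 {
        count_instrumented(1, steps - 1, memo, hits, misses)
    } else if digits.len() % 2 == 0 {
        let (left, right) = digits.split_at(digits.len() / 2);
        count_instrumented(left.parse().unwrap(), steps - 1, memo, hits, misses)
            + count_instrumented(right.parse().unwrap(), steps - 1, memo, hits, misses)
    } else {
        count_instrumented(number * 2024, steps - 1, memo, hits, misses)
    };

    memo.insert((number, steps), res);
    res
}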

That's why running it without caching would take forever, and with caching it takes a couple of milliseconds.

Got my ā­ļøā­ļø with the help of The Internetā„¢. I learned something new. The video in question: https://www.youtube.com/watch?v=pVfsmQSlVOQ.

Spoiler photo by Yuvraj Singh on Unsplash