Dolly Sharma
Ninja's Training

Ninja Training Problem – Complete DP Guide (Recursion → Memoization → Tabulation → Space Optimization)

Dynamic Programming is one of the most important topics in DSA interviews and coding tests.
The Ninja Training problem is a classic example that helps you understand several DP techniques step by step.

In this guide we will cover:

  • Problem intuition
  • Why greedy fails
  • Recursion approach
  • Memoization (Top-Down DP)
  • Tabulation (Bottom-Up DP)
  • Space optimization
  • Dry run example

Problem Intuition

Ninja is planning a training schedule for N days.

Each day he can perform one of three activities:

0 → Running
1 → Fighting
2 → Learning

Rule

You cannot perform the same activity on two consecutive days.

Goal

Maximize the total merit points.


Example

points =
[1 2 5]
[3 1 1]
[3 3 3]

Optimal schedule:

Day 1 → Learning (5)
Day 2 → Running (3)
Day 3 → Fighting (3)

Total = 11

Why Greedy Fails

Consider this example:

10 50 1
5 100 11

Greedy Approach

Day1 → 50
Day2 → cannot choose 100 (same activity)
choose 11

Total = 61

Optimal Solution

Day1 → 10
Day2 → 100

Total = 110

Greedy fails because the best local choice does not guarantee global optimum.

Therefore we must explore all possibilities → Dynamic Programming.


DP State Idea

We define a state:

dp[day][last]

Meaning:

Maximum points earned until "day"
if the last activity performed was "last"

Where:

last = 0 → Running
last = 1 → Fighting
last = 2 → Learning
last = 3 → No activity yet

Recursion Idea

Define a function:

solve(day, last)

Meaning:

Maximum points obtainable from day 0 → day if the previous activity was last.

Recurrence

solve(day,last) =
max(points[day][task] + solve(day-1,task))
for all task ≠ last

Recursion Code (C++)

int solve(int day, int last, vector<vector<int>> &points)
{
    if(day == 0)
    {
        int maxi = 0;
        for(int task=0; task<3; task++)
        {
            if(task != last)
                maxi = max(maxi, points[0][task]);
        }
        return maxi;
    }

    int maxi = 0;

    for(int task=0; task<3; task++)
    {
        if(task != last)
        {
            int point = points[day][task] + solve(day-1, task, points);
            maxi = max(maxi, point);
        }
    }

    return maxi;
}

int ninjaTraining(int n, vector<vector<int>> &points)
{
    return solve(n-1,3,points);
}

Recursion Complexity

Time Complexity = O(3^N)

The recursion tree grows exponentially, and the same states are computed many times.

Example repeated calls:

solve(2,3)
solve(1,0)
solve(1,1)
solve(1,2)

Each again calls:

solve(0, ...)

This creates overlapping subproblems.


Memoization (Top-Down DP)

To avoid recomputation we store results in a DP table.

dp[day][last]

If a state is already solved, we return it directly.


Memoization Code

int solve(int day,int last,vector<vector<int>> &points,vector<vector<int>> &dp)
{
    if(day==0)
    {
        int maxi=0;
        for(int task=0;task<3;task++)
        {
            if(task!=last)
                maxi=max(maxi,points[0][task]);
        }
        return dp[day][last]=maxi;
    }

    if(dp[day][last]!=-1)
        return dp[day][last];

    int maxi=0;

    for(int task=0;task<3;task++)
    {
        if(task!=last)
        {
            int point=points[day][task]+solve(day-1,task,points,dp);
            maxi=max(maxi,point);
        }
    }

    return dp[day][last]=maxi;
}

int ninjaTraining(int n,vector<vector<int>> &points)
{
    vector<vector<int>> dp(n,vector<int>(4,-1));
    return solve(n-1,3,points,dp);
}

Memoization Complexity

States = n × 4
Transitions = 3

Time Complexity = O(n × 4 × 3)
Space Complexity = O(n × 4) + recursion stack

Tabulation (Bottom-Up DP)

Instead of recursion, we build the solution iteratively from day 0 to day n-1.


Base Case (Day 0)

dp[0][0] = max(points[0][1], points[0][2])
dp[0][1] = max(points[0][0], points[0][2])
dp[0][2] = max(points[0][0], points[0][1])
dp[0][3] = max(points[0][0],points[0][1],points[0][2])

Explanation:

If last activity was 0, we cannot do 0 today, so choose the best of 1 or 2.


Tabulation Code

int ninjaTraining(int n, vector<vector<int>> &points)
{
    vector<vector<int>> dp(n,vector<int>(4,0));

    dp[0][0]=max(points[0][1],points[0][2]);
    dp[0][1]=max(points[0][0],points[0][2]);
    dp[0][2]=max(points[0][0],points[0][1]);
    dp[0][3]=max({points[0][0],points[0][1],points[0][2]});

    for(int day=1;day<n;day++)
    {
        for(int last=0;last<4;last++)
        {
            dp[day][last]=0;

            for(int task=0;task<3;task++)
            {
                if(task!=last)
                {
                    int point=points[day][task]+dp[day-1][task];
                    dp[day][last]=max(dp[day][last],point);
                }
            }
        }
    }

    return dp[n-1][3];
}

Tabulation Complexity

Time Complexity = O(n × 4 × 3)
Space Complexity = O(n × 4)

Space Optimized Version

Observation:

dp[day] depends only on dp[day-1]

So instead of storing n × 4, we store only one previous row.


Space Optimized Code

int ninjaTraining(int n, vector<vector<int>> &points)
{
    vector<int> prev(4,0);

    prev[0]=max(points[0][1],points[0][2]);
    prev[1]=max(points[0][0],points[0][2]);
    prev[2]=max(points[0][0],points[0][1]);
    prev[3]=max({points[0][0],points[0][1],points[0][2]});

    for(int day=1;day<n;day++)
    {
        vector<int> cur(4,0);

        for(int last=0;last<4;last++)
        {
            for(int task=0;task<3;task++)
            {
                if(task!=last)
                {
                    cur[last]=max(cur[last],
                    points[day][task]+prev[task]);
                }
            }
        }

        prev=cur;
    }

    return prev[3];
}

Space Optimized Complexity

Time Complexity = O(n × 4 × 3)
Space Complexity = O(4)

This is the most efficient version.


Dry Run Example

Input:

points =
[1 2 5]
[3 1 1]
[3 3 3]

Day 0

prev[0] = max(2,5) = 5
prev[1] = max(1,5) = 5
prev[2] = max(1,2) = 2
prev[3] = max(1,2,5) = 5
prev = [5,5,2,5]

Day 1

cur = [6,8,8,8]

Day 2

Final result:

prev[3] = 11

Answer:

11

Key DP Pattern

Whenever you see problems with:

Days / Steps
Multiple choices
Restriction on previous choice

Think:

dp[day][last]

Problems With Same Pattern

  • Paint House
  • Vacation Problem (AtCoder DP)
  • Task Scheduling Problems
  • Activity Selection With Restrictions

Beyond the solution itself, here are some interview-focused tricks for recognizing and structuring DP problems of this kind.

1️⃣ The 3 Questions Trick for Identifying DP

Whenever you see a problem, ask these 3 questions:

Question 1

Does the problem ask for MAX / MIN / COUNT / WAYS?

Examples:

Maximum profit

Minimum cost

Number of ways

Maximum points

Ninja Training:

Maximum merit points

So ✔ DP candidate

Question 2

Does the problem have stages or steps?

Common stages:

Days

Index

Position in array

Grid cell

Number of items

In Ninja Training:

Days = stages

Question 3

Does the decision depend on the previous step?

If the current choice depends on previous choice, then DP is very likely.

Example:

Cannot do same activity on consecutive days

So today's choice depends on yesterday's activity.

That means the state must store previous activity.

2️⃣ How Interviewers Expect You to Think

The correct thinking pattern:

Step 1 → Identify state

dp[day][lastActivity]

Step 2 → Identify choices

3 activities

Step 3 → Identify restriction

task != lastActivity

Step 4 → Write transition

dp[day][last] =
max(points[day][task] + dp[day-1][task])
for all task ≠ last

3️⃣ Visual State Diagram

For each day:

Running
Fighting
Learning

But if yesterday was Running, today you can choose only:

Fighting
Learning

So transitions look like this:

Running -> Fighting
Running -> Learning

Fighting -> Running
Fighting -> Learning

Learning -> Running
Learning -> Fighting

This forms a state graph, which is why DP works perfectly.

4️⃣ The Universal DP Template

Many DP problems follow this template:

State:
dp[index][previousChoice]

Transition:
dp[index][prev] =
max/min/ways of
(value[index] + dp[index-1][choice])
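As a C++ sketch of this template (the function name `solveTemplate` and the `k`-choice generalization are illustrative, not from the original problem):

```cpp
#include <algorithm>
#include <vector>

// Generic "stage + previous choice" DP: n stages, k choices per stage,
// current choice must differ from the previous one. Maximizes total value.
int solveTemplate(const std::vector<std::vector<int>>& value, int k) {
    int n = value.size();
    // prev[c] = best total so far if the previous choice was c
    // (index k means "no choice made yet").
    std::vector<int> prev(k + 1, 0);
    for (int c = 0; c <= k; ++c)
        for (int t = 0; t < k; ++t)
            if (t != c) prev[c] = std::max(prev[c], value[0][t]);

    for (int stage = 1; stage < n; ++stage) {
        std::vector<int> cur(k + 1, 0);
        for (int last = 0; last <= k; ++last)
            for (int choice = 0; choice < k; ++choice)
                if (choice != last)
                    cur[last] = std::max(cur[last], value[stage][choice] + prev[choice]);
        prev = cur;
    }
    return prev[k];
}
```

With `k = 3` and the ninja points matrix this reproduces the answer 11; swapping `max` for `min` turns it into a minimum-cost variant.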

5️⃣ Problems With Same Pattern

These are almost identical to Ninja Training.

1️⃣ Paint House
2️⃣ Vacation Problem (AtCoder DP C)
3️⃣ Maximum Points with Restrictions
4️⃣ Activity Selection with Restrictions
5️⃣ Task Scheduling Problems
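For instance, Paint House uses the same `dp[index][previousChoice]` shape with min instead of max: paint n houses with 3 colors, no two adjacent houses share a color, minimize total cost. A minimal sketch:

```cpp
#include <algorithm>
#include <vector>

// cost[i][c] = cost of painting house i with color c (3 colors).
// Same state shape as Ninja Training, but minimizing instead of maximizing.
int minCostPaintHouse(const std::vector<std::vector<int>>& cost) {
    int n = cost.size();
    // prev[c] = cheapest way to paint houses so far with house painted color c.
    std::vector<int> prev(cost[0].begin(), cost[0].end());
    for (int i = 1; i < n; ++i) {
        std::vector<int> cur(3);
        for (int c = 0; c < 3; ++c)
            cur[c] = cost[i][c] + std::min(prev[(c + 1) % 3], prev[(c + 2) % 3]);
        prev = cur;
    }
    return *std::min_element(prev.begin(), prev.end());
}
```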

6️⃣ Another Super Important Interview Trick

If you see:

Day / Index
Choice from K options
Restriction on previous choice

Then DP dimension =

dp[n][k+1]

Example:

Ninja Training:

n days
3 tasks
+1 for "no previous task"

dp[n][4]

7️⃣ One More Advanced Trick (Used in Google Interviews)

If the constraint is large:

N ≤ 100000

Then plain recursion is far too slow, and even memoized recursion risks a stack overflow at this depth.

Interviewers expect:

DP with space optimization

Which reduces memory:

O(N*4) → O(4)

8️⃣ The Fast Interview Answer

If an interviewer asks how you recognized DP, answer like this:

The problem has stages (days), choices (3 activities), and a restriction based on the previous choice. Since the current decision depends on the previous state and we need to maximize total points, it forms overlapping subproblems, so dynamic programming is appropriate.

This answer sounds very strong in interviews.
