Adam Leskis

From Active Learning to Deliberate Practice: an iximiuz Labs case study

Much of the educational tech content on the internet is in the form of blogs/videos/tutorials/walkthroughs/etc, and a lot of it is good, some great, some not so great. One thing that the great stuff has in common is that it encourages a "Learning by Doing" approach. After all, you get better at building a REST API by actually building a REST API...not just reading about it, watching somebody else do it, or drawing out a diagram on a whiteboard.

To be sure, a lot of those things I just mentioned (eg, reading about REST APIs) are absolutely necessary inputs for building your conceptual framework of technical concepts, foundational components, and how the entire system fits together to accomplish a goal. There's very little chance you could just guess at what a REST API should be and get it right on the first, or even 100th, attempt without SOME context around what you should do.

 

And with this context built through content, active learning is an excellent learning strategy to extend and consolidate the knowledge that you gain through reading/watching/hearing/etc about different technical concepts. By actually performing the behavior required to create the thing you've learned about (eg, our example REST API), you get all sorts of added benefits:

  • feedback on whether the thing works
  • having to deal with errors when it doesn't work
  • seeing where the gaps in your knowledge still are
  • exposing hidden assumptions about whether certain steps in the tutorial are obvious (they usually aren't)
  • the satisfaction of completing something

 
 

How This Content Gets Created

One of the greatest shortcomings in this wealth of active-learning content, however, is that each of the examples/scenarios/challenges needs to be created by a human.

In before: "I hope you're not suggesting we just create content entirely with AI"...no, I'm not going to argue that, since I think it would be unnecessarily wasteful, and we can get pretty far without it anyway.

Take, for example, a learning exercise where a learner is dropped into a Linux environment and has to find out why a systemd service has stopped. It used to be running, but now it's not, and eventually, with some helpful hints from the learning platform, the learner finds that permissions have been updated on some files that the service needs, and fixing those allows it to start again.
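
In the real lab, the investigation would start from `systemctl status` and `journalctl -u`, which can't be reproduced outside a live systemd box, so here's the permissions half of that scenario simulated on a plain file (the service name `myapp` and all paths are made up for illustration):

```shell
# Simulated version of the scenario: a "service binary" has lost its
# exec bit. In the lab you'd first run `systemctl status myapp` and
# `journalctl -u myapp` to find the "Permission denied" error.
set -eu

workdir=$(mktemp -d)
bin="$workdir/myapp"
printf '#!/bin/sh\necho running\n' > "$bin"
chmod 644 "$bin"                 # the "broken" state: not executable

# Diagnose: is the binary executable?
if [ ! -x "$bin" ]; then
    echo "myapp binary is not executable: mode $(stat -c '%a' "$bin")"
fi

# Fix and verify, as the learner would before `systemctl restart myapp`
chmod 755 "$bin"
msg=$("$bin")                    # the binary runs again
echo "$msg"

rm -rf "$workdir"
```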

The learner has actually applied some knowledge to fix a system issue and rejoices in the glow of active learning success!

 

And now the learner would like to do it again, so they press "start" again on the activity, and they're dropped back into the same Linux system, with the exact same instructions, the exact same error, and the exact same fix.

They remember probably 80% of what they just did (it was only 5 minutes ago), and so they don't have much trouble fixing it again. After repeating this about 5 times, the learner can go on autopilot, just repeating the commands from memory, and maybe even jumping directly to the fix without needing to go through the whole debugging flow.

This active learning can become passive when it's the same thing over and over, and can begin to be more of a test of memory than actual knowledge application.

 
 

Deliberate practice

A much stronger paradigm, supported by the work of researchers like Ericsson (1993), is the framework of deliberate practice (the seminal article on deliberate practice is here).

The main points are that the learner needs to be engaged in narrowly focused intentional repetition, centered on a particular skill that they're trying to improve. So we could reimagine our previous example within this framing to have the following structure:

The learner still enters the familiar linux environment, but it's unclear which systemd service is having issues (since this is randomized at the start of the activity). Through investigation, the learner isolates which service it is and proceeds to update the file permissions to fix the service. On the next attempt, it's a different service that is having issues, and so the learner is prompted to use a specific process to isolate which service is misbehaving. This can continue as long as the learner wants to continue practicing.

Note that this new framing exposes a hidden complexity in the original activity...it's actually combining two realistic debugging activities: identifying a failing service and using file system permissions to fix the issue.

So you could also imagine a slightly different path based on the original scenario: the learner is in the Linux system and already knows (or is told) which service is behaving problematically, but the underlying permissions issue is randomized and requires investigation and remediation:

  • the user the service runs as is randomized, which changes which file permissions are needed
  • the binary of the service isn't executable
  • the log directory isn't writable due to a permissions mismatch
  • the ownership of a directory is correct, but new files don't inherit the ownership because setgid is missing
  • various permission bit issues
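
Each of these variants has a small fix, but each exercises a different corner of the permissions model. As a sketch, here's the setgid case run against a temp directory (in the lab this would be something like the service's log directory; the path here is throwaway):

```shell
# Sketch of the setgid variant: files created in a shared directory
# should inherit the directory's group, which requires the setgid bit.
# Other variants are one-liners too, e.g. `chmod u+x <binary>` for the
# missing exec bit, or `chmod 750 <logdir>` for the unwritable log dir.
set -eu

logdir=$(mktemp -d)

chmod 2775 "$logdir"          # the leading '2' is the setgid bit
mode=$(stat -c '%a' "$logdir")
echo "mode: $mode"            # prints: mode: 2775

# With setgid present, new files pick up the directory's group
# rather than the creating user's primary group.
touch "$logdir/new.log"
stat -c '%G' "$logdir/new.log"

rm -rf "$logdir"
```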

Compared to the single repeated scenario, this has some structural differences and advantages:

  • the learning objective can be scoped down and isolated, which also allows more targeted feedback and assessment
  • the practice is necessarily more deliberate and intentional, since the fix can't be guessed ahead of time, so the role of short-term memory changes to be about the process rather than the solution
  • there's actually a reason for the learner to repeat the activity multiple times
  • the variation now forces the learner to exhibit the behavior we're trying to gain improvement in (related to the specific learning objective)
  • this approach is supported by research (as mentioned above)

 
 

Case Study #1 - Kubernetes OWASP Top Ten

https://labs.iximiuz.com/playgrounds/my-my-k3s-de88e13a

This is a playground set up to give the learner practice with running a vulnerability scanning tool (in this case, kubescape) to identify and fix a randomized security vulnerability from the OWASP Kubernetes Top 10 list in a running cluster.

The very specific and measurable learning objective is editing a Kubernetes manifest/resource to fix the vulnerability. While this does look a bit like our original example of both finding and fixing an issue, the scanner bit is very much just repeated commands, and there's not much of a learning objective to target there.

Additionally, this first part of the activity is arguably not very authentic. Not many people are actually running scanners manually to identify vulnerable misconfigurations in their K8s clusters, since this is usually delegated to CI pipelines (and rightly so!).

The second part is more defensible, as a means to give learners a mapping between vulnerability category (K01 - insecure workload configurations, like running as root) and what that actually looks like in a manifest.
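
That mapping is concrete: for K01, a pod spec with no `securityContext` (or with `runAsNonRoot` absent) is the vulnerable shape. A minimal sketch of the fixed shape, with the deployment name invented, written as a shell heredoc plus the kind of crude grep check a lab verifier might run against `kubectl get deploy -o yaml` output:

```shell
# The fixed shape of OWASP K01 (insecure workload configuration) in a
# pod spec. The name `web` is made up; a lab checker could grep the
# live manifest the same way.
set -eu

manifest=$(mktemp)
cat > "$manifest" <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  template:
    spec:
      containers:
        - name: web
          image: nginx
          securityContext:
            runAsNonRoot: true           # the K01 fix
            allowPrivilegeEscalation: false
EOF

# A crude "did the learner fix it?" check:
if grep -q 'runAsNonRoot: true' "$manifest"; then
    result=fixed
else
    result=vulnerable
fi
echo "$result"

rm -f "$manifest"
```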

While it does have gaps, it's a working MVP of a system that automatically presents a situation and gives feedback about whether the learner achieved a given outcome. So in terms of our discussion of deliberate practice, it's a bit closer to something that's supported by the research.

 
 

Case Study #2 - Log Parsing in Linux

https://labs.iximiuz.com/playgrounds/log-parser-lab-k3s-dd84febf

This playground is configured to present the learner with an opportunity to use common Linux tools (eg, grep, sed, awk, wc) to parse various log types (nginx, apache, syslog) for various investigative purposes:

  • finding the unique IP address triggering the most failures
  • summing the bytes transferred for an IP with a specific HTTP method over a given time period
  • finding the user with the most failed SSH logins over the last hour
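
As an example of the kind of pipeline these exercises target, here's the "top failing IP" task run against a few made-up nginx-style log lines:

```shell
# Find the IP with the most failing (HTTP 5xx) requests in an
# nginx-style access log. The log lines are fabricated for illustration.
set -eu

log=$(mktemp)
cat > "$log" <<'EOF'
10.0.0.1 - - [01/Jan/2025:10:00:01 +0000] "GET /api HTTP/1.1" 500 123
10.0.0.2 - - [01/Jan/2025:10:00:02 +0000] "GET /api HTTP/1.1" 200 456
10.0.0.1 - - [01/Jan/2025:10:00:03 +0000] "POST /api HTTP/1.1" 502 89
10.0.0.3 - - [01/Jan/2025:10:00:04 +0000] "GET / HTTP/1.1" 503 12
EOF

# $1 = client IP, $9 = status code in the common/combined log format
top=$(awk '$9 ~ /^5/ {print $1}' "$log" | sort | uniq -c | sort -rn \
      | head -1 | awk '{print $2}')
echo "$top"                   # prints: 10.0.0.1

rm -f "$log"
```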

Similar to the previous case, a strength here is that the learner needs to convert the requirements into a chain of commands. So it's also narrowly focused, is completely self-contained, and can continue as long as the learner wants to practice.

Since both the log type and the investigation focus are varied at random for each new exercise, the learner can use their short-term memory for applying knowledge of the various Linux commands, rather than for the keystrokes to type.

 
 

Disclaimer - We Need More Tools in General

Having said all of the above, why might it still be the wrong thing to do?

While I'm obviously biased towards the deliberate practice approach, it's not a format that a lot of learners are familiar with, specifically because it isn't very common.

In addition, other strands of cognitive research discuss desirable difficulty (another good thing that deliberate practice incorporates), which has a downside: learners can feel like they're learning less. This is also a proposed reason why, given the choice, learners still gravitate towards YouTube walkthroughs and blog tutorials:

They feel like they're learning more because it's easy and they enjoy it

So it could be that this format isn't something that learners are even interested in, because they themselves feel like they're learning less from it.

Another challenge is how to implement these types of activities from a technical perspective. There are open questions like:

  • where/how is this activity presented to learners (web browser, terminal cli, etc)?
  • what mechanism is used to randomize the activities?
  • how do we assess whether the learning objective has been achieved?
  • what format is the feedback on unsuccessful attempts, and when is it given?
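
On the randomization question at least, the mechanism doesn't need to be elaborate: the lab's setup script can pick one fault from a small catalogue and apply it before the learner connects. A sketch of that idea (the fault names are invented, and a temp file stands in for the service binary):

```shell
# Sketch of a fault-injection setup script: pick one fault at random
# and apply it before the learner gets access. In a real lab the
# target would be the service's binary or log directory.
set -eu

target=$(mktemp)
chmod 755 "$target"

faults="noexec unreadable wrongmode"
# Pick one fault pseudo-randomly ($RANDOM is not POSIX sh, so use awk)
n=$(awk 'BEGIN { srand(); print int(rand() * 3) + 1 }')
fault=$(echo "$faults" | cut -d' ' -f"$n")

case "$fault" in
    noexec)     chmod a-x "$target" ;;
    unreadable) chmod a-r "$target" ;;
    wrongmode)  chmod 600 "$target" ;;
esac

echo "injected fault: $fault"
rm -f "$target"
```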

Nevertheless, more choice in how users like to learn new things is never a bad thing, and with the internet, we can create and share these things at basically zero marginal cost. More is better, and I'm here for it!
