Jimmy Dahlqvist for AWS Community Builders

Posted on • Originally published at jimmydqv.com

How I extend my blog with gamified learning

One of the major reasons I write all of these blog posts is to help people learn about the cloud and AWS. But how do you know that you understood what you read and learned from it? I got a suggestion to add a quiz at the end of each blog post, based on the content of the post. This was a great suggestion, and I got to work on it.

kvist.ai is a Generative AI powered quiz solution built on AWS, serverless technologies, and Amazon Bedrock. When I feed it one of my blog posts it automatically creates a quiz based on the content, which is exactly what I needed. Knowing the founder, Lars Jacobsson, I got access to some early features that helped me automate the process.

In this post I will discuss how I extended my CI/CD solution with an extra step that creates the quiz and adds it to the blog. My pipeline runs in AWS and is serverless, event-driven, and powered by AWS StepFunctions, Lambda, EventBridge, and more.

Blog overview

To make it easier to understand the setup later, let's first look at how my blog is created and distributed. My blog consists of plain old static HTML, no React, Vue, or anything like that. The blog is distributed via CloudFront and S3, and I use Lambda@Edge and CloudFront Functions to manipulate the response and collect statistics; check out my post "Serverless statistics solution with Lambda@Edge" on that matter.

I write my posts in markdown, which is then converted to HTML with the 11ty engine. The layout of each page is decided by the metadata in the front matter section; 11ty then uses the layouts I have created with Nunjucks. This way I can add metadata to control how the page is rendered, and inject sections and links.

My build and deploy pipeline is based on GitHub and GitHub Actions, which builds the site on merge to the main branch and syncs the HTML files to the S3 bucket.

Image showing blog overview

CI/CD overview

My CI/CD setup consists of two major parts. The first is the build and deploy part that is invoked on a merge to the main branch; this part is 100% GitHub Actions based and works as described above.

The second part is where I perform two major things: I generate the quiz for the gamified learning, and I use Polly to generate a voice that reads my blog post. This part starts in GitHub Actions, which builds the page and uploads it to a staging bucket in S3, and is invoked when I open, close, or modify a pull request. After the upload to the staging bucket is complete, GitHub Actions posts an event onto an EventBridge event bus, and this is where my AWS based part takes over. In this blog we will focus on this AWS based part.

The AWS based CI/CD pipeline is event-driven and serverless; the primary services used are StepFunctions, Lambda, and EventBridge. The flow is based on a saga pattern where the domain services hand over by posting domain events on the event bus, which moves the saga to the next step.
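To make the hand-over concrete, posting a domain event onto the bus boils down to a single PutEvents call. This is a minimal sketch, not my exact code; the bus name, source, and detail-type values are illustrative.

```python
import json


def build_domain_event(bus_name, detail_type, detail):
    """Shape a domain event entry for EventBridge's PutEvents API."""
    return {
        "EventBusName": bus_name,
        "Source": "blog.pipeline",  # illustrative source name
        "DetailType": detail_type,
        "Detail": json.dumps(detail),  # Detail must be a JSON string
    }


def post_domain_event(events_client, entry):
    """Post a prepared entry; events_client is e.g. boto3.client('events')."""
    return events_client.put_events(Entries=[entry])
```

Each service in the saga emits an event like this when it is done, and an EventBridge rule routes it to the next StepFunction.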

Image showing cicd overview

To summarize the image a bit: GitHub Actions invokes the Information service, which reaches out to GitHub to collect information about the pull request. This invokes the Voice service, which starts the Polly text-to-speech task, followed by the Quiz service creating a quiz on kvist.ai. Finally, the Update service edits the front matter of the markdown file and creates a new commit on the pull request branch.

Now, why do I implement this pipeline in AWS with StepFunctions and Lambda? Why not just implement it in GitHub Actions? The answer is that I could do that, but I need to repeatedly call different service APIs in AWS, and with the integration with kvist.ai it was just easier to implement it in AWS directly.

Technical deep dive

We have now come to the most fun part of this post, the technical deep dive. In this part I will try to explain and show how each of the services works. If we redraw the overview image with more details and AWS services, it looks like this.

Image showing cicd overview

There is still a lot going on in this image, but don't worry, we will go through the services one by one and discuss the architecture and data flow.

Event structure

First, let's take a quick look at the event structure so you get an understanding of how data is added. I have opted to use the metadata-data pattern, where each service adds information and posts a new event onto the bus so the next service in the saga can perform its part.

In the end, an event looking like this will be posted; it has all the information needed for the Update service to create a new commit.

{
    "metadata": {
        "traceid": "UUID"
    },
    "data": {
        "PullRequestInfo": {
            "PullRequestCommitSha": "SHA",
            "PullRequestBranch": "BRANCH",
            "PullRequestNumber": "XYZ"
        },
        "MarkdownFile": {
            "path": "FILENAME.md",
            "fileSlug": "FILE_SLUG"
        },
        "quiz": {
            "gameCode": "123456",
            "url": "https://kvist.ai/123456"
        },
        "Voice": {
            "LanguageCode": "en-US",
            "OutputFormat": "mp3",
            "OutputUri": "S3_URI",
            "RequestCharacters": 6637,
            "TaskStatus": "completed",
            "VoiceId": "Joanna"
        }
    }
}

Normally you would not keep data from the invoking event, but since the entire chain is built as a saga pattern, and to avoid each service fetching the same information over and over again, it was easier to break the rules a bit.
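The enrichment step each service performs can be sketched like this: copy the incoming event and append your own data section, leaving the metadata and all earlier sections intact. The function name and keys are illustrative, not my exact code.

```python
import copy


def enrich_event(incoming, service_key, service_data):
    """Return a new saga event with this service's section appended.

    The incoming event is deep-copied so earlier sections and the
    metadata (e.g. traceid) pass through unchanged.
    """
    event = copy.deepcopy(incoming)
    event.setdefault("data", {})[service_key] = service_data
    return event
```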

Information Service

The very first step is to collect information about the pull request: which markdown file was updated, where the rendered HTML file can be found in the staging bucket, and which branch the pull request originated from. This is done with a couple of Lambda functions running in sequence in a StepFunction.

Image showing Information StepFunctions Graph

To communicate with GitHub I use Octokit. Below are snippets from the CloudFormation template and the code used to call the GitHub API.


CollectPullRequestInfoStateMachineStandard:
  Type: AWS::Serverless::StateMachine
  Properties:
    DefinitionUri: statemachine/collect-info.asl.yaml
    Tracing:
      Enabled: true
    DefinitionSubstitutions:
      FetchPullRequestInfoFunctionArn: !GetAtt FetchPullRequestInfoFunction.Arn
      FetchMarkdownFilePathFunctionArn: !GetAtt FetchMarkdownFilePathFunction.Arn
      FetchHtmlFilePathFunctionArn: !GetAtt FetchHtmlFilePathFunction.Arn
      EventBridgeBusName:
        Fn::ImportValue: !Sub ${InfraStackName}:eventbridge-bus-name
    Policies:
      - Statement:
          - Effect: Allow
            Action:
              - logs:*
            Resource: "*"
      - LambdaInvokePolicy:
          FunctionName: !Ref FetchMarkdownFilePathFunction
      - LambdaInvokePolicy:
          FunctionName: !Ref FetchPullRequestInfoFunction
      - LambdaInvokePolicy:
          FunctionName: !Ref FetchHtmlFilePathFunction
      - EventBridgePutEventsPolicy:
          EventBusName:
            Fn::ImportValue: !Sub ${InfraStackName}:eventbridge-bus-name
    Type: STANDARD

FetchPullRequestInfoFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: lambda/FetchPullRequestInfo
    Handler: app.handler
    Runtime: nodejs14.x
    Policies:
      - AWSLambdaBasicExecutionRole
      - SecretsManagerReadWrite
    Environment:
      Variables:
        REPO: !Ref Repo
        OWNER: !Ref RepoOwner
        APP_SECRETS: !Ref AppSecrets


let octokit; // initialized by initializeOctokit() (not shown here)

exports.handler = async (event) => {
  const pullRequestNumber = event.detail.pr_number;
  if (pullRequestNumber == -1) {
    throw new Error("Pull Request Info not available!");
  }

  await initializeOctokit();
  const pullRequestInfo = await getPullRequest(pullRequestNumber);
  return pullRequestInfo;
};

const getPullRequest = async (pullRequestNumber) => {
  if (octokit) {
    const result = await octokit.rest.pulls.get({
      owner: process.env.OWNER,
      repo: process.env.REPO,
      pull_number: pullRequestNumber,
    });

    // Keep only the parts of the pull request the saga needs downstream.
    const prData = {};
    prData["PullRequestCommitSha"] = result.data.head.sha;
    prData["PullRequestBranch"] = result.data.head.ref;
    prData["PullRequestNumber"] = pullRequestNumber;
    return prData;
  }
};

The logic in this first part is not that complex; it's mostly about fetching the correct information.
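As an example of that kind of logic, mapping the updated markdown file to where the rendered HTML lives in the staging bucket is a simple path transformation. This is a sketch of the idea, not my actual FetchHtmlFilePathFunction; the directory layout is an assumption based on how 11ty typically emits pages.

```python
from pathlib import PurePosixPath


def html_key_for_markdown(markdown_path):
    """Map a markdown source file to the rendered page's S3 key.

    Assumes 11ty's common output layout: posts/my-post.md
    becomes my-post/index.html.
    """
    slug = PurePosixPath(markdown_path).stem
    return f"{slug}/index.html"
```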

Voice Service

The most complex part of this entire setup is the voice generation with Polly. It starts with an extract Lambda function that fetches the HTML file to be used. In the transform part I create the SSML that Polly uses when reading the post.

Image showing Voice StepFunctions Graph

Now, I will not go into details on this part, as it's actually a blog post of its own. To learn more about it, check out my post Serverless voice with Amazon Polly or watch my conversation with Johannes Koch on YouTube.
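For context, the core Polly call behind this service is the asynchronous start_speech_synthesis_task API. A minimal sketch of shaping that request follows; the values mirror the Voice section of the event example above, and the helper name is mine.

```python
def build_synthesis_request(ssml, bucket, prefix):
    """Shape keyword arguments for polly.start_speech_synthesis_task.

    Usage: polly.start_speech_synthesis_task(**build_synthesis_request(...))
    Polly writes the resulting mp3 to the given S3 bucket and prefix.
    """
    return {
        "Engine": "neural",
        "LanguageCode": "en-US",
        "OutputFormat": "mp3",
        "OutputS3BucketName": bucket,
        "OutputS3KeyPrefix": prefix,
        "Text": ssml,
        "TextType": "ssml",  # the transform step produced SSML, not plain text
        "VoiceId": "Joanna",
    }
```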

Quiz Service

Now it's time to generate the quiz. This is done by calling an API provided by kvist.ai, using an HTTP Endpoint task in StepFunctions. This is a great way to call an API without having to write any code.

Image showing Quiz StepFunctions Graph

In the extract Lambda function I strip all HTML tags from the blog post and create the prompt that is sent to the API. The code for this step uses BeautifulSoup in Python to make the process smooth.


import json
import os
import boto3
from bs4 import BeautifulSoup


def handler(event, context):
    bucket = event["HtmlFile"]["Bucket"]
    key = event["HtmlFile"]["Key"]
    s3 = boto3.client("s3")

    content = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    extractedContent = extract(content)

    etlBucket = os.environ["ETL_BUCKET"]
    base = os.path.splitext(key)[0]

    txt_key = base + ".txt"
    s3.put_object(Body=extractedContent, Bucket=etlBucket, Key=txt_key)

    return {"Bucket": etlBucket, "Key": txt_key}


def extract(contents):
    # Name the parser explicitly to avoid BeautifulSoup's GuessedAtParserWarning.
    soup = BeautifulSoup(contents, "html.parser")
    return soup.get_text("\n")


To call an HTTP endpoint you use the http:invoke task resource; below is a snippet from the ASL file for the state machine.

Call kvist.ai API:
  Type: Task
  Resource: arn:aws:states:::http:invoke
  Parameters:
    ApiEndpoint: ${ApiEndPoint}
    Authentication:
      ConnectionArn: ${ConnectionArn}
    Headers:
      Content-Type: application/json
    RequestBody:
      prompt.$: $.Prompt
      numberOfQuestions: 5
      language: English
    Method: POST

Looking in Workflow Studio, we end up with a configuration like this.

Image showing task configuration

For a deeper dive into the StepFunctions HTTP Endpoint, I recommend my post AWS StepFunctions HTTP Endpoint demystified.

Update Service

The final step in this saga is to update the markdown file, add the correct front matter, and then create a new commit.

Image showing update StepFunctions Graph

In the first part of the StepFunction I copy the mp3 file that Polly generated to the correct location. I don't check the mp3 files into the repo, as they are huge blobs and can be reproduced; instead I copy them to an S3 bucket, and during the build process the files are pulled from this bucket and placed into the production bucket.
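That copy step is a straightforward S3 copy_object call. This sketch is illustrative, not my exact Lambda; the bucket names are stand-ins, and the destination key follows the audio path used in the front matter.

```python
def copy_audio(s3, source_bucket, task_output_key, dest_bucket, file_slug):
    """Copy Polly's output mp3 to the path the front matter will point at.

    s3 is a client with a copy_object method, e.g. boto3.client('s3').
    Returns the destination key so later steps can record it.
    """
    dest_key = f"assets/audio/{file_slug}/en-US/Joanna.mp3"
    s3.copy_object(
        Bucket=dest_bucket,
        Key=dest_key,
        CopySource={"Bucket": source_bucket, "Key": task_output_key},
    )
    return dest_key
```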

When updating the front matter I use gray-matter to make it an easy process. The code checks if there is a voice or quiz section in the event, and if so adds new tags to the front matter.

const updateFrontMatter = async (filePath, event) => {
  const fileContent = fs.readFileSync("/tmp/" + filePath, "utf8");
  const { data, content } = matter(fileContent);
  // Audio and quiz are independent, so check for each section separately.
  if (event.Voice) {
    data.audio = "CREATE_THE_AUDIO_PATH";
  }
  if (event.Quiz) {
    data.quiz = event.Quiz.url;
  }
  const updatedContents = matter.stringify(content, data);
  fs.writeFileSync("/tmp/" + filePath, updatedContents);
};

This will now create a front matter section looking like this one.

---
title: BLOG TITLE
description: DESCRIPTION
audio: /assets/audio/FILE_SLUG/en-US/Joanna.mp3
quiz: https://kvist.ai/12345
---

The important parts here are audio and quiz as these will be used to render out special sections in the blog post.

The last part creates a new commit in the repo. This is once again done with Octokit, and it involves several steps: creating blobs for the files, creating a tree, creating the commit, and updating the branch ref. Below is a simplified version of the code.


const AWS = require("aws-sdk");
const { Octokit } = require("@octokit/rest");
const path = require("path");
const fs = require("fs");
const https = require("https");
const fg = require("fast-glob");
const { readFile } = require("fs").promises;

let octokit;
const owner = process.env.OWNER;
const repo = process.env.REPO;
let pullRequestNumber = -1;

exports.handler = async (event) => {
  pullRequestNumber = event.PullRequestInfo.PullRequestNumber;
  if (pullRequestNumber == -1) {
    throw new Error("Pull Request Info not available!");
  }

  await initializeOctokit();
  const fileSlug = event.MarkdownFile.fileSlug;

  await uploadToRepo(`/tmp/${fileSlug}`, event.PullRequestInfo);
};

const initializeOctokit = async () => {
  if (!octokit) {
    const gitHubSecret = await getSecretValue(
      process.env.APP_SECRETS,
      "github-token"
    );
    octokit = new Octokit({ auth: gitHubSecret });
  }
};

const uploadToRepo = async (coursePath, pullRequestInfo) => {
  const currentCommit = pullRequestInfo.PullRequestCommitSha;
  const branch = pullRequestInfo.PullRequestBranch;
  const filesPaths = await fg(coursePath + "/**/*.md");
  const filesBlobs = await Promise.all(filesPaths.map(createBlobForFile()));

  const pathsForBlobs = filesPaths.map((fullPath) =>
    path.relative(coursePath, fullPath)
  );

  const newTree = await createNewTree(
    filesBlobs,
    pathsForBlobs,
    pullRequestInfo.PullRequestCommitSha
  );
  const commitMessage = "Added quiz and audio file";
  const newCommit = await createNewCommit(
    commitMessage,
    newTree.sha,
    pullRequestInfo.PullRequestCommitSha
  );
  await setBranchToCommit(newCommit.sha, pullRequestInfo);
};

const getFileAsUTF8 = (filePath) => readFile(filePath, "utf8");

const createBlobForFile = () => async (filePath) => {
  const utf8Content = await getFileAsUTF8(filePath);
  const blobData = await octokit.rest.git.createBlob({
    owner: owner,
    repo: repo,
    content: utf8Content,
    encoding: "utf-8",
  });
  return blobData.data;
};

const createNewTree = async (blobs, paths, parentTreeSha) => {
  const tree = blobs.map(({ sha }, index) => ({
    path: paths[index],
    mode: `100644`,
    type: `blob`,
    sha,
  }));
  const { data } = await octokit.rest.git.createTree({
    owner: owner,
    repo: repo,
    tree,
    base_tree: parentTreeSha,
  });
  return data;
};

const createNewCommit = async (message, currentTreeSha, currentCommitSha) =>
  (
    await octokit.rest.git.createCommit({
      owner: owner,
      repo: repo,
      message,
      tree: currentTreeSha,
      parents: [currentCommitSha],
    })
  ).data;

const setBranchToCommit = (commitSha, pullRequestInfo) =>
  octokit.rest.git.updateRef({
    owner: owner,
    repo: repo,
    ref: `heads/${pullRequestInfo.PullRequestBranch}`,
    sha: commitSha,
  });


Extending the solution

As the entire CI/CD solution is built as an event-driven system using the saga pattern, extending it with a new step is very straightforward. This is where an event-driven solution really shines. I could develop the entire Quiz service on the side and just invoke it on the same event as the Voice service. When I was happy with the result, I updated the saga and changed which events invoke which parts.

The overall time to extend this solution was less than one hour. I didn't have to update any complex flows, just add a service and change the saga a bit. There is a reason why I truly enjoy working with serverless and event-driven systems.

Render the post

As mentioned before, I use 11ty to render the blog posts from markdown to HTML. During this process 11ty uses the layout I have created for blog posts.

In this layout I have {% if audio %} and {% if quiz %} sections that check whether these keys are present in the front matter. If they are, the HTML inside the if-blocks is rendered, automatically adding the audio player and the quiz based on the data from the front matter.

---
layout: default
---

<article class="max-w-5xl mx-auto">

    <div id="content" class="prose text-gray-800 max-w-none">
        {% if audio %}
        <audio id="audio" controls="true" class="flex w-full">
            <source src="{{ audio }}" type="audio/mpeg">
            Your browser does not support the audio element.
        </audio>
        <div class="text-end text-blue-500 italic">Voice provided by Amazon Polly</div>
        {% endif %}


        {% if quiz %}
        <br>
        <div class="prose text-gray-800 max-w-none font-mono">
        <p class="text-2xl font-extrabold ">Post Quiz</p>

        <p class="font-normal font-mono">Test what you just learned by doing this <a rel="noreferrer" target="_blank" href="{{ quiz }}">five question quiz - {{ quiz }}.</a><br>
        Powered by <a rel="noreferrer" target="_blank" href="https://kvist.ai">kvist.ai</a> your AI generated quiz solution!</p>
        </div>
        {% endif %}
    </div>

</article>

Final Words

In this post I explained how I added a step to my event-driven CI/CD pipeline to create a gamified learning experience for you, my readers. If you enjoy the quiz part, go check out kvist.ai!

Check out My serverless Handbook for some of the concepts mentioned in this post.

Don't forget to follow me on LinkedIn and X for more content, and read the rest of my blogs.

As Werner says! Now Go Build!
