Matt Morgan for AWS Community Builders

Presenting AWS Speakers Directory, an AI Hackathon Project


Hackathons are an interesting phenomenon. We work hard for pay all day and then we continue working hard for free in the evenings and weekends instead of watching TV, hanging out with friends and family, pursuing other hobbies and crafts, and sometimes in lieu of sleeping! Why? We each have our own motivations, but for me, the temptation of collaborating with brilliant folks like Danielle, Johannes, and Julian is too powerful to ignore. A secondary, yet still compelling, motivation is to get hands-on experience with technologies heretofore missing from my toolbox. A distant third is whatever prizes may be offered.

Johannes picked the venue and organized the team like a seasoned professional planning a heist. The job was to create an AI-powered tool using the Transformers Agent framework from Hugging Face. This was a daunting task as none of the four of us have a great deal of experience with AI or ML models, but we did have a pretty good concept of what we could build, courtesy of Johannes. We would build a directory of speakers that can present at community events and use AI to enhance the content and provide a recommendation model to better match talks to events.

We further challenged ourselves by agreeing to:

We each brought expertise to the project that was indispensable to its success. Of course Johannes had the plan and brought the group together. He organized meetings and made sure we were on task without pushing too hard (hey, it's just a hackathon). But he also wrote a lot of the frontend code and a large part of the AppSync resolvers. Amazing!

Danielle focused on how we could leverage Transformers Agent in Python Lambda functions. She even recorded a video explaining some of the fundamentals so that Johannes and I were able to go from zero to contributing in no time flat. It's safe to say we were at risk of not meeting the requirements of the challenge without Danielle's contribution.

Julian designed our overall CDK architecture, implemented the Cognito flows, did a lot of additional Flutter work, and built out the Merged API with L1 constructs, a feat that deserves a blog post of its own.

My contribution was to provide the AWS Organization, roles that hopefully didn't get in the way too much, and a lot of CDK optimization and fine-tuning. I designed our DynamoDB data model, wrote a bunch of the queries, and got to do my part on the AI bits.

It wasn't long into our project that Johannes and Danielle were honored as AWS Heroes. The hackathon organizers were kind enough to not kick them out of this Community Builders-organized event. Big congrats to Danielle and Johannes!

What follows is a deeper dive into some of the challenges we faced along the way and how we overcame them.

What We Built

Our ambition is to build the AWS User Group Speaker Directory, making it easy for AWS User Groups to find speakers for remote or in-person events. This tool is powered by Transformers Agent to enhance talk profiles, normalize tagging, and make matching speakers and talks to events simple. The Speakers Directory will reduce the burden of organizing events and raise up more voices in the community.

Our application consists of a web layer, a GraphQL API powered by AppSync and a DynamoDB table. This part of the application handles user flows, authentication and authorization with Cognito, and storage of event and talk data. Web assets (our Flutter application) are stored in S3 and served over CloudFront. We have a custom domain managed by Route 53. All of this is composed with the AWS Cloud Development Kit.

high-level architecture
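
For a feel of how that composes, here's a minimal CDK sketch of the storage and serving pieces. Construct names are placeholders, and the Cognito, AppSync, and Route 53 wiring is omitted, so treat it as a shape rather than our exact stack.

import { RemovalPolicy, Stack, StackProps } from 'aws-cdk-lib';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

export class SpeakersDirectoryStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Single-table design: one table with generic pk/sk attributes and a
    // stream to drive the asynchronous AI workflows described below.
    new dynamodb.Table(this, 'Table', {
      partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
      sortKey: { name: 'sk', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
      stream: dynamodb.StreamViewType.NEW_IMAGE,
    });

    // Flutter web build stored in S3 and served over CloudFront.
    const siteBucket = new s3.Bucket(this, 'SiteBucket', {
      autoDeleteObjects: true,
      removalPolicy: RemovalPolicy.DESTROY,
    });

    new cloudfront.Distribution(this, 'Distribution', {
      defaultBehavior: { origin: new origins.S3Origin(siteBucket) },
      defaultRootObject: 'index.html',
    });
  }
}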

CodeCatalyst

Johannes has been a big booster of CodeCatalyst, so it was no surprise he wanted to use it for this project. One cool thing I learned pretty early on is that it supports GitHub Actions out of the box. You can just use them. This is a pretty great feature. Setting up deployment was also easy: the tool automatically generated an appropriate role for using tools like CDK or SAM and asked if I wanted to use it. As far as I'm aware, this is the first time AWS has gone out of their way to generate a role that could be seen as suitable for such a CI/CD pipeline. That alone is worth the price of admission, as I'm sure many long hours have been spent trying to design the ideal role, and many pipelines wind up with permissions that are too broad!

Other than that, the user experience of CodeCatalyst still leaves something to be desired. I can comment on a pull request, but the author of the PR cannot reply to my comment. Navigating between pipelines, issues and repositories feels a bit sluggish and it's just too many clicks to open a pull request. I do like the idea of an integrated tool that plays well with AWS. I can imagine getting a lot of value from some deeper integrations, such as EventBridge rules firing on a PR. I don't know if that's on the team's roadmap, but it probably should be!

CodeCatalyst Pipeline

Flutter

I think Johannes is also the reason we picked Flutter for our web tier, and Julian also seemed to have prior experience with it. I've known of Dart for years, but never used it. We had some rough going in the early days as we tried to shoehorn the Flutter build into a CDK asset bundle using Docker. The dang thing ran out of memory and drove me crazy for several days before I gave up and went for a build outside of Docker. In the end, it wasn't a Flutter problem, but a Docker problem, with some of the libs not working on my M1 Mac. The next challenge was getting CDK to correctly cache builds so it wouldn't make us wait out a Flutter build every time we wanted to deploy the application. It took some doing, but ultimately cleaning up the prior build artifacts just before starting a new build proved to be the ticket, and we got asset hashing working well.
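
The shape of that fix, roughly: a CDK asset with a local bundling step that cleans the previous build before invoking Flutter. Paths, the fallback image, and commands here are illustrative stand-ins, not our actual build script.

import { DockerImage } from 'aws-cdk-lib';
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment';
import { execSync } from 'child_process';

const webAsset = s3deploy.Source.asset('../frontend', {
  bundling: {
    // Fallback container image; only used if local bundling fails.
    image: DockerImage.fromRegistry('ghcr.io/cirruslabs/flutter:stable'),
    local: {
      tryBundle(outputDir: string): boolean {
        // Clean prior artifacts, then build outside Docker
        // (this avoided the Docker memory issues described above).
        execSync('rm -rf build', { cwd: '../frontend' });
        execSync('flutter build web', { cwd: '../frontend', stdio: 'inherit' });
        execSync(`cp -R build/web/. ${outputDir}`, { cwd: '../frontend' });
        return true; // skip the Docker fallback
      },
    },
  },
});
// webAsset is then handed to a BucketDeployment alongside the distribution.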

Beyond that, Flutter's component-based system let Julian and Johannes build quickly, the Amplify support seemed good, and we have the possibility of targeting other platforms.

UI architecture

AppSync / Merged API

AppSync is a technology I haven't used much, but the rest of the team seemed familiar with it. We implemented JavaScript resolvers (transpiled from TypeScript). It's a bit funny to me how like and unlike Lambda this approach is, but overall it worked pretty well. Using TypeScript gave us more reusable code than we could have had with plain JavaScript and no build process, since (unlike Lambda) a JS resolver must be a single .js or .mjs file.

As a bit of a GraphQL noob, I found JS resolvers very easy to use, and they provide a good mental model for working with this kind of technology.
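
For anyone who hasn't seen one, a JS resolver is just a pair of exported request and response functions. Here's a hypothetical example that fetches a talk by id; the key shape is illustrative, not our actual schema.

import { Context, util } from '@aws-appsync/utils';

export function request(ctx: Context) {
  // Build a DynamoDB GetItem request from the query arguments.
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({ pk: `talk#${ctx.args.id}`, sk: 'talk' }),
  };
}

export function response(ctx: Context) {
  // Return the item fetched by the request handler.
  return ctx.result;
}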

AppSync / Merged API

AWS Organization

I had set up a Community Builders AWS Organization recently using superwerker. My intent is for this organization to show the best practices for setting up a multi-account organization with Organizational Units representing sandbox, test and production environments, scoped developer roles and limited production access. I think the developer roles need a little work, but the group was able to be productive in this environment, so that's a win.

Image Generation

We leverage Transformers Agent to generate an image to accompany a talk. This was a bit tricky to get going. The example I was working from uses gradio_tools, which seems like a great library for LLM agents using Python. However, it requires Python 3.8 or greater, and the Docker image we were using was last built a year ago on Ubuntu 18, which ships with Python 3.6. This is a very frustrating class of problem to have, and I spent about a day trying to find a substitute image to work with or (ugh) upgrade Python in our base image. Ultimately I rebuilt the same image using Ubuntu 22 and Python 3.9 and that worked out for us, but it was a bit of a journey to get there.

Once we had an environment that allowed gradio_tools, the next challenge was that our image had ballooned to 5 GB with all the necessary dependencies, greatly slowing down development and deployment. Worse, our Lambda function would run out of memory before completing. We discussed moving the function to Fargate, a detour we weren't looking forward to. Johannes noted that Transformers Agent supports remote execution and I was able to get that working. This makes Lambda an ideal runtime, which definitely matches our sensibilities!

We used this capability to generate an image based on the talk abstract after a new talk is created on the site. Because the image generation can take about a minute, we leveraged DynamoDB Streams to trigger an asynchronous workflow to generate the image and add it to a talk.
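
The CDK wiring for that is pleasantly small. A sketch, assuming a table and an imageGenerationFn Lambda defined elsewhere in the stack:

import { StartingPosition } from 'aws-cdk-lib/aws-lambda';
import { DynamoEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

// Trigger the image-generation function from the table's stream so that
// creating a talk returns immediately and the slow work happens async.
imageGenerationFn.addEventSource(
  new DynamoEventSource(table, {
    startingPosition: StartingPosition.LATEST,
    batchSize: 1, // one talk per invocation; image generation is slow
    retryAttempts: 2,
  })
);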

AI Architecture

AI-assisted Tagging and Recommendations

We found another, perhaps more practical, use of Transformers Agent: the ability to pipeline a model like BART, which can then score labels as to whether or not they may apply to a text sequence. We reasoned this could be used to help classify our talks by assigning tags that fit them. The tags are then fed into a recommendation engine that helps determine whether or not a talk is appropriate for a specific event.

Our application allows both talks and events to receive tags when they are created. This establishes a baseline of tags. We might see values like "serverless", "devops", and "SAM". Given the baseline of common tags, we can feed them to the classifier and score them. Tags with high scores will be added to the talk or event. Then, if an event organizer wants to get a list of possible talks for the event, we have a real basis for comparison. Talks with like tags are fetched and ranked by number of matched tags.

Using pipeline made it fairly easy to get tag recommendations.

from transformers import pipeline

# Zero-shot classification pipeline built on BART; loading at module scope
# means the model loads once per Lambda execution environment, not per invocation.
oracle = pipeline(model="facebook/bart-large-mnli")

def handler(event, context):
    # get_tags (shown later in this post) and save_tags live elsewhere in the module.
    tags = get_tags()

    candidate_labels = [t["tagName"]["S"].lower() for t in tags]
    image = event["Records"][0]["dynamodb"]["NewImage"]

    # Use both the title and the description of the talk to identify tags
    identifiedTags = oracle(
        f'{image["title"]["S"].lower()} {image["description"]["S"].lower()}',
        candidate_labels=candidate_labels,
    )

    labels = identifiedTags["labels"]
    scores = identifiedTags["scores"]

    # Keep any tag the classifier scored above our empirically chosen threshold.
    selectedTags = [t for index, t in enumerate(labels) if scores[index] > 0.3]

    save_tags(image, selectedTags)

    response = {"statusCode": 200, "body": identifiedTags}
    return response

This code responds to a DynamoDB Stream, gets a list of tags active in the system, then takes the title and description of the newly-added talk and scores the tags against the text we fed it. Based on our (limited) testing, a score of 0.3 or higher seemed to work out pretty well and we'd then save any of those tags to the talk.

The logic behind recommendations is fairly simple. Based on the tags assigned to an event, an organizer can get a list of recommended talks. We use the event tags to look for talks with similar tags and score them based on the number of matches.
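
In code, that ranking boils down to something like the following sketch; the Talk shape and the step that fetches candidate talks are assumptions for illustration.

// Hypothetical Talk shape; fetching the candidate talks (via the tag
// index described in the next section) happens elsewhere.
interface Talk {
  id: string;
  tags: string[];
}

// Rank candidate talks by the number of tags they share with the event.
function recommendTalks(eventTags: string[], talks: Talk[]): Talk[] {
  const wanted = new Set(eventTags.map((tag) => tag.toLowerCase()));
  return talks
    .map((talk) => ({
      talk,
      score: talk.tags.filter((tag) => wanted.has(tag.toLowerCase())).length,
    }))
    .filter(({ score }) => score > 0)
    .sort((a, b) => b.score - a.score)
    .map(({ talk }) => talk);
}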

DynamoDB Single Table Design

We decided to use the recommended approach of a single-table design for our database. This is something I have prior experience with, but I feel I've yet to master. However, there is a process to follow! To begin with, it's best to craft an ERD so that we can understand what our entities are and how they relate to one another.

Speakers Directory ERD

From there, we should document expected access patterns and devise keys based on those.

Talks Access Patterns

It's inevitable that our expectations won't match reality 100% and when that happens, it's important to return to fundamentals and rework the design. I'd say that our keys changed pretty dramatically a handful of times over the course of the project as new access patterns emerged.

One thing that's challenging is when we realize we need to fetch every item of a given kind. How should we do that? A scan with a filter condition? Only if absolutely necessary!

Looking at how we modeled Tags: they are separate entities assigned to our main domain objects, Talks and Events, sharing an overloaded partition key. Querying a Talk entity by its partition key alone will also return its tag entities. Then there's a Global Secondary Index that lets us query by tag and get relevant Talks and Events.
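
A hypothetical item layout makes the overloading easier to see; key values and the GSI attribute name are illustrative, not our exact schema.

// One Talk partition holding the talk itself plus its tag assignments.
// The gsi1pk attribute backs the Global Secondary Index for tag lookups.
const exampleItems = [
  { pk: 'talk#123', sk: 'talk', title: 'Serverless Patterns' }, // the Talk itself
  { pk: 'talk#123', sk: 'tag#serverless', gsi1pk: 'tag#serverless' }, // tag entity
  { pk: 'talk#123', sk: 'tag#devops', gsi1pk: 'tag#devops' }, // tag entity
];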

This approach works well when we want to get a list of Talks that have the serverless tag associated with them. But when it came to implementing our recommendation engine, we wanted to be able to provide the classifier with a unique list of existing tags. The access pattern doesn't support this! In fact, the only way we could've implemented this with the current design would have been to do a full table scan and then filter down to a unique list of tags. I know it's a hackathon, but we just couldn't be satisfied with an approach like that.

So we introduced a new entity with a partition key of just tag. A paginated query on that key will return all the tags. We implemented a DynamoDB Stream handler such that whenever we store a new talk, any tags that were added to it are evaluated and added to our tags entity. That code, written in TypeScript, uses several great features of the DynamoDB API.

import { UpdateCommand } from '@aws-sdk/lib-dynamodb';

// docClient, tableName, and tagName are defined in the surrounding module.
const command = new UpdateCommand({
  ExpressionAttributeNames: {
    '#quantity': 'quantity',
    '#tagName': 'tagName',
    '#_et': '_et',
    '#_ct': '_ct',
    '#_md': '_md',
  },
  ExpressionAttributeValues: {
    ':one': 1,
    ':tagName': tagName,
    ':timestamp': new Date().toISOString(),
    ':zero': 0,
    ':_et': 'TAG_COUNT',
  },
  Key: {
    pk: 'tag',
    sk: `tag#${tagName}`,
  },
  TableName: tableName,
  // Increment the usage count (or initialize it to 1) and only set the
  // creation date if the item is new.
  UpdateExpression: `SET #quantity = :one + if_not_exists(#quantity, :zero),
                          #tagName = :tagName,
                          #_et = :_et,
                          #_ct = if_not_exists(#_ct, :timestamp),
                          #_md = :timestamp
                      `,
});

await docClient.send(command);

Although it wasn't an explicit requirement, it seemed like a good idea to be able to track how many times each tag is used. Thanks to the :one + if_not_exists(#quantity, :zero) expression, we can either increment the count of an existing tag or set the count of a new tag to 1. Although this is an Update command, DynamoDB's UpdateItem is an upsert: if the key doesn't exist, a new item will be created. if_not_exists is also used here to ensure the creation date (_ct) is only set if the item is new.

These tag summaries (entity type TAG_COUNT) are later queried for use in the classification ML operation. That code is written in Python. It simply queries on the tag partition key and loops to make sure we get all the items on that key, as DynamoDB query results are paginated.

def get_tags():
    # client (a boto3 DynamoDB client) and tableName are initialized at module scope.
    response = client.query(
        ExpressionAttributeNames={"#pk": "pk"},
        ExpressionAttributeValues={":tag": {"S": "tag"}},
        KeyConditionExpression="#pk = :tag",
        TableName=tableName,
    )

    tags = response["Items"]

    # Query results are paginated, so loop until we've seen every page.
    while "LastEvaluatedKey" in response:
        response = client.query(
            ExclusiveStartKey=response["LastEvaluatedKey"],
            ExpressionAttributeNames={"#pk": "pk"},
            ExpressionAttributeValues={":tag": {"S": "tag"}},
            KeyConditionExpression="#pk = :tag",
            TableName=tableName,
        )
        tags.extend(response["Items"])

    return tags

Roadmap

Hackathon projects always leave plenty of things undone! We think this project could be a real asset to the community and so we have brainstormed a number of capabilities that could be added going forward. We invite the community to contribute and develop skills with ML, AppSync, CDK, Flutter, and more.

  • UX improvements
  • Update entities
  • Improve the recommendation engine with a live model
  • Text2Speech
  • Notification system
  • Recommendations that factor in location and whether the event is virtual
  • Allow users to include their own images instead of AI-generated ones
  • Quality-of-life developer experience improvements (builds are still slow)
  • Integration tests
  • Security tests and verification in the pipeline
  • Speaker, talk, and event reviews / 5-star system
  • Import from Sessionize et al.
  • Social login

Conclusion

This was an ambitious project and a lot of work during hours when we might otherwise have been doing something else, but it's amazing working with Danielle, Julian, and Johannes, and I know this won't be our last collaboration. Personally, I learned a ton and got exposed to quite a few technologies I hadn't laid hands on before. As this project was focused on ML and Transformers Agent, that deserves special attention. I feel this API is fairly easy to use and it's an innovative way to expose ML models to developers. That said, it was clear to me over the course of the hackathon that data science is a real discipline, and not one I've practiced enough to assert that I'm getting the most from these models and predictions.

Lowering the barrier to entry to a deep and complex technology can only be a good thing for the community. It's a lot to ask of someone to master all the technologies outlined in this blog post, and that's coming from a team that has more than half a century of collective experience! Despite being seasoned coders, we still spent valuable hackathon time asking "do we need a VPC?", "how does SageMaker work?", and "why is this thing so slow?" It's our hope that our contribution will help others to solve these problems and more, and to build great things.
