Jason Butz for AWS Community Builders

Originally published at jasonbutz.info

AWS Certified AI Practitioner Beta Exam Reaction

The other day, I took and passed the new AWS Certified AI Practitioner beta exam. I was surprised at how difficult and technical it was for a foundational exam. To be completely clear, AI/ML is not my focus, and I have limited experience with it. I've used GenAI and other ML models, but without any tuning. I know about many concepts, like knowledge bases, agents, and embeddings, and have some understanding of how they work. I know some prompt engineering techniques but haven't studied them intensely. Machine learning wasn't a focus during my undergraduate studies, though I did learn a little about it and how some of it works; generative AI wasn't even a concept back then. I passed the exam with a score of 704; I needed a 700 to pass.

AWS Certified AI Practitioner Early Adopter

I should start by thanking AWS and the training and certification team. Through messages I received as an AWS Community Builder, I was able to apply for a voucher to take the beta exam for free. I think the goal was to get people with the right skill set to take the exam and give the team behind it good data for moving it from a beta version to a final version. I qualified for and received one of those vouchers, so I took the exam at no financial cost to myself.

I did not study for this exam. Part of that was because I believed I already had the knowledge for a foundational exam; I exceed the target audience described by the intended candidate and candidate role examples on the AWS website. I may have overestimated my knowledge because I barely passed, or the exam may not target its desired audience as well as it could. Maybe a little of both.

The AWS website says the intended candidates are "Individuals who are familiar with, but do not necessarily build, solutions using AI/ML technologies on AWS." They list "business analyst, IT support, marketing professional, product or project manager, line-of-business or IT manager, sales professional" as example candidate roles. I would not expect candidates in those roles to pass this exam without intensive study.

I can't tell you the questions that caused me to think that. I must comply with AWS's certification agreement, so I can't discuss the exam's questions. But I can talk about what is in the exam guide on AWS's page for the certification. I should also mention that the exam has 15 unscored questions, so it is entirely possible that the questions I thought were too difficult for the audience were those unscored questions. The beta exam also has more questions than the standard exam because AWS is evaluating whether the questions are appropriate [source].

To add some weight to my perspective: I'm part of the AWS Certification SME program and have supported the development of the AWS Certified Developer Associate exam. That adds complexity to sharing my perspective; I know a little more about what goes on behind the scenes and how the exams are developed, but I am restricted by NDAs in what I can share.

Recommended AWS Knowledge

  • Familiarity with the core AWS services (for example, Amazon EC2, Amazon S3, AWS Lambda, and Amazon SageMaker) and AWS core services use cases
  • Familiarity with the AWS shared responsibility model for security and compliance in the AWS Cloud
  • Familiarity with AWS Identity and Access Management (IAM) for securing and controlling access to AWS resources
  • Familiarity with the AWS global infrastructure, including the concepts of AWS Regions, Availability Zones, and edge locations
  • Familiarity with AWS service pricing models

Based on the exam guide, the level of AWS knowledge expected for this exam is similar to that of the Cloud Practitioner exam. In my experience, some questions went beyond "familiarity" with these services and concepts. But the questions aren't drawn primarily from the recommended AWS knowledge section of the exam guide; they come from the domains and the task statements under them. I'll go through those next.

Domain 1: Fundamentals of AI and ML

This domain covers some of the most important information someone with this certification should have, but it comprises only 20% of the scored content. Domains 2 and 3 carry higher weights, at 24% and 28%, respectively.

This domain has three task statements:

  • Task Statement 1.1: Explain basic AI concepts and terminologies.
  • Task Statement 1.2: Identify practical use cases for AI.
  • Task Statement 1.3: Describe the ML development lifecycle.

I have no issues with Task Statement 1.1; it seems ideally suited to this exam. Task Statement 1.2 is also generally well-suited, though I wouldn't expect marketing professionals or line-of-business managers to know much about the capabilities of AWS-managed AI/ML services.

Task Statement 1.3 and its objectives stretch what I expect the target candidates for this exam to know. I can accept that knowing the components of an ML pipeline (objective 1.3.1), understanding the sources of ML models (objective 1.3.2), and understanding fundamental concepts of ML operations (MLOps) (objective 1.3.5) would benefit target candidates who need to know about AI, though I expect many of these roles would need to study those topics specifically for this exam. I don't think model performance metrics for evaluating ML models (objective 1.3.6) are appropriate. I would expect data engineers and scientists to handle that work and guide the choice of the best option. Understanding why performance metrics are important is appropriate; knowing specific metrics and which ones to apply is not.
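To give a sense of what objective 1.3.6 covers, here is a minimal sketch of computing common classification metrics with scikit-learn. The labels and predictions are made up for illustration; the exam asks you to recognize these metrics, not to write code like this.

```python
# A made-up example of the metrics objective 1.3.6 covers, using scikit-learn.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # actual labels (hypothetical)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # model's predictions (hypothetical)

print(f"Accuracy:  {accuracy_score(y_true, y_pred):.2f}")   # overall fraction correct
print(f"Precision: {precision_score(y_true, y_pred):.2f}")  # of predicted positives, how many were right
print(f"Recall:    {recall_score(y_true, y_pred):.2f}")     # of actual positives, how many were found
print(f"F1:        {f1_score(y_true, y_pred):.2f}")         # harmonic mean of precision and recall
```

Deciding when recall matters more than precision is exactly the kind of call I'd leave to the engineers, not to the exam's target roles.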

Domain 2: Fundamentals of Generative AI

Given the hype around GenAI, this domain also seems like a good fit for the exam and has three task statements:

  • Task Statement 2.1: Explain the basic concepts of generative AI.
  • Task Statement 2.2: Understand the capabilities and limitations of generative AI for solving business problems.
  • Task Statement 2.3: Describe AWS infrastructure and technologies for building generative AI applications.

Like in Domain 1, Task Statement 2.1 makes sense. The foundation model lifecycle, the last objective under that task statement, is something the target candidate roles will probably need to study for the exam. Task Statement 2.2 is perfect for the exam: anyone with AI or ML credentials should understand the limitations of GenAI. Task Statement 2.3 is also reasonable, for the most part, though some of the target candidate roles will need to specifically study the AWS services and features for developing GenAI applications (objective 2.3.1).
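For anyone unsure what objective 2.3.1 looks like in practice, here is a minimal sketch of invoking a foundation model through Amazon Bedrock with boto3. The model ID and Region are examples I chose, not something from the exam, and you would need access to that model enabled in your account.

```python
# A minimal sketch of calling a foundation model via Amazon Bedrock's
# Converse API. Assumes a recent boto3, configured AWS credentials, and
# access to the (example) model ID enabled in your account and Region.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize what a foundation model is in one sentence."}],
    }],
    inferenceConfig={"maxTokens": 200, "temperature": 0.2},
)

# The generated text lives inside the output message's content blocks.
print(response["output"]["message"]["content"][0]["text"])
```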

Domain 3: Applications of Foundation Models

Again, because of the hype around GenAI, questions on foundation models (FMs) and prompt engineering seem reasonable. This domain has four task statements:

  • Task Statement 3.1: Describe design considerations for applications that use foundation models.
  • Task Statement 3.2: Choose effective prompt engineering techniques.
  • Task Statement 3.3: Describe the training and fine-tuning process for foundation models.
  • Task Statement 3.4: Describe methods to evaluate foundation model performance.

The task statements seem reasonable at the top level, but once you start looking at their specific objectives, they go beyond what I expect most of the target audience to know or need to know. Why would a marketing professional need to "Identify relevant metrics to assess foundation model performance," such as Recall-Oriented Understudy for Gisting Evaluation (ROUGE), Bilingual Evaluation Understudy (BLEU), or BERTScore? Identifying technical metrics is something engineers should take the lead on. I would expect stakeholders, such as a marketing professional, to validate that choice by working with the engineers and ensuring business objectives are met. Coincidentally, "determine whether a foundation model effectively meets business objectives" is one of the objectives under task statement 3.4.
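To show how technical that objective really is, here is a minimal sketch of computing two of those metrics with the third-party rouge-score and NLTK packages (both assumed installed; the texts are invented):

```python
# Scoring a generated sentence against a reference with ROUGE-L and BLEU,
# via the third-party rouge-score and NLTK packages. Texts are made up.
from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu

reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"

# ROUGE-L scores the longest common subsequence shared with the reference.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
print(scorer.score(reference, candidate)["rougeL"].fmeasure)

# BLEU scores n-gram precision; bigram weights keep this toy example nonzero.
print(sentence_bleu([reference.split()], candidate.split(), weights=(0.5, 0.5)))
```

This is engineers' territory; the exam's target roles shouldn't need to produce anything like it.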

Domain 4: Guidelines for Responsible AI

Using AI responsibly is something most people should agree with, and I have been seeing more and more focus on this. It doesn't surprise me to see it on the exam. There are only two task statements for this domain, but task statement 4.1 has a long list of objectives.

  • Task Statement 4.1: Explain the development of AI systems that are responsible.
  • Task Statement 4.2: Recognize the importance of transparent and explainable models.

The objectives for this domain are, for the most part, great. They are well suited to ensuring people think about how to use AI responsibly. The only objectives that go beyond what I expect the target audience to know are those covering the tools and AWS features for ensuring responsible AI use.

Domain 5: Security, Compliance, and Governance for AI Solutions

A security, compliance, and governance domain seems like a good idea. Still, it's a topic that can quickly get too detailed for someone who is only supposed to be familiar with AI/ML and doesn't need an in-depth understanding. There are only two task statements outlined for this domain:

  • Task Statement 5.1: Explain methods to secure AI systems.
  • Task Statement 5.2: Recognize governance and compliance regulations for AI systems.

Task Statement 5.1 immediately concerns me. Security is essential with AI and ML, but should someone at a foundational level of understanding be able to explain the methods used to secure an AI system? I don't think that's reasonable. Why should a sales or marketing professional need to know how to secure an AI system? Objective 5.1.3 says, "Describe best practices for secure data engineering (for example, assessing data quality, implementing privacy-enhancing technologies, data access control, data integrity)." This is a foundational AI/ML certification, not the associate-level data engineer certification.

Task Statement 5.2, at the top level, doesn't concern me, but the objectives seem to exceed the task statement. The task statement says to "recognize governance and compliance regulations" [emphasis mine], but the objectives ask candidates to identify regulatory standards and describe processes to follow governance protocols. I could see an argument that identifying standards counts as recognizing regulations, but describing the processes to follow a regulation does not. I can recognize and identify, at least in most cases, when PCI DSS is relevant, but I can't describe all the processes necessary to be compliant; I expect to have an expert involved to ensure we follow the correct process. The same applies to any significant regulation, including AI-related ones.

What have others said?

I started a chat with other AWS Community Builders, and there seems to be a consensus that the Certified AI Practitioner exam is much more difficult than we expected, especially for a foundational exam. One Community Builder said the questions were foundational in length but required associate-level knowledge. Many of us wouldn't expect non-technical people to pass, which is a problem considering they are part of the target candidate groups. These conversations within the Community Builder community have led to discussions with AWS: I spoke with two people from the AWS Certifications group and shared direct feedback, including most of what I have written here.

What I am Saying

The AWS Certified AI Practitioner exam is still in beta and not in its final form. The questions will change, and hopefully they will be better suited to the target candidates. I hope the exam guide and its task statements and objectives will change, too (the guide I am looking at is version 1.4 of the AIF-C01 exam guide). Creating these exams is hard, and I don't want to discount the incredible effort that went into developing this one; there is simply still work to do. The AWS team is listening to feedback, and I am sure they will adjust the exam.

I would not recommend this exam to non-technical people right now. Let the final version come out first. If you are going to take the beta exam, study the topics listed in the exam guide. I expect this exam to get easier, so if you have only a foundational level of experience with ML, you should count it as an achievement to pass the beta exam.

Good luck.
