DEV Community

Esraa Ahmed

AWS Foundational AI/ML Capabilities - Part 1

In this article, we will discuss the AWS foundational AI/ML capabilities described in the AWS whitepapers documentation.

Introduction
To successfully adopt and implement AI/ML technologies as part of the digital transformation journey, organizations need a set of foundational capabilities.
A capability is an organizational ability to use processes to deploy resources (such as people, technology, and other tangible or intangible assets) to achieve an outcome.

AI Foundational Capabilities

The stakeholders concerned with each perspective are:

  1. Business perspective: Helps ensure that the AI/ML investments accelerate the digital and AI-transformation ambitions and business outcomes. It includes the CEO, COO, CTO, and CFO.
  2. People perspective: This perspective serves as a bridge between AI/ML technology and business, and aims to evolve a culture of continual growth and learning, where change becomes business-as-normal. Stakeholders include CIO, CTO, CHRO, and COO.
  3. Governance perspective: Helps you orchestrate the AI/ML initiatives while maximizing organizational benefits and minimizing transformation-related risks. Stakeholders include CIO, CTO, CFO, CDO and CRO.
  4. Platform perspective: Helps you build an enterprise-grade, scalable cloud platform that not only enables you to operate AI-enabled or AI-infused services and products, but also provides you with the capability to develop new and custom AI solutions. Stakeholders include the CTO, technology leaders, ML operations engineers, and data scientists.
  5. Security perspective: Helps you achieve the confidentiality, integrity, and availability of your data and cloud workloads. Stakeholders include the CISO, CCO, security engineers, and security architects.
  6. Operations perspective: Helps ensure that cloud services, and in particular your AI/ML workloads, are delivered at a level that meets the needs of your business. Stakeholders include ML operations engineers, IT managers, and infrastructure and operations leaders.

Each perspective defines an order in which its capabilities are addressed or improved over the course of the AI/ML transformation journey.

AI foundational capabilities ordered by maturity and evolution

1. Business perspective
The AI strategy in the age of AI/ML

Complex decision-making processes, unstructured data, and constantly changing decision environments have long posed challenges for traditional computer science methods. Recent advances in ML have changed this: problems that require machines to see, understand language, or learn from past data and predict outcomes can now be addressed. These newly and readily available ML capabilities are calling the market hypotheses of established organizations into question.

1.1 Strategy management
Unlock new business value through artificial intelligence and machine learning

Machine learning enables new value propositions that in turn lead to improved business outcomes, such as reduced business risk, revenue growth, operational efficiency, and improved ESG. Therefore, start by defining a business- and customer-centric north star for your AI/ML adoption and underpin it with an actionable strategy that moves step by step toward adopting AI/ML technology. Make sure that any adoption strategy is based on tangible (short-term and measurable) or at least aspirational (long-term and harder to measure) business outcomes.
AI/ML capabilities are becoming an integral part of customer expectations, so organizations should align their products and services with these evolving demands.
For each opportunity, ask if you need to build, tune, or adopt an existing AI/ML system.

1.2 Product management
Manage data-driven and AI/ML-infused or enabled products

Both the development and the ongoing operation and generation of results of any AI/ML-based product involve potentially costly uncertainties that require specific mitigation strategies.
When integrating AI/ML into products, start by understanding the value gain expected by your customers and users. Map measurable business proxies to decision points where AI/ML can enhance or automate processes. Define metrics in the ML solution domain to quantify the value gain and corresponding ML problems.
Crucially, these ML solutions impose certain data requirements on you and your product, so you must investigate the 4 V's of data (volume, velocity, variety, and veracity) for each of them. While you build this knowledge bottom-up, make sure to involve business, data, executive, and ML stakeholders in the assessment of your solution.
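As a minimal illustration of such an assessment, the sketch below profiles a candidate dataset along the 4 V's using only the standard library. The function, field names, and coarse indicators are hypothetical choices, not part of the CAF-AI framework:

```python
def profile_four_vs(records, arrival_timestamps, expected_fields):
    """Rough profile of a dataset along the 4 V's of data.

    records: list of dicts (one per row).
    arrival_timestamps: datetime of ingestion for each record.
    expected_fields: fields every record should carry.
    """
    # Volume: how much data is there?
    volume = len(records)

    # Velocity: how fast is new data arriving (records per hour)?
    span = (max(arrival_timestamps) - min(arrival_timestamps)).total_seconds()
    velocity_per_hour = volume / (span / 3600) if span > 0 else float("inf")

    # Variety: how many distinct schemas (field sets) appear in the data?
    variety = len({frozenset(r.keys()) for r in records})

    # Veracity: what fraction of rows carry all expected fields with non-null values?
    complete = sum(
        1 for r in records
        if all(r.get(f) is not None for f in expected_fields)
    )
    veracity = complete / volume if volume else 0.0

    return {
        "volume": volume,
        "velocity_per_hour": round(velocity_per_hour, 2),
        "variety": variety,
        "veracity": round(veracity, 2),
    }
```

Even a coarse profile like this gives the business, data, and ML stakeholders a shared starting point for discussing whether a dataset can support the intended ML solution.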
Facilitate knowledge transfer and enable the broader organization to build new AI/ML products by leveraging technologies like data mesh and data lake architectures. Establish mechanisms such as SageMaker Model Cards for effective communication and collaboration between teams and product groups.

1.3 Business Insights
The power of AI/ML to answer ambiguous questions or predict from past data

Business intelligence, mostly including descriptive and diagnostic analytics, is frequently where companies begin their journey when preparing to use AI. However, beyond descriptive and diagnostic analytics, ML enables predictive and even prescriptive capabilities and together they form the AI/ML journey.
AI/ML techniques are starting to augment subject matter experts (SMEs) in the BI process by providing new insights and answering the "why" and "what if" questions. This shift allows data and AI/ML to drive predictive decision-making.
To transition from BI to an AI/ML-enabled practice and advance to higher-level analytics, algorithms can be used alongside diagnostic analytics to understand key variables. In the early stages of transformation, an effective method can be to create a center of excellence for analytics (not necessarily AI/ML) that is closely tied to your cloud initiatives. Most importantly, create a rhythm of using AI/ML to inform major business decisions, as this will drive recognition of its value through true business outcomes.
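To make the step from descriptive to predictive analytics concrete, here is a stdlib-only sketch: descriptive analytics summarizes what happened, while a simple least-squares trend line adds a predictive view. The quarterly revenue figures are invented for illustration:

```python
import statistics

def describe(series):
    """Descriptive analytics: summarize what happened."""
    return {"mean": statistics.mean(series), "stdev": statistics.pstdev(series)}

def predict_next(series):
    """Predictive analytics: fit y = a + b*t by least squares, extrapolate one step."""
    n = len(series)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(series) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, series)) / \
        sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return a + b * n  # forecast for the next period

quarterly_revenue = [100, 110, 120, 130]  # hypothetical figures
print(describe(quarterly_revenue))
print(predict_next(quarterly_revenue))    # trend continues: 140.0
```

Real BI-to-ML transitions would of course use richer models, but the shape of the shift is the same: from reporting the past to estimating the future.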

1.4 Portfolio management
Identify and prioritize high-value AI/ML products and initiatives that are feasible

The challenge of ML initiatives is that short-term results must be shown without sacrificing long-term value. In the worst case, short-term thinking can lead to technical AI/ML proofs of concept (POCs) that never make it beyond that technical stage because they were focused on irrelevant business technicalities. Your first goal when identifying, prioritizing, and running ML projects and products must be to deliver on tangible business results.
Starting somewhere is crucial, and small wins can drive faith in your organization as it helps people connect to where they could use AI/ML in other portions of your business. At the same time, consider what larger customer and business problems you are solving through multiple AI/ML projects and products and combine those into a hierarchical portfolio where the lower layers of that portfolio enable the upper layers.
Next, embed in this portfolio the design of an AI/ML flywheel where the value that your portfolio provides propels business outcomes that, in turn, enable and create additional data from which your portfolio benefits.
This flywheel does not need to be on a single-product level but can reach through your portfolio. In the process of portfolio development, it becomes crucial to determine what to buy versus what to build.
Explore existing use cases and solutions in the market to leverage their maturity levels and consider custom modeling where necessary.
Create a self-reinforcing flywheel through your data strategy

1.5 Innovation management
Question long-standing market hypotheses and innovate your current business

ML offers new capabilities to businesses that can be and in many cases are disruptive to existing businesses and value chains. The power of this general-purpose technology is seen and felt across sectors and there is virtually no exception to that, as the long-term goal of AI/ML research is to replicate or at least imitate intelligence.
Understand evolving customer expectations and needs, both internally and externally, using the business outcomes suggested by the CAF-AI. Consider the value chain of ML-enabled products and differentiate between innovation for cost reduction, revenue, and profit gains, or new income channels.
Use and position ML as a unique differentiator to the respective internal and external stakeholders, and customers. To get there:

  • Integrate ML to unlock new capabilities, augment existing ones, and reduce effort through automation.
  • Capitalize and double down on domain-specific knowledge that is represented in the data you access.
  • Design a healthy data value chain for your AI/ML system.
  • Foster a grassroots movement by cultivating internal AI/ML champions, including business owners, product managers, technical experts, and C-suite executives.
  • Maintain a balance between audacious goals and achievable milestones.
  • Recognize that the value of ML systems is driven by high-quality, governed, and accessible data, requiring a dynamic data strategy.

1.6 New: Generative AI
Use the general-purpose capabilities of large AI/ML models

Generative AI, powered by large AI/ML models known as foundation models (FMs), has the potential to create new content and ideas across various domains.
When planning to adopt this powerful branch of AI/ML, there are three options to consider. Do you need to:

  1. Build such an FM from scratch, uniquely tailored to your business?
  2. Fine-tune a pre-trained model and capitalize on the abilities it has already learned?
  3. Use an existing FM from a supplier without further tuning?

The choice depends on your business case. To effectively leverage generative AI and foundation models, it is crucial to have a well-defined data strategy and establish a data flywheel. The quality and relevance of the data used will significantly impact the performance and differentiation of the generative AI system.
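The three options above can be framed as a simple decision rule. The sketch below is an illustrative heuristic only; the inputs and thresholds are assumptions, not AWS guidance:

```python
def fm_adoption_path(needs_proprietary_knowledge,
                     has_large_curated_corpus,
                     has_ml_platform_team):
    """Illustrative heuristic mapping business context to the three FM options."""
    if needs_proprietary_knowledge and has_large_curated_corpus and has_ml_platform_team:
        return "build from scratch"       # unique, differentiating model
    if needs_proprietary_knowledge:
        return "fine-tune pre-trained"    # adapt learned abilities to your domain
    return "use existing FM as-is"        # fastest time-to-value

print(fm_adoption_path(True, False, False))   # fine-tune pre-trained
```

In practice the decision weighs many more factors (cost, talent, data rights, latency), but making the rule explicit forces the business case to the surface.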

2. People Perspective
Culture and change toward AI/ML-first
Adopting AI/ML and creating value reliably and repeatably is not a purely technological challenge. Any AI/ML initiative is crucially dependent on the people that guardrail and drive it.

2.1 New: ML fluency
Building a shared language and mental model.

The boundaries and semantic scope of artificial intelligence and machine learning are not well specified. Both terms are also overloaded with varying mental models and emotional interpretations, which is why it's key to align internally on what stakeholders mean by them.
Once that first layer of interpretation is spread across the organization, tackle the second, more technical one: AI/ML projects and requirements differ in terminology and what importance is assigned to them. From the product management practice to the engineering and data science practice, align on what joint understanding is needed to work effectively.
Promote ML fluency and ML culture through trainings to gain buy-in throughout the organization. This understanding becomes valuable in helping business owners adapt to the unique aspects of ML use cases and in managing customer expectations.
When communicating AI/ML outputs, consider the different mental models and terminology that customers may have. Ensuring graceful failure and maintaining trust can be challenging. By using the right language and fostering fluency, efficiency can be improved.
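One common mechanism for graceful failure is to surface a model's answer only when its confidence clears a threshold, and otherwise fall back to a neutral message or a human. A minimal sketch, where the threshold and wording are hypothetical:

```python
def present_prediction(label, confidence, threshold=0.8):
    """Surface an ML output only when confidence is high enough; degrade gracefully."""
    if confidence >= threshold:
        return f"Predicted category: {label} (confidence {confidence:.0%})"
    # Graceful failure: hedge instead of stating a low-confidence answer as fact
    return "We are not confident enough to answer automatically; routing to a specialist."

print(present_prediction("invoice", 0.93))
print(present_prediction("invoice", 0.41))
```

Phrasing the confident and uncertain paths differently is exactly the kind of language choice that keeps customer trust intact when the model is wrong.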

2.2 Workforce transformation
Attracting, enabling, and managing AI/ML talent from user to builder.

Being able to attract, retain, and retrain talent that can push your AI/ML strategy forward is one of the most crucial aspects of AI/ML success. Many roles are necessary for AI/ML success; some can be outsourced, while others deliver their full impact only as part of the in-house workforce.
Align your hiring strategy with your AI/ML strategy and ambition:

  • Technical talent (such as data scientists, applied scientists, deep learning architects, and ML engineers).
  • Non-technical product talent (such as ML product managers) that manages roadmaps and identifies needs.
  • For scientifically ambitious large-scale initiatives, consider hiring PhDs with experience, complemented by business-focused counterparts like ML strategists.
  • Transitioning existing talent to AI/ML roles can foster organization-wide adoption.
  • Hiring ML engineers and deep learning architects is appropriate when relying on established solutions, foundation models, or AI/ML work beyond your organization's capabilities.

In addition to the internal workforce, choose the right AWS partners to support your AI/ML agenda. When talent is scarce, communicate your AI/ML vision externally and initiate initiatives to yield results and attract new talent. Retaining AI/ML talent can be challenging due to high demand and the disparity between academic work and real-world applications. Continuously retrain your AI/ML workforce to acquire new skills needed in the rapidly evolving AI/ML space. It's worth noting that the headcount-to-value ratio in AI/ML tends to be lower than in other fields, as a small team of skilled practitioners can outperform larger teams due to the intellectual nature of the work.

2.3 Organizational Alignment
Strengthening and relying on cross-organizational collaboration

When AI/ML becomes top-of-mind for organizations, providing an encapsulated and empowered separate unit that spreads and disseminates its value and knowledge across the organization is a typical first step. The AI/ML center of excellence (COE) is a unit that can fill this role, where AI/ML-focused teams are hired and evolved. It's important to align reporting lines in the COE with stakeholders who have ownership of the AI/ML strategy, ensuring short paths to the C-suite for quick decision-making and organizational agility. Aligning incentives with the strategy, business goals, and customer needs is crucial to avoid the common pitfall of AI/ML units drifting away from business goals.
Over time, workforce transformation should enable the broader organization and other builders to effectively utilize the COE and existing AI/ML services while fostering collaboration. Guard against the "not invented here" syndrome, encouraging the organization to leverage readily available cloud solutions that meet business requirements. While such units, other internal builders, and AI/ML talent evolve, enable your data flywheel by establishing a data-driven product mentality.

2.4 Culture Evolution
Culture is king, even more so when adopting AI/ML

Developing an AI/ML-first culture is a long and challenging process as it often requires breaking up old mental models. In typical cloud and software development, the cultural focus is on empowering builders to codify complex rules and systems. AI/ML relies much more on a culture of searching for the right inputs that generate the desired output.
With such a value-driven mindset in place, zoom in on the cornerstones of an AI/ML-first culture:

  • Experimental mindset paired with agile engineering practices
  • Cross-team and business-unit collaboration and reliance
  • Bottom-up and top-down AI/ML opportunity discovery
  • Broad and inclusive AI/ML adoption and solution design driven by customer value

Start expanding your AI/ML-first culture with the following:
  • Empower your builders to experiment with AI/ML systems, not for experimentation's sake, but because building an AI/ML system involves exploring which solution pathways work and which are dead ends.
  • Embrace a culture where data is the interface between teams and value is created in tandem with each other. Be careful not to build business-distant data science teams.
  • Empower a culture where value is identified, recognized, and enabled at all levels of the organization.

By embracing these principles, organizations can cultivate an AI/ML-first culture that drives innovation and creates value across the board.

3. Governance perspective
Managing an AI/ML-driven organization

3.1 Cloud Financial Management (CFM)
Plan, measure, and optimize the cost of AI/ML in the cloud

The cost and investment profile of AI/ML can be somewhat surprising to adopters, as there often is a zig-zag pattern (high/low/high/low) when developing such a system. This is true both at the individual use-case level and at the bigger-picture, overarching AI/ML initiative level. While cost is unsurprisingly associated with the concrete use case and your industry, it also depends on the state of the AI/ML system: training or inference. Start by analyzing the cost profile of AI/ML use cases over time and factor in this zig-zag cost pattern that is inherent to many cases.
While most ML POCs will be relatively low cost (compute-wise), there are a few technical approaches that can become costly quickly. In such cases, refer to the AWS purpose-built AI/ML hardware (Trainium / Trn1 and Inferentia / Inf2) to help keep costs down. If you have access to the right talent, AI/ML services, and AWS partners, let them estimate the resources needed for different phases of your use cases and overall AI/ML strategy. If feasible, calculate the cost of an incremental improvement of an ML metric.
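A back-of-the-envelope cost model can make the training-versus-inference split explicit. All the rates below are placeholder numbers for illustration, not AWS prices:

```python
def estimate_monthly_cost(train_hours, train_rate_per_hour,
                          monthly_requests, inference_cost_per_1k,
                          retrains_per_month=1):
    """Rough monthly cost split for an ML workload (all rates hypothetical)."""
    training = retrains_per_month * train_hours * train_rate_per_hour
    inference = (monthly_requests / 1000) * inference_cost_per_1k
    return {"training": training, "inference": inference, "total": training + inference}

# e.g. 20 h of training at $3/h, 2M requests at $0.10 per 1k requests
print(estimate_monthly_cost(20, 3.0, 2_000_000, 0.10))
```

Even a crude split like this shows where the zig-zag comes from: training cost spikes with each retraining cycle, while inference cost scales smoothly with request volume.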
After the first system is built, the cost of the following minimum viable product (MVP) phase can, depending on the use case, again be relatively high. After the model is deployed, inference cost is largely dependent on the volume of requests, and in many cases it is again relatively low. If not, refer to the purpose-built AWS Inferentia architecture. Monitoring model metrics and flagging drift will alert you to changes and the potential need to retrain your algorithms.
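Flagging drift can start as simply as comparing the incoming feature distribution to the training baseline. The sketch below uses a z-score on the mean shift; the threshold is an assumed convention, and production systems would use richer statistics:

```python
import statistics

def mean_shift_drift(baseline, recent, z_threshold=3.0):
    """Flag drift when the recent mean moves more than z_threshold
    baseline standard errors away from the baseline mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline)
    if sigma == 0:
        return statistics.mean(recent) != mu
    z = abs(statistics.mean(recent) - mu) / (sigma / len(recent) ** 0.5)
    return z > z_threshold

stable = [10.0, 10.2, 9.8, 10.1, 9.9]
print(mean_shift_drift(stable, [10.0, 10.1, 9.9]))   # False: no drift
print(mean_shift_drift(stable, [14.8, 15.2, 15.0]))  # True: distribution has moved
```

Wiring a check like this into monitoring turns "retrain when the model feels stale" into a measurable trigger.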
By carefully managing costs and aligning AI/ML investments with business value, organizations can navigate the financial aspects of AI/ML development and deployment.

3.2 Data Curation
Create value from data catalogs and products

Your ability to acquire, label, clean, process, and interact with data will increase your speed, decrease time-to-value, and boost your model’s performance (such as accuracy). When models stall for accuracy, consider going back and enriching, growing, or improving the data you are feeding the algorithm.
Collecting data with ML in mind is crucial to achieving your AI/ML roadmap, and you should ask yourself and other leaders: "Are we enabling AI/ML innovation through democratizing data?", "Does my organization think of its data as a product?" and "Is my data discoverable across my organization?" While answers to these questions often sit on a spectrum between yes and no, the key thing to remember is that it's all about reinforcing a culture where data is recognized as the genesis of modern invention.
Data quality assessments and rules around data governance can either accelerate the use of your data or stop all progress. Balance these two and use proper tooling to allow your whole organization to innovate. Assign direct owners to datasets to avoid the tragedy of the commons, which in turn will help you build a robust data ecosystem. Start small and then continually add to your data mesh.
Easy-to-use human-readable data repositories and dictionaries will empower all skill levels to start using your data to create value. This will increase the speed to decide upon the additional investment cost needed for other cases considerably.
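A human-readable data dictionary entry with a direct owner can be as lightweight as a dataclass. The fields below are an illustrative minimum, not a standard catalog schema:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetEntry:
    """One entry in a human-readable data catalog."""
    name: str
    description: str
    owner: str                      # direct owner, avoiding the tragedy of the commons
    columns: dict = field(default_factory=dict)  # column -> plain-English meaning
    quality_checks: list = field(default_factory=list)

orders = DatasetEntry(
    name="orders",
    description="One row per customer order, updated nightly.",
    owner="sales-analytics-team",
    columns={"order_id": "unique order identifier", "total": "order value in USD"},
    quality_checks=["order_id is unique", "total >= 0"],
)
print(orders.owner)
```

Starting with something this small and growing it into a data mesh is usually easier than adopting a heavyweight catalog tool on day one.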

3.3 Risk management
Use the cloud to mitigate and manage the risks inherent to AI.

While every new technology comes with a new set of risks attached (and AI/ML is no different), don't let that fool you: managing the risks involved in the design and development process of AI/ML systems, as well as in the deployment, long-term operation, and application of AI/ML, is challenging and not yet well understood in the industry. Start by factoring the risk of sunk cost into the development process, as the outcome of an AI/ML development effort is inherently uncertain.
Manage risks across the entire process or larger system, considering the impact of AI/ML on professional, organizational, and societal levels. Account for challenges arising from data and concept drift and invest in security measures to protect against bad actors. Recognize the complexity of achieving human-level parity in certain domains.
By being mindful of these risks and implementing appropriate measures, organizations can navigate the challenges and ensure the responsible and effective use of AI/ML.

3.4 New: Responsible use of AI
Foster continuous AI innovation through responsible AI practices.

The responsible use of AI/ML is crucial for fostering continuous AI innovation. Organizations should consider and address responsible AI practices early on and throughout the lifecycle of their AI/ML journey.
Consider how your AI system will impact individuals, users, customers, and society as a whole. Scale the impact of responsible AI dimensions such as explainability, fairness, governance, privacy, security, robustness, and transparency over time. Address the implications of AI on different cultures and demographics, and include algorithmic fairness.
Embed explainability by design in your ML lifecycle where possible and establish practices to recognize and discover both intended and unintended biases.
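Discovering unintended bias can begin with a simple fairness metric. The sketch below computes the demographic parity difference, i.e., the gap in positive-prediction rates between groups, on invented data; real assessments would examine several metrics across the ML lifecycle:

```python
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rate between the best- and
    worst-treated groups (0.0 means parity)."""
    rates = {}
    for g in set(groups):
        members = [p for p, m in zip(predictions, groups) if m == g]
        rates[g] = sum(members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

preds  = [1, 1, 0, 1, 0, 0, 0, 1]          # 1 = positive outcome predicted
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
print(gap)  # group "a": 3/4 positive, group "b": 1/4 -> gap 0.5
```

A gap this large would warrant inspecting the training data and features before the model ever reaches production, which is exactly the mid-term payoff the capability describes.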
Use best practices that enable a culture of responsible use of AI/ML and build or use systems to enable your teams to inspect these factors. While this cost accumulates before the algorithms reach the production state, it will pay off in the mid-term by mitigating damage.
