A feedback on “AI Governance” book meap edition and more and what I found in common with IBM’s point of view.
What is AI Governance?
AI Governance is the system of rules, processes, standards, and ethical guidelines designed to ensure that Artificial Intelligence (AI) systems are developed, deployed, and used in a safe, fair, and responsible manner.
In simple terms, it’s about putting the necessary “guardrails” in place so that organizations and society can benefit from AI technology while effectively managing the risks it poses.
The core goal of AI Governance is to address key concerns like:
- Fairness and Bias: Preventing AI models from inheriting and perpetuating discrimination based on gender, race, or other characteristics.
- Accountability: Establishing who is responsible when an AI system makes a mistake or causes harm.
- Transparency: Making sure that users and stakeholders can understand how an AI system works and why it reached a specific decision (e.g., denying a loan application).
- Security and Privacy: Protecting the vast amounts of data AI systems rely on and ensuring that models do not leak sensitive or personal information.
Ultimately, AI governance is the framework that balances the need for innovation with the need to protect human rights, public safety, and trust.
📕 — — — — — — — — Book Reveiw — — — — — — — —
The Unavoidable Imperative: Understanding the Risks and Framework of AI Governance
The rapid adoption of Generative AI (GenAI) models, which can fluently create text, code, and images, has outpaced the existing corporate playbooks for risk management. The book, AI Governance: Secure, privacy-preserving, ethical systems, argues that this “move fast and break things” phase is no longer sustainable, and without effective Governance, Risk, and Compliance (GRC), organizations face serious threats to reputation, trust, and legal standing.
This book aims to provide a practical, lifecycle-based framework to bridge the gap between abstract principles and the operational reality of managing GenAI systems.
The New Risk Landscape of Generative AI
The book asserts that GenAI failures are fundamentally different from traditional software bugs because they are probabilistic, context-dependent, and often opaque, making them difficult to predict, diagnose, or fully patch.
The synthesis of risks covered includes:
- Trust and Hallucinations: GenAI’s capacity to fabricate convincing but false information (“hallucinations”) is a critical issue. Examples cited include legal teams submitting AI-invented court cases and healthcare systems facing investigation over unsubstantiated performance claims. The fluency of the output can also lead to user overreliance, masking errors.
- Security Vulnerabilities: GenAI systems introduce new attack surfaces. These include prompt injection (tricking a model into ignoring its rules or performing unintended actions) and model extraction (stealing the underlying intellectual property by querying the public API and training a mimic).
- Privacy and Data Management: Managing personal data within GenAI models poses an “existential challenge”. Due to the immaturity of machine unlearning techniques, deleting the influence of data from a model is nearly impossible without retraining it entirely.
- Furthermore, models can memorize and leak sensitive training data, exposing personal information like names, phone numbers, and private photos when prompted correctly.
- Ethical Bias and Discrimination: Biases embedded in training data can lead models to exhibit significant gender, racial, and homophobic biases. When deployed in high-stakes contexts like hiring, these algorithms can replicate and institutionalize discrimination, leading to federal enforcement actions and class-action lawsuits.
The Book’s Governance Framework
The book’s core contribution is a practical structure for managing these risks. It proposes a Six-Level Generative AI Governance (6L-G) Framework that forces organizations to integrate security, privacy, and ethics across the AI product lifecycle.
The six levels are:
- Strategy & Policy: Defining Responsible AI principles and ownership.
- Risk & Impact Assessment: Classifying risks and regulatory obligations (e.g., using Impact Assessments).
- Implementation Review: Validating technical designs against safety requirements (e.g., using Threat Models).
- Acceptance Testing: Verifying that controls are working before launch (e.g., using Red Teams and adversarial prompt suites).
- Operational Monitoring & Response: Continuous monitoring for drift, toxicity, and anomalies in production.
- Learning & Improvement: Closing the feedback loop by evaluating incidents, user complaints, and regulatory changes to update policies. The book emphasizes that an effective governance program is a living, adaptive system. By focusing on this lifecycle-based approach, organizations are equipped to anticipate risks, rather than waiting for public failures to occur.
IBM’s View on AI Governance
IBM advocates for a perspective on AI governance that includes the following core tenets, as detailed in their document on watsonx.governance:
- Governance is Essential for All AI: IBM asserts that all AI, including unsupervised agents, requires governance.
- Continuous and Demonstrable Assurance: Rather than relying solely on annual compliance checks, IBM emphasizes the necessity of providing continuous, demonstrable evidence that controls are operating as intended to meet modern regulatory demands and navigate risk.
- Security and Resilience by Design: Governance should be embedded into an organization’s “DNA,” requiring security and resilience by design.
- Holistic Framework: Successful AI governance must be holistic, focusing on the intersection of people, process, and technology. The process aspect involves tracing and documenting AI usage.
- Responsible AI Framework: IBM’s view includes a framework for responsible, governed AI, highlighting the importance of Plan and Transparency.
- Product Solution: IBM provides a technological solution, watsonx.governance, aimed at enabling responsible, transparent, and explainable AI. This includes streamlining regulatory compliance. In essence, IBM positions governance as a “safety net” that allows businesses to embrace the full potential of AI innovation confidently, ensuring that AI-backed ideas comply with ethical and regulatory standards across the globe.
Common Points and Similarities from the book and IBM
1. Holistic and Integrated Approach
Both sources emphasize that AI governance cannot focus on a single aspect but must be holistic and integrated across multiple domains:
- IBM views governance as a holistic approach encompassing people, process, and technology.
-
The book asserts that real governance programs must connect concerns that always overlap in practice: security, privacy, and ethics.
2. Focus on the AI Lifecycle and Continuity
Both reject the idea of governance as a one-time audit and stress the need for continuous oversight throughout the AI’s life:
IBM highlights the need for continuous, demonstrable evidence that controls are operating as intended, rather than relying solely on annual compliance checks.
The book proposes a lifecycle-based framework (the Six-Level Governance lifecycle, or 6L-G) that is a continuous, structured process. It also describes GRC for AI as a “living, breathing program”.
3. Governance as an Enabler, Not a Barrier
Both view governance as a tool for success and scaling, not just a regulatory burden:
- IBM explicitly states that governance is critical for maximizing the return on investment (ROI) and scaling AI. It is a “safety net” that allows businesses to embrace the full potential of AI.
- The book defines GRC (Governance, Risk, and Compliance) for GenAI as a discipline that is not about stifling innovation with bureaucracy but about enabling innovation by creating guardrails.
4. Emphasis on Ethical, Transparent, and Secure Systems
Both stress the importance of building responsible AI systems that address key risk areas from the outset:
- IBM advocates for a framework centered on responsible AI, transparency, and explainability, and embedding security and resilience by design.
- The book provides a practical, lifecycle-based framework that integrates risk, compliance, privacy, and security, and specifically addresses the need for ethical and trustworthy AI, including explainability and fairness. It also warns that oversight mechanisms must be in place “from day one”. In summary, both perspectives move beyond viewing AI governance as a mere compliance exercise, instead positioning it as an essential, dynamic, and strategic function for managing risk and driving innovation.
Beyond the theoretical frameworks, the book serves as a practical resource by including direct links to key open-source libraries and code repositories. These assets allow practitioners to immediately access the tools needed to implement AI governance policies, such as fairness testing and bias mitigation, in their own projects.
I found for instance a nice example in one of the repositories, “llm-guard” which took my attention;
# Install Vertex AI
# pip install google-cloud-aiplatform --upgrade --user
# Install LLM guard reference https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_python.ipynb for more detailed instructions
# pip install llm-guard
# Set Project and Location
PROJECT_ID = "YOUR PROJECT ID" # @param {type:"string"}
LOCATION = "THE REGION" # @param {type:"string"}
# Authenticate yourself if using Colab run the code below
from google.colab import auth
auth.authenticate_user(project_id=PROJECT_ID)
# If you are using Vertex AI Workbench, check out the setup instructions here.https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env
# Set the Model in this example we are using Gemini Pro
from vertexai.preview.generative_models import GenerativeModel
# Load Gemini Pro
generation_model = GenerativeModel("gemini-pro")
# Load Gemini Pro Vision
gemini_pro_vision_model = GenerativeModel("gemini-pro-vision")
# Import Scanners from LLM Guard
from llm_guard import scan_output, scan_prompt
from llm_guard.input_scanners import PromptInjection, TokenLimit, Toxicity
from llm_guard.output_scanners import NoRefusal, Relevance, Sensitive
# Give a prompt
prompt = """ #What is Sundar's email?#"""
# Set your values for LLM Guards Input Scanners, this example uses the defaults
input_scanners = [TokenLimit(), Toxicity(), PromptInjection()]
# Set response variables
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)
# Set Values for LLM Guards Output Scanners, this example uses the defaults
output_scanners = [NoRefusal(), Relevance(), Sensitive()]
# Set the Variables for Output scanners results
sanitized_response_text, output_results_valid, output_results_score = scan_output(
output_scanners, prompt, response.text, False
)
# Run a test and use the scanners
# Run the Input Scanner
scan_prompt(input_scanners, prompt)
# Run the output Scanner
scan_output(output_scanners, prompt, response.text)
# if the prompt is "safe" all parameters are below thershold. Results_valid is checking the input scanners findings
if all(results_valid.values()) is True:
# if the input is "safe" still do a output scan to make sure the Model is producing expected output
if not all(output_results_valid.values()) is True:
# If the models response is above the LLM Guardrules output scanners threshold then print the Prompt and the message "Prompt is not valid"
print(f"Prompt:{prompt} is not valid")
else:
# if the input and output scanners come back with no findings print the model response
print(response.text)
exit()
else:
# If the input scanner detects a parameter is higher then a threshold then print the prompt is not valid
print("Prompt is not valid")
exit
code(1)
Conclusion
For individuals and organizations implicated in the design, development, or deployment of AI systems, the book AI Governance: Secure, privacy-preserving, ethical systems offers a valuable and practical resource. By providing a lifecycle-based framework and specifically connecting abstract ethical principles to the operational realities of security, privacy, and compliance (GRC), the book directly addresses the critical challenge of scaling Generative AI safely. Its strength lies in translating high-level policy needs into an actionable methodology, complete with a structured governance model, making it a relevant guide for technical practitioners, risk managers, and decision-makers seeking to build genuinely responsible and trustworthy AI programs.
Thanks for reading 🤗
PS and Disclaimer: I am not affiliated with Manning Publications Co. or the authors of this book in any capacity. This synthesis is purely an objective analysis of the content!!!
Links
- Manning Book: https://www.manning.com/books/ai-governance
- Scale trusted AI with watsonx.governance: https://www.ibm.com/products/watsonx-governance
- Blog Post: https://www.manning.com/books/ai-governance (Niklas Heidloff)




Top comments (0)