A Blueprint for Defensible AI

An article by Luk Arbuckle, Chief Methodologist, Privacy Analytics

The rapid evolution of artificial intelligence (AI), together with emerging regulations, has created demand for robust frameworks and standards to ensure AI’s responsible and effective deployment. Meeting that demand means integrating AI guidance with standards and frameworks to manage risks, align with regulations, and enhance trust in AI systems. This integration also supports readiness assessments, options analyses, and feature design for defensible AI.

Two key tools stand out in this landscape: the ISO/IEC 42001 AI Management System standard and the NIST AI Risk Management Framework. Together, they provide comprehensive guidance for integrating AI into organizational practices while maintaining a strong focus on risk management, continuous improvement, and ethical considerations.

The Big Picture

The ISO/IEC 42001 AI Management System standard offers an overarching, quality-management-style approach to AI, encompassing organizational objectives, leadership, risk assessment, and continuous improvement. It also serves as a gateway to numerous other AI standards, published or under development, providing a holistic view of AI systems.

Key components of ISO/IEC 42001 include:

  • Organizational objectives & leadership: aligning AI initiatives with organizational goals and ensuring leadership commitment.
  • Risk & opportunities management: identifying and treating risks while capitalizing on opportunities.
  • Management of concerns: addressing stakeholder concerns and ensuring transparent communication.
  • Evaluation & improvement: implementing continuous monitoring, analysis, and evaluation to enhance the AI system over time.

This standard emphasizes risk assessment and treatment, impact assessments, and the need for ongoing evaluation and improvement. It underscores the necessity of a structured approach to managing AI within an organization, ensuring that AI systems are not only effective but also defensible. As an international standard, it also provides a path to demonstrating conformance with best practices.
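
To make this concrete, consider how a risk register entry for an AI system might be structured in practice. The sketch below is illustrative only; ISO/IEC 42001 does not prescribe a data model, and the field names, scoring scale, and example system are assumptions made for the purpose of illustration.

    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum

    class TreatmentDecision(Enum):
        MITIGATE = "mitigate"   # reduce the risk with controls
        TRANSFER = "transfer"   # shift the risk (e.g., contractually)
        AVOID = "avoid"         # do not deploy the feature or system
        ACCEPT = "accept"       # document and accept the residual risk

    @dataclass
    class AIRiskRecord:
        # One entry in a hypothetical AI risk register; field names are
        # illustrative, not taken from ISO/IEC 42001 itself.
        system_name: str
        description: str
        affected_stakeholders: list[str]
        likelihood: int                  # e.g., 1 (rare) to 5 (almost certain)
        impact: int                      # e.g., 1 (negligible) to 5 (severe)
        treatment: TreatmentDecision
        treatment_owner: str
        review_date: date
        impact_assessment_ref: str = ""  # pointer to the AI impact assessment
        controls: list[str] = field(default_factory=list)

        @property
        def risk_score(self) -> int:
            # Simple likelihood-times-impact score used to prioritize treatment.
            return self.likelihood * self.impact

    # Example: a customer-support chatbot that handles personal information.
    chatbot_risk = AIRiskRecord(
        system_name="support-chatbot",
        description="Model may expose personal data in generated responses",
        affected_stakeholders=["customers", "support agents"],
        likelihood=3,
        impact=4,
        treatment=TreatmentDecision.MITIGATE,
        treatment_owner="AI governance lead",
        review_date=date(2026, 1, 15),
        impact_assessment_ref="AIIA-0042",
        controls=["output filtering", "PII redaction", "human review of escalations"],
    )

    print(f"{chatbot_risk.system_name}: risk score {chatbot_risk.risk_score}")

A record like this maps naturally onto the standard’s expectations for risk assessment, treatment, and review, and can be extended with fields for stakeholder concerns and evaluation results.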

The Deep Dive

The NIST AI Risk Management Framework covers ground similar to ISO/IEC 42001 and, although it offers no formal conformance option, it is valuable for developing policies and treatment options. It is comprehensive, following the standard NIST approach of mapping, measuring, and managing risk, with a core governance component that cuts across all activities. As with many NIST frameworks, it is likely to become a benchmark for evaluating maturity.

Key elements of the NIST framework include:

  • Map: identifying AI systems, their contexts, and associated risks.
  • Measure: evaluating AI system risks and their impact.
  • Manage: allocating resources to monitor and respond to risks.
  • Govern: implementing core policies to ensure trustworthy AI.

The NIST framework also offers an online playbook with suggested actions, transparency practices, documentation, and references, making it practical for organizations to adopt and tailor to their specific needs. Its alignment with the OECD framework for the classification of AI systems, a policy tool for evaluating risks to people and the environment in specific operating contexts, further enhances its utility.
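
As a sketch of how this might be operationalized, an organization could maintain a lightweight profile that records which actions it has committed to under each function and how far along they are. The structure, action wording, and status values below are assumptions for illustration, not the playbook’s official format.

    from dataclasses import dataclass, field

    # The four AI RMF functions; the actions and status values below are
    # illustrative assumptions, not copied from the NIST playbook.
    FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

    @dataclass
    class ProfileItem:
        function: str            # one of FUNCTIONS
        action: str              # what the organization has committed to doing
        status: str = "planned"  # planned | in_progress | done
        evidence: str = ""       # pointer to a policy, report, or test result

    @dataclass
    class AIRMFProfile:
        system_name: str
        items: list[ProfileItem] = field(default_factory=list)

        def coverage(self) -> dict[str, float]:
            # Fraction of tracked actions completed for each function.
            result = {}
            for fn in FUNCTIONS:
                fn_items = [i for i in self.items if i.function == fn]
                done = [i for i in fn_items if i.status == "done"]
                result[fn] = len(done) / len(fn_items) if fn_items else 0.0
            return result

    profile = AIRMFProfile(
        system_name="claims-triage-model",
        items=[
            ProfileItem("Govern", "Publish an AI acceptable-use policy", "done", "policy-v2"),
            ProfileItem("Map", "Document intended use and operating context", "done", "context-memo"),
            ProfileItem("Measure", "Evaluate error rates across demographic groups", "in_progress"),
            ProfileItem("Manage", "Define an incident response plan for model failures", "planned"),
        ],
    )

    print(profile.coverage())

Tracking adoption this way makes it straightforward to report maturity by function, which is consistent with how NIST frameworks are typically used as benchmarks.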

A Unified Approach

AI and privacy are entangled, inspiring integrated solutions for data protection and AI governance when information about people is at stake. Enhanced data protection, scalable AI governance, and robust infrastructure are critical for improving the accuracy and reliability of decision-making. AI governance extends into areas such as data governance, data architecture and quality management, and data modeling and ontologies. It also incorporates IT practices like ethical hacking, vulnerability assessments, and penetration testing.

As AI governance continues to evolve, structured frameworks and standards are becoming increasingly important. The ISO/IEC 42001 AI Management System and the NIST AI Risk Management Framework provide comprehensive guidance for responsible and defensible AI deployments. By integrating these standards into data governance practices and robust data infrastructure, organizations can enhance data protection, scalability, and decision-making accuracy.

The future of AI governance lies in continuous improvement, ethical considerations, and the seamless integration of AI and privacy solutions. Contact us when you want to learn more and see how our advisory or consulting services can help you drive trustworthy insights at scale.
