Microsoft Azure AI Foundry Models and Microsoft Security Copilot achieve ISO/IEC 42001:2023 certification


Microsoft has achieved ISO/IEC 42001:2023 certification—a globally recognized standard for Artificial Intelligence Management Systems—for select Azure AI Foundry Models and Microsoft Security Copilot.

Microsoft has achieved ISO/IEC 42001:2023 certification—a globally recognized standard for Artificial Intelligence Management Systems (AIMS)—for select Azure AI Foundry Models and Microsoft Security Copilot. This certification underscores Microsoft’s commitment to building and operating AI systems responsibly, securely, and transparently. As responsible AI rapidly becomes both a customer concern and a regulatory imperative, this certification reflects how Microsoft enables customers to innovate with confidence.

Raising the bar for responsible AI with ISO/IEC 42001

ISO/IEC 42001, developed by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), establishes a globally recognized framework for the management of AI systems. It addresses a wide scope of requirements, from risk management and bias mitigation to transparency, human oversight, and organizational accountability. This global standard provides a certifiable framework for establishing, implementing, maintaining, and continually improving an AI management system, supporting organizations in addressing risks and opportunities throughout the AI lifecycle.

By achieving this certification, Microsoft demonstrates that Azure AI Foundry Models, including Azure OpenAI models, and Microsoft Security Copilot prioritize responsible innovation, as validated by an independent third party. It provides our customers with added assurance that Microsoft applies robust governance, risk management, and compliance practices across Azure AI Foundry Models and Microsoft Security Copilot, and that these services are developed and operated in alignment with Microsoft’s Responsible AI Standard.

Supporting customers across industries

Whether you are deploying AI in regulated industries, embedding generative AI into products, or exploring new AI use cases, this certification helps customers:

  • Accelerate their own compliance journey by leveraging certified AI services and inheriting governance controls aligned with emerging regulations.
  • Build trust with their own users, partners, and regulators through transparent, auditable governance evidenced by the AIMS certification for these services.
  • Gain transparency into how Microsoft manages AI risks and governs responsible AI development, giving users greater assurance in the services they build on.

Engineering trust and responsible AI into the Azure platform

Microsoft’s Responsible AI (RAI) program is the backbone of our approach to trustworthy AI and includes four core pillars—Govern, Map, Measure, and Manage—which guide how we design, customize, and manage AI applications and agents. These principles are embedded into both Azure AI Foundry Models and Microsoft Security Copilot, resulting in services designed to be innovative, safe, and accountable.

We are committed to delivering on our Responsible AI commitment and continue to build on our existing work, which includes:

  1. Our AI Customer Commitments to assist our customers on their responsible AI journey.
  2. Our inaugural Responsible AI Transparency Report, which enables us to record and share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable, and earn the public’s trust.
  3. Our Transparency Notes for Azure AI Foundry Models and Microsoft Security Copilot, which help customers understand how our AI technology works, its capabilities and limitations, and the choices system owners can make that influence system performance and behavior.
  4. Our Responsible AI resources site, which provides tools, practices, templates, and information we believe will help many of our customers establish their responsible AI practices.

Supporting your responsible AI journey with trust

We recognize that responsible AI requires more than technology; it requires operational processes, risk management, and clear accountability. Microsoft supports customers in these efforts by providing both the platform and the expertise to operationalize trust and compliance. Microsoft remains steadfast in our commitment to the following:

  • Continually improving our AI management system.
  • Understanding the needs and expectations of our customers.
  • Building on the Microsoft RAI program and AI risk management.
  • Identifying and acting upon opportunities that allow us to build and maintain trust in our AI products and services.
  • Collaborating with the growing community of responsible AI practitioners, regulators, and researchers on advancing our responsible AI approach.

ISO/IEC 42001:2023 joins Microsoft’s extensive portfolio of compliance certifications, reflecting our dedication to operational rigor and transparency and helping customers build responsibly on a cloud platform designed for trust. From a healthcare organization striving for fairness to a financial institution overseeing AI risk, or a government agency advancing ethical AI practices, Microsoft’s certifications enable the adoption of AI at scale while aligning compliance with evolving global standards for security, privacy, and responsible AI governance.

Microsoft’s foundation in security and data privacy, together with our investments in operational resilience and responsible AI, shows our dedication to earning and preserving trust at every layer. Azure is engineered for trust, powering innovation on a secure, resilient, and transparent foundation that gives customers the assurance to scale AI responsibly, navigate evolving compliance needs, and stay in control of their data and operations.

Learn more with Microsoft

As AI regulations and expectations continue to evolve, Microsoft remains focused on delivering a trusted platform for AI innovation, built with resiliency, security, and transparency at its core. ISO/IEC 42001:2023 certification is a critical step on that path, and Microsoft will continue investing in exceeding global standards and driving responsible innovation to help customers stay ahead—securely, ethically, and at scale.

Explore how we put trust at the core of cloud innovation with our approach to security, privacy, and compliance at the Microsoft Trust Center. View this certification and report, as well as other compliance documents, on the Microsoft Service Trust Portal.


The ISO/IEC 42001:2023 certification for Azure AI Foundry: Azure AI Foundry Models and Microsoft Security Copilot was issued by Mastermind, a certification body accredited by the International Accreditation Service (IAS).
