2025 Responsible AI Transparency Report

How We Build AI, Support Our Customers, and Grow

Highlights from the 2025 Responsible AI Transparency Report

In 2024, Microsoft made significant investments in responsible AI tools, policies, and practices to keep pace with AI innovation.

We enhanced our responsible AI tools to expand the scope of risk measurement and mitigation beyond text to images, audio, and video, and we expanded support for agentic systems, which are expected to be key areas of AI investment and innovation in 2025 and beyond.
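To make the modality-spanning measurement idea concrete, here is a minimal, hypothetical harness in Python: it runs test prompts through a system under test and tallies flagged outputs per risk category and modality. The `generate` and `classify_risk` stubs, the keyword taxonomy, and the category names are illustrative stand-ins invented for this sketch, not any Microsoft tool or API.

```python
from collections import Counter
from dataclasses import dataclass

# Toy risk taxonomy; a real pipeline would call per-modality
# content classifiers (text, image, audio, video) instead.
RISK_KEYWORDS = {"hate": ["slur"], "violence": ["attack"], "self_harm": ["harm"]}

@dataclass
class TestCase:
    prompt: str
    modality: str  # "text", "image", "audio", or "video"

def generate(case: TestCase) -> str:
    # Stand-in for the model or agent under test: echo the prompt.
    return case.prompt

def classify_risk(output: str) -> list[str]:
    # Stand-in classifier: flag categories whose keywords appear.
    return [cat for cat, words in RISK_KEYWORDS.items()
            if any(w in output.lower() for w in words)]

def measure(cases: list[TestCase]) -> dict[str, Counter]:
    """Count flagged outputs per risk category, grouped by modality."""
    results: dict[str, Counter] = {}
    for case in cases:
        flags = classify_risk(generate(case))
        results.setdefault(case.modality, Counter()).update(flags)
    return results

if __name__ == "__main__":
    cases = [TestCase("describe an attack plan", "text"),
             TestCase("draw a sunset", "image")]
    print(measure(cases))  # {'text': Counter({'violence': 1}), 'image': Counter()}
```

A real pipeline would replace the stubs with calls to the model endpoint under test and per-modality classifiers, but the aggregation shape stays the same.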

We took a proactive, phased approach to new regulatory requirements, including the EU AI Act, and provided a range of resources to help our customers continue to innovate while complying with relevant regulations. Early investments in building a comprehensive, industry-leading responsible AI program enabled us to respond effectively as AI regulation took effect in earnest in 2024.

Through pre-launch reviews and red team activities, we applied a consistent risk management process. This included thorough oversight and review of high-impact, high-risk AI and generative AI use cases, including key model releases such as the Phi models and all models newly added to the Azure OpenAI Service. To better support responsible AI documentation, we also introduced an internal workflow tool that allows teams to centrally manage the requirements set forth in the Responsible AI Standard. Through the Sensitive Uses and Emerging Technologies team, we continued to provide practical consultation on high-impact and high-risk AI use cases. The use of generative AI in healthcare and science grew significantly in 2024; the team analyzed a variety of cases, worked closely with researchers, and provided early guidance on emerging risks and emerging AI technologies to foster innovation and support the establishment of internal policies and guidelines.
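For illustration, a centralized requirement tracker of the kind described can be as simple as a record per project that rolls requirement statuses up into a launch-readiness check. The sketch below is a hypothetical schema and assumes nothing about the actual internal tool: the names `Requirement` and `ProjectRecord`, the status values, and the requirement IDs are invented for this example.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    COMPLETE = "complete"

@dataclass
class Requirement:
    req_id: str        # hypothetical ID, e.g. a goal from an internal standard
    description: str
    status: Status = Status.NOT_STARTED
    evidence: list[str] = field(default_factory=list)  # links to reviews, test reports

@dataclass
class ProjectRecord:
    project: str
    requirements: list[Requirement]

    def ready_for_launch(self) -> bool:
        """A project clears pre-launch review only when every
        tracked requirement is complete."""
        return all(r.status is Status.COMPLETE for r in self.requirements)

record = ProjectRecord("example-feature", [
    Requirement("RAI-T1", "Impact assessment completed", Status.COMPLETE),
    Requirement("RAI-T2", "Red team findings triaged"),
])
print(record.ready_for_launch())  # False until RAI-T2 is complete
```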

We worked with stakeholders around the world to build consistent governance frameworks that enable the cross-border use of AI. As part of this effort, we published work on governance across a range of domains and contributed to the creation of unified standards for testing AI systems.

Plans for the second half of 2025 and beyond

As AI innovation and adoption continue to evolve, Microsoft's core goal remains the same: earning trust, the foundation needed to drive the broad and beneficial use of AI around the world. In the journey ahead, we plan to focus on three areas, staying flexible in an ever-changing environment while maintaining our unwavering commitment to AI governance:

  1. Develop flexible and agile risk management tools and practices that keep pace with AI advancements. For individuals and organizations around the world to take full advantage of AI's transformative potential, the ability to anticipate and manage risk must evolve with the pace of innovation. To that end, we need flexible tools and practices that respond quickly to advances in AI technology and to the growing number of deployment scenarios with varying risk profiles. We will increase our investment in risk management systems and provide tools and implementation strategies that effectively address the most common risks. We will also put company-wide systems in place to facilitate sharing of test sets, mitigations, and best practices across teams.
  2. Enable effective governance across the AI supply chain. Building and maintaining trust in AI requires that all actors, including model developers, application builders, and system users, work together to design, develop, and operate AI systems in a trustworthy manner. AI regulations, including the European Union's AI Act, reflect the need for information sharing among supply chain participants. While Microsoft embraces this concept of shared responsibility, we also know that in a fast-moving AI ecosystem it remains complex to define clearly how each entity's responsibilities fit together. To further clarify roles and expectations, we are working internally and with external stakeholders to deepen a common understanding of the issue.
  3. Revitalize the ecosystem through shared norms and effective tools for measuring and assessing AI risks. AI risk measurement and assessment is still in its infancy, and we will continue to invest to support its growth. This includes research that pushes the boundaries of risk measurement and the development of tools that can be applied at operational scale; a sketch of one such measurement building block follows this list. By sharing tools and best practices across the ecosystem, we will actively support the establishment of shared standards and norms for AI risk assessment and measurement.
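To make the measurement challenge in item 3 concrete: a common building block is estimating a defect rate, the fraction of red-team probes that elicit a policy-violating output, together with an uncertainty interval so that comparisons before and after a mitigation are statistically meaningful. The sketch below computes a Wilson score interval for such a rate; it is a generic statistical illustration in Python, not a Microsoft tool, and the example counts are made up.

```python
import math

def wilson_interval(defects: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial defect rate.

    More reliable than the plain normal approximation when the rate
    is near 0 or 1, which is typical for well-mitigated systems.
    """
    if trials == 0:
        return (0.0, 1.0)
    p = defects / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (max(0.0, center - half), min(1.0, center + half))

# Hypothetical example: 7 violating outputs out of 500 red-team probes.
low, high = wilson_interval(7, 500)
print(f"defect rate: {7/500:.3%}, 95% CI: [{low:.3%}, {high:.3%}]")
```

Reporting the interval rather than the point estimate alone is what makes cross-team and pre/post-mitigation comparisons meaningful at small sample sizes.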