The National Institute of Standards and Technology (NIST) has introduced a new program called Assessing Risks and Impacts of AI (ARIA) to assist organizations and individuals in assessing the validity, reliability, safety, security, privacy, and fairness of AI technologies. ARIA builds upon the AI Risk Management Framework released by NIST in January 2023 and aims to develop new methodologies and metrics for quantifying the functionality of AI systems in real-world scenarios. The program aligns with NIST's goal of establishing a foundation for safe and trustworthy AI.
The AI Risk Management Framework offered guidance on managing risks across the AI lifecycle, emphasizing the trustworthiness characteristics of AI systems: reliability, safety, security, privacy, and fairness. ARIA extends that guidance by developing methods to evaluate how AI systems actually perform once deployed in real-world contexts, helping organizations and individuals make informed decisions about whether and how to use AI technologies.
The ARIA program will focus on several key areas, including the development of methods for evaluating the trustworthiness of AI systems, the creation of metrics for assessing the impact of AI technologies, and the establishment of best practices for implementing AI systems in various domains. NIST will collaborate with industry, academia, and other stakeholders to develop these methodologies and metrics, ensuring that they are comprehensive, rigorous, and applicable across different AI applications.
The introduction of the ARIA program reflects the growing recognition of the need for standardized approaches to evaluate and verify the capabilities and impacts of AI technologies. As AI becomes increasingly integrated into various aspects of society, it is crucial to ensure that these technologies are reliable, safe, secure, private, and fair. The ARIA program will play a vital role in promoting the development and adoption of AI systems that meet these criteria, thereby fostering trust and confidence in AI technologies.
The ARIA program is part of NIST's broader efforts to advance the field of AI and support the development of trustworthy AI systems. NIST has been actively involved in AI research and standardization, working closely with industry, academia, and government agencies to address the challenges associated with AI technologies. By providing guidance, tools, and resources, NIST aims to facilitate the responsible and ethical use of AI and promote the development of AI systems that benefit society as a whole.
ORO Labs, a company specializing in AI systems management, has achieved the world's first accredited ISO/IEC 42001:2023 certification for an Artificial Intelligence Management System (AIMS). The certification was issued by Mastermind under its accreditation maintained by the International Accreditation Service (IAS). ORO Labs underwent a rigorous audit of its management system to earn the certification, which recognizes the company's commitment to AI governance and to its data privacy and security policies. The ISO/IEC 42001 standard specifies requirements for establishing, implementing, maintaining, and continually improving an artificial intelligence management system within an organization.
ORO Labs' platform helps customers streamline the procurement process and includes GenAI-powered features intended to improve the user experience. Co-founder Lalitha Rajagopalan emphasized the importance of building smart workflows that automate compliance, with appropriate safeguards around AI-powered recommendations and process automation. The certification marks a significant milestone for ORO Labs and demonstrates its commitment to the responsible and secure management of AI systems.
The launch of NIST's ARIA program and ORO Labs' achievement of ISO/IEC 42001:2023 certification highlight the ongoing efforts to evaluate, verify, and manage AI capabilities and impacts. Together, these initiatives contribute to the development of safe, trustworthy, and ethical AI systems, promoting confidence and reliability in the use of AI technologies.