Long-Term Goals for Trustworthy AI

The EU’s High-Level Expert Group (HLEG) on AI has laid out clear guidelines for what makes AI trustworthy. A first draft was published in 2018, and after gathering over 500 public comments, the guidelines were refined around three key principles that trustworthy AI must follow:

  • Legality. AI should follow all laws and regulations that apply to it.
  • Ethicality. It must align with ethical principles and values, respecting human rights and dignity.
  • Robustness. AI needs to be technically sound and reliable while also considering the social context in which it operates.

The EU has outlined seven essential requirements to ensure that AI systems are trustworthy. These guidelines help organizations assess whether their AI meets ethical and safety standards. Here’s a breakdown of what each requirement means and how it contributes to building trust in AI:

Human Agency and Oversight

It’s crucial that AI doesn’t take full control. We need humans to remain in charge of decision-making, whether by staying involved during critical steps (“human-in-the-loop”) or by overseeing AI’s actions (“human-on-the-loop”). 

This balance ensures that AI supports human judgment rather than replacing it. Integrating AI with human oversight creates systems that are both trustworthy and ethical.

Technical Robustness and Safety

AI systems must be reliable, secure, and accurate. They should be able to bounce back from disruptions and function consistently. Technically robust systems are safer and more reliable for users. 

Safety measures, like built-in safeguards, protect people from AI malfunctions and errors. This ensures that AI operates effectively and doesn’t pose unintended risks.

Privacy and Data Governance

AI must respect privacy by protecting personal data and adhering to strict data governance practices. 

Good data governance ensures compliance with laws and enhances the quality of the data that AI systems rely on. When privacy is safeguarded, users can trust that their personal information is handled responsibly.

Transparency

Transparency means AI systems should clearly explain how they work. Users must understand what data is being used, how decisions are made, and what’s happening behind the scenes. 

Traceability and clear explanations are key. This openness helps people trust the system because they know what’s happening and why.

Diversity, Non-discrimination, and Fairness

AI should be fair and inclusive, avoiding any form of bias. Diverse voices should be involved when designing AI systems to ensure they’re accessible to everyone. 

Fair AI helps prevent discrimination and makes technology more inclusive for a wider range of users, ensuring it benefits society.

Societal and Environmental Well-being

AI systems should contribute positively to society and be environmentally sustainable. This means designing AI with societal goals, like reducing environmental harm or supporting ethical values. 

AI that promotes sustainability is responsible and benefits everyone in the long run.

Accountability

Accountability is about ensuring that someone is responsible for AI outcomes. AI systems should have mechanisms in place to trace actions, and there should be clear ways to redress any negative impacts or misuse.

Frequently asked questions

What are the NIST requirements for AI?

The NIST AI RMF outlines requirements for developing and deploying trustworthy AI systems, focusing on reliability, safety, security, transparency, accountability, and fairness. Organizations must also establish governance frameworks to ensure compliance with ethical and regulatory standards for effective AI risk management.

Which US agency is responsible for the AI risk management framework?

The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, is responsible for the AI Risk Management Framework (AI RMF). NIST develops and promotes measurement standards and technology to enhance innovation and industrial competitiveness. The agency collaborates with various stakeholders to ensure the framework’s relevance and applicability across different sectors.

When did NIST release the AI risk management framework?

NIST released the AI Risk Management Framework (AI RMF) on January 26, 2023.

Does NIST AI RMF have a certification?

Currently, the NIST AI RMF does not offer a formal certification. Instead, it serves as a guideline and best practices framework for organizations to align their AI risk management practices with. However, organizations can demonstrate compliance and adherence to the framework through self-assessments, third-party audits, and by implementing the recommended practices.

Who can perform NIST AI assessments?

NIST AI assessments can be performed by qualified internal teams, third-party auditors, or consultants with expertise in AI risk management and the NIST AI RMF. IS Partners offers a complete package of services to help organizations implement the AI RMF standards according to their industry requirements.

Get started

Get a quote today!

Fill out the form to schedule a free, 30-minute consultation with a senior-level compliance expert today!

  • Analysis of your compliance needs
  • Timeline, cost, and pricing breakdown
  • A strategy to keep pace with evolving regulations

Great companies think alike.

Join hundreds of other companies that trust IS Partners for their compliance, attestation and security needs.

