AIHR’s AI readiness research shows that while HR leaders largely believe in AI’s potential, only 30% of HR teams have a clear purpose, defined value, and prioritized use cases for AI.
This gap is not driven by a lack of interest or resistance to change. Instead, it reflects a governance challenge. HR teams often lack clarity on what AI can be used for, who owns decisions and risks, and how AI should be adopted responsibly at scale.
Without clear governance, AI experimentation continues, but adoption stalls. Risk becomes a reason to delay rather than a factor to manage.
The AIHR AI Risk Framework is a strategic tool designed to help HR professionals adopt AI safely and effectively. In this article, we share the framework, explore its components, and provide guidance on implementation through practical steps and probing questions.
Why is an AI risk framework needed?
Risk frameworks help ensure legislative compliance and manage risk, but they are also strategic tools that guide adoption, prioritize implementation initiatives, and support decision-making.
AIHR’s research shows that HR teams that treat AI risk purely as a compliance exercise tend to remain stuck in fragmented experimentation. In contrast, those that embed risk management into strategy, governance, and operating models are significantly more likely to scale AI responsibly and sustainably.
The purpose of the AIHR AI Risk Framework is threefold:
- To guide HR professionals in making informed decisions about adopting AI
- To provide a structured approach to risk monitoring and mitigation, with an overview of risk exposure across the internal and external environment
- To define the process of risk management required to drive adoption
Our research consistently shows that HR teams with clearer decision frameworks and defined risk ownership move faster, not slower, in AI adoption.
Adopting and implementing the risk framework helps support the organization’s broader AI adoption strategy. It also leads to strategic outcomes for the safe, secure, and sustainable use of AI:
- Safe: Using AI in a fit-for-purpose manner that does no harm and does not exclude or discriminate unfairly
- Secure: Ensuring that AI is used in a secure way to avoid cyberattacks, data mismanagement, and compromises of confidentiality and privacy
- Sustainable: Adopting AI practices that can be repeated and scaled, so that AI delivers value across the organization over the long term.
The AI risk framework explained
Using and adopting AI comes with specific risks that need careful monitoring and management. One of the most obvious risks is related to the nature and functionality of AI technologies, including issues like bias, fairness, explainability, and unintended consequences.
Another type of risk arises from how AI technologies are used in practice: gaps in user knowledge, uses that cause reputational damage, or applications that conflict with organizational values. Additionally, as AI becomes widespread across industries, important legislative requirements and standards must be followed, creating compliance-related risks that need active management.
The 4 parts of the framework
The AIHR AI Risk Framework consists of four interconnected parts, each addressing significant risks linked to AI’s use and adoption:
- The first two parts focus on external risks (external environment) and internal risks (internal environment) within organizations.
- The third component is data governance, which is crucial for addressing both external and internal risks and requires dedicated attention.
- Finally, the framework outlines the levels at which these risks should be managed, guiding the policies, practices, and individual behaviors that support effective adoption. A clear risk management process underpins the framework, helping to manage different risks across these levels.
AIHR’s AI readiness data shows that risk exposure increases sharply when governance, data management, and accountability mechanisms lag behind AI experimentation. The framework is intentionally designed to surface and manage risk across both external and internal environments, while anchoring accountability at the right organizational levels.

Risks related to the external environment
External risks are often outside the organization’s direct control, but their impact is amplified when organizations lack clear governance, transparency, and accountability mechanisms. These risks include:
Reputational risk
For HR, reputational risk emerges when AI is introduced into people decisions without clear governance, transparency, and accountability. AI-driven practices in hiring, promotions, performance evaluation, or employee monitoring are highly visible and deeply personal, making missteps particularly damaging to trust in HR and the organization as an employer.
Employee and candidate perceptions of AI use are shaped by broader concerns about workforce displacement, fairness, and whether organizations are replacing human judgment with automated decision-making. When HR cannot clearly explain or justify how AI is used, reputational damage can follow quickly.
Sustainability also plays an increasing role in employer reputation. The environmental impact of AI technologies, including energy consumption and carbon footprint, adds another layer of risk that HR must factor into responsible AI adoption and employer value propositions.
Questions to ask:
- What will people think about our organization if we take this action?
- How will this affect the environment and our sustainability goals?
- How will these actions impact the communities we serve and our customers’ perceptions?
What the research says
A recent Pew Research survey reveals that 52% of Americans are more worried than excited about AI’s growing role in everyday life, a rise from 38% in 2022. Awareness of AI is on the rise, with 90% of people having heard of it, but many are concerned about privacy and keeping human control over AI technologies. Opinions vary in areas like healthcare and online services, influenced by education and income levels. Nevertheless, privacy concerns are significant across all demographics.
Legislative risk
As governments worldwide introduce and expand regulations on AI, HR teams must remain vigilant in ensuring that AI use in people-related processes complies with evolving legal requirements. Regulations such as the EU AI Act and emerging U.S. legislation increasingly focus on high-risk AI applications, many of which sit squarely within HR, including recruitment, performance management, and reward decisions.
HR leaders need to regularly review how AI is used across HR practices to ensure compliance with transparency, fairness, and data governance requirements. Failure to do so can expose the organization to significant penalties, particularly where AI systems are found to discriminate or make employment decisions without appropriate oversight.
As AI regulation continues to develop, HR can no longer treat compliance as a future concern. Legislative risk is now a moving target that directly affects HR policies, processes, and technology choices, requiring active monitoring and HR-led governance.
Questions to consider:
- What local legislation do we need to comply with?
- How does global legislative sentiment affect our AI strategies and actions?
What the legislation says
State legislatures in the U.S. are increasingly introducing bills to regulate artificial intelligence. A significant step was taken when Colorado passed the Colorado AI Act on May 17, 2024, making it the first comprehensive AI law in the country. This law, set to take effect in 2026, focuses on regulating automated decision-making systems.
It defines high-risk AI systems as those involved in important decision-making, highlighting the need to prevent bias and discrimination in AI results. Developers and users must take reasonable steps to avoid any discriminatory impacts from AI-driven decisions.
California is also taking action on AI with its California Consumer Privacy Act, which includes rules for automated decision-making technology (ADMT). The California Privacy Protection Agency has released draft rules that outline consumer rights for notice, access, and opting out of ADMT.
Although these regulations are still being developed, they are expected to require greater transparency about how businesses use AI once finalized. In 2023, more than 40 state AI-related bills were introduced, highlighting the growing focus on AI regulation across the nation.
Transparency and explainability
Increasing regulatory scrutiny is placing greater expectations on HR teams to be transparent about how AI is used in people-related decisions. Compliance with emerging legal frameworks will require HR to clearly document and explain where AI supports or influences HR processes such as recruitment, performance evaluation, promotions, and employee monitoring.
Beyond transparency, explainability presents a critical risk consideration for HR. Explainability refers to the ability of HR professionals to understand and explain how AI systems arrive at decisions that affect employees and candidates. When AI outcomes cannot be clearly explained, HR risks undermining trust, accountability, and the defensibility of people decisions.
As AI becomes more embedded in HR practices, the inability to explain AI-driven outcomes will increasingly be viewed as a governance failure. HR leaders must ensure that AI models used in HR are understandable, auditable, and supported by human expertise capable of explaining decisions when challenged.
Questions to ask:
- How would we explain the AI solution or output to someone unfamiliar with the technology?
- Would our actions pass the billboard test if put under the spotlight?
- Are we transparent about how we use data and for what purpose?
- Will our practices pass scrutiny from the outside?
Accountability for AI
In February 2024, Air Canada was ordered to pay damages to a passenger after its virtual assistant provided incorrect information during a difficult time. Following the death of his grandmother in November 2022, Jake Moffatt consulted the airline’s chatbot about bereavement fares. The virtual assistant advised Moffatt to purchase a regular-priced ticket from Vancouver to Toronto and apply for a bereavement discount within 90 days.
Acting on this advice, he bought a one-way ticket to Toronto and a return flight to Vancouver. However, when Moffatt submitted his refund claim, Air Canada rejected it, stating that bereavement fares could not be claimed after purchasing tickets.
Moffatt took the matter to a Canadian tribunal, arguing that the airline was negligent and had misrepresented its policies through the chatbot. Air Canada attempted to avoid liability by arguing it wasn’t responsible for the chatbot’s misinformation. The tribunal disagreed, stating that the airline failed to ensure the chatbot provided accurate information.
Risks related to the internal environment
AIHR’s research shows that internal AI risks most often emerge where skills, governance, and behavioral norms lag behind AI adoption. Without applied AI fluency and clear ethical guidance, even well-intentioned AI use can undermine trust and fairness. Risks within the internal environment are often more controllable and can be directly addressed by how a business applies and uses AI. These risks include:
Ethical considerations
Ethical considerations go beyond meeting legal requirements; organizations need to set clear principles for adopting AI. This means considering the effects of AI on job loss, the need for reskilling, and workforce changes. Having ethical guidelines can help navigate these challenges and ensure AI is used responsibly within the organization.
Questions to ask:
- Is this the right decision for our organization?
- Will these actions contradict our values and principles?
- What effect will these actions have on our culture?
Example from practice
OpenAI filtered out sexual and violent content from the dataset used to train DALL·E 3 by employing classifiers to detect inappropriate material. Likewise, AI models have learned to recognize which prompts are not suitable to answer.
For instance, earlier models attempted to answer questions like “How do I build a bomb?” as well as prompts that could be interpreted as hate speech, while newer versions can identify inappropriate prompts and refuse to answer.
Privacy and confidentiality
Data should always be handled with the utmost respect for privacy and confidentiality, whether it involves employees or customers. Organizations need to ensure that personal data is stored, processed, and managed according to legal standards and ethical practices. This includes how data is processed through AI tools and whether it influences AI-generated recommendations.
Questions to ask:
- How are we keeping data secure?
- Are we transparent about how we use individual data?
- Is our data protected and compliant with applicable regulations?
- Have we informed consumers and employees about our use of personal information?
Example from practice
In 2020, Clearview AI faced major backlash for collecting billions of photos from social media and websites without user consent to develop a facial recognition system. This raised significant privacy issues since many people didn’t know their images were being gathered and used. Clearview AI subsequently sold its technology to law enforcement agencies, adding further legal and ethical complications and igniting discussions about surveillance, privacy rights, and the misuse of personal data.
Bias and fairness
Using AI introduces risks related to bias, which need to be carefully mitigated. AI systems can interpret data in ways that inadvertently exclude certain groups or reinforce harmful stereotypes. For instance, AI tools in recruitment need close monitoring to ensure they don’t unfairly filter out candidates based on irrelevant criteria. It’s important to understand how AI makes decisions and ensure those decisions follow fairness principles to minimize bias.
A clear approach to managing bias begins with understanding where AI is used, for what purposes, and under what controls. Proper monitoring of AI algorithms is essential to ensure their behavior aligns with organizational goals and ethical standards.
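One common heuristic for this kind of monitoring is comparing selection rates across candidate groups, as in the “four-fifths rule” used in adverse-impact analysis. The sketch below is a minimal illustration of that idea, not a method prescribed by the AIHR framework; the column names, groups, and 0.8 threshold are assumptions for the example.

```python
# Minimal sketch: monitoring selection rates of an AI screening step by group.
# Field names, groups, and the 0.8 threshold are illustrative assumptions.

def selection_rates(outcomes: list[dict]) -> dict[str, float]:
    """Compute the share of candidates advanced by the AI screen, per group."""
    totals: dict[str, int] = {}
    advanced: dict[str, int] = {}
    for record in outcomes:
        group = record["group"]
        totals[group] = totals.get(group, 0) + 1
        advanced[group] = advanced.get(group, 0) + (1 if record["advanced"] else 0)
    return {g: advanced[g] / totals[g] for g in totals}

def adverse_impact_flags(rates: dict[str, float], threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose selection rate falls below `threshold` times the highest
    group's rate (the common 'four-fifths rule' heuristic)."""
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

# Example usage with made-up data:
records = [
    {"group": "A", "advanced": True}, {"group": "A", "advanced": True},
    {"group": "A", "advanced": False}, {"group": "B", "advanced": True},
    {"group": "B", "advanced": False}, {"group": "B", "advanced": False},
]
rates = selection_rates(records)
print(rates)                         # roughly {'A': 0.67, 'B': 0.33}
print(adverse_impact_flags(rates))   # {'B': 0.5} -> below the 0.8 heuristic, review needed
```

A flagged ratio is a prompt for human review of the tool and its criteria, not an automatic verdict of bias.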
Questions to ask:
- Do we have controls in place to monitor known biases?
- Do we have oversight to monitor how the AI model continues to learn?
- How frequently do we validate how AI performs in line with its intended purpose?
Example from practice
Biases have been identified in generative AI applications, particularly in how they portray professionals of different ages and genders. Academic research found that when prompted to create images of individuals in specialized professions, the system produced images of younger and older people, but older individuals were consistently depicted as men. This reinforces gender stereotypes, suggesting men are more likely to hold senior or specialized roles in the workplace.
Data governance as the cornerstone
Data governance—which includes how data is used, stored, and eventually destroyed—is a key concern for both external and internal environments. Organizations need to ensure their data handling practices meet changing legal requirements and align with ethical decisions about data use, all while being transparent to build trust with stakeholders.
Good data governance should focus on the following aspects:
- Data quality and integrity: This means ensuring data is complete, consistent, valid, and accurate.
- Data collection practices: Organizations should clearly identify data sources, label data accurately for AI training, and work to reduce bias in data collection.
- Data privacy and security: Compliance with data privacy regulations and best practices, including techniques like anonymization, encryption, and access control, is essential (a minimal illustration follows this list).
- Data lifecycle management: This involves managing data retention, storage, and disposal, along with traceability and versioning of datasets to support reproducibility and auditing of AI models.
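To make the privacy point concrete, the sketch below shows one way employee data might be minimized and pseudonymized before it is passed to an AI tool. It is a minimal illustration under assumed field names and salt handling, not a prescribed implementation; pseudonymization is also not full anonymization, so encryption and access control still apply.

```python
# Minimal sketch: pseudonymizing and minimizing an employee record before it is
# passed to an AI tool. Field names and salt handling are illustrative assumptions.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone"}   # dropped entirely
PSEUDONYMIZED_FIELDS = {"employee_id"}            # replaced by a keyed hash

def pseudonymize(value: str, salt: str) -> str:
    """Replace an identifier with a salted SHA-256 hash so records stay linkable
    internally without exposing the raw identifier."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

def prepare_for_ai(record: dict, salt: str) -> dict:
    """Return a copy of the record with direct identifiers removed and
    linkable identifiers pseudonymized."""
    cleaned = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue
        if key in PSEUDONYMIZED_FIELDS:
            cleaned[key] = pseudonymize(str(value), salt)
        else:
            cleaned[key] = value
    return cleaned

employee = {"employee_id": "E1042", "name": "Jane Doe", "email": "jane@example.com",
            "role": "Analyst", "performance_rating": 4}
print(prepare_for_ai(employee, salt="rotate-me-per-policy"))
# {'employee_id': '<hash>', 'role': 'Analyst', 'performance_rating': 4}
```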
Managing risks across three levels
Risks related to the external and internal environment, as well as to data management, can arise at various levels. AI-related risks therefore need to be managed across three levels within the organization.

Level 1: Individual behavior
At the individual level, HR practitioners must address how employees interact with AI tools. Proper education, clear guidelines, and regular training are necessary to ensure employees understand their ethical and legal obligations when using AI.
Level 2: Processes, practice, and systems
The second level involves the systems, processes, and practices for AI adoption. Organizations must carefully manage how AI is integrated into workflows, including monitoring third-party vendor systems to ensure alignment with internal policies.
AIHR’s research shows that organizations that treat AI risk management as a continuous cycle, rather than a one-off exercise, are far more likely to sustain AI adoption and build trust across the workforce.
Level 3: Organizational policies and philosophy
At the highest level, organizations should establish a formal AI policy outlining their AI governance approach. This policy should guide decision-making across all aspects of AI adoption, from risk mitigation to ethical considerations.
Implementing a unified and continuous risk management process
Managing AI-related risks is not a one-off event; it requires a continuous cycle that identifies risks, outlines the actions needed to mitigate and manage them, and monitors them over the longer term. The cycle includes four steps (a minimal sketch of how such a cycle can be tracked follows below):
- Step 1 – Identify: Recognize existing and emerging risks within the framework.
- Step 2 – Mitigate: Develop and implement strategies to reduce identified risks.
- Step 3 – Monitor: Regularly review risks and the effectiveness of mitigation efforts.
- Step 4 – Audit: Conduct periodic audits to assess the framework’s performance and adjust as needed.
This approach surfaces and manages risks consistently and ensures that risk management is strategically aligned and responsive to internal and external changes.
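As a simple illustration of how this cycle could be operationalized, the sketch below tracks risks in a lightweight register that moves each entry through identify, mitigate, monitor, and audit. The field names, statuses, and review intervals are assumptions for the example, not part of the AIHR framework itself.

```python
# Minimal sketch: a simple risk register following the Identify -> Mitigate ->
# Monitor -> Audit cycle. Fields, statuses, and review intervals are illustrative.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AIRisk:
    description: str
    level: str                      # "individual", "process", or "organizational"
    owner: str
    status: str = "identified"      # identified -> mitigating -> monitoring
    mitigations: list[str] = field(default_factory=list)
    next_review: date | None = None

    def mitigate(self, action: str) -> None:
        """Step 2: record a mitigation action and move the risk forward."""
        self.mitigations.append(action)
        self.status = "mitigating"

    def monitor(self, review_in_days: int = 90) -> None:
        """Step 3: schedule the next review of the risk and its mitigations."""
        self.status = "monitoring"
        self.next_review = date.today() + timedelta(days=review_in_days)

def audit(register: list[AIRisk]) -> list[AIRisk]:
    """Step 4: return risks that are overdue for review or still unmitigated."""
    today = date.today()
    return [r for r in register
            if r.status == "identified" or (r.next_review and r.next_review < today)]

# Step 1: identify a risk, then walk it through the cycle.
risk = AIRisk("Recruitment screening tool may filter on irrelevant criteria",
              level="process", owner="HR Operations")
risk.mitigate("Add quarterly adverse-impact review of screening outcomes")
risk.monitor(review_in_days=90)
print(audit([risk]))   # [] until the review date passes or new risks are identified
```

However it is recorded, the point is that each risk has a named owner, a documented mitigation, and a scheduled review rather than a one-time assessment.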
Take action
In an AI-driven future, HR’s credibility will be defined not by how fast it adopts AI, but by how responsibly and confidently it enables the organization to do so.
By adopting a holistic risk framework, HR professionals can confidently navigate the complexities of AI, ensuring its adoption is secure, sustainable, and aligned with the ultimate goal: creating meaningful value for the organization and its people.






