
Navigating the EU AI Act

Parliament has approved the AI Act, ensuring safety and fundamental rights while protecting innovation

Param Gopalasamy
Content Marketing Specialist, CIPP/E, CIPP/US, CIPM
November 13, 2023

EU flags in front of the EU Commission Building

A horizontal approach: Standing apart on the global stage 

In crafting its approach to artificial intelligence (AI) legislation, the European Union (EU) has opted for a horizontal legislative framework: an industry-agnostic regulation, meticulously designed across nearly a hundred articles.

Here, we provide a window into the EU AI Act, a piece of legislation that is not just the first of its kind but also a benchmark for global AI regulation, setting a precedent in a rapidly evolving AI landscape.

Guarding values, fueling innovation 

The EU AI Act is carefully balanced. It is not just a safety net for society, the economy, fundamental rights, and the bedrock values of Europe that AI systems might put at risk; it is also a nod to the power and potential of AI innovation, with built-in safeguards designed to promote and protect inventive strides. The Act looks to balance risk management, including the protection of critical infrastructure, with the innovation that general-purpose AI can bring.

Crafting the EU AI Act has been anything but a walk in the park, with the definition of AI itself among the most contentious points. Since its initial proposal in April 2021, the Act has been a living document, and each of its numerous iterations has reflected the fluid discourse around AI technology and its implications for society.

At the trilogue meeting in December 2023, France, Germany, and Italy raised concerns about the limitations placed on powerful AI models and pushed for a lighter regulatory regime for models like OpenAI’s GPT-4.

After ongoing discussion, the compromise was a tiered approach: horizontal transparency rules for all general-purpose models, with additional obligations for models posing systemic risk.

 

Where the Act stands now

On February 2, 2024, the Committee of Permanent Representatives voted to endorse the political agreement reached in December 2023. On March 13, 2024, Parliament voted to endorse the Act, with 523 votes in favor, 46 against, and 49 abstentions.

The AI Act will enter into force 20 days after its publication in the EU’s Official Journal. The provisions on prohibited systems will apply after six months, obligations for providers of general-purpose AI will apply after 12 months, and most other requirements will apply after two years.

High-risk systems that are intended to be used as a safety component of a product, or that are covered by other EU laws, have 36 months to comply with the EU AI Act.
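To make this staggered timeline concrete, here is a minimal Python sketch that derives the key application dates from the Official Journal publication date. The date used below is a placeholder assumption, and the month offsets simply mirror the 6-, 12-, 24-, and 36-month milestones described above.

```python
from datetime import date, timedelta

# Requires the python-dateutil package for month arithmetic.
from dateutil.relativedelta import relativedelta

# Placeholder assumption: substitute the actual Official Journal publication date.
publication = date(2024, 7, 12)
entry_into_force = publication + timedelta(days=20)

# Month offsets from entry into force, per the milestones above.
milestones = {
    "Prohibited-systems provisions apply": 6,
    "General-purpose AI obligations apply": 12,
    "Most other requirements apply": 24,
    "High-risk systems in regulated products must comply": 36,
}

for label, months in milestones.items():
    print(f"{label}: {entry_into_force + relativedelta(months=months)}")
```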

 

AI: Breaking down the concept 

Originally, the Act defined machine learning, the basis of AI systems, as “including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning.” The text now includes an updated definition, which describes an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The complexity of AI systems is a sliding scale: more intricate systems require substantial computing power and input data, and their output can range from simple to highly complex, varying with the sophistication of the AI in play.

This broad definition covers a range of technologies, from everyday chatbots to highly sophisticated generative AI models. But it’s important to note that not every AI system falling under the Act’s broad definition will be regulated. The Act plays it smart with a risk-based approach, bringing under its regulatory umbrella only those systems associated with specific risk levels.

 

AI regulation: Calibrated to risk

Here’s where it gets interesting. The EU AI Act sorts AI systems into different baskets. Some are seen as posing an unacceptable risk to European values, leading to their prohibition. High-risk systems, while not banned, have to dance to a tighter regulatory tune. It’s vital to remember that these risk categories aren’t static; as implementing guidance follows, they will likely be fine-tuned as well.
 

EU AI Act risk levels 

The EU AI Act defines three levels of permissible risk: high risk, limited risk, and minimal risk.

Systems in these categories may be used, subject to escalating obligations. A fourth level, “unacceptable risk,” is not permitted at all; companies whose systems fall into it must change their models accordingly.

 

  • Unacceptable Risk — Social scoring systems, real-time remote biometric identification.

  • High Risk — Credit scoring systems, automated insurance claims.

    For systems that fall into this bucket, companies need to conduct a conformity assessment and register the system in an EU database before it is made available to the public.

    Beyond that, these high-risk systems also require detailed logs and human oversight.

  • Limited Risk — Chatbots, personalization. 

    For limited-risk systems, companies need to be fully transparent with customers about what the AI is being used for and what data is involved.

  • Minimal Risk — For any processes that fall into the “minimal risk” bucket, the EU AI Act encourages providers to have a code of conduct in place that ensures AI is being used ethically.

Pyramid graphic showing the levels of permissible AI risk defined by the EU AI Act and what the Act requires organizations to do to address each level. Starting from the lowest level: minimal risk areas call for a code of conduct; limited risk areas need transparency; high risk areas need conformity assessments; and at the top level are areas that are considered unacceptable.
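As a rough illustration of this tiering, the sketch below maps example use cases to the four risk levels and the headline obligation attached to each. The category assignments are hypothetical examples drawn from the list above, not legal classifications.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment, detailed logs, human oversight"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary code of conduct"

# Illustrative assignments only; real classification requires legal analysis.
USE_CASE_RISK = {
    "social scoring": RiskLevel.UNACCEPTABLE,
    "real-time remote biometric identification": RiskLevel.UNACCEPTABLE,
    "credit scoring": RiskLevel.HIGH,
    "automated insurance claims": RiskLevel.HIGH,
    "customer service chatbot": RiskLevel.LIMITED,
    "personalization": RiskLevel.LIMITED,
    "spam filtering": RiskLevel.MINIMAL,
}

def obligations(use_case: str) -> str:
    level = USE_CASE_RISK.get(use_case)
    if level is None:
        return f"{use_case}: unclassified -- needs case-by-case analysis"
    return f"{use_case}: {level.name} -> {level.value}"

print(obligations("credit scoring"))
# credit scoring: HIGH -> conformity assessment, detailed logs, human oversight
```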

 

Conformity assessments 

Of these risk levels, high-risk systems pose the highest compliance burden, as organizations will have to continue to meet obligations for conformity assessments. A conformity assessment (CA) requires companies to ensure that their “high-risk” systems meet the following:

  • The quality of data sets used to train, validate and test the AI systems; the data sets must be “relevant, representative, free of errors and complete.”
  • Detailed technical documentation.

  • Record-keeping in the form of automatic recording of events.

  • Transparency and the provision of information to users.

  • Human oversight.

This assessment is mandatory before a high-risk AI system is made available or used in the EU market. It ensures that AI systems comply with EU standards, and it must be repeated if there are significant modifications or changes in intended use. The main responsible party for the CA is the “provider,” the entity putting the system on the market. However, under certain circumstances, the responsibility can shift to the manufacturer, distributor, or importer, especially when they modify the system or its purpose.
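As a thought experiment, a provider might track these obligations internally with something like the checklist below. The fields simply restate the five requirements listed above; the structure itself is an assumption, not a format prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class ConformityChecklist:
    """Illustrative pre-market checklist for a high-risk AI system."""
    system_name: str
    data_quality_verified: bool = False       # relevant, representative, free of errors, complete
    technical_docs_complete: bool = False     # detailed technical documentation
    event_logging_enabled: bool = False       # automatic recording of events
    user_transparency_provided: bool = False  # information made available to users
    human_oversight_in_place: bool = False    # trained human reviewers and controls

    def ready_for_market(self) -> bool:
        # Every obligation must be met before placing the system on the EU market.
        return all([
            self.data_quality_verified,
            self.technical_docs_complete,
            self.event_logging_enabled,
            self.user_transparency_provided,
            self.human_oversight_in_place,
        ])

check = ConformityChecklist("credit-scoring-v2", data_quality_verified=True)
print(check.ready_for_market())  # False: four obligations are still outstanding
```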
 

Who performs a CA? 

The CA can be done internally or by an external “notified body.” Internal CAs are common, as providers are expected to have the necessary expertise. Notified bodies come into play particularly when an AI system is used for sensitive applications, such as real-time biometric identification, and does not adhere to pre-defined standards.

During an internal CA, the provider checks compliance with quality management standards, assesses technical documentation, and ensures the AI system's design and monitoring are consistent with requirements. Success results in an EU declaration of conformity and a CE marking, signaling compliance, which must be kept for ten years and provided to national authorities if requested. 

For third-party CAs, notified bodies review the system and its documentation. If compliant, they issue a certificate; otherwise, they require the provider to take corrective action. 
 

How often should you perform a CA? 

Conformity assessment isn't a one-off process; providers must continually monitor their AI systems post-market to ensure they remain compliant with the EU AI Act as it evolves. In cases where a notified body is involved, it will conduct regular audits to verify adherence to the quality management system.
 

Engaging all players in the AI game 

The EU AI Act is not just handing out responsibilities to AI providers; it’s casting its net wider to include various actors in the AI lifecycle, from users to deployers. And its reach is not just limited to the EU; it has global ambitions, affecting entities even outside the EU, thus having implications that are worldwide. 
 

Fines: A significant deterrent 

With the EU Parliament's recent adjustments to the EU AI Act, the fines for non-compliance have seen a hike, now standing at a maximum of 35 million euros or up to 7% of global turnover, whichever is higher. For context, these maximums are 75% higher than the GDPR’s, which caps fines at 20 million euros or 4% of global turnover, underlining the EU’s commitment to ensuring strict adherence to the EU AI Act.
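The “whichever is higher” mechanics are easy to express directly. The sketch below is a simple illustration; the turnover figure is an arbitrary example, not data from any real company.

```python
def max_ai_act_fine(global_turnover_eur: float) -> float:
    """Upper bound: 35 million EUR or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# Example: a company with 1 billion EUR in global turnover
print(f"EUR {max_ai_act_fine(1_000_000_000):,.0f}")  # EUR 70,000,000
```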

 

Charting the course towards regulated AI 

The EU AI Act is a bold statement by the EU, meticulously balancing the fostering of AI innovation with ensuring that the core values and rights of society are not compromised. With the Act now approved and its obligations phasing in, it’s crucial for everyone in the AI space to keep an eye on its implementation.

Whether you’re a provider, user, or someone involved in the deployment of AI, it is imperative to prepare for a future where AI is not just a technological marvel but also a subject of defined legal boundaries and responsibilities. This introduction offers a glimpse into the EU AI Act’s journey and potential impact, setting the stage for the deeper analysis that unfolds in the sections below. So buckle up, and let’s dive deeper into the nuances and implications of the EU AI Act together.
 

AI frameworks: A global perspective 

A landscape in flux: The global heat map of AI frameworks 

The global AI framework landscape underscores the need for more cohesive international rules and standards for AI. The proliferation of AI frameworks is undeniable, calling for enhanced international collaboration to align on at least the crucial aspects, such as a universally accepted definition of AI.
 

Global map showing the different AI regulations and proposals from various major countries.

 

Within the tapestry of the European Union’s legal framework, the EU AI Act is a significant thread, weaving its way towards completion. Concurrently, there’s a mosaic of initiatives at the member-state level, with authoritative bodies across various nations rolling out non-binding guidelines, toolkits, and resources aimed at providing direction for the effective use and deployment of AI. 
 

Efficient future processes through AI

AI promises quicker, more efficient, and accurate processes in various sectors. For example, in insurance, AI has streamlined the assessment process for car accidents, optimizing a process that was once manual and lengthy. This example serves as a testament to AI's potential to significantly improve various aspects of business and everyday life. 

But engaging with AI is a nuanced dance, a careful balancing act between leveraging its unparalleled potential and navigating the associated risks. With its transformative and disruptive capabilities, AI invites cautious and informed engagement. 

Recognizing its transformative power while preparing for the challenges it brings to the table is essential for individuals and organizations alike as they navigate the dynamic landscape of artificial intelligence in the modern age. 
 

Weighing AI’s pros and cons in business 

Risks: Transparency, accuracy, and bias 

Despite its myriad advantages, AI isn’t without substantial challenges and risks. For starters, some AI systems, often perceived as “black boxes,” have been the subject of intense scrutiny and debate over transparency issues. This concern is particularly salient with larger AI systems, such as large language models, where there is little clarity about the training data employed. This raises significant copyright and privacy concerns, which need to be addressed head-on.

Furthermore, the struggle with ensuring the accuracy of AI systems persists, with several instances of erroneous AI responses and predictions documented. Notably, bias that may arise in AI systems—stemming from the prejudiced data they may be trained on—poses a risk of discrimination, requiring vigilant monitoring and rectification efforts from stakeholders involved. 
 

AI as solution: Turning risks into opportunities 

Interestingly, AI isn’t just a challenge; it is also a potential solution to these conundrums. For instance, AI can be leveraged to identify and mitigate biases within datasets. Once these biases are discerned, strategic steps can be taken to rectify them, ensuring that AI can be harnessed optimally to maximize its benefits while minimizing associated risks. 
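As a concrete example of what “identifying bias” can mean in practice, a simple first pass is to compare favorable-outcome rates across a sensitive attribute. The sketch below computes a demographic parity gap on toy data; the records, groups, and outcomes are invented purely for illustration.

```python
from collections import defaultdict

# Toy records: (sensitive_group, favorable_outcome) where 1 = favorable decision.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, favorable = defaultdict(int), defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    favorable[group] += outcome

# Favorable-outcome rate per group, and the gap between best and worst.
rates = {g: favorable[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print(rates)                                 # {'group_a': 0.75, 'group_b': 0.25}
print(f"demographic parity gap: {gap:.2f}")  # 0.50 -- a large gap flags potential bias
```

A gap this large would prompt the strategic rectification steps described above, such as rebalancing or augmenting the training data.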
 

Developing AI governance: The way forward 

Laying the foundations for AI governance 

With the dynamic and complex AI landscape unfolding rapidly, there is an urgent need for legal and privacy professionals to lay the groundwork for robust AI governance and compliance programs. A wealth of existing guidance provides a preliminary roadmap for the essential requirements of such programs, with senior management's endorsement being a pivotal first step in this endeavor.  

Engaging C-suite executives and ensuring they comprehend the magnitude and intricacies of AI's influence is crucial for fostering a culture of AI responsibility throughout the organization. This initiative transcends mere compliance, extending to building trust in AI applications – a cornerstone for successful business operations. 
 

Practical steps towards an AI governance framework 

On the practical front, organizations can adopt guidelines for ethical AI use. These guidelines are aligned with the AI principles of the Organisation for Economic Co-operation and Development (OECD):

 

  1. Transparency: Efforts should be directed towards demystifying AI applications, making their operations and decisions understandable and explainable to users and stakeholders. 

  2. Privacy Adherence: AI applications should respect and protect users’ privacy, handling personal data judiciously and in compliance with relevant privacy laws and regulations. 

  3. Human Control: Especially in high-risk areas, there should be mechanisms for human oversight and control over AI applications, ensuring they align with human values and expectations.

  4. Fair Application: Strategies for detecting and mitigating biases in AI applications should be implemented, promoting fairness and avoiding discrimination.

  5. Accountability: There should be comprehensive documentation and recording of AI operations, allowing for scrutiny, accountability, and necessary corrections. 

 

AI ethics policy: A critical element 

The establishment of AI Ethics Policies, informed by ethical impact assessments, is essential in navigating challenges and making informed, ethical decisions regarding AI use. For example, instead of outright blocking certain AI applications, ethical impact assessments can guide organizations in implementing nuanced, responsible use policies, especially for sensitive data. Ethical considerations should inform every step of AI application, from inception and development to deployment and monitoring. 
 

Inclusive AI governance: A size-agnostic imperative 

Importantly, AI governance is not an exclusive domain of large corporations with extensive resources. With AI use cases proliferating across various sectors, companies of all sizes will inevitably engage with AI, necessitating AI governance frameworks tailored to their specific needs and capacities. 

A few universal principles apply regardless of a company’s size. First, securing executive buy-in and adopting a multidisciplinary approach are imperative for successful AI governance implementation.

Second, organizations should commence with high-level principles as a starting point, even if they are small or merely purchasing ready-made AI models. Training and upskilling employees across various functions, including procurement and technology, is also vital to understand and mitigate the risks associated with AI tools and applications. 
 

Embedding core governance principles 

Six core governance principles need to be embedded into AI governance programs: 

  1. Governance and Accountability: Establishing a structure for accountability, possibly through AI oversight committees or ethics review boards, is essential. Governance should be enforced throughout AI’s lifecycle, from inception to operation.

  2. Human Oversight: Adopting a human-centric approach, with trained human reviewers at various stages, is crucial for ethical AI application. 

  3. Fairness and Ethics Alignment: AI outputs should align with fairness and ethical standards, reflecting an organization’s culture and values. 

  4. Data Management: Implementing robust data management processes, tracking modifications to datasets and mapping data sources, is key for reliable AI systems (see the sketch after this list). 

  5. Transparency Enhancement: Ensuring that AI decision-making processes are transparent and understandable is necessary for building trust and compliance. 

  6. Privacy and Cybersecurity: Addressing legal data processing requirements, conducting privacy impact assessments, and mitigating AI-specific cyber risks are imperative for secure and compliant AI applications. 
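To make the data-management and accountability principles tangible, here is a hypothetical sketch that fingerprints each dataset version and appends an auditable record of its source. The fields and file names are illustrative assumptions, not a standard schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def dataset_fingerprint(path: str) -> str:
    """Hash a dataset file so any later modification is detectable."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_audit_event(log_path: str, dataset_path: str, source: str) -> None:
    """Append an audit entry mapping a dataset version to its provenance."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset": dataset_path,
        "source": source,  # data mapping: where the data came from
        "sha256": dataset_fingerprint(dataset_path),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")

# Hypothetical usage:
# record_audit_event("ai_audit.log", "training_data.csv", "CRM export, 2024-Q1")
```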

Given the pace at which AI is evolving and its profound implications, organizations must proactively develop and implement AI governance programs. By adopting a set of core governance principles and practices, organizations can navigate the AI landscape responsibly, ethically, and effectively. These principles, informed by ethical considerations, legal compliance, and a commitment to transparency and accountability, will guide organizations in harnessing AI’s benefits while mitigating its risks, ultimately fostering trust and success in the AI-driven future. 
 

Value-driven AI governance 

As organizations delve deeper into the realm of AI, developing and implementing AI governance programs aligned with their values is paramount. These governance frameworks should not only ensure compliance with legal standards but also reflect the ethical commitments and values of the organizations.  

Whether it's about making tough trade-offs between transparency and security or deciding on the ethical use of data, a values-driven approach to AI governance provides a reliable compass guiding organizations through the intricate landscape of AI applications and ethics. 
 

Final thoughts and tips on AI governance 

AI, GDPR, and data privacy 

When considering the interaction between AI, the EU AI Act, and GDPR, it’s crucial to acknowledge existing guidance on utilizing AI in line with GDPR. Noteworthy resources include the toolkit provided by the UK’s Information Commissioner's Office (ICO) and the comprehensive guidance and self-assessment guide offered by France's CNIL. These tools offer valuable controls and checklists, assisting organizations in ensuring their AI use complies with GDPR requirements.

A starting point for aligning data usage within AI frameworks with GDPR principles is to conduct diligent Data Protection Impact Assessments (DPIAs) to ensure that all these processes remain compliant.  

AI governance start point: Privacy professionals are well-positioned to serve as orchestrators, bringing together various functions and skill sets within organizations to address AI governance comprehensively. This collaborative approach not only ensures compliance but also functions as a business enabler, fostering a proactive and informed approach to emerging challenges and opportunities in the AI landscape. 

Keep calm and AI: Embrace technological developments with a sense of calm and curiosity. Engaging with the fast-paced and continually evolving field of AI requires a willingness to learn and adapt, acknowledging that understanding and addressing the risks and potentials of AI is a journey rather than a destination. 

Evolution of professional roles: With the continuous changes in technology and data processing, the roles of data protection officers are evolving, potentially transitioning toward “data trust officers.” It’s imperative for professionals in the field to be open to assuming new roles and responsibilities as the technology and regulatory landscape transforms.

 

Here is a five-step plan to get your organization started:

  1. Engage with AI governance programs immediately; proactive engagement is crucial.

  2. Secure management buy-in since AI governance requires a multi-stakeholder, enterprise-wide approach.

  3. Assemble a diverse and skilled team, encompassing legal, compliance, data science, HR, information security, and possibly external experts.

  4. Prioritize, set realistic and achievable goals, and consider adopting a phased approach to AI governance. 

  5. Stay abreast of AI developments, actively engage with industry peers, and participate in AI governance initiatives to foster a collaborative and informed community. 

With the evolving landscape of AI, organizations must proactively engage with AI governance. A collaborative, multi-stakeholder approach is necessary to address the complex challenges and opportunities presented by AI.  

 

To learn more about how AI Governance can help your organization, request a demo today.

