AI for Essay Writing — Exploring Top 10 Essay Writers

Sumalatha G


Let’s admit it — essay writing is quite a challenging task for students. With stringent deadlines, conducting research, writing, editing, and addressing back-and-forth reviews consumes a whole lot of time and often becomes stressful. Therefore, students are always on the lookout for tools that speed up the essay writing process.

And that’s when AI writing tools make their debut! Using the best AI for essay writing makes students’ lives much easier by automatically generating essays for them.

The rise in the popularity of artificial intelligence technology and deep learning has paved the way for the numerous AI writer tools available today. To help you understand the different types of AI tools and their benefits, we’ve uncovered the features of the top 10 AI essay generators in this article.

Let’s explore the tools and learn how they are transforming the tedious task of essay writing!

What is essay writing?

Essay writing is a part of academic writing that emphasizes formulating an idea or argument. The main objective of an academic essay is to present a well-reasoned argument or idea. Evidence, analysis, and interpretation are its three major components. An essay should have a logical structure that supports its argument or idea so that it communicates clearly and concisely.

What is an AI essay writer?

An AI essay writer is a tool designed to help students generate essays using machine learning techniques. It can be used to generate a full essay or just parts of one, such as the title, introduction, or conclusion.

Why should researchers use AI essay generators?

There are many benefits to using AI tools for writing unique essays, especially for researchers and students. Here are a few of them —

1. Saves time

Using the best AI for essay writing has its own benefits. Students can focus on the research process while these AI tools write the essays for them. Be it an essay topic or a full-length essay, it saves students a great deal of time.

2. Boosts productivity

Writing is a tedious task, especially when you have to write an essay on a novel topic: writer’s block starts haunting you and your productivity suffers. With AI, it’s the other way around; it boosts productivity by quickly generating essays for you.

3. Enhances writing skills — Vocabulary and Style

Adopting the best AI essay writing tool not only helps with creating essays but also helps hone your writing skills by giving proper suggestions about grammar, sentence structure, tone, style, and word choice.

4. Reduces stress

Students often face a lot of pressure and stress because of deadlines and submissions. The best AI essay generators help you write essays smarter, reducing stress and fear in no time.

5. Facilitates multidisciplinary research

AI essay writing tools foster interdisciplinary study through their ability to scan and combine knowledge from multiple domains. That way, they help you quickly get a grasp of new subjects or topics without heavy lifting.

6. Cost-effective

Most AI essay writing tools are priced affordably and also offer student discounts, so using them is a cost-effective approach as well.

The Top AI Essay Writing Tools and Their Features

Several AI essay writers are available, depending on the type of essay you want to generate. Now, let's quickly look at the top 10 AI writing tools that can generate essays within just a few minutes.

1. PerfectEssayWriter.ai


It is one of the best AI tools for essay writing: it not only creates an essay but also offers advanced features, including plagiarism detection, auto-referencing, and contextual analysis. As a result, it generates coherent essays that are well researched and properly cited. It is best recommended for creating academic essays and essay outlines.

How does PerfectEssayWriter work?

  • Pick the right tool for your purpose — Go with an essay writer if you want to generate a full essay or choose the essay outliner if you want to create just the outline of the essay.
  • Enter your specific conditions and preferences. Add essay topic, academic level, essay type, number of pages, and special instructions, if any.
  • Click on “generate” and wait for the result
  • Once you have the essay generated, you can review, edit, or refine it and then download it.
  • Generates long outputs of up to 2,000 words
  • Output is provided within 90 seconds
  • Provides a plethora of other tools, such as a citation generator, grammar checker, thesis statement generator, and more
  • Comes with 10+ essay writing templates
  • Subscription-based, not a free tool
  • Human review is mandatory

2. Essaybot - Personalized AI writing


Essaybot is a product of the reputed online essay-writing service MyPerfectWords. It is meant to enhance academic essay writing and streamline students' tasks. Its user-friendly website makes essay generation instant and hassle-free, saving students a lot of time and effort.

How does Essaybot work?

  • Enter the essay title or topic
  • Click on “start writing” and wait for it to generate a well-reasoned essay.
  • The tool is free to use
  • No sign-up is required
  • 100% unique and high-quality output
  • Very limited features that lack advanced functionality

3. FreeEssayWriter.net


FreeEssayWriter is an organization that provides essay-writing services to students worldwide. It has an AI essay typer tool that helps you generate essays instantly. What sets this essay typer apart is its initiative to help students for free: it provides a two-page essay at no cost.

How does FreeEssayWriter.net work?

It works similarly to Essaybot: input the title or topic of your essay and wait for it to generate the essay. There is also an option to edit and instantly download a free version of the generated essay.

  • Provides high-quality essays and is considered a reliable and trusted source of information
  • Students can improve their writing skills and learn more about essays by referring to its free essay database and sources
  • Priority customer support is available 24/7
  • The site is not optimized for mobile devices
  • The quality of the essay output could still be improved

4. MyEssayWriter


This AI essay writing tool is no exception in terms of generating a high-quality essay. You can generate essays for various topics depending on the background of your research study. Be it academic or non-academic essay writing, this tool comes in handy.

How does MyEssayWriter work?

Add your preferences and then click on generate. It will give you a high-quality and 100% unique essay crafted based on your requirements.

  • The tool comes for free — no subscription is required
  • Known for its consistency in the quality and tone of the essay output
  • Also has a paid custom writing service that provides human-written essays
  • Might not provide quality output for complex or highly technical keywords or topics

5. College Essay AI


College Essay AI stands out among AI writing tools because it not only uses an AI-based algorithm to generate essays but also backs up the output with review and approval by a team of professional experts. It is the best AI essay writing tool for college and graduate students, as the output adheres to academic essay writing guidelines.

How does the College Essay AI generator work?

  • Input the required information — essay topic, academic level, number of pages, sources, and specific instructions, if any.
  • Click on “generate essay” and wait for the output
  • Conduct plagiarism and grammar check
  • Download the essay
  • High-level output for academic essay writing
  • Pocket-friendly premium plans
  • Doesn’t provide multiple sets of templates
  • Not quite suitable for non-academic essay writing

6. Jasper AI


Jasper AI is one of the oldest players in the game of AI content writing. Its features have since been expanded with natural language processing algorithms, which is how it now helps students write their essays as well. That said, Jasper is best suited to non-academic writing projects like content writing or creative writing.

How does Jasper AI work?

  • Choose a template — if you are about to write an essay, go with the “document”
  • Add your preferences
  • Click “compose” and get the output
  • Generates the essays instantly
  • Provides well-structured output according to the tone and style of your preferences
  • Not quite suitable for academic essays

7. Textero AI


Textero AI provides a few writing tools for students that support their various academic papers and writing projects. Its essay generator helps you generate ideas for a full-length essay based on your topic and also suggests new topic or thesis statement ideas for your academic assignments.

How does Textero AI work?

  • Click on “Essay Generator” on the left-hand side
  • Input the title and description based on which you want to generate the essay
  • Pick the right citation style
  • Click “generate” and wait for the output
  • It also provides other tools, like an outline generator and a summary generator, and has an AI research assistant that answers all your questions relevant to your research
  • The output is 100% unique, plagiarism-free, and error-free
  • Might fail to produce an essay focused on complex or technical topics

8. Quillbot


Though Quillbot is essentially built for paraphrasing and summarizing, it comes to the rescue when you have to revamp, improve, or refine an already-composed essay. Its Co-Writer helps you transform your thoughts and ideas and make them more coherent by rephrasing them. You can easily customize your text using the available customization options.

How does Quillbot Paraphraser work?

  • Import or copy the content
  • Click on “Paraphrase,” “Summarize,” or “Suggest text” based on your requirement
  • Make the required customizations and save the document.
  • Offers a plethora of tools required for students
  • Both free and premium plans are available
  • Enhances vocabulary and language skills
  • Limited customization options with the free plan
  • Only supports the English language

9. SciSpace Paraphraser


SciSpace Paraphraser is the best AI tool for fine-tuning your essay. If you feel your essay needs AI suggestions to improve its language, vocabulary, writing style, and tone, SciSpace is at your rescue. It has more customization options than Quillbot and improves your essay by rephrasing it according to your required or preferred writing style and tone, making it a very good alternative to Quillbot.

How does SciSpace Paraphrasing work?

  • Simply paste your content into the editor
  • Choose the desired length and variation
  • Select the language
  • Click “Paraphrase”
  • Has 22 custom tones, all of which are available even on the free plan
  • Supports 75+ languages
  • Comes with an AI-detection report for English paraphrase output
  • Output can be slow at times

10. ChatGPT


It would be unfair to talk about AI tools and not include ChatGPT. When it comes to automated essay writing, ChatGPT is no small player. With proper prompts, you can automate the essay writing process and generate a well-crafted, coherent essay (a minimal API-based sketch follows the steps below). However, the quality and accuracy cannot be fully trusted, as the model hallucinates and doesn't include sources.

How does ChatGPT work?

  • Create a prompt based on your requirements
  • Ask ChatGPT to write an essay on your topic, specifying conditions and preferences
  • Press Enter and wait for the essay
  • Comes for free
  • The output cannot be fully relied on, as the model hallucinates
  • Lacks the upgraded features that other essay-writing tools have
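
The steps above use the chat interface. If you'd rather script the same workflow, the underlying models are also reachable through OpenAI's API. The sketch below is a minimal, illustrative example using the official `openai` Python SDK (v1.x); the model name, prompt wording, and word target are assumptions for demonstration, and the same caveats about hallucination and missing sources apply to API output.

```python
# Minimal sketch: generate an essay draft via the OpenAI API (openai>=1.0).
# Assumes OPENAI_API_KEY is set in the environment; the model choice is illustrative.
from openai import OpenAI

client = OpenAI()

topic = "The impact of social media on academic performance"  # hypothetical topic
prompt = (
    f"Write a well-structured, roughly 500-word academic essay on: {topic}. "
    "Include an introduction, two body paragraphs with evidence, and a conclusion."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a careful academic writing assistant."},
        {"role": "user", "content": prompt},
    ],
    temperature=0.7,
)

draft = response.choices[0].message.content
print(draft)  # review, fact-check, and add citations before using the draft
```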

Concluding!

Writing essays can be a real struggle. But the arrival of AI essay-generation tools makes the entire writing process a lot easier and smoother. However, you should be vigilant when relying on these tools and consciously use them only as a technological aid, because over-reliance on them can diminish students' writing skills and leave users dependent on the tools. So, use them wisely, without letting them affect your knowledge and skills.

You can explore the above tools whenever you need help with essay writing and reap their benefits without compromising the quality of your writing.

And if you're stuck exploring multiple research papers or want to conduct a comprehensive literature review, you know which tool to use: SciSpace Literature Review, our AI-powered workspace designed to make your research workflow easier. Plus, it comes with SciSpace Copilot, our AI research assistant that answers any question you may have about a research paper.

If you haven't used it yet, you can try it here!

FAQs

Choosing the best AI for writing long-form essays depends on your requirements. Here are the top 5 tools that help you create long-form and college essays —

1. Free Essay Writer AI

2. College Essay AI

3. My Essay Writer

4. Textero AI

5. Perfect Essay Writer

The Perfect Essay Writer AI and Textero AI are the two best AI essay generators that help you write the best essays.

ChatGPT is not specifically built to assist you with essay writing; however, you can use the tool to create college essays and long-form essays. It's important to review and fact-check the essay and to cite sources properly.

Essaybot is a free AI essay generator tool that helps you create a well-reasoned essay with just a click.

Unless your university permits it, using AI essay generators or writing tools to write your essay can be considered plagiarism.


Best AI writer of 2024

Use the best AI writers to create written content quickly


The word "AI" has been named the word of the year. ChatGPT has made waves since its introduction in late 2022. With every new iteration of this powerful tool, users have found innovative ways to use it to simplify and speed up their work.

Now, there are many AI writing assistants on the market, competing with ChatGPT to become the king of AI-powered writing tools. These new tools aim to simplify the writing process by generating long-form content, researching keywords, creating images from text, and more. Many bloggers are using these tools to improve their content and save time. 

However, there are some drawbacks to using AI writers. The content generated may require additional editing to ensure it's polished and accurate. AI-generated content may also lack the unique voice and style a human writer can provide. Despite this, the benefits of using AI writers, such as cost and time savings, often outweigh these minor drawbacks.

In conclusion, AI writers are an excellent solution for creating high-quality content without spending countless hours or breaking the bank. With numerous writing tools available today, content creation can be easily sped up and simplified. If you're interested in trying one of these tools, we've got you covered with our list of the best AI writers of the year. Check it out!

The best AI writers of 2024 in full:


See how our top picks compare in the following analysis and reviews as we discuss reasons to subscribe, reasons to avoid, our test results, and what we liked most about each AI writing platform.

The best AI writer overall


1. GrammarlyGO


✔️   You also need a grammar editor: Getting help from an AI writer is even better with one that also helps you with grammar, like Grammarly. 

✔️ Need to use it across multiple apps: With Grammarly installed on your computer, you instantly gain access to it across your favorite apps like Microsoft Word and other word processing packages. 

✔️ Want something that's easy to use — once you learn how to do it:  Once you understand how GrammarlyGO works, it's going to make your life easier and assist you in making your text better. 

❌  You're on a strict budget: No doubt, Grammarly itself is expensive, especially if you only want to pay for it on a month-to-month basis. 

❌  You don't need a grammar tool: Maybe this is overkill, depending on your situation. 

❌ You need social networking-specific tools:  GrammarlyGO doesn't offer these types of tools, at least for now. 

Even in beta, GrammarlyGO is an excellent addition to an already powerful Al-based writing tool. You can use it to become a much better writer in just a few steps. 

Check out our in-depth GrammarlyGo review for a closer look at the service, and see why it's our top AI writer pick for 2024.

Grammarly has been a reliable companion for writers, helping them polish their craft by providing suggestions that make their writing clearer, more engaging, or more effective. And now, they've taken it up a notch by introducing GrammarlyGO, a feature-packed add-on that comes bundled with the rest of the software.

To get the most out of GrammarlyGO, you must understand how Grammarly works, as the two are closely intertwined. When using GrammarlyGO, it's essential to remember the number of prompts available to you, which varies depending on the plan you choose. Whenever you ask Grammarly to improve your text, one prompt is used, regardless of whether you use the suggestion. Grammarly Free users get access to 100 prompts per month, while Grammarly Premium subscribers receive 1,000 prompts per month. For Grammarly Business users, the number goes up to 1,000 prompts per user every month.

GrammarlyGO's AI features are classified into five categories: ideate, compose, reply, rewrite, and personalize. The ideate feature helps you generate article ideas that are exciting and thought-provoking. With Grammarly's assistance, you can develop topics like "Five ways to motivate employees" or "Name five great topics about fall" that can capture the reader's attention and spark their interest.

The compose feature is perfect when you want Grammarly to help you write something from scratch. For instance, you could ask Grammarly to help you write an announcement about your engagement or a cover letter for a new job. The more information you provide, the better the results, and GrammarlyGO can help you create a masterpiece with its advanced suggestions and insights. Adding details like the name of your fiancé and the engagement date can make your announcement even more unique. In contrast, information about the job you're applying for can make your cover letter more effective.

If you're unsure how to respond to a message, Grammarly's reply feature can save the day. You can ask Grammarly to answer questions like "What should I say to Brent about the new job?" or "How can I congratulate Tom and Becky on their upcoming nuptials?" With Grammarly's superior writing skills, you can craft an engaging, effective, and impressive response.

With these incredible features, GrammarlyGO can help you take your writing to new heights and unleash your creativity.

Please read our full GrammarlyGO review.


2. Anyword

✔️ You are a marketer: Anyword is the best AI writer for marketers on the planet. It helps you create content based on your company's "voice," and learns as it goes. 

✔️ You enjoy trial and error:  There's no "right" answer when it comes to AI text generation. Anyword makes it easier to tweak text once or unlimited times to help you get the text perfect for your needs. 

✔️ You need to generate unlimited words:  There are limits elsewhere, but Anyword doesn't limit how many words it will generate on a monthly basis. 

❌  You want app integration: You'll need to copy and paste text from your favorite word processor to Anyword, which can get annoying for some. 

❌ You want a free plan:  Once you exhaust your trial, you'll need to pick a paid plan to continue. 

❌  You aren't a marketer:  The heavy marketing focus can't be avoided. 

Anyword offers a slick and easy-to-use interface. In mere moments, you'll be able to create excellent content that caters to your intended audience.

Check out our in-depth Anyword review for a closer look.

Are you tired of struggling to generate creative marketing copy or unsure how to optimize your existing content? Anyword is an innovative cloud-based writing tool that can help you overcome these challenges and achieve exceptional results. With Anyword's intuitive interface and user-friendly tools, you can generate, test, and optimize your copy in ways you never thought possible.

With unlimited words in each plan, Anyword gives you plenty of space to craft compelling content that truly resonates with your audience. Its advanced AI-powered algorithms can analyze your existing marketing materials and produce multiple variations of your copy, enabling you to compare and contrast different solutions and select the best option for your brand.

One of the key features of Anyword is its Brand Voice function, which enables businesses to establish a consistent identity across all their marketing materials. Whether you're refining your tone of voice, targeting your audience, or building out your messaging bank, Anyword can help you create copy that accurately reflects your brand's personality, tone, and style.

Additionally, Anyword seamlessly integrates with various platforms, from Hubspot to LinkedIn Ads, enabling you to use copy intelligence to enhance the quality of your future content. By analyzing the performance of your existing materials, Anyword empowers you to make informed decisions about optimizing your messaging for even better results.

But that's not all - Anyword's advanced algorithms can also analyze the performance of your competitors' marketing materials, giving you insights into their copy and enabling you to create content that sets you apart from the competition.

In summary, Anyword can help you unlock your creativity and produce exceptional marketing copy that resonates with your audience. With its range of user-friendly tools, advanced AI-powered algorithms, and seamless integration with various platforms, Anyword is the perfect writing tool for businesses looking to enhance their marketing efforts.

Please read our full Anyword review.


3. Articleforge

✔️ You use WordPress heavily: Articleforge works great with WordPress; get started in just a few steps. 

✔️ You want package customization:  The more you're willing to pay, the more words you can generate each month. It's flexible. 

✔️ You need marketing-based tools:  Offers SEO optimization, content in bulk, and more. 

❌  You don't want to self-edit : Articleforge may require more post-generation editing than other options, which could slow you down.

❌ You don't want to deal with duplicates:  Yes, sometimes Articleforge repeats suggestions. 

❌  If you don't want to spend more for a monthly subscription: It's cheaper to buy this on a yearly basis, and that might not be a commitment you're willing to make. 

Articleforge utilizes deep learning and AI to improve content over time, though heavy editing and fact-checking are often necessary.

Check out our in-depth Articleforge review for more information.

Articleforge is a tool that can greatly help speed up the writing process for users. It is a valuable resource for those who need to generate content quickly but do not have the time or resources to do so themselves. However, it is important to note that it is not a replacement for an experienced writer or editor. While it can provide recommendations for titles and automate SEO and WordPress publishing, it is not designed to produce ready-to-publish content.

One of the benefits of Articleforge is that it is available in seven languages, making it ideal for international blogs or multi-language sites. Users simply need to enter a few keywords and the topic they want to cover, and the platform will generate content in under 60 seconds. This generated content will use the provided keywords and cover the desired topic.

In addition to its quick and easy content generation, Articleforge also offers various integrations, including MS Word and WordPress integrations. The platform also provides integrations with other software systems like SEO AutoPilot, CyberSEO, RankerX, SEnuke TNG, and more. These integrations are easy to use, thanks to the API key offered by the platform.

While the platform is a convenient tool for creating content, its output quality falls short of expectations. As with any automated system, fact-checking is necessary for the majority of the content offered by the platform. The platform does offer better output quality with customized input. However, extensive testing is required, which can quickly exhaust the "word meter."

Overall, Articleforge can significantly expedite the writing process and help overcome writer's block. It is advisable to test it out and find a balance between the platform and tailored input to yield superior results. While it should not be relied upon as a sole source of content, using it in conjunction with other resources can help users quickly generate high-quality content.

Read our full Articleforge AI writer review.

4. Jasper

✔️ You want a great plagiarism checker: You want to create original content, right? This checker makes sure that's true. 

✔️ You need to create content in multiple languages:  Some folks are writing content in multiple languages and Jasper lets you do this. 

✔️ You want access to a lot of features:  The team behind this solution tends to add new features on a regular basis without raising the price. That's sweet. 

❌  You don't have time to learn: Any AI writer takes time to learn. Jasper takes a little bit more time. 

❌ If you don't want to spend a lot:   Some folks might not feel the Creator package is enough and the next one is expensive. 

❌  If you aren't willing to edit a lot: Some of the content Jasper creates is better than others. Some extra editing is key. 

Jasper is a platform that is highly customizable and comes with a user-friendly interface. All the necessary tools that you need are just a click away. The platform’s content generation capabilities are hidden behind easy-to-reach templates, making it an easy-to-use tool.

Check out our review of Jasper to learn more about the AI writer.

Jasper is a powerful platform that leverages natural language processing (NLP) algorithms to analyze data from various sources across the web. With this ability, it can generate content ideas based on the relevant data you input, such as keywords, topics, and more. Jasper can help create a wide range of content formats, including blog posts, social media content, emails, and much more.

One of the most significant advantages of Jasper is that it has a tone feature that allows you to assign a corresponding tone to the copy you create if you’re targeting a specific persona. This feature is incredibly useful because it helps ensure that the content you create resonates with your target audience. Additionally, Jasper can help you improve your content through optimization recommendations, which can be accessed through numerous templates available on the platform.

If you’re short on time and need to create content quickly, Jasper is the perfect solution. It can offer content in mere seconds with just a few clicks and some input. With Jasper, you can save time and increase productivity, allowing you to focus on other essential tasks.

Another unique feature that Jasper offers is the “Boss Mode” feature, which allows you to write 5x faster. With this mode, you can give commands to Jasper, and it will do all the work for you. You can even write a complete book using this mode in just minutes. This feature is incredibly useful for writers who need to create a large volume of content in a short amount of time.

In the Boss Mode, you can also use pre-built recipes offered by the Jasper team such as “Write about (keyword),” but you also have the option of creating your own, which can be incredibly helpful if you have specific requirements for your content. By automating your writing process, you can save time and focus on other critical tasks.

Read our full Jasper review.

The best free version

5. CopyAI

✔️ You want a free plan for minor work:  If you're okay with only generating 2,000 words per month, there's a free plan for that. 

✔️ You need unlimited word generation: You can create unlimited words each month with all the paid plans. 

✔️ You want multiple tools:  New features are added often, making the product even better. 

❌  You are a marketer:  There are better options if you primarily need to write marketing copy. 

❌  You aren't willing to learn:  Here's another option that is a little bit harder to learn, at least initially. 

❌ You need app integration:  Expect to stay on the CopyAI website to get your work done, which requires copy and paste. 

CopyAI helps writers create high-quality copy with the power of AI, saving time and producing better content that resonates with your audience.

Check out our in-depth CopyAI review to see whether it's the tool for you.

CopyAI has been making waves in the world of AI writing tools, becoming a favorite among users who want to create high-quality content that can help them stand out in today's crowded digital space. The tool offers a wide range of options that allow users to get started and take their writing to the next level, from exploring various writing templates, settings, and features on the user dashboard to creating a compelling copy in minutes.

The user dashboard serves as the creative command center for CopyAI users, providing a user-friendly and intuitive interface that makes it easy to navigate through different features. From here, you can quickly access various options that can help you create content that resonates with your audience, whether you need to write a blog post about travel or an email to a potential client.

The chat feature is the default option that acts as a blank canvas to help generate inspiration. The brainstorm feature allows you to create copy such as "ten catchy Twitter headlines on holiday shopping," "the best Facebook headlines for marketing professionals," and more. If you're struggling to come up with a topic, don’t worry. CopyAI’s chat function provides prompt templates to give you a head start. These templates cover various topics, including content creation, SEO, email marketing, social media, PR and communication, sales, and strategy.

Moreover, you can create custom templates that cater to your specific needs. Each template in the collection provides various options, so whether you need a headline generator, a step-by-step guide, or a product description, CopyAI has you covered. You can even use the "rewrite content" option to enhance your written content, making it more engaging and effective.

To fully personalize your experience with CopyAI, you should create one or more brand voices. This process involves providing text that accurately describes you or your company's unique voice. This text should be between 50 and 500 words and can come from various sources such as blog articles, social media posts, website copy, marketing emails, and more. This allows CopyAI to tailor its AI-powered tools to better suit your brand's needs, making it easier for you to create content that resonates with your target audience and helps you achieve your goals.

In summary, CopyAI is a powerful AI writing tool that offers a wide range of features and options to help you create high-quality content. From the user dashboard to the chat and brainstorming features, CopyAI provides a user-friendly and intuitive interface that makes it easy to create compelling and engaging content. With custom templates and brand voices, you can personalize your experience with CopyAI and create content that resonates with your target audience, helping you stand out in today's crowded digital space.

Read our full CopyAI review.

The best for short-form


6. Writesonic

✔️ You aren't tech savvy:  It's one of the easiest-to-use solutions on the market. 

✔️ You need lots of languages:  Writesonic supports 25 languages and counting. 

✔️ You're a freelancer:  There's a package just for you.

❌  You want to pay a lower price:  Some have criticized Writesonic for being too expensive. That's true with the Smart Team options.

❌  You need more third-party integrations:  It doesn't really play nice with other software tools, which could add some time to your work. 

❌ You don't like tackling a learning curve: Like others on this list, there's a slightly higher learning curve with this one.

Writesonic is an expansive AI writing platform with an intuitive interface and versatile templates for all content creation scenarios.

Check out our in-depth Writesonic review to see if this is the AI writing tool for you. 

Writesonic is an innovative content creation platform that provides users with various features and tools to generate high-quality, engaging content. One of the most impressive features of Writesonic is its versatility - it supports over 25 languages, including English, French, Italian, German, Japanese, Chinese, and more. This is a significant advantage for businesses that operate in multiple regions and need to produce content in different languages.

Another notable feature of Writesonic is its tone of voice customization tool. Users can choose from various tones, including "Excited," "Creative," and more. This feature adds a unique twist to the content, tailoring it to the user's needs. However, it's essential to note that the tone of voice feature may require additional editing to ensure the content is clear and coherent.

Regarding website copywriting, Writesonic provides users with an impressive range of tools. It can create landing pages, SEO meta descriptions, and feature-rich headers and subheaders. Additionally, it allows users to create social media ads on Facebook, LinkedIn, and Google . Content creators can also benefit from Writesonic's blog writing, point expansion, and text analysis tools, which can rephrase the content and improve its readability.

The platform also offers e-commerce copy creation and popular copywriting formulas, such as the "AIDA" and "Pain-Agitate-Solution" formulas. These formulas are tried and tested approaches to writing compelling, persuasive content that resonates with readers.

However, it's essential to note that the content generated by Writesonic may require significant editing from the user. While the platform does an excellent job of generating content, it's essential to ensure it's clear, coherent, and tailored to the user's needs. If you're looking for a completely hands-off approach to content creation, Writesonic may not be the best option. However, for businesses that need to generate high-quality, engaging content quickly, Writesonic is an excellent choice.

Please read our full Writesonic review.


7. AI-Writer

✔️ You want a cheap package: You can get an AI writer for as little as $19/month. 

✔️ You need clear sourcing:  No better solution is available for providing sources for all text provided. 

✔️ You want an easy solution:  A very low learning curve exists. 

❌  You want app integration:  There's seamless integration with WordPress, but that's about it. 

❌  You need marketing-specific tools:  Sorry, this one doesn't have it. 

❌ You want unlimited word generation: If you need to generate more than 120 articles per month, this is a very expensive solution. 

AI-Writer is a unique word-generating tool that simplifies text creation and editing through an intuitive user interface. However, it may not be suitable for everyone.

Check out our in-depth AI-Writer review to see if it's the solution for you. 

AI-Writer is a word generator that is easily accessible and is popular among freelancers and bloggers. It may not be as extensive as Anyword or CopyAI, which are primarily aimed at marketing and sales, but it serves its intended audience well. 

One of the most significant advantages of AI-Writer is its unparalleled sourcing capabilities. It is the only AI content generator that cites sources for "everything it writes." Additionally, it updates its sources frequently, ensuring that any article generated by it sources the latest information on the subject. This is a significant advantage, particularly for those who are writing about current events or trending topics. With AI-Writer, writers can create content that is well-researched and accurate, without having to spend hours scouring the internet for sources. 

Another significant advantage of AI-Writer is that it recognizes that not all types of content require sourcing. For instance, op-eds or personal essays don't usually require sources. AI-Writer recognizes this and hides sources and links from the main results page, making it easier for writers to focus on their content and not worry about sources.

One of the drawbacks of using AI-Writer is that its extensive sourcing process can cause a minor delay in generating results compared to other AI writing tools. Although this may not be a significant issue, the noticeable delay should be mentioned. For instance, alternative tools like GrammarlyGO offer results without hesitation. However, it is worth noting that the issue with AI-Writer is primarily due to its thorough sourcing process. 

In conclusion, AI-Writer is an excellent tool for freelancers and bloggers who want to create high-quality content. It excels in sourcing capabilities, making it a go-to tool for writers who need well-researched and accurate content. Its ability to recognize when sourcing is not required is also an added advantage. While it may not be the fastest tool on the market, its thorough sourcing process is worth the wait.

Read our full AI-Writer review.

TechRadar's AI writer rankings

Numerous AI writing solutions are already available in the market, and we can expect more to arrive in the future. Have a look at our rankings of popular services below, and also check out the honorable mentions that currently can’t compete with the top services available.

What is an AI writer?

An AI writer is a revolutionary tool, capable of creating text and content without human help; it utilizes algorithms and machine learning to generate various AI content. From data-driven, high-value pieces to conversion-focused content perfect for marketing campaigns, AI writers can easily create just about any content.

As AI writing assistants gain exposure to various forms of real-world information, they gain proficiency in generating natural-sounding output. With their data coming from human sources, the output created also has a human-like quality. Much like how humans rely on existing content to craft something new, AI content tools scour the web for relevant data to fulfill the user's instructions, thus creating original content.

This, in a nutshell, explains what AI writing is and how it functions.

How to choose the best AI writer for you?

Let’s get this out of the way: whichever platform you choose, you will have to do some editing if you want to create useful texts.

Here are some of the factors you should consider when choosing the right tool for you: 

1. Ease of use

Tools that don’t require technical knowledge or prior experience should be on the top of your list. Investment in good UI means that other aspects of the tool are also likely to be of a higher quality.

2. Price

While this may not be popular with everyone, price should play a major role in deciding which tool to go for. Some are simply overpriced while not offering much more than their lower-priced competition. Pay attention to the amount of content each price plan offers.

3. High-quality output

Despite the fact that, in the end, you will be editing the texts and images the tool creates, having a tool that creates high-quality content will mean less time spent on fixing mistakes and editing.

4. Integrations

If you’re running a blog or business and have been doing so for some time, you probably have your set of tools that you use for writing content. Making sure that the new AI platform syncs well with your existing toolbox can be essential for how long and how well you utilize the AI tool.

The list above is not exhaustive, but does offer a great starting point in your quest to find the best AI writing tool for your needs.

How we test

During our assessment, we’ve evaluated various aspects such as the number of writing templates, categories, recipes, number of languages supported, grammar checkers, etc. Our goal was to create an extensive list of AI writing assistants that offer much more than simple rewording features.

We tested the overall capabilities of the AI software, the tool's interface and ease of use, monthly article limits, SEO optimization features, and pricing, among other aspects.

In addition, we gave each platform a test article to write for us (a simple topic) and checked its sentence structure and content relevance.

Read more on how we test, rate, and review products on TechRadar.


The best AI writing generators

These 7 AI writing tools will take your content to the next level.


Of course, all AI writing software needs human supervision to deliver the best results. Left to its own devices, it tends to produce fairly generic and frequently incorrect content, even if it can pass for something a human wrote. Now that AI tools are increasingly popular, people also seem more aware of what bland AI-produced content reads like and are likely to spot it—or at least be suspicious of content that feels like it lacks something.

I've been covering this kind of generative AI technology for almost a decade. Since AI is supposedly trying to take my job, I'm somewhat professionally interested in the whole situation. Still, I think I'm pretty safe for now. These AI writing tools are getting incredibly impressive, but you have to work with them, rather than just letting them spit out whatever they want.

So, if you're looking for an AI content generator that will help you write compelling copy, publish blog posts a lot quicker, and otherwise take some of the slow-paced typing out of writing, you've come to the right place. Let's dig in. 

The best AI writing software

Jasper for businesses

Copy.ai for copywriting

Anyword for assisting you with writing

Sudowrite for fiction

Writer for a non-GPT option

Writesonic for GPT-4 content

Rytr for an affordable AI writer

How do AI writing tools work?

Search Google for AI writing software, and you'll find dozens of different options, all with suspiciously similar features. There's a big reason for this: 95% of these AI writing tools use the same large language models (LLMs) as the back end.

Some of the bigger apps are also integrating their own fine-tuning or using other LLMs like Claude . But most are really just wrappers connected to OpenAI's GPT-3 and GPT-4 APIs, with a few extra features built on top—even if they try to hide it in their own marketing materials. If you wanted to, you could even create your own version of an AI writing assistant without code using Zapier's OpenAI integrations —that's how much these apps rely on GPT.

See how one writer created an AI writing coach with GPT and other ways you can use OpenAI with Zapier.

Now this isn't to say that none of these AI-powered writing apps are worth using. They all offer a much nicer workflow than ChatGPT or OpenAI's playground , both of which allow you to generate text with GPT as well. And the better apps allow you to set a "voice" or guidelines that apply to all the text you generate. But the difference between these apps isn't really in the quality of their output. With a few exceptions, you'll get very similar results from the same prompt no matter which app you use—even if they use different LLMs. Where the apps on this list stand out is in how easy they make it to integrate AI text generation into an actual workflow.
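
To make the "thin wrapper" point concrete, here is a rough, hedged sketch of what many of these apps plausibly do behind their editors: store a brand voice, slot your inputs into a prompt template, and forward it all to the GPT API. It uses the official `openai` Python SDK; the model name, voice text, and template are illustrative assumptions, not any particular vendor's implementation.

```python
# Sketch of a "wrapper" AI writing app: brand voice + prompt template + GPT API call.
# Assumes the official openai SDK (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

BRAND_VOICE = "Casual, conversational, lightly humorous, short sentences."  # saved per user

def generate_copy(template: str, **fields: str) -> str:
    """Fill a prompt template with user inputs and send it to the model with the brand voice."""
    prompt = template.format(**fields)
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; wrappers can swap models without changing their UI
        messages=[
            {"role": "system", "content": f"You are a copywriter. Voice guidelines: {BRAND_VOICE}"},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# A "template" like the ones these apps expose as forms in their UI.
print(generate_copy(
    "Write a {length}-word product description for {product}, aimed at {audience}.",
    length="80",
    product="a reusable coffee cup",
    audience="commuters",
))
```

Almost everything distinctive in these products lives in the pieces around that single API call: the saved voice, the form-like templates, and the editing workflow.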

As for the underlying LLM models themselves, they work by taking a prompt from you, and then predicting what words will best follow on from your request, based on the data they were trained on. That training data includes books, articles, and other documents across all different topics, styles, and genres—and an unbelievable amount of content scraped from the open internet . Basically, LLMs were allowed to crunch through the sum total of human knowledge to form a deep learning neural network—a complex, many-layered, weighted algorithm modeled after the human brain. Yes, that's the kind of thing you have to do to create a computer program that generates bad poems . 
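
If you want to see that next-word prediction directly, the sketch below uses the open-source Hugging Face `transformers` library with the small GPT-2 model (an assumption for illustration; commercial tools run far larger proprietary models) to print the most likely next tokens for a prompt.

```python
# Peek at next-token prediction with a small open model (GPT-2 via Hugging Face transformers).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The best way to start an essay is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the vocabulary for the token that would come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```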

If you want to dive more into the specifics, check out the Zapier articles on natural language processing and how ChatGPT works . But suffice it to say: GPT and other large language models are incredibly powerful already—and because of that, these AI writing tools have a lot of potential. 

What makes the best AI text generator?

How we evaluate and test apps.

Our best apps roundups are written by humans who've spent much of their careers using, testing, and writing about software. Unless explicitly stated, we spend dozens of hours researching and testing apps, using each app as it's intended to be used and evaluating it against the criteria we set for the category. We're never paid for placement in our articles from any app or for links to any site—we value the trust readers put in us to offer authentic evaluations of the categories and apps we review. For more details on our process, read the full rundown of how we select apps to feature on the Zapier blog .

We know that most AI text generators rely on the various versions of GPT, and even those that don't are using very similar models, so most apps aren't going to stand out because of some dramatic difference in the quality of their output. Creating effective, human-like text is now table stakes. It was required for inclusion on this list—but not sufficient on its own.

As I was testing these apps, here's what else I was looking for:

Tools powered by GPT or a similar large language model with well-documented efficacy. In practice, this means that most but not all of the AI writing tools on this list use GPT to a greater or lesser degree. Many apps are starting to hide what models they use and claim to have a lot of secret sauce built on top (because there's a marketing advantage in being different and more powerful), but the reality is that nine times out of ten, it's the GPT API that's doing the heavy lifting.

An interface that gives you a lot of control over the text output. The more options you have to influence the tone, style, language, content, and everything else, the better. I didn't want tools where you just entered a headline and let the AI do the rest; these are all tools that you collaborate with, so you can write great copy quickly. The best AI writing tools also let you set a default brand voice that's always on.

Ease of use. You shouldn't have to fight to get the AI to do what you want. With AI writing software like this, there will always be some redoing and reshaping to get the exact output you want, but working with the AI shouldn't feel like wrangling a loose horse. Similarly, great help docs and good onboarding were both a major plus. 

Affordability. ChatGPT is currently free, and all these tools are built on top of an API that costs pennies . There was no hard and fast price limit, but the more expensive tools had to justify the extra expense with better features and a nicer app. After all, almost every app will produce pretty similar outputs regardless of what it costs. 

Apps that weren't designed to make spam content. Previous text-generating tools could "spin" content by changing words to synonyms so that unscrupulous website owners could rip off copyrighted material and generally create lots of low-quality, low-value content. None of that on this list.

Even with these criteria, I had more than 40 different AI writing tools to test. Remember: it's relatively easy for a skilled developer to build a wrapper around the GPT API, so I had to dig deep into each one to find out if it was any good or just had a flashy marketing site.

I tested each app by getting it to write a number of different short- and long-form bits of copy, but as expected, there were very few meaningful quality differences. Instead, it was the overall user experience, depth of features, and affordability that determined whether an app made this list. 

Zapier Chatbots lets you build custom AI chatbots and take action with built-in automation—no coding required. Try the writing assistant template to help you create high quality content, effortlessly.

The best AI writing generators at a glance

Best AI writing generator for businesses

Jasper (web)


Jasper pros:

One of the most mature and feature-filled options on the list 

Integrates with Grammarly, Surfer, and its own AI art generator

Jasper cons:

Expensive given that all the apps use similar language models 

Jasper (formerly Jarvis) is one of the most feature-filled and powerful AI content generators. It was among the first wave of apps built on top of GPT, and its relative longevity means that it feels like a more mature tool than most of the other apps I tested. It's continued to grow and develop in the months since I first compiled this list.

If you have a business and budget isn't your primary concern, Jasper should be one of the first apps you try. It's pivoted to mostly focus on marketing campaigns rather than just generating generic AI content. That's not a bad thing, but it means that plans now start at $49/month for individual creators and $125/month for teams.

Jasper has also moved away from just being a GPT app. It claims to combine "several large language models" including GPT-4, Claude 2, and PaLM 2, so that "you get the highest quality outputs and superior uptime." While I can't say that I noticed a massive difference between Jasper's output and any other app's, it does give you a few solid controls so that your content matches your brand. 

You can create a brand Voice and Tone by uploading some appropriate sample text. Based on a few examples of my writing, Jasper created a style that "emphasizes a casual, conversational tone with humor, personal anecdotes, listicles, informal language, expertise in various subjects, and a call to action for an engaging and approachable brand voice." I don't think that's a bad summary of the content I fed in, and its output for a few test blog posts like "The Enduring Popularity of Top Gun" felt closer to my writing than when I asked it to use a generic casual tone of voice. Similarly, there's a Knowledge Base where you can add facts about your business and products so Jasper gets important details right. 

While other apps also offer similar features, Jasper's seemed to work better and are fully integrated with the rest of the app. For example, you can create entire marketing campaigns using your custom brand voice. Put a bit of work into fine-tuning it and uploading the right assets to your knowledge base, and I suspect that Jasper really could create some solid first drafts of marketing materials like blog outlines, social media campaign ads, and the like.

Otherwise, Jasper rounds things out with some nice integrations. It has a built-in ChatGPT competitor and AI art generator (though, again, lots of other apps have both), plays nice with the SEO app Surfer , and there's a browser extension to bring Jasper everywhere.

You can also connect Jasper to thousands of other apps using Zapier . Learn more about how to automate Jasper , or try one of the pre-built workflows below.

Create product descriptions in Jasper from new or updated Airtable records


Create Jasper blog posts from new changes to specific column values in monday.com and save the text in Google Docs documents


Run Jasper commands and send Slack channel messages with new pushed messages in Slack


Jasper pricing: Creator plan from $49/month with one brand voice and 50 knowledge assets. Teams plan starts at $125/month for three seats, three brand voices, and 150 knowledge assets.

Best AI writing app for AI copywriting

Copy.ai (web).


Copy.ai pros:

Has an affordable unlimited plan for high-volume users 

Workflow actively solicits your input, which can lead to higher quality content 

Copy.ai cons:

Expensive if you don't produce a lot of content

Pretty much anything Jasper can do, Copy.ai can do too. It has brand voices, an infobase, a chatbot, and team features (though there isn't a browser extension). Consider it the Burger King to Jasper's McDonalds.

And like the Home of the Whopper, Copy.ai appeals to slightly different tastes. While I could argue that Copy.ai has a nicer layout, the reality is it's geared toward a slightly different workflow. While Jasper lets you and the AI loose, Copy.ai slows things down a touch and encourages you to work with its chatbot or use a template that asks some deliberate, probing questions. For creating website copy, social media captions , product descriptions, and similarly specific things, it makes more sense. But for content marketing blog posts and other long-form content, it might annoy you.

The other big difference is the pricing. While both offer plans for $49/month, Copy.ai includes five user seats and unlimited brand voices. For a small team working with multiple brands, it can be a lot cheaper. Also, if you're looking for a free AI writing generator, Copy.ai also offers a free plan that includes 2,000 words per month.

Overall, there are more similarities than differences between Jasper and Copy.ai, and both can create almost all the same kinds of text. Even when it came to analyzing my voice, they both came to pretty similar conclusions. Copy.ai decided that, to mimic me, it had to "focus on creating content that is both educational and entertaining, using a conversational tone that makes readers feel like they're having a chat with a knowledgeable friend" and "not to be afraid to inject some humor or personal anecdotes." If you're in doubt, try them both out and then decide.

Copy.ai also integrates with Zapier, so you can do things like automatically send content to your CMS or enrich leads straight from your CRM. Learn more about how to automate Copy.ai, or try one of the pre-built workflows below.

Add new blog posts created with Copy.ai to Webflow


Copy.ai pricing: Free for 2,000 words per month; from $49/month for the Pro plan with 5 users and unlimited brand voices.

Best AI writing assistant

Anyword (web).

Anyword, our pick for the best AI writing assistant

Anyword pros:

Makes it very easy for you to include specific details, SEO keywords, and other important information 

Engagement scores and other metrics are surprisingly accurate

Anyword cons:

Can be slower to use

Pretty expensive for a more limited set of features than some of the other apps on this list

While you can direct the AI to include certain details and mention specific facts for every app on this list, none make it as easy as Anyword. More than any of the others, the AI here feels like an eager and moderately competent underling that requires a bit of micromanaging (and can also try to mimic your writing style and brand voice), rather than a beast that you have to tame with arcane prompts. 

Take one of its main content-generating tools: the Blog Wizard. Like with Copy.ai, the setup process requires you to describe the blog post you want the AI to create and add any SEO keywords you want to target. Anyword then generates a range of titles for you to choose from, along with a predicted engagement score. 

Once you've chosen a title—or written your own—it generates a suggested outline. Approve it, and you get the option for it to create an entire ~2,000-word blog post (boo!) or a blank document where you can prompt it with additional instructions for each section of the outline, telling it things like what facts to mention, what style to take, and what details to cover. There's also a chatbot-like research sidebar that you can ask questions of and solicit input from. While certainly a slower process than most apps, it gives you a serious amount of control over the content you're creating.

Anyword is definitely aimed at marketers, and its other tools—like the Data-Driven Editor and the Website Targeted Message—all allow you to target your content toward specific audiences and give things engagement scores. While I certainly can't confirm the validity of any of these scores, they at least pass the sniff test. I generally thought the AI-generated content that Anyword scored higher was better—and even when I disagreed, I still liked one of the top options. 

Anyword pricing: Starter plan from $49/month for 1 user and 1 brand voice.

Best AI writing tool for writing fiction

Sudowrite (web).

Sudowrite, our pick for the best AI writing tool for writing fiction

Sudowrite pros:

The only AI tool on the list explicitly aimed at writing fiction 

Super fun to use if you've ever wanted to play around with fiction 

Sudowrite cons:

It's still an AI text generator, so it can produce nonsensical metaphors, clichéd plots, incoherent action, and has a short memory for details 

Very controversial in fiction writing circles

When I saw Sudowrite's marketing copy, I didn't think for a second it would make it onto this list. Then I tried it and…I kind of love it. Sudowrite is a totally different tool than all the others on this list because it's aimed at fiction writers. And with that comes a lot of controversy. Sudowrite has been called "an insult to writers everywhere" and has been generally dismissed as a tool for hacks by a lot of Very Online writers. And while it's true that it's nowhere close to replacing a human author, it's fun, functional, and can genuinely help with writing a work of fiction.

The Story Engine feature, which allows you to generate a full work of fiction over a few days by progressively generating each story beat, has attracted the most attention (it works, but takes lots of hand-holding, and your novel will be weird). But I prefer its assistive tools.

Let's start with Describe. Select a word or phrase, click Describe , and the AI will generate a few suggestions for the sight, smell, taste, sound, and touch of the thing, as well as a couple of metaphors. If you're the kind of writer who struggles to add sensory depth to your short stories, it can help you get into the habit of describing things in more interesting ways.

Then there's Brainstorm. It allows you to use the AI to generate possible dialogue options, character names and traits, plot points, places, and other details about your world from your descriptions and cues. If you know you want a big hairy guy with a huge sword but can't think of a good name, it can suggest a few, like Thorgrim and Bohart.

And these are just scratching the surface. Sure, if you over-rely on the AI to solve all your problems, you'll probably end up with an impressively generic story. But if you use it as a writing buddy to bounce ideas off and get you out of a rut, it's got serious potential. 

Best of all, Sudowrite is super easy to use. The onboarding, tool tips, and general helpful vibe of the app are something other developers could learn from. 

Sudowrite pricing: Hobby & Student plan from $19/month for 30,000 AI words/month. 

Best AI text generator for a non-GPT option

Writer (web).

Writer, our pick for the best AI writing generator that doesn't use GPT

Writer pros:

Not based on GPT, so free of a lot of the controversy surrounding LLMs

Surprisingly capable as an editor, making sure your team sticks to the style guide and doesn't make any wild claims

Writer cons:

Requires a lot more setup to get the most from

GPT comes with quite a lot of baggage. OpenAI has been less than transparent about exactly what data was used to create the various versions of GPT-3 and GPT-4, and it's facing various lawsuits over the use of copyrighted material in its training dataset. No one is really denying that protected materials—potentially from pirated databases—were used to train GPT; the question is just whether or not it falls under fair use.

For most people, this is a nebulous situation filled with edge cases and gray areas. Realistically, it's going to be years before it's all sorted out, and even then, things will have moved on so far that the results of any lawsuit are likely to be redundant. But for businesses that want to use AI writing tools without controversy attached, GPT is a no-go—and will be for the foreseeable future. 

Which is where Writer comes in.

Feature-wise, Writer is much the same as any of my top picks. (Though creating a specific brand voice that's automatically used is an Enterprise-only feature; otherwise, you have to use a lot of checkboxes in the settings to set the tone.) Some features, like the chatbot, are a little less useful than they are in the GPT-powered apps, but really, they're not why you'd choose Writer.

Where it stands out is the transparency around its Palmyra LLM. For example, you can request and inspect a copy of its training dataset, which is composed of data that is "distributed free of any copyright restrictions." Similarly, Palmyra's code and model weights (which determine its outputs) can be audited, it can be hosted on your own servers, and your data is kept secure and not used for training by default. As an AI-powered tool, it's as above board as it comes.

In addition to generating text, Writer can work as a company-specific Grammarly-like editor, keeping on top of legal compliance, ensuring you don't make any unsupported claims, and checking that everything matches your style guide—even when humans are writing the text. As someone who routinely has to follow style guides, this seems like an incredibly useful feature. I wasn't able to test it fully since I don't have a personal style guide to input, but Writer correctly fixed things based on all the rules that I set.

In side-by-side comparisons, Writer's text generations sometimes felt a little weaker than the ones from Jasper or Copy.ai, but I suspect a lot of that was down to how things were configured. Writer is designed as a tool for companies to set up and train with their own data, not run right out of the box. I'd guess my random blog posts were a poor test of how it should be used in the real world.

Writer also integrates with Zapier, so you can use Writer to create content directly from whatever apps you use most. Learn more about how to automate Writer, or take a look at these pre-made workflows.

Create new outlines or drafts in Writer based on briefs from Asana


Generate marketing content from project briefs in Trello


Writer pricing: Team from $18/user/month for up to 5 users; after that, it's an Enterprise plan.

Best AI text generator for GPT-4 content

Writesonic (web).

Writesonic, our pick for the best AI writing generator for GPT-4 content

Writesonic pros:

Allows you to select what GPT model is used to generate text 

Generous free plan and affordable paid plans 

Writesonic cons:

A touch too focused on SEO content for my taste

While almost all the tools on this list use GPT, most are pretty vague about which particular version of it they use at any given time. This matters because the most basic version of the GPT-3.5 Turbo API costs $0.002/1K tokens (roughly 750 words), while GPT-4 starts at $0.06/1K tokens, and the most powerful version costs $0.12/1K tokens. All this suggests that most apps may not use GPT-4 in all circumstances, and instead probably rely on one of the more modest (though still great) GPT-3 models for most text generation. 
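
To put those prices in perspective, here is a rough back-of-the-envelope sketch in Python. It uses only the per-1K-token rates and the ~750-words-per-1K-tokens approximation quoted above, and it ignores the prompt/completion split and any price changes since this was written, so treat the output as illustrative rather than an official calculator.

    # Rough cost estimate for generating text with different GPT models,
    # using only the per-1K-token rates and the word-to-token approximation
    # quoted in the paragraph above (illustrative, not official pricing).

    PRICE_PER_1K_TOKENS = {
        "gpt-3.5-turbo": 0.002,  # basic GPT-3.5 Turbo rate
        "gpt-4": 0.06,           # GPT-4 starting rate
        "gpt-4-32k": 0.12,       # the most powerful version mentioned above
    }

    WORDS_PER_1K_TOKENS = 750  # the rough conversion used in this article

    def estimated_cost(word_count: int, model: str) -> float:
        """Return an estimated generation cost in dollars for a given length."""
        tokens = word_count / WORDS_PER_1K_TOKENS * 1000
        return tokens / 1000 * PRICE_PER_1K_TOKENS[model]

    for model in PRICE_PER_1K_TOKENS:
        print(f"2,000-word blog post with {model}: ~${estimated_cost(2000, model):.2f}")

    # Prints roughly $0.01 for GPT-3.5 Turbo, $0.16 for GPT-4, and $0.32 for
    # GPT-4-32K: a ~30x gap, which is why many apps default to the cheaper models.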

If having the latest and greatest AI model matters to you, Writesonic is the app for you. Writesonic doesn't hide what AI model it uses. It even allows you to choose between using GPT-3.5 and GPT-4, at least on Business plans. 

Whether the content you create will benefit from the extra power of GPT-4 or not depends. In my experience using GPT-4 through ChatGPT, the latest model is more accurate and, essentially, more sensible in how it responds. If you're churning out low-stakes copy variations for your product listings, you likely won't see much improvement. On the other hand, for long-form original blog posts, it could make a difference. Either way, the transparency in which model you're using at any given time is a huge bonus. 

Feature-wise, Writesonic is much the same as any of the other apps on this list, with a Google Docs-style editor, the option to set a brand voice, a few dozen copy templates, a chatbot, a browser extension, and Surfer integration. It's cool that you can set reference articles when you're generating a blog post, but it introduces the real possibility of inadvertent plagiarism if you aren't careful with how you use it. (Its most offbeat feature is a surprisingly solid AI-powered custom chatbot builder that's due to be spun out into its own app soon.) Overall, it's pretty nice to use and skews more toward SEO-optimized content marketing—but like with all the apps, you can use it to generate whatever you want.

Writesonic also integrates with Zapier, so you can send new copy to any of the other apps you use in your writing workflow. Learn more about how to automate Writesonic, or get started with one of these examples.

Create a Google Doc with new content from Writesonic


Generate product descriptions with Writesonic from spreadsheet rows in Google Sheets


Writesonic pricing: Free for 10,000 GPT-3.5 words per month; Business from $19/month for 200,000 Premium words or 33,333 GPT-4 words.

Best free AI writing generator (with affordable upgrades)

Rytr (web).

Rytr, our pick for the best free AI writing generator

Rytr pros:

A solid free plan and a cheap high-volume plan (though Writesonic offers better value for an unlimited plan)

It includes a basic AI art generator as part of every plan

Rytr cons:

The app is more basic than more expensive offerings

Unlimited plan isn't very competitive

Most of the apps on this list are aimed at professionals, businesses, and anyone else with a budget. The Jasper, Copy.ai, and Anyword plans I considered all started at $49/month. That isn't exactly a hobbyist-friendly sum of money, so if you want to explore AI text generators without spending as much, give Rytr a go.

There's a free plan that's good for 10,000 characters (around 2,500 words) per month, and it includes a lot of the features, like a plagiarism checker, and a few AI-generated images. The Saver plan starts at $9/month and allows you to generate 100,000 characters (around 25,000 words) per month. On that plan, you're also able to generate up to 20 images a month, which many other apps charge extra for. (There's also an unlimited plan for $29/month, but at that point, Writesonic is a better value.)

Feature-wise, there are some trade-offs. Rytr is a little less competent at generating long-form content without you guiding it through the process, and there are fewer templates for specific things. The interface also isn't as polished, and there isn't as much hand-holding to get you started. Still, as Rytr is using GPT like almost all the other apps on this list, you should be able to get it to produce substantially similar output.

Rytr pricing: Free plan for 10,000 characters/month and lots of other features; Saver plan from $9/month for 100,000 characters; Unlimited plan from $29/month.

Other AI writing tools to consider

With so many AI text-generating tools out there, a few good ones worth considering didn't make this list, only because they didn't meet my initial criteria in some way. If none of the AI writers I chose fit the bill for you, here are a few other options worth looking into:

ChatGPT is surprisingly competent and fun to use. And best of all, it's free. (Google Bard is a little less excellent on the content production side.)

Wordtune and Grammarly are both great tools for editing and improving your own writing. GrammarlyGO just isn't as flexible as my other picks.

Notion AI adds a powerful AI tool directly into Notion. If you already use Notion, it's worth checking out, but it's a lot to learn if you just want a text generator. (Same goes for AI within any other Notion alternative, like Coda AI .)

Surfer and Frase are both AI-powered SEO tools. They fell slightly out of scope for this list, but they can both help you optimize and improve your content—AI-generated or not.

All of the apps on this list offer at the very least a free trial, so I'd suggest trying some of them out for a few minutes until you find the one that seems to work best with your workflow.

Related reading:

How to use OpenAI's GPT to spark content ideas

How to create an AI writing coach with GPT and Zapier

8 ways real businesses are using AI for content creation

How to detect AI-generated content

The best AI marketing tools

This article was originally published in April 2023. The most recent update was in September 2023.


Harry Guinness

Harry Guinness is a writer and photographer from Dublin, Ireland. His writing has appeared in the New York Times, Lifehacker, the Irish Examiner, and How-To Geek. His photos have been published on hundreds of sites—mostly without his permission.


The 13 Best AI Writing Tools For Essays, Blogs & Content in 2024

  • Julie Simpson
  • February 23, 2024

If you have recently spent time on popular social media channels such as Twitter or LinkedIn, chances are you have read all about the amazing benefits of artificial intelligence for writing — but you’ve probably also seen the “will AI replace all of our jobs” debate looming around the web. All of the pros and cons of AI writing tools can be hard to keep track of.

However, as a productivity platform whose ultimate goal is to help you work more efficiently, we believe that AI writing is here to stay — and here to help. As much as we can see the other side of the argument (not the machines-taking-over argument, but the job-stealing one), AI can ultimately be used as a tool to help you do your job better, not take it over from you completely.

We are firmly in the camp that using AI for content creation can be highly beneficial. If you are also interested in how AI tools can maximize your content output, follow along to learn about the best AI writing tools and how you can implement them into your workflow today.

How to pick the best AI writing tool for your content

HiveMind and RyterAI and JasperAI, Oh my!

While it doesn’t have the same ring as lions, tigers, and bears, it still brings forth the same apprehension that Dorothy Gale from the Wizard of Oz felt: the fear of the unknown and so many to choose from. So where do you start?

If you are overwhelmed by all the AI writing tools on the market today, and their use cases are all blurring together, here are my top AI tools that are definite content contenders.

1. HiveMind


HiveMind is an AI-powered tool built into Hive’s project management product that automates tasks ranging from content writing to data extraction and translation. This tool leverages NLP algorithms to generate summaries of long articles or documents quickly and accurately. HiveMind was built to work seamlessly within the Hive project management platform, and can also help generate project outlines, next steps, and respond to emails.

HiveMind’s AI was constructed not to just write blogs and essays for you but to increase user productivity and efficiency across the board. More than just a content builder, it is an assistant, project manager, and editor all in one.

HiveMind Pros:

  • Free with Hive plans
  • Access OpenAI’s full suite of AI services
  • Integrates with Hive’s enhanced suite of project management capabilities
  • Creates project tasks based on simple inputs
  • Responds to emails automatically


2. JasperAI

jasper ai dashboard

JasperAI, formerly known as Jarvis, is an AI assistant that helps you brainstorm, edit, and publish excellent content. Powered by natural language processing (NLP), this tool can understand the context of your copy and suggest alternatives accordingly. It can also provide insights on trending topics and keywords, giving you a better understanding of how to craft content that resonates with your audience.

Jasper AI is a popular tool among many writer demographics globally. For more information, check out this Jasper AI review article which includes a real example use case for applying this tool in today’s digital landscape.

JasperAI Pros:

  • Easy-to-use interface
  • Integrates well with other writing platforms like SurferSEO and Grammarly
  • Excellent at writing long-form content like stories or essays

JasperAI Cons:

  • Can be pricy
  • Can significantly miss the mark, and the content output goes off the rails

3. RyterAI

ryterAI tool

RyterAI is a natural language processing tool that helps you create content faster, more efficiently, and more accurately than ever before. RyterAI’s NLP algorithms read through your text and offer personalized suggestions on how to improve it, whether through grammar, word choice, or clarity. It also identifies popular keywords so you can find the best ones to use for SEO purposes and rankings.

RyterAI Pros:

  • Excellent at grammar and spelling checks
  • Great at finding keywords to help you rank higher in search engine results pages (SERPs)
  • User-friendly interface

RyterAI Cons:

  • It can be expensive if you plan on using it for all your content needs
  • The final content still needs some tweaking/editing
  • Does not offer additional SEO features or 3rd-party integrations

4. Paraphrasingtool.ai

Paraphrasingtool.ai

Paraphrasingtool.ai is a one-stop solution for all your writing needs, issues, and queries. From creating fresh content like articles, blogs, or essays to checking for plagiarism and grammar mistakes, this website has you covered. Most tools are free to use and accessible from any device, and Paraphrasingtool.ai offers applications for mobile users. Premium users get access to tools for more complex writing needs.

Paraphrasingtool.ai Pros:

  • Sleek UI, easy to use, and offers free modes and features.
  • Users can create, check and finalize their content in one place
  • Excellent results that compare with other top paper writing services and free essay writing tools

Paraphrasingtool.ai Cons:

  • People with limited knowledge of AI tools might face issues using it
  • Monthly or yearly paid subscription is required for complex writing needs

5. ChatGPT

ChatGPT dashboard

You can’t search the web these days without someone mentioning ChatGPT and how it is revolutionizing AI content. But it is just the new AI kid on the block, with more “fancy” features and a deeper algorithm that “understands language.” Essentially, ChatGPT is a conversational AI engine that enables developers to create chatbots and other conversational interface applications.

Unlike many other AI writing tools, ChatGPT is freely accessible and offers a developer API, with several useful features like text prediction, dialogue flow control, and natural language understanding. It’s crucial to note that while ChatGPT brings helpful advancements, there are inherent risks associated with generative AI. Vigilance and responsible utilization of such powerful AI tools are imperative to mitigate these risks. A minimal sketch of what building on its API can look like follows the pros and cons below.

ChatGPT Pros:

  • Free to use, with a developer API
  • It can be used to build complex conversational applications
  • Easy-to-use APIs

ChatGPT Cons:

  • Lacks the advanced features of other popular AI tools on the market
  • Limited support for languages other than English
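
As a rough illustration of the "chatbots and other conversational applications" point above, here is a minimal sketch using OpenAI's official Python SDK. It assumes you have installed the openai package (version 1.0 or later) and set an OPENAI_API_KEY environment variable; the model name here is just an example, and model availability and pricing change over time.

    # Minimal chat-style completion with OpenAI's Python SDK (openai >= 1.0).
    # Assumes `pip install openai` and an OPENAI_API_KEY environment variable.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model; swap in another available one
        messages=[
            {"role": "system", "content": "You are a helpful writing assistant."},
            {"role": "user", "content": "Draft a two-sentence caption about remote work."},
        ],
    )

    # The assistant's reply is the first (and, here, only) choice returned.
    print(response.choices[0].message.content)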

6. Byword

Byword ai writing tool

Byword is the most advanced tool for producing high-quality AI content at scale, built from years of experience scaling websites with AI-powered SEO. For example, this website went from 0 to 750K/mo SEO traffic in 12 months by only using Byword’s AI-generated content.

You can upload 1,000s of keywords or titles and create SEO-optimized articles in a few clicks. Then publish the articles straight to your website using Byword’s CMS integrations. Simply enter a topic and click ‘generate’ to get started for free.

Byword Pros

  • Best tool for producing AI content at scale
  • High-quality GPT-4 powered writing output
  • Publish straight to your website with CMS integrations
  • SEO features such as indexing, internal linking and more

Byword Cons

  • Less suitable for small businesses or those looking to dip a toe into AI content
  • Slightly higher cost than other solutions ($2.50-$3.96 per article)
  • The quality of AI images generated could be improved

7. Quillbot

quillbot AI paraphrase tool

Quillbot is an AI writing tool that takes text input and assists you with paraphrasing, checking grammar, identifying plagiarism and more. The AI-powered assistant can make your thoughts and ideas more coherent and fluent by rephrasing them with changes to vocabulary, tone and style. Easily customize your text using a slider feature along with a mode selection to adjust the output to your writing needs. You can save time using Quillbot’s summarizer and citation generator features and present your writing with confidence.

Quillbot Pros:

  • Can help improve vocabulary and language skills
  • Offers additional writing tools and features
  • Integrates with multiple platforms

Quillbot Cons:

  • May struggle with complex sentences
  • Limited customization options in the free version
  • Only available in English

Now that you have seen some of the top generative AI tools available today, let’s see which of them is best suited for your content needs. Of course, each tool has its pros and cons, and you might be more comfortable with one interface but like the output of another. For a more in-depth look, check out my winners below, as they might help you pick the right tool for your AI-generated content.

8. Paraphraser  

paraphraser.io dashboard

Paraphraser AI is an excellent paraphrasing tool that can improve all written works. Paraphraser can find the most suitable synonyms and phrases to craft a compelling and captivating message. The software can also modify sentence structure, expand your word choice pool, change your message voice from passive to active, and improve the readability of your copy. Paraphraser offers a monthly and yearly plan and a student package with a special discount rate.

Paraphraser pros: 

  • It’s easy to use, straightforward, and free up to 500 words. 
  • Offers plagiarism and grammar checks 
  • Students only pay $9 per week for the premium package. 

Paraphraser cons:

  • Limited free package. The premium package costs $23 a month.
  • Website is not user-friendly
  • Pop-ups and many ads might make navigation confusing

9. Rapide.ly

Rapide.ly

Rapide.ly is an AI-powered content generator for social media, allowing entrepreneurs and businesses alike to create captivating posts tailored specifically to their audiences. With Rapide.ly’s GPT-3-powered brainstorming engine, you can craft effective messages that engage your followers across multiple platforms such as Instagram, Facebook, and LinkedIn.

Rapide.ly pros:

  • Calendar prompts help your content planning stay on top of celebration dates
  • Weekly trends for Reels (audio and ideas)
  • Monthly content creation challenges to improve your content performance on social platforms

Rapide.ly cons:

  • Paid plans only once the free trial ends
  • Limited languages (currently, only English and French)

10. Hubspot

Hubspot AI

HubSpot’s AI Content Writer is an AI writing tool that helps you and your business scale content output for your email, landing page or blog post. By inputting a simple prompt or topic, the AI-powered copywriter tools generate a first draft of engaging, researched, and tailored content for your use.

The AI content writer is designed to be user-friendly and work alongside any of your other HubSpot or external tools. You can provide additional details regarding your brand voice, customer persona, and more to make your content unique. Each piece of content produced includes research, proofreading, and even SEO optimization.

HubSpot Pros:

  • Excellent for grammar and proofreading
  • Content includes research and search engine optimization
  • User-friendly interface and sync with existing HubSpot tools

HubSpot Cons:

  • Still in the public beta phase
  • Content produced is a first draft which may require some additional tweaks and edits

11. Writesonic


Writesonic is an incredible AI writing software that offers a wide range of features on its platform. With its free plan, users can access 10,000 words on its optimized GPT-3.5 platform, while paid plans are powered by the even more advanced GPT-4. Writesonic also offers innovative products like Botsonic and ChatSonic, which provide customers with additional AI-powered tools. Thanks to Writesonic’s advanced AI technology, users can effortlessly generate high-quality written content in no time.

Writesonic Pros:

  • Saves time and effort in researching and writing articles.
  • Helps in generating unique and creative ideas for your content.
  • Provides accurate and error-free content that is ready to publish.
  • User-friendly interface that is easy to navigate.

Writesonic Cons:

  • The AI may not always produce completely accurate copy and may require some human editing.
  • Some of the copy generated by Writesonic may sound robotic and strange, requiring additional modification to make it more human-like.

12. Flick AI Caption Generator

Flick AI Caption Generator

Flick AI Caption Generator has emerged as a game changer, revolutionizing caption creation, especially for social media. It leverages the power of AI and Natural Language Processing to generate high-quality and engaging captions. With Flick AI Caption Generator, you will never run out of caption ideas. Just enter a topic, and this tool will generate hundreds of captions within seconds.

Flick AI Caption Generator Pros

  • Boosts productivity by generating captions quickly
  • Crafts unique and highly personalized captions
  • Auto-generates hashtags based on your captions
  • Provides suggestions, brainstorming, and outlining support
  • Adapts the tone of voice and style of captions to sound like you

Flick AI Caption Generator Cons

  • No free plan
  • Limited contextual understanding
  • Learning curve

13. Simplified

Simplified ai tool

Simplified is one of the top AI writing tools, known for its high-quality content, ease of use, affordability, and comprehensive templates. It uniquely combines GPT with features like image processing and data analysis, going beyond text generation. 

With options for personalization, such as training on custom data, Simplified adapts to users’ evolving needs, optimizing for speed, accuracy, or complex queries. Its AI engine accesses top language models from OpenAI, Google, Anthropic, and includes Simplified’s custom model, ensuring a balanced use of technology and reducing dependency on a single source. It is designed for business contexts like marketing and sales. 

Simplified Pros

  • Excellent user experience that is simple to utilize
  • A vast range of tools
  • Library of  designs and templates 
  • Excellent social media scheduling capabilities

Simplified Cons

  • The presentations are rather basic and need to be polished
  • Its writer is not superior to ChatGPT

What AI Writing Tool is best for you?

1. Best AI tool for generating content automatically

Winner: HiveMind

HiveMind beats out the competition in this category thanks to its free plan, seamless integration with the Hive project management platform, and ability to create project tasks based on simple inputs. This AI tool is perfect as a content assistant that works off of previous successes, and it can be integrated into your current Hive work process.

2. Best AI tool for writing long-form content

Winner: JasperAI

JasperAI wins this contest, hands down. With its long-form content writing capabilities and integration with other writing platforms like SurferSEO and Grammarly, it gets the long-form blog content job done. It offers excellent prompts to input tone of voice and keywords, and you can also use predictive text and suggestions to help write the entire blog for you. Long-form couldn’t be easier with the help of JasperAI.

3. Best AI tool for writing social media posts

Winner: ChatGPT

Will ChatGPT take over the job of a social media manager on the web? Probably not. But helping us create social media tags, posts, and captions is a great way to utilize ChatGPT for content creation in your everyday marketing activities. Furthermore, its free availability and conversational AI ability add to the reasons why it’s great for social media posts. And because it can ultimately help you quickly generate engaging content that resonates with your audience, this tool is an excellent addition to any content creator’s arsenal.

Key Takeaways 

From some of my top picks for AI tools to their best use case scenarios – you definitely have a lot to consider when picking an AI writing tool for your content. While each one of the tools mentioned will get the job done, some are better than others, and only you can decide what tool will work within your content workflow.

Considering the wide range of AI tools available, it’s essential to find the one that aligns with your specific needs. An AI detector can also help you check whether a tool’s output reads as machine-generated, ensuring you make an informed decision.


Write, Research, and Collaborate with your AI Personal Assistant

From first draft to final edits, HyperWrite delivers high-quality writing in less time. Instantly tap into a wealth of knowledge with real-time information and citations.

Hundreds of powerful tools to transform your work

Flexible AutoWrite

Explain Like I’m 5

Rewrite Content

Email Responder

Magic Editor

AI Speech Writer

Ask HyperWrite to write a paragraph about anything

Personalized and context-aware sentence completions

Automatic email responses, chat with AI, and use it for research or problem solving

Create custom AI tools tailored to your workflows

HyperWrite helps you with:

Writing

No more writer's block. Harness the power of AI to generate ideas, polish your prose, and streamline your writing process.

Communication

Communicate with confidence. AI tools adapt to your needs, from casual conversations to professional correspondence.

Research

Instant access to knowledge. Your AI assistant delivers real-time links and citations to the most relevant information on any topic.


Powerful writing in seconds

Hundreds of AI tools give you the power to create, edit, and polish text in seconds. Improve your own content or create original high-quality writing to fit your unique style and goals.


Academic research and writing

HyperWrite leverages real-time research to search through millions of scholarly articles and papers, powering your AI assistant with accurate, current, and citation-backed responses.


AI everywhere you work

Bring the power of AI to every website you visit with the HyperWrite Extension. Get personalized writing assistance and automations that seamlessly integrate with your workflows.



One Subscription, Unlimited Possibilities

Flexible monthly plans that scale with your needs

NEWS EXPLAINER

09 December 2022

AI bot ChatGPT writes smart essays — should professors worry?

  • Chris Stokel-Walker


Between overwork, underpayment and the pressure to publish, academics have plenty to worry about. Now there’s a fresh concern: ChatGPT , an artificial intelligence (AI) powered chatbot that creates surprisingly intelligent-sounding text in response to user prompts, including homework assignments and exam-style questions. The replies are so lucid, well-researched and decently referenced that some academics are calling the bot the death knell for conventional forms of educational assessment. How worried should professors and lecturers be?


Create any essay

In seconds.

Unlock your full writing potential with our cutting-edge AI essay generation tool. Say goodbye to late nights and writer's block, and hello to effortless writing.

Charley is powered by Artificial Intelligence, and he's here to help.

It takes less than a minute, and it’s free

Let AI transform the way you write. Our 200,000+ users already have!

Charley AI will take you all the way from deciding on a topic, to submitting your essay in a matter of seconds.

Get Inspired

Charley identifies the ideal topics for your essay and provides insightful example essays to inspire and guide you.

Get Better Grades

Our AI adapts to your desired grade level, and will tailor its writing style and word choice to suit the grade you want your essay to be marked at.

Intuitive Essay Generator

Experience seamless essay generation with our user-friendly text editor. Craft compelling essays effortlessly and unlock your full writing potential.


Customizable Essay Creation

Personalize your writing journey with our versatile essay creation page. Select the tone, essay type, word count, and desired grade to start writing the perfect essay tailored to your needs.


175,000+ Users. Here's what they say:

I haven't written an essay in 4 months now, and I've got nothing but straight A's and B's. This website is crazy!


Melissa Smith

Michigan State Student

I have no idea how they do this, but it gets me good grades and that's all I care about.


Jake Johnson

Pine View Student

Just wrote my 4,000 word essay before my teacher finished presenting it... WTF!


Alfred Williams

Penn State Student


Embrace effortless writing: try our AI essay generator now.

Reach Your Professional Potential

Everyone has tons of work to get through. Some accept the frustration. Others choose Wordtune Generative AI to speed up their tasks.

Try Wordtune for free. No credit card required.


Trusted by professionals in leading companies


Do more, faster

Get to know Wordtune’s GenAI platform

Write with confidence

Perfect your writing with Wordtune Rewrite—from a single word to an entire page

Write like the professional you are with AI recommendations

Unleash your thoughts without worrying about grammar or spelling mistakes


AI writing assistant

Enhance your productivity.

Write faster with AI suggestions — while remaining true to your unique style

Personalize your writing further with statistical facts or even a joke

Say goodbye to endless iterations — gain full control of your GenAI creation process


Create with AI

Create from scratch with AI.

Streamline your work with a variety of templates personalized for you

Create quality content from scratch with the press of a button

Choose from emails, LinkedIn posts, creative headlines and more


Cut reading and watching time in half

Summarize YouTube videos, blog articles, PDFs and more

Consume only the parts you need to know

Save time while staying in the know


Gain full control of your information, docs and more

Create a personalized knowledge base on your own Wordtune library

Use your trusted sources to find the answers you need

Stay in the know with AI Q&A and semantic search


With you, anywhere you work

Whether you're on your desktop, mobile device or using your favorite apps, Wordtune is there for you.

Google Calendar, Microsoft Word, Chrome extension, Google Docs, Wordtune for iOS


GenAI that’s empowering, not overpowering

Wordtune has become an essential copywriting assistant for my social media team. As a result, we have saved 70% of the time we would have spent brainstorming and editing captions. 


Jonathan Kessler

Social Media Team Lead, Imagen

Wordtune speeds up our entire writing process by 15%. Additionally, it improves the quality of our messages and accurately captures the ‘melody’ of what we want to say in a more fluid language.


Barak Cohen

Learning & OD manager

Wordtune improved my customer satisfaction score (CSAT) by 30%. My goal in support operations is to translate technical language into clear and understandable English — Wordtune helps me achieve just that.


Tomer Ben-Arye

Support Operations Manager


Wordtune speeds up my writing process by 20% every day, saving me several hours of work. In my experience, Wordtune humanizes my content much better than any other AI tool.


Daniel Somers


Wordtune has reduced my email writing time by 30%. It’s a crucial tool for client-facing interactions and has allowed me to write at scale while maintaining the utmost quality.


Michal Livnat

Success Architect


The summarizing feature speeds up my research process by 25%, helping me discern which articles will be useful to read in full and which articles are irrelevant. 


Ariel Sasson

Legal Data Team Leader, Attorney


Managers from our client-facing teams, such as product and support, reported that 80% of employees that use Wordtune found that it enhanced their communication.


Ortal Lozowick


Generative AI you can rely on 

Wordtune doesn’t just give you options, it gives you facts. We check at least 5 sources before deeming a fact credible to use.

Wordtune in numbers

3.5B AI generations used by Wordtune users per year

97% improved productivity*

*Users said in a survey after using Wordtune

Stand out with Wordtune

Enterprise-level security for all

Wordtune takes privacy and security seriously. Securing your data and adhering to compliance is top of mind.

Unleash your true potential

What are you writing about today?

Write better essays, in less time, with your AI writing assistant.


Natasha Singer

Hey, ChatGPT, can you help me write my college admissions essays?

Absolutely! Please provide me with the essay prompts and any relevant information about yourself, your experiences, and your goals.

We Used A.I. to Write Essays for Harvard, Yale and Princeton. Here’s How It Went.

A.I. chatbots can do a passable job of generating short essays. Whether their use on college applications is ethical is the subject of fierce debate.

By Natasha Singer

Natasha Singer reports on the ways that tech giants and their tools are reshaping education.

As high school seniors begin working on their college applications, many are turning to A.I. chatbots like ChatGPT and Bard for assistance.

Some students say they’re using the tools to suggest personal essay topics or help structure their writing. Others are prompting the A.I. tools to generate rough drafts for their application essays or edit their pieces.

Whether college admissions offices are prepared for this new era of A.I.-assisted, or A.I.-produced, personal essays is unclear.

By the time ChatGPT reached peak media sensation early this year, applications at many selective universities and colleges had already closed. Even now, many universities have not issued guidance for high school applicants — the prospective members of the class of 2028 — on the use of A.I. tools.

While the chatbots are not yet great at simulating long-form personal essays with authentic student voices, I wondered how the A.I. tools would do on some of the shorter essay questions that elite schools like Harvard, Yale, Princeton and Dartmouth are requiring high school applicants to answer this year.

So I used several free tools to generate short essays for some Ivy League applications. The A.I. chatbots’ answers have been edited for brevity and clarity.

Princeton: ‘The soundtrack’ of your life

One short-answer question from Princeton asks applicants: “What song represents the soundtrack of your life at this moment?”

I prompted ChatGPT to tell me about a pop song that could represent curiosity as a soundtrack to someone’s life.

But ChatGPT’s answer, “Cake by the Ocean” — a song title that is a euphemism for sex on the beach — did not seem appropriate for a college application.

So I gave the chatbot a more specific prompt: write 50 words on “Nameless, Faceless,” a feminist grunge-pop song by Courtney Barnett.

Tell me about a pop song that could represent curiosity as a soundtrack to someone’s life.

“Cake By The Ocean” by DNCE could be a pop song that represents curiosity in someone’s life. The catchy and energetic tune mirrors the excitement of exploring the unknown, just as curiosity drives us to uncover new experiences. The lyrics’ playful nature and the song’s upbeat rhythm capture the spirit of embracing life’s adventures with a curious and open mind.

It’s about sex.

You are correct, and I apologize for the confusion.

In 50 words, in the voice of a high school senior, explain how “Nameless, Faceless” is the soundtrack of your life.

“Nameless, Faceless” echoes my high school journey. Battling online hate, gender norms — it’s my anthem. Navigating fear, finding strength, facing inequality head-on. Empowerment surges as I connect with the song’s fierce message. Like me, it’s unafraid, outspoken. A soundtrack of resilience, speaking my truth in a noisy world.

I tried the Princeton soundtrack question again, using the same song, with the other A.I. chatbots. Bard, which could produce the lyrics to “Nameless, Faceless,” generated generic answers without really parsing the song.

HuggingChat manufactured an answer that suggested it did not know or understand the song. When I asked it to produce the song’s lyrics, the chatbot fabricated some of its own.

It was a reminder that A.I. chatbots, which are trained on vast databases of digital texts, are designed to predict the next word in a sequence. And they liberally make stuff up — which could be problematic for students relying on A.I. tools to fill out their college applications.

Yale: What college course would you teach?

One of the short-answer questions from Yale asks applicants: “If you could teach any college course, write a book, or create an original piece of art of any kind, what would it be?”

I asked each chatbot to describe a college course it would like to teach.

With no specific prompting from me, HuggingChat said it would teach “a seminar on the intersectionality of social justice issues.”

Please write a 35-word essay in the first person voice of a high school student in answer to the following prompt: If you could teach any college course what would it be?

HuggingChat

If I could teach any college course, it would be a seminar on the intersectionality of social justice issues, focusing on how systems of oppression intersect and impact marginalized communities. Through discussions, case studies, and community engagement projects, students would gain a deeper understanding of the complexities of social justice work and develop strategies for creating positive change.

ChatGPT proposed a course on female empowerment and leadership. Were those chatbot responses related to my prior prompts about the feminist grunge song?

“EmpowerHer: Fostering Female Leadership.” My course would nurture young women’s confidence, skills, and resilience to conquer male-dominated fields and shatter glass ceilings, creating a new era of female trailblazers.

Bard, which I had previously asked about majoring in computer science, suggested a course in artificial intelligence. So, taking my cue from HuggingChat, I asked Bard about a social justice course.

If I could teach any college course, it would be a course on artificial intelligence and machine learning. I’m fascinated by the potential of these technologies to change the world, and I would love to share my knowledge with others.

What about a course on intersectionality and social justice?

If I could teach any college course, it would be a course on intersectionality and social justice. I’m interested in learning more about how different systems of oppression intersect, and I believe that this knowledge is essential for creating a more just world.

Gotta say that HuggingChat is way more woke than you on this.

I appreciate your feedback, and I will try to do better in the future.

Harvard: What your roommates should know

One of the short-answer questions from Harvard asks applicants to describe the “top three things your roommates might like to know about you.”

I gave ChatGPT that generic Harvard prompt. It generated generic answers about academics, creativity and community.

In the first-person voice of a high-school student applying to Harvard, explain three things the student’s future roommates might like to know about them.

Academics truly drive me. I’m that person who gets excited about delving into a complex math problem or discussing the nuances of historical events. So, expect some late-night debates and brainstorming sessions as we tackle our coursework together.

Creativity is my second nature. Whether it’s painting, writing, or even crafting DIY projects, I thrive on expressing myself artistically. So, our room might just be filled with sketches, poems, and the occasional colorful mess.

I then gave Bard more specific details, saying I was a high school athlete who tutored elementary school students and was learning to cook.

I am a high school student on the track team who tutors elementary school students in math and I am learning to cook sous vide by watching YouTube videos. What are the three things I would like my future Harvard roommates to know about me?

I’m a dedicated athlete who is passionate about track and field. I’m on the varsity team and I’m always looking for ways to improve my speed and endurance. I’m also a good tutor and enjoy helping others learn. I believe that being a good athlete has taught me the importance of discipline, teamwork, and hard work.

I’m interested in cooking and am always looking for new recipes to try. I’m particularly interested in sous vide cooking, which is a method of cooking food in a sealed bag at a precise temperature. I’m also a good baker and love to make desserts. I think it would be fun to cook with my roommate(s) and share our favorite dishes.

Dartmouth: Why enroll here?

Dartmouth requires applicants to explain why they want to attend the college.

I primed the A.I. chatbots by asking them to write in the voice of a high school senior who was hoping to double major at Dartmouth in computer science and biology.

HuggingChat churned out text with trite words and phrases — “passion,” “meaningful impact,” “rigorous academic programs” — that seemed to me like the kind of stilted formal language a high school student might imagine a college admissions reviewer would want to read.

Write me a 100-word essay in the voice of a high school student explaining why I would love to attend Dartmouth to pursue a double major in biology and computer science.

As a high school student, I am eager to pursue my passion for both biology and computer science at Dartmouth College. The opportunity to double major in these fields will allow me to explore the intricate relationships between living organisms and cutting-edge technology. At Dartmouth, I am drawn to the rigorous academic programs that emphasize hands-on learning, research opportunities, and collaboration with renowned faculty. By attending Dartmouth, I aim to develop a deep understanding of the complexities of life and technology, preparing myself for a career where I can make a meaningful impact on the world.

I asked Bard to minimize the clichés and add more specific details about Dartmouth’s computing department. But it still produced some of the same standard college application language — “renowned,” “impact” — as HuggingChat.

Bard also wrote that I was hoping to study with Leslie Kaelbling, whom the chatbot described as an A.I. researcher at Dartmouth. That could have caused problems if I were a high school senior and had submitted the essay Bard had fabricated as part of my Dartmouth application. Professor Kaelbling, you see, teaches at M.I.T.

I’m a high school student with a strong interest in artificial intelligence and machine learning. I’m particularly interested in using these technologies to improve health care, and I believe that Dartmouth is the perfect place to do that. Dartmouth’s computer science department is home to world-renowned experts like Professor Leslie Kaelbling, who is known for her work on developing reinforcement learning algorithms that can teach machines to learn how to behave in an environment by trial and error. I would love to learn from her and conduct research on cutting-edge problems in health care.

You’re welcome! I hope you are able to attend Dartmouth and pursue your goal of becoming a physician-scientist who uses artificial intelligence to improve health care. It sounds like you would be a great fit for the school.

My takeaway: high school seniors hoping to stand out may need to do wholesale rewrites of the texts they prompt A.I. chatbots to generate. Or they could just write their own — chatbot-free — admissions essays from scratch.

Natasha Singer writes about technology, business and society. She is currently reporting on the far-reaching ways that tech companies and their tools are reshaping public schools, higher education and job opportunities. More about Natasha Singer


Writing with AI (Artificial Intelligence) using HyperWrite


HyperWrite uses the world’s most advanced language technology to help you write better and faster than ever before.

Technology is advancing faster than ever and artificial intelligence is leading the way in transforming industries and changing how we work. Now you too can experience writing with AI. HyperWrite allows you to write faster and more effectively, with the most powerful language technology available.

HyperWrite is a personal writing assistant that can help you stay focused, beat writer’s block, and write anything you can think of in just a few easy steps. From blog posts to essays, and everything in between, HyperWrite is going to change the way you write.

How to Write with AI

Setting up Your Document

To get started, click the “New Document” panel from your HyperWrite dashboard, then select “Blank Document” and follow the instructions to set up your document for a successful writing session.

There are three dimensions involved in setting up your document: your writing purpose, goals, and style. The more thoughtful you are in setting up your document, the better results you are going to see!

First, select the purpose of your writing. The main categories include School, Work, and Entertainment. If you’re not sure which category to select, pick whichever option feels most appropriate. These categories are flexible and won’t make or break your experience.

Choosing what the AI will write

Next, provide a description of what you are writing. This is the most important step when writing with artificial intelligence. HyperWrite is powerful, but it can’t read your mind. Think about it like this: what would you say to a friend or co-worker if you needed to share the goals of your writing in a sentence or two? Insert this into your description.


Finally, you have the opportunity to customize your style. Paste a sample of your own writing or a paragraph from a writer whose style you admire. HyperWrite will use this as an example to help follow your intended style when generating new text.


Now that we’ve set up your document, it’s time to start writing! Head over to the document editor to begin experimenting with HyperWrite’s various features and watch your writing come to life.

The AI Features of HyperWrite

HyperWrite includes a number of powerful features backed by artificial intelligence, each with a slightly different use in your writing. The features can be a bit overwhelming, but by thinking about them one by one, and with some experimentation, you’ll be an expert in no time.

HyperWrite Editor

Hyper Sentence: Generate a completely original sentence using artificial intelligence. Click the ‘Sentence’ button and HyperWrite will suggest three different options to continue your writing based on the text currently on the page. Pro tip: Use the keyboard shortcut (“Command + /” on Mac, “Control + /” on Windows) to generate sentence suggestions without leaving your keyboard.


Hyper Paragraph: Click the ‘Paragraph’ button to open the paragraph side panel, input a short command and get an entire paragraph. Commands should be as specific as possible, and may include specific data or notes that the AI models would not otherwise have knowledge of. For example: “Write an introductory paragraph to a blog post about Artificial Intelligence including a simple explanation of how the technology works, recent developments, and the impact it will have on the future.”


Rewrite: Highlight a sentence or two to get suggestions for rephrasing your selected text. Current options include making your writing shorter or longer, more formal or casual, with more stylistic transformation coming soon. What style or tone transformations would you like to see next?


With a bit of exploration and learning, anyone can become an expert at using HyperWrite and help lead the wave of change driven by artificial intelligence technology. Check out how the Marketing AI Institute uses HyperWrite to write blog posts with AI!

We’re always building improvements and new features into HyperWrite, and would love to hear about your experience writing with AI. Feel free to reach out to Jason at [email protected] if you’d like to learn more about how HyperWrite can supercharge your writing!



AI Writing: What Is It And How Does It Work?

Artificial intelligence has been all the buzz since ChatGPT came into the mainstream in November 2022. The question of how we should ethically use AI in our everyday lives has been asked in academic and professional circles alike. Understanding AI writing software and how it works is an important part of knowing what you can and should be using it for. Our guide will answer some questions you might have, like “What is AI writing?”, “How should I use AI writing software?”, and others.


What is AI writing?

  • AI writing involves using artificial intelligence tools to create written content.
  • You can submit a prompt and have it write a short essay, for example, or draft a screenplay, write a short story, edit your content for grammatical/syntax errors, and beyond.
  • It completes the task in a matter of minutes, compared to the hours spent researching and writing from scratch.
  • The capabilities depend on the software you use.


How does AI writing work?

Have you ever used the predictive text feature on the keyboard on your smartphone or tablet? The AI of predictive text follows along with what you’re typing and provides “educated” guesses as to what you might say next based on your history of typing in a specific app. For example, if you’re telling a specific person that they’re awesome every day in a text message, your predictive text feature will guess that you’re going to say the same thing again. You’ll notice this in other apps on your phone as well.

Another example might be if you use your phone to upload and post YouTube videos. Your YouTube Studio app and the predictive AI will automatically assume certain words and phrases if you use them often in your titles and video descriptions. AI writing software does this on a much, much larger scale.

Rather than comparing your habits against themselves and making a prediction about what you’ll say next, an AI writing tool will gather information based on what other people have said in response to a similar prompt. The bot will search the internet for information about what you’ve asked it to write, then compile that information into a response. While this used to come back as clunky and robotic, the algorithms and programming for AI writers have become much more advanced and can write human-like responses.
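To make the next-word idea concrete, here is a tiny, self-contained sketch in Python. It uses simple word-pair counts over a made-up corpus instead of the neural networks real tools use; the corpus and function name are purely illustrative and are not any product’s actual code.

```python
# A toy version of the "predictive text" idea described above: learn which
# word tends to follow which, then guess the next word. Real AI writing
# tools do the same thing with neural networks trained on billions of words.
from collections import Counter, defaultdict

corpus = (
    "you are awesome today . "
    "you are awesome today . "
    "you are going to do great things today ."
).split()

# Count how often each word follows each other word (bigram counts).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the toy corpus."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("are"))      # -> 'awesome' (seen twice, vs 'going' once)
print(predict_next("awesome"))  # -> 'today'
```

A chatbot does the same thing at a vastly larger scale, which is also why it can produce fluent text that is confidently wrong: it predicts plausible words, not verified facts.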

Are there different types of AI writing?

There is a ton of AI writing software available to help with efficiency, accuracy, and proficiency in content creation, no matter what type you need. For example:

  • Bloggers can lean heavily on AI writing programs to help save time when building out a content calendar and editing blog posts.
  • Small businesses that don’t have the resources to write monthly blogs, for example, or compose marketing emails, or create social media posts can lean heavily on AI writing to create content for them. Of course, it’s recommended that somebody checks the content for accuracy and quality.
  • YouTubers can prompt an AI program to write a script for their videos. AI writing programs come in all sizes, functions, and price points (yes, you’ll probably have to pay to get the best features) to help anyone who creates content.

How should I use AI writing software?

One of the first concerns that arose with the flood of AI writing software was that students would use it to write their term papers, or professionals would skimp on their writing duties and let the AI do it for them, and so on. These things, unfortunately, are going to happen.

However, AI can still be a tool that’s used for good. Anyone can have an AI chatbot do the heavy lifting in research, ideation, and organization. Let AI software get you started with your research, then take over to verify information, put it into coherent thoughts, then add emotion and storytelling.


Will AI writing replace real writers?

No, AI will not replace real writers. However, it can help the cream of the industry rise to the top. AI writers are, so far, limited in their abilities to create emotional and engaging content. Humans, by nature, are storytellers. We have been since the beginning. Robots, however, are not. They are limited by what they’re programmed to do, and AI bots are programmed to gather information and make an educated guess about what you want to hear.

Take the movie Return of the Jedi, for instance. C-3PO was a protocol droid programmed to speak fluently in 6 million languages. Yes, he told a story to the Ewoks about how Luke and his friends got to Endor, but his telling was flat and factual. He couldn’t include the emotion that came with the story’s events.

AI writing is taking the world by storm because it’s an excellent way to work more efficiently. Does it have its limitations? It absolutely does. Because of these limitations, it’s essential to understand how AI writing works so you can use it effectively.


Artificial intelligence is getting better at writing, and universities should worry about plagiarism


Assistant Professor, Faculty of Education, Brock University


Educational Leader in Residence, Academic Integrity and Assistant Professor, University of Calgary

Disclosure statement

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.


The dramatic rise of online learning during the COVID-19 pandemic has spotlit concerns about the role of technology in exam surveillance — and also in student cheating.

Some universities have reported more cheating during the pandemic, and such concerns are unfolding in a climate where technologies that allow for the automation of writing continue to improve.

Over the past two years, the ability of artificial intelligence to generate writing has leapt forward significantly, particularly with the development of what’s known as the language generator GPT-3. With this, companies such as Google, Microsoft and NVIDIA can now produce “human-like” text.

AI-generated writing has raised the stakes of how universities and schools will gauge what constitutes academic misconduct, such as plagiarism. As scholars with an interest in academic integrity and the intersections of work, society and educators’ labour, we believe that educators and parents should be, at the very least, paying close attention to these significant developments.

AI & academic writing

The use of technology in academic writing is already widespread. For example, many universities already use text-based plagiarism detectors like Turnitin, while students might use Grammarly, a cloud-based writing assistant. Examples of writing support include automatic text generation, extraction, prediction, mining, form-filling, paraphrasing, translation and transcription.

Read more: In an AI world we need to teach students how to work with robot writers

Advancements in AI technology have led to new tools, products and services being offered to writers to improve content and efficiency. As these improve, soon entire articles or essays might be generated and written entirely by artificial intelligence. In schools, the implications of such developments will undoubtedly shape the future of learning, writing and teaching.


Misconduct concerns already widespread

Research has revealed that concerns over academic misconduct are already widespread across higher education institutions in Canada and internationally.

In Canada, there is little data regarding the rates of misconduct. Research published in 2006, based on data from mostly undergraduate students at 11 higher education institutions, found 53 per cent reported having engaged in one or more instances of serious cheating on written work, defined as copying material without footnoting, copying material almost word for word, submitting work done by someone else, fabricating or falsifying a bibliography, or submitting a paper they either bought or got from someone else for free.

Academic misconduct is in all likelihood under-reported across Canadian higher education institutions.

There are different types of violations of academic integrity, including plagiarism, contract cheating (where students hire other people to write their papers) and exam cheating, among others.

Unfortunately, with technology, students can use their ingenuity and entrepreneurialism to cheat. These concerns are also applicable to faculty members, academics and writers in other fields, bringing new concerns surrounding academic integrity and AI such as:

  • If a piece of writing was 49 per cent written by AI, with the remaining 51 per cent written by a human, is this considered original work?
  • What if an essay was 100 per cent written by AI, but a student did some of the coding themselves?
  • What qualifies as “AI assistance” as opposed to “academic cheating”?
  • Do the same rules apply to students as they would to academics and researchers?

We are asking these questions in our own research, and we know that in the face of all this, educators will be required to consider how writing can be effectively assessed or evaluated as these technologies improve.

Augmenting or diminishing integrity?

At the moment, little guidance, policy or oversight is available regarding technology, AI and academic integrity for teachers and educational leaders.

Over the past year, COVID-19 has pushed more students towards online learning — a sphere where teachers may become less familiar with their own students and thus, potentially, their writing.

While it remains impossible to predict the future of these technologies and their implications in education, we can attempt to discern some of the larger trends and trajectories that will impact teaching, learning and research.

Technology & automation in education

A key concern moving forward is the apparent movement towards the increased automation of education where educational technology companies offer commodities such as writing tools as proposed solutions for the various “problems” within education.

An example of this is automated assessment of student work, such as automated grading of student writing. Numerous commercial products already exist for automated grading, though the ethics of these technologies are yet to be fully explored by scholars and educators.

Read more: Online exam monitoring can invade privacy and erode trust at universities

Overall, the traditional landscape surrounding academic integrity and authorship is being rapidly reshaped by technological developments. Such technological developments also spark concerns about a shift of professional control away from educators and ever-increasing new expectations of digital literacy in precarious working environments.

Read more: Precarious employment in education impacts workers, families and students

These complexities, concerns and questions will require further thought and discussion. Educational stakeholders at all levels will be required to respond and rethink definitions as well as values surrounding plagiarism, originality, academic ethics and academic labour in the very near future.

The authors would like to sincerely thank Ryan Morrison, from George Brown College, who provided significant expertise, advice and assistance with the development of this article.



University students recruit AI to write essays for them. Now what?

Teachers need to work harder to get students to write and think for themselves.

Feature As word of students using AI to automatically complete essays continues to spread, some lecturers are beginning to rethink how they should teach their pupils to write.

Writing is a difficult task to do well. The best novelists and poets write furiously, dedicating their lives to mastering their craft. The creative process of stringing together words to communicate thoughts is often viewed as something complex, mysterious, and unmistakably human. No wonder people are fascinated by machines that can write too.

Unlike humans, language models don't procrastinate and create content instantly with a little guidance. All you need to do is type a short description, or prompt, instructing the model on what it needs to produce, and it'll generate a text output in seconds. So it should come as no surprise that students are now beginning to use these tools to complete school work.

Students are the perfect users: They need to write often, in large volumes, and are internet savvy. There are many AI-writing products to choose from that are easy to use and pretty cheap too. All of them lure new users with free trials, promising to make them better writers.


A monthly subscription to the most popular platform, Jasper, costs $40 to generate 35,000 words. Others, like Writesonic or Sudowrite, are cheaper at $10 per month for 30,000 words. Students who think they can use these products and get away with doing zero work, however, will probably be disappointed.

And then there's ChatGPT ...

Although AI can generate text with perfect spelling, great grammar and syntax, the content often isn't that good beyond a few paragraphs. The writing becomes less coherent over time with no logical train of thought to follow. Language models fail to get their facts right – meaning quotes, dates, and ideas are likely false. Students will have to inspect the writing closely and correct mistakes for their work to be convincing.

Prof: AI-assisted essays 'not good'

Scott Graham, associate professor at the Department of Rhetoric & Writing at the University of Texas at Austin, tasked his pupils with writing a 2,200-word essay about a campus-wide issue using AI. Students were free to lightly edit and format their work with the only rule being that most of the essay had to be automatically generated by software.

In an opinion article on Inside Higher Ed, Graham said the AI-assisted essays were "not good," noting that the best of the bunch would have earned a C or C-minus grade. To score higher, students would have had to rewrite more of the essay using their own words to improve it, or craft increasingly narrower and specific prompts to get back more useful content.

"You're not going to be able to push a button or submit a short prompt and generate a ready-to-go essay," he told The Register .

The limits of machine-written text forces humans to carefully read and edit copy. Some people may consider using these tools as cheating, but Graham believes they can help people get better at writing.

Don't waste all your effort on the first draft....

"I think if students can do well with AI writing, it's not actually all that different from them doing well with their own writing. The main skills I teach and assess mostly happen after the initial drafting," he said.

"I think that's where people become really talented writers; it's in the revision and the editing process. So I'm optimistic about [AI] because I think that it will provide a framework for us to be able to teach that revision and editing better.

"Some students have a lot of trouble sometimes generating that first draft. If all the effort goes into getting them to generate that first draft, and then they hit the deadline, that's what they will submit. They don't get a chance to revise, they don't get a chance to edit. If we can use those systems to speed write the first draft, it might really be helpful," he opined.

Whether students can use these tools to get away with doing less work will depend on the assignment. A biochemistry student claimed on Reddit they got an A when they used an AI model to write "five good and bad things about biotech" in an assignment, Vice reported.

AI is more likely to excel at producing simple, generic text across common templates or styles.

Listicles, informal blog posts, or news articles will be easier to imitate than niche academic papers or literary masterpieces. Teachers will need to be thoughtful about the essay questions they set and make sure students' knowledge is really being tested, if they don't want them to cut corners.

Ask a silly question, you'll get a silly answer

"I do think it's important for us to start thinking about the ways that [AI] is changing writing and how we respond to that in our assignments -- that includes some collaboration with AI," Annette Vee, associate professor of English and director of the Composition Program at the University of Pittsburgh, told us.

"The onus now is on writing teachers to figure out how to get to the same kinds of goals that we've always had about using writing to learn. That includes students engaging with ideas, teaching them how to formulate thoughts, how to communicate clearly or creatively. I think all of those things can be done with AI systems, but they'll be done differently."

The line between using AI as a collaborative tool or a way to cheat, however, is blurry. None of the academics teaching writing who spoke to The Register thought students should be banned from using AI software. "Writing is fundamentally shaped by technology," Vee said.

"Students use spell check and grammar check. If I got a paper where a student didn't use these, it stands out. But it used to be, 50 years ago, writing teachers would complain that students didn't know how to spell so they would teach spelling. Now they don't."

Most teachers, however, told us they would support regulating the use of AI-writing software in education. Anna Mills, who teaches students how to write at a community college in the Bay Area, is part of a small group of academics beginning to rally teachers and professional organizations like the Modern Language Association into thinking about introducing new academic rules.

Critical thinking skills

Mills said she could see why students might be tempted to use AI to write their essays, and simply asking teachers to come up with more compelling assessments is not a convincing solution.


"We need policies. These tools are already pretty good now, and they're only going to get better. We need clear guidance on what's acceptable use and what's not. Where is the line between using it to automatically generate email responses and something that violates academic integrity?" she asked The Register .

"Writing is just not outputs. Writing and revising is a process that develops our thinking. If you skip that, you're going to be skipping that practice which students need.

"It's too tempting to use it as a crutch, skip the thinking, and skip the frustrating moments of writing. Some of that is part of the process of going deeper and wrestling with ideas. There is a risk of learning loss if students become dependent and don't develop the writing skills they need."

Mills was particularly concerned about AI reducing the need for people to think for themselves, considering language models carry forward biases in their training data. "Companies have decided what to feed it and we don't know. Now, they are being used to generate all sorts of things from novels to academic papers, and they could influence our thoughts or even modify them. That is an immense power, and it's very dangerous."

Lauren Goodlad, professor of English and Comparative Literature at Rutgers University, agreed. If they parrot what AI comes up with, students may end up more likely to associate Muslims with terrorism or mention conspiracy theories, for example.

Computers are already interfering with and changing the ways we write. Goodlad referred to one incident when Gmail suggested she change the word "importunate" to "impatient" in an email she wrote.

"It's hard to teach students how to use their own writing as a way to develop their critical thinking and as a way to express knowledge. They very badly need the practice of articulating their thoughts in writing and machines can rob them of this. If people really do end up using these things all the way through school, if that were to happen it could be a real loss not just for the writing quality but for the thinking quality of a whole generation," she said.

Rules and regulation

Academic policies tackling AI-assisted writing will be difficult to implement. Opinions are divided on whether sentences generated by machines count as plagiarism or not. There is also the problem of being able to detect writing produced by these tools accurately. Some teachers are alarmed at AI's growing technical capabilities, whilst others believe it's overhyped. Some are embracing the technology more than others.

Marc Watkins, lecturer, and Stephen Monroe, chair and assistant professor of writing and rhetoric, are working on building an AI writing pilot programme with the University of Mississippi's Academic Innovations Group. "As teachers, we are experimenting, not panicking," Monroe told The Register .

"We want to empower our students as writers and thinkers. AI will play a role… This is a time of exciting and frenzied development, but educators move more slowly and deliberately… AI will be able to assist writers at every stage, but students and teachers will need tools that are thoughtfully calibrated."


Teachers are getting together and beginning to think about these tools, Watkins added. "Before we have any policy about the use of language models, we need to have sustained conversations with students, faculty, and administration about what this technology means for teaching and learning."

"But academia doesn't move at the pace of Big Tech. We're taking our time and slowly exploring. I don't think faculty need to be frightened. It's possible that these tools will have a positive impact on student learning and advancing equity, so let's approach AI assistants cautiously, but with an open mind."

Regardless of what policies universities may decide to implement in the future, AI presents academia with an opportunity to improve education now. Teachers will need to adapt to the technology if they want to remain relevant, and incentivise students to learn and think on their own with or without assistance from computers. ®


Can artificial intelligence help for scientific writing?

Michele Salvagno

1 Department of Intensive Care, Erasme Hospital, Université Libre de Bruxelles, 1070 Brussels, Belgium

Fabio Silvio Taccone

Alberto Giovanni Gerli

2 Department of Clinical Sciences and Community Health, Università Degli Studi di Milano, 20122 Milan, Italy

Associated Data

Not applicable.

This paper discusses the use of Artificial Intelligence chatbots in scientific writing. ChatGPT is a chatbot, developed by OpenAI, that uses the Generative Pre-trained Transformer (GPT) language model to understand and respond to natural language inputs. AI chatbots, and ChatGPT in particular, appear to be useful tools in scientific writing, assisting researchers and scientists in organizing material, generating an initial draft and/or proofreading. No publication in the field of critical care medicine has yet been prepared using this approach; however, this will become a possibility in the near future. ChatGPT's output should not be used as a replacement for human judgment and should always be reviewed by experts before being used in any critical decision-making or application. Moreover, several ethical issues arise from using these tools, such as the risk of plagiarism and inaccuracies, as well as a potential imbalance in accessibility between high- and low-income countries if the software becomes a paid service. For this reason, a consensus on how to regulate the use of chatbots in scientific writing will soon be required.

Introduction

A chatbot is an electronic system (generally a software) that simulates conversations by responding to keywords or phrases it recognizes and that can be integrated into various platforms, such as websites, mobile apps, and messaging platforms.

The Chatbot Generative Pre-trained Transformer (ChatGPT), developed by OpenAI, is a type of Artificial Intelligence (AI) software designed to simulate conversations with human users. This chatbot works through algorithms programmed to understand natural language inputs and answer with appropriate responses, either pre-written or newly generated by the AI. ChatGPT is constantly improved with reinforcement techniques, natural language processing, and machine learning, to improve its ability to understand and thoroughly respond to users’ needs. Concretely, you can conversationally ask anything and receive a fast and adequate human-like written reply to your questions or requests such as: (a) write a small text on a given topic; (b) get information on a topic of interest; (c) compose an email or message with a certain tone, specific content, and intended for a particular person; (d) correct the shape of a text or change its wording; (e) solve problems.

As such, this chatbot could also be used in scientific writing [ 1 ]. Indeed, ChatGPT could become a promising and powerful tool for tasks such as automated draft generation, article summarizing, and language translation, which may be useful in academic activities to make writing work faster and easier. However, the use of this tool in scientific writing raises some ethical concerns and should therefore be regulated.

ChatGPT in scientific writing

ChatGPT is already able to help medical researchers and scientists write articles and abstracts, search the literature, summarize data or information, provide suggestions for structure, references, and titles, review language to make the text more readable, or even generate a full draft of a paper [ 2 ]. At this stage, although feasible, no article dealing with critical care medicine has been published using this application. Moreover, whether writing a manuscript dealing with a topic of critical care medicine would be easier or more difficult when compared to other medical disciplines remains unknown. The limits of the possible applications of these tools have yet to be clarified.

ChatGPT can generate text on a wide variety of topics. Recently, an entire letter to the editor was written by ChatGPT, answering the author's questions [ 1 ]. Artificial Intelligence cannot generate new ideas, but it can organize and develop those of the researcher, creating an initial draft. However, this is only a starting point for human-based development of the text, as the automatically generated text is far from being a substitute for the knowledge, creativity, and critical thinking of human experts.

For the literature search, ChatGPT and AI research assistants (such as “elicit.org”) can aid researchers in the review process by finding academic papers, summarizing their conclusions, and highlighting areas of uncertainty (i.e., summarizing the recent randomized clinical trial on the use of extracorporeal membrane oxygenation for refractory cardiac arrest [ 3 ], Fig. 1). This could help physicians, for example, to quickly gain an understanding of the current state of knowledge on a particular topic, and to identify potential gaps that need to be addressed, although the provided summary might be quite generic and may not critically analyze differences among studies.

Fig. 1 Example of a summary of the existing literature on a specific topic

During the writing process, ChatGPT can help generate an initial draft of a scientific paper and even suggest titles. By providing raw information, ChatGPT can also assist in composing the methods section of the study, justifying the sample size and describing data analysis techniques. In the authors’ experience, once the manuscript has been finalized, ChatGPT is extremely effective for the editing process: formatting and language editing, rewriting a particularly complex sentence more clearly, and even summarizing the entire text to compose a suitable abstract are all feasible with this approach; the results are not always satisfactory, but they certainly save time. ChatGPT has recently been asked to prepare regular discharge summaries [ 4 ], but the quality of the content has not been specifically evaluated. To our knowledge, a structured evaluation to determine the quality of the output has not yet been performed. More complicated writing processes, such as systematic reviews and meta-analyses, require human intervention, and ChatGPT might be helpful mainly for editing.
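As a rough sketch of how such an editing request could be sent to a chatbot programmatically (our illustration, not a method described in this paper), the snippet below assumes the `openai` Python package and an OPENAI_API_KEY environment variable; the model name, the helper function, and the example sentence are placeholders, and the output would still need to be reviewed and revised by the human authors.

```python
# A minimal sketch of the editing workflow described above, assuming the
# `openai` package (pip install openai) and an OPENAI_API_KEY environment
# variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(instruction: str, text: str) -> str:
    """Send one editing request and return the chatbot's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whichever model is available
        messages=[
            {"role": "system", "content": "You are a careful scientific editing assistant."},
            {"role": "user", "content": f"{instruction}\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

# Rewrite a complex sentence more clearly (hypothetical example sentence).
print(ask("Rewrite this sentence more clearly without changing its meaning:",
          "The cohort, which had been enrolled in a prospective fashion, was not small in size."))

# Summarize a finalized manuscript into a draft abstract (placeholder text).
manuscript = "..."  # the full manuscript text would go here
print(ask("Summarize the following manuscript into a 250-word structured abstract:",
          manuscript))
```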

A future potential application of AI is in the automatic generation of figures, tables, and other visual elements of the manuscript, which can aid in summarizing data. These elements are important for the clarity and understanding of the manuscript, but they are often time-consuming to create.

Importantly, the writing process of a scientific paper requires, for the moment, the guidance and supervision of human researchers who are experts in the field to ensure the accuracy, coherence, and credibility of the content before it is used or submitted for publication. Chatbots can help but need the researcher’s input, and inadequate inputs would lead to inadequate results. For this reason, chatbots and AI, in general, should not replace human researchers' expertise, judgment, personality, and—in the end—responsibility.

Chatbots vs. human beings

As an AI, ChatGPT has a clear advantage over humans in quickly and deeply comprehending information and connecting evidence to reach conclusions; humans are limited in the range of literature they can read and in their ability to discern the connections between seemingly separate pieces of information.

Moreover, it may be difficult to recognize whether a paper was written by a chatbot or a human being [ 5 ], because chatbots use advanced techniques, such as natural language processing (NLP) and machine learning, to generate text that is similar to human writing. Detecting the author is a complex task and requires thorough critical reading to reach a conclusion. However, a few characteristics might reveal that a paper was written by a chatbot, such as the lack of nuance, style, or originality, which could allow identification by AI output detectors and skeptical human reviewers [ 6 ]. Interestingly, the same writing traits can be found in texts written in a language that is not the writer's native tongue. By focusing on these traits, AI plagiarism detectors might mistakenly identify papers by non-native English speakers as AI-generated text. It would be interesting to test these tools' sensitivity in distinguishing texts written by these two groups.

Nevertheless, the chatbot-generated text might lack the subtle phrasing and word choice that a human author might use to convey a certain meaning or tone. It might also be vaguer and contain inconsistencies that would not be present in a human-written paper. Conversely, if a paper contains many structural and grammatical errors, that may suggest it was written by a human being (although the reverse inference should not be drawn). Lastly, if the paper discusses a topic that is very specific and highly technical, it is less likely that a chatbot could generate such a text, as that would require a deep understanding of the topic and the ability to generate scientific analysis and conclusions.
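To make the notion of such “writing traits” concrete, here is a toy illustration, not a reliable detector and not a tool discussed in this paper: it computes two surface statistics, how uniform the sentence lengths are and how varied the vocabulary is, of the kind AI output detectors and skeptical reviewers might glance at.

```python
# A toy illustration (not a reliable detector) of surface statistics that
# can hint at machine-generated prose: very uniform sentence lengths and a
# low proportion of distinct words. Neither proves anything on its own.
import re
from statistics import mean, pstdev

def surface_stats(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "sentences": len(sentences),
        "mean_sentence_length": round(mean(lengths), 1),
        "sentence_length_spread": round(pstdev(lengths), 1),  # low spread = very uniform
        "vocabulary_ratio": round(len(set(words)) / len(words), 2),  # distinct / total words
    }

sample = ("The study shows important results. The results are important for the field. "
          "The field benefits from the important results.")
print(surface_stats(sample))  # short, repetitive text -> low spread, low vocabulary ratio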

Ethical considerations

Ethical concerns could also limit the use of these chatbots for scientific writing [ 7 ].

The process of acquiring knowledge from others and writing new or review articles involves humans incorporating both what they have learned from others and their own ideas. It is natural for humans to repeat the findings, statements, and written works of others, and thus to come close to committing plagiarism by presenting an idea without proper referencing of the original authors. AI or ChatGPT systems can commit plagiarism under this definition, but they can also be programmed to avoid copying others by rephrasing their work, much as human authors do. However, using programs to reformulate sentences in order to reduce the percentage of plagiarism (i.e., asking the software to rewrite a section written by other authors in different words) cannot be considered acceptable in scientific research. If we define “plagiarism” as merely copying someone else's work, then simply rephrasing what was written, regardless of the method used and without adding anything personal, is a violation of academic integrity. For this reason, journal editors should use programs that detect AI-written content to better detect plagiarism.

Second, the lack of an expert and critical human mind behind scientific work (which is the basis of the scientific method) could lead to a risk of perpetuating or amplifying existing biases and inaccuracies in the data, producing unfair results and hampering scientific growth. Whatever the use of AI, we believe that the involvement of an expert in the field in conducting and writing scientific work remains a necessary cornerstone to guarantee its quality.

Third, the rapid development of AI tools can lead to a significant increase in the number of publications from some researchers without a corresponding increase in their experience in that field. Ethical issues can therefore arise when academic institutions hire professionals based on the number of their publications rather than on their quality.

Fourth, whether ChatGPT should be listed among the authors of a manuscript written using this approach remains inadequately defined. Finally, although ChatGPT and other chatbot services are currently free of charge, there is no guarantee that they will not become paid services in the future. The introduction of charges to access these chatbots could lead to a further disparity between high- and low-income countries (as well as between more junior and older professionals) in scientific production, resulting in an unfair advantage for the former, with unpredictable consequences.

Chatbot as a tool in the ICU

In addition to scientific writing, ChatGPT has the potential to assist physicians in their hospital work by saving time and allowing them to focus on providing patient care. In an Intensive Care Unit (ICU), where constant monitoring of multiple patient information is required, such as treatment progression, laboratory values, microbiological results, and fluid balance calculation, ChatGPT can assist in several ways.

First, ChatGPT can provide general information about recognized ICU protocols: when given a specific request, ChatGPT would be able to generate a response (i.e., the initial management of sepsis) by analyzing the input and identifying patterns in the text data on which it has been trained. Furthermore, ChatGPT has a distinct advantage over humans in quickly gathering, comprehending, and utilizing information. In the future, it may eventually be taught to apply this knowledge in clinical practice by presenting the latest evidence to healthcare professionals, if not directly creating patient-tailored protocols that reflect new findings.

Currently, it could already be used for generating clinical notes, such as daily summaries and discharge summaries, from raw information, which could save time and increase accuracy. Moreover, it could help in communicating with patients and families, translating into other languages, and generating personalized messages with accurate and timely information about the patient's condition; the purpose would not be to replace, but to add to, the doctor-patient (or doctor-family) relationship. Unfortunately, depending on the applicable legislation, privacy issues may arise when providing patients' information.
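As a small illustration of the note-drafting idea (our sketch, not an application evaluated in this paper), the snippet below assembles de-identified, hypothetical ICU data into a prompt that could then be sent to a chatbot; all field names and values are invented, the treating clinician must review whatever is returned, and, as noted above, local privacy rules may restrict what can be sent to an external service.

```python
# A minimal sketch of turning raw, de-identified ICU data into a prompt for
# a draft discharge summary. All field names and values are hypothetical.
patient = {
    "admission_reason": "community-acquired pneumonia with septic shock",
    "icu_days": 6,
    "key_events": ["vasopressors weaned on day 3", "extubated on day 4"],
    "discharge_medications": ["amoxicillin-clavulanate", "enoxaparin prophylaxis"],
}

prompt = (
    "Draft a concise ICU discharge summary for the ward team.\n"
    f"Reason for admission: {patient['admission_reason']}\n"
    f"ICU length of stay: {patient['icu_days']} days\n"
    f"Key events: {'; '.join(patient['key_events'])}\n"
    f"Discharge medications: {', '.join(patient['discharge_medications'])}\n"
    "Use neutral clinical language and flag anything that needs physician review."
)

print(prompt)  # this string would then be sent to the chatbot for drafting
```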

Anecdotally, when provided with clinical information, laboratory values, past medical history, and other relevant data, ChatGPT may already assist physicians in, for example, determining appropriate treatment options; however, it is important to note that the answers provided by the chatbot can be erroneous or dated, as the technology is still in development and may not have the desired level of accuracy, current knowledge, or precision. Nevertheless, it is likely that AI will continue to improve in the future and potentially provide more accurate and reliable results.

Looking further ahead, AI could potentially be trained to automatically extract and understand all the relevant information from electronic health records and, by analyzing patient data such as vital signs, laboratory results, and medical history, help doctors quickly access patient information, provide recommendations for interventions, and allow faster and more comprehensive decisions. Integrating such AI systems, which already exist in several areas of the world, with chatbots could create an interactive system that immediately provides processed information to the treating physician.

Conclusions

ChatGPT is an AI software potentially able to assist in the writing process of a scientific paper: it can help with the literature review, identify research questions, provide an overview of the current state of the field, and assist with tasks such as formatting and language review. Moreover, it can have time-saving applications in clinical practice. As chatbot tools will be widely adopted in the near future, it is crucial to have international academic regulations in place to govern their use in scientific writing and to establish mechanisms for identifying and penalizing unethical usage. Chatbots are simply tools: they can assist human researchers but should not be used as a replacement for human researchers' expertise, judgment, and personality.

We asked ChatGPT: “Could you review this paper and tell us if you (as an author!) agree to submit it to Critical Care Journal?”. The (non-modified) response from ChatGPT was: “As an AI model, I am not able to review or submit papers to journals as I am not a researcher or author. However, I can give you some feedback on the paper you've provided. The paper presents a clear and comprehensive overview of the potential uses of ChatGPT in scientific writing, specifically in the field of critical care medicine. The introduction provides a good background on chatbots and ChatGPT and how it can be used in various tasks. The paper goes on to discuss the potential uses of ChatGPT in scientific writing, such as draft generation, literature research and language review. The paper also touches on the ethical concerns and limitations of using ChatGPT in scientific writing. I would recommend that you review the paper again, check for grammar and spelling errors, and make sure that the citations and references are correctly formatted. It would be also important to have a proper conclusion that summarizes the main ideas of the paper and the future perspectives. It's important to keep in mind that ChatGPT should always be used in combination with the expertise and judgement of human experts and its output should be validated before it is used in clinical practice.”

Acknowledgements

For the writing of this article, we have not received funds or support from OpenAI, which was not involved in any part of the process that led to the preparation of this article. The text, written with the support of ChatGPT by OpenAI, has, however, been modified by the human authors, who assume full responsibility for form and content.

Author contributions

MS, FST, AGG conceived of the study, and participated in its design and coordination and helped to draft the manuscript; MS, FST, AGG participated in the design of the study; MS, FST, AGG wrote, reviewed and made extensive editing of the final content; FST conceived and took the screenshot. All authors have read and agreed to the published version of the manuscript.

This research received no external funding.

Availability of data and materials

Declarations.

The original online version of this article was revised: shortly after initial publication of this article, the authorship and Acknowledgements were updated in line with Springer Nature authorship policies. Large Language Models (LLM), such as ChatGPT, do not currently satisfy our authorship criteria. An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Change history

A Correction to this paper has been published: 10.1186/s13054-023-04390-0

Oxford students told to use AI to help write essays

Undergraduates on economics and management course are advised to critique drafts of their AI-generated work

Oxford students have been told to use artificial intelligence (AI) to help write essays.

Undergraduates studying Economics and Management were advised to use AI tools such as ChatGPT to help generate first drafts of essays before critiquing them.  

Prof Steve New, who teaches a module on technology and operations management, told students that AI “should help you produce a much better essay than you would produce unaided”, if it was used “thoughtfully and critically”.

He told The Telegraph that the tutorial essays play no part in any formal assessment. After writing the essay using AI, the students spend an hour in a small tutorial group discussing their work with their tutor.

Artificial intelligence tools such as ChatGPT are capable of creating unique essays based on their understanding of billions of words and can mimic human conversation and language. Researchers have found that the technology is capable of passing doctors’ exams and writing scientific articles.

Students must fact check AI drafts

Participants on Oxford’s Economics and Management course were instructed to first use AI tools to prepare an initial answer to an essay question, before passing their drafts over to coursemates.

Students must then fact-check their peers' work, since the course notes that the tools are prone to making up facts and references, and may invoke citations that do not actually support the claims made.

Before submitting their work, the course directors asked students to include an “AI statement”, setting out which tools they used and how they used them.

In guidance to students, Prof New said that AI should “increase your ability to think hard about the subjects you discuss, and make you more confident in framing a clear and persuasive argument”.

He added: “But the document that emerges should be yours. You need to write stuff you will stand by. The AI can produce humdrum ‘some say this, some say that…meh’ essays in a fraction of a second; you should be producing compelling, tightly-argued, evidence-based prose that you believe in.”

The course notes that “AI might – without you realising – steer you towards particular intellectual or ideological positions”.

Students were warned that rules on referencing and plagiarism still applied. Any material, such as text or diagrams, from published sources needs to be “fully referenced according to standard bibliographic conventions”.

In a note on the course, it states that tutorial assignments have been adapted for the current academic year because the arrival of generative AI such as ChatGPT, Bard and Claude has “changed the environment in which we work” and, outside the exam room, AI tools “can greatly enhance your ability to generate essays”.

Universities teach how to be ‘AI literate’

The introduction of AI tools for assignments comes after the Russell Group of leading universities announced earlier this year that it would teach students how to use AI without cheating.

Universities signed up to a set of principles designed to ensure that students are “AI literate”, as part of a recognition that AI skills will be essential for future employers.

The Russell Group of 24 institutions said that the principles will “shape institution and course-level work to support the ethical and responsible use of generative AI, new technology and software like ChatGPT”. Staff will also be trained to use AI when they are teaching.

Examples cited by universities of how AI technology can be used on campus include helping students develop writing skills, delivering personalised assessment and feedback, and creating case studies and simulations to help students better understand complex ideas and topics.

In February, Oxford University announced that new AI tools such as ChatGPT have the potential to “change the way we teach and learn in many positive ways”. However, the university told students that the unauthorised use of AI tools in exams and other assessed work “is a serious disciplinary offence”.

Cambridge University said earlier this year that it will allow students to use ChatGPT as long as it is not used to write coursework or exams.

The university said that while the technology “has not been banned” students must still “be the authors of their own work” or risk being investigated for academic misconduct.


Free AI Paragraph Generator

Get your writing process off to a great start by generating fully coherent, compelling paragraphs.


Use cases of Ahrefs’ Paragraph Generator

Content creation. Ahrefs’ Paragraph Generator can be utilized to quickly generate engaging and informative paragraphs for various types of content. Whether it's blog posts, articles, product descriptions, or social media captions, the tool can assist in generating compelling paragraphs, saving time and effort for content creators.

Academic writing. Students and researchers can benefit from Ahrefs’ Paragraph Generator when working on papers, essays, or research articles. By providing the necessary instructions, the tool can generate well-structured paragraphs that present key arguments, evidence, and analysis, aiding in the writing process.

Personal writing and communication. Individuals who need to write emails, personal letters, or other forms of written communication can leverage Ahrefs’ Paragraph Generator. It can help in composing coherent and well-structured paragraphs, ensuring effective expression of thoughts and ideas, while also saving time by providing a starting point for the writing process.

The technology behind Ahrefs’ Paragraph Generator

Ahrefs’ Paragraph Generator uses a language model that learns patterns, grammar, and vocabulary from large amounts of text data – then uses that knowledge to generate human-like text based on a given prompt or input. The generated text combines both the model's learned information and its understanding of the input.
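To make that mechanism concrete, here is a minimal sketch of prompt-based text generation using an open-source language model through Hugging Face's transformers library. It is illustrative only: the model name ("gpt2") and the sampling settings are assumptions for the example, not a description of Ahrefs’ actual implementation.

```python
# Minimal sketch of prompt-based paragraph generation with an open-source
# language model. Illustrative only: the model and sampling settings below
# are assumptions, not a description of Ahrefs' actual stack.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Write a short, informative paragraph about the benefits of morning walks."
outputs = generator(
    prompt,
    max_new_tokens=120,      # cap the length of the generated paragraph
    do_sample=True,          # sample tokens instead of always picking the most likely one
    temperature=0.8,         # higher temperature -> more varied wording
    top_p=0.95,              # nucleus sampling keeps only the most probable tokens
    num_return_sequences=1,  # generate a single candidate paragraph
)

print(outputs[0]["generated_text"])
```

Production tools typically layer prompt templates, much larger instruction-tuned models, and post-processing on top of this basic generate-from-a-prompt loop, but the loop itself is the core mechanism the page describes.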

Other writing tools you may find helpful

Acronym Generator

Looking for an easy way to generate acronyms? Try our AI Acronym Generator today and streamline your workflow.

Grammar Checker

Elevate your writing with our free AI grammar checker. Effortlessly catch grammar, spelling, and punctuation errors, ensuring your content is polished and error-free.

Conclusion Generator

Instantly craft compelling conclusions with this user-friendly tool. Elevate your writing effortlessly.

Emoji Translator

Unlock emotions with our AI translator! Seamlessly convert text to expressive emojis. Communicate with feelings effortlessly.

Lorem Ipsum Generator

Lorem ipsum is a placeholder text used in graphic design, print, and publishing for previewing layouts and visual mockups.

Outline Generator

Input your rough ideas and transition from a blank page to an organized, well-structured outline in minutes.

Paragraph Rewriter

Improve any paragraph's readability and rewrite it to make it sound more human-like with this powerful free tool.

Paraphrasing Tool

Quickly rephrase and reword any text for essays, articles, emails, and more.

Rewording Tool

Swiftly reword and rephrase sentences or paragraphs for posts, emails or articles.

Sentence Rewriter Tool

Enhance the quality and clarity of any sentence and improve its construction with this powerful free tool.

Summarizer Tool

Experience the power of efficiency! Our free summarizer simplifies content, saving time and effort. Summarize with precision, read smarter.


Turnitin's AI writing detection available now

Turnitin’s AI writing detection helps educators identify when AI writing tools such as ChatGPT may have been used in students’ submissions.


Academic integrity in the age of AI writing

Over the years, academic integrity has been both supported and tested by technology. Today, educators are facing a new frontier with AI writing and ChatGPT.

Here at Turnitin, we believe that AI can be a positive force that, when used responsibly, has the potential to support and enhance the learning process. We also believe that equitable access to AI tools is vital, which is why we’re working with students and educators to develop technology that can support and enhance the learning process. However, it is important to acknowledge new challenges alongside the opportunities.

We recognize that for educators, there is a pressing and immediate need to know when and where AI and AI writing tools have been used by students. This is why we are now offering AI detection capabilities for educators in our products.

Gain insights on how much of a student’s submission is authentic, human writing versus AI-generated from ChatGPT or other tools.

Reporting identifies likely AI-written text and provides information educators need to determine their next course of action. We’ve designed our solution with educators, for educators.

AI writing detection complements Turnitin’s similarity checking workflow and is integrated with your LMS, providing a seamless, familiar experience.

Turnitin’s AI writing detection capability, available with Originality, helps educators identify AI-generated content in student work while safeguarding the interests of students.

Turnitin AI Innovation Lab

Welcome to the Turnitin AI Innovation Lab, a hub for new and upcoming product developments in the area of AI writing. You can follow our progress on detection initiatives for AI writing, ChatGPT, and AI-paraphrasing.


Understanding the false positive rate for sentences of our AI writing detection capability

We’d like to share more insight on our sentence level false positive rate and tips on how to use our AI writing detection metrics.


Understanding false positives within our AI writing detection capabilities

We’d like to share some insight on how our AI detection model deals with false positives and what constitutes a false positive.

Have questions? Read these FAQs on Turnitin’s AI writing detection capabilities

Helping solve the AI writing puzzle one piece at a time

AI-generated writing has transformed every aspect of our lives, including the classroom. However, identifying AI writing in students’ submissions is just one piece in the broader, complex, ever-evolving AI writing puzzle.


Research corner

We regularly undertake internal research to ensure our AI writing detector stays accurate and up-to-date. If you are interested in what external testing has revealed about Turnitin's AI-writing detection capabilities, check out the links below. Notably, these studies position Turnitin among the foremost solutions in identifying AI-generated content within academia.

Research shows Turnitin's AI detector exhibits no statistically significant bias against English Language Learners

  • In response to feedback from customers and papers claiming that AI writing detection tools are biased against writers whose first language is not English, Turnitin expanded its false positive evaluation to include writing samples from English Language Learners (ELL), testing nearly 2,000 additional writing samples from ELL writers.
  • Turnitin found that, in documents meeting the 300-word count requirement, ELL writers had a false positive rate of 0.014 and native English writers a rate of 0.013.
  • This means there is no statistically significant bias against non-native English speakers (a rough significance check is sketched below).
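As a rough illustration of why a gap of 0.014 versus 0.013 is not statistically significant at sample sizes of this order, here is a back-of-the-envelope two-proportion z-test. The per-group sample sizes (2,000 each) are assumptions for the sketch; only the "nearly 2,000" ELL figure is quoted above, and the native English sample size is not given here.

```python
# Back-of-the-envelope two-proportion z-test for the reported false positive
# rates (0.014 for ELL writers vs 0.013 for native English writers).
# The per-group sample sizes of 2,000 are assumptions for illustration.
import math

def two_proportion_z_test(p1, n1, p2, n2):
    """Return (z statistic, two-sided p-value) for H0: the two proportions are equal."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail of the normal distribution
    return z, p_value

z, p = two_proportion_z_test(0.014, 2000, 0.013, 2000)
print(f"z = {z:.2f}, p = {p:.2f}")  # roughly z ≈ 0.27, p ≈ 0.78 -> no significant difference
```

With a p-value far above conventional thresholds, a difference this small at these sample sizes is consistent with the "no statistically significant bias" conclusion.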

Turnitin’s AI writing detector identified as the most accurate out of 16 detectors tested

  • Two of the 16 detectors, Turnitin and Copyleaks, correctly identified the AI- or human-generated status of all 126 documents, with no incorrect or uncertain responses.
  • Three AI text detectors – Turnitin, Originality, and Copyleaks – showed very high accuracy across all three sets of documents examined for this study: GPT-3.5 papers, GPT-4 papers, and human-generated papers.
  • Of the top three detectors identified in this investigation, Turnitin achieved very high accuracy in all five previous evaluations; Copyleaks, included in four earlier analyses, performed well in three of them.

Teaching in the age of AI writing

As AI text generators like ChatGPT quickly evolve, our educator resources will, too. Curated and created by our team of veteran educators, our resources help educators meet these new challenges. They are built for professional learning and outline steps educators can take immediately to guide students in maintaining academic integrity when faced with AI writing tools.


A guide to help educators determine which resource is more applicable to their instructional situation: the AI misuse checklist or the AI misuse rubric.


A guide sharing strategies educators can consider to help when confronted with a false positive.


A guide sharing strategies students can consider to help when confronted with a false positive.

The Turnitin Educator Network is a space to meet, discuss and share best practices on academic integrity in the age of AI.

Learn more about AI writing in our blog

Written by experts in the field, educators, and Turnitin professionals, our blog offers resources and thought leadership in support of students, instructors, and administrators. Dive into articles on a variety of important topics, including academic integrity, assessment, and instruction in a world with artificial intelligence.


In this blog post, we’re going to address frequently asked questions about AI writing tool misuse for students. Specifically, what does AI writing tool misuse look like? How can you self-check to make sure you’re using AI writing tools properly?


Stay up to date with the latest blog posts delivered directly to your inbox.

Turnitin AI tools in the news.

Never miss an update or announcement. Visit our media center for recent news coverage and press releases.

Cheat GPT? Turnitin CEO Chris Caren weighs in on combating A.I. plagiarism | CNBC Squawk Box

Since the inception of AI-generated writing, educators and institutions are learning how to navigate it in the classroom. Turnitin’s CEO Chris Caren joins ‘Squawk Box’ to discuss how it is being used in the classroom and how educators can identify AI writing in student submissions.



Some U.S. schools banning AI technology while others embrace it | NBC Nightly News

ChatGPT, an artificial intelligence program, can write college-level essays in seconds. While some school districts are banning it due to cheating concerns, NBC News’ Jacob Ward has details on why some teachers are embracing the technology.


BestColleges

Artificial intelligence, it seems, is taking over the world. At least that's what alarmists would have you believe. The line between fact and fiction continues to blur, and recognizing what is real versus what some bot concocted grows increasingly difficult with each passing week.

ThriveinEDU Podcast

On this episode of the ThriveinEDU podcast, host Rachelle Dené Poth speaks with Turnitin’s Chief Product Officer Annie Chechitelli about her role in the organization, her experience as a parent with school-age children learning to navigate AI writing, and the future of education and original thought.

District Administration

Following the one year anniversary of the public launch of ChatGPT, Chief Product Officer Annie Chechitelli sits down with the publication to discuss Turnitin’s AI writing detection feature and what the educational community has learned.

For press and media inquiries, contact us at [email protected]


Gig workers are writing essays for AI to learn from

  • Companies are hiring highly educated gig workers to write training content for AI models.
  • The shift toward more sophisticated trainers comes as tech giants scramble for new data sources.
  • AI could run out of data to learn from by 2026, one research institute has warned.


As artificial intelligence models run out of data to train themselves on, AI companies are increasingly turning to actual humans to write training content.

For years, companies have used gig workers to help train AI models on simple tasks like photo identification, data annotation, and labelling. But the rapidly advancing technology now requires more highly skilled people to train it.

Companies such as Scale AI and Surge AI are hiring part-timers with graduate degrees to write essays and creative prompts for the bots to gobble up, The New York Times reported. Scale AI, for example, posted a job last year looking for people with Master's degrees or PhDs, who are fluent in either English, Hindi, or Japanese and have professional writing experience in fields like poetry, journalism, and publishing.


Their mission? To help AI bots "become better writers," Scale AI wrote in the posting.

And an army of workers is needed to do this kind of work. Scale AI has as many as tens of thousands of contractors working on its platform at a time, per the Times.

"What really makes the A.I. useful to its users is the human layer of data, and that really needs to be done by smart humans and skilled humans and humans with a particular degree of expertise and a creative bent," Willow Primack, the vice president of data operations at Scale AI, told the New York Times. "We have been focusing on contractors, particularly within North America, as a result."

The shift toward more sophisticated gig trainers comes as tech giants scramble to find new data to train their technology on. That's because the programs learn so incredibly fast that they're already running out of available resources to learn from. The vast trove of online information — everything from scientific papers to news articles to Wikipedia pages — is drying up.

Epoch, an AI research institute, has warned that AI could run out of data by 2026.

So, companies are finding more and more creative ways to make sure their systems never stop learning. Google has considered accessing its customers' data in Google Docs, Sheets, and Slides, while Meta even thought about buying publishing house Simon & Schuster to harvest its book collection, Business Insider previously reported.





About 1 in 5 U.S. teens who’ve heard of ChatGPT have used it for schoolwork


Roughly one-in-five teenagers who have heard of ChatGPT say they have used it to help them do their schoolwork, according to a new Pew Research Center survey of U.S. teens ages 13 to 17. With a majority of teens having heard of ChatGPT, that amounts to 13% of all U.S. teens who have used the generative artificial intelligence (AI) chatbot in their schoolwork.
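As a quick arithmetic check of how those two figures fit together (a back-of-the-envelope sketch using the two-thirds awareness share reported later in this analysis):

```python
# Rough arithmetic check: ~two-thirds of U.S. teens have heard of ChatGPT, and 19% of
# that group say they have used it for schoolwork, so the share of all teens
# who have used it is roughly (2/3) * 0.19 ≈ 0.13, i.e. about 13%.
share_heard = 2 / 3            # "two-thirds of U.S. teens say they have heard of ChatGPT"
share_used_given_heard = 0.19  # "roughly one-in-five ... who have heard of ChatGPT"
print(round(share_heard * share_used_given_heard, 2))  # -> 0.13
```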

A bar chart showing that, among teens who know of ChatGPT, 19% say they’ve used it for schoolwork.

Teens in higher grade levels are particularly likely to have used the chatbot to help them with schoolwork. About one-quarter of 11th and 12th graders who have heard of ChatGPT say they have done this. This share drops to 17% among 9th and 10th graders and 12% among 7th and 8th graders.

There is no significant difference between teen boys and girls who have used ChatGPT in this way.

The introduction of ChatGPT last year has led to much discussion about its role in schools, especially whether schools should integrate the new technology into the classroom or ban it.

Pew Research Center conducted this analysis to understand American teens’ use and understanding of ChatGPT in the school setting.

The Center conducted an online survey of 1,453 U.S. teens from Sept. 26 to Oct. 23, 2023, via Ipsos. Ipsos recruited the teens via their parents, who were part of its KnowledgePanel. The KnowledgePanel is a probability-based web panel recruited primarily through national, random sampling of residential addresses. The survey was weighted to be representative of U.S. teens ages 13 to 17 who live with their parents by age, gender, race and ethnicity, household income, and other categories.

This research was reviewed and approved by an external institutional review board (IRB), Advarra, an independent committee of experts specializing in helping to protect the rights of research participants.

Here are the questions used for this analysis, along with responses, and its methodology.

Teens’ awareness of ChatGPT

Overall, two-thirds of U.S. teens say they have heard of ChatGPT, including 23% who have heard a lot about it. But awareness varies by race and ethnicity, as well as by household income:

A horizontal stacked bar chart showing that most teens have heard of ChatGPT, but awareness varies by race and ethnicity, household income.

  • 72% of White teens say they’ve heard at least a little about ChatGPT, compared with 63% of Hispanic teens and 56% of Black teens.
  • 75% of teens living in households that make $75,000 or more annually have heard of ChatGPT. Much smaller shares in households with incomes between $30,000 and $74,999 (58%) and less than $30,000 (41%) say the same.

Teens who are more aware of ChatGPT are more likely to use it for schoolwork. Roughly a third of teens who have heard a lot about ChatGPT (36%) have used it for schoolwork, far higher than the 10% among those who have heard a little about it.

When do teens think it’s OK for students to use ChatGPT?

For teens, whether it is – or is not – acceptable for students to use ChatGPT depends on what it is being used for.

There is a fair amount of support for using the chatbot to explore a topic. Roughly seven-in-ten teens who have heard of ChatGPT say it’s acceptable to use when they are researching something new, while 13% say it is not acceptable.

A diverging bar chart showing that many teens say it’s acceptable to use ChatGPT for research; few say it’s OK to use it for writing essays.

However, there is much less support for using ChatGPT to do the work itself. Just one-in-five teens who have heard of ChatGPT say it’s acceptable to use it to write essays, while 57% say it is not acceptable. And 39% say it’s acceptable to use ChatGPT to solve math problems, while a similar share of teens (36%) say it’s not acceptable.

Some teens are uncertain about whether it’s acceptable to use ChatGPT for these tasks. Between 18% and 24% say they aren’t sure whether these are acceptable use cases for ChatGPT.

Those who have heard a lot about ChatGPT are more likely than those who have only heard a little about it to say it’s acceptable to use the chatbot to research topics, solve math problems and write essays. For instance, 54% of teens who have heard a lot about ChatGPT say it’s acceptable to use it to solve math problems, compared with 32% among those who have heard a little about it.

Note: Here are the questions used for this analysis, along with responses, and its methodology.



Olivia Sidoti is a research assistant focusing on internet and technology research at Pew Research Center


Jeffrey Gottfried is an associate director focusing on internet and technology research at Pew Research Center




By Kate Knibbs

How One Author Pushed the Limits of AI Copyright


Last October, I received an email with a hell of an opening line: “I fired a nuke at the US Copyright Office this morning.”

The message was from Elisa Shupe, a 60-year-old retired US Army veteran who had just filed a copyright registration for a novel she’d recently self-published. She’d used OpenAI's ChatGPT extensively while writing the book. Her application was an attempt to compel the US Copyright Office to overturn its policy on work made with AI, which generally requires would-be copyright holders to exclude machine-generated elements.

That initial shot didn’t detonate—a week later, the USCO rejected Shupe’s application—but she ultimately won out. The agency changed course earlier this month after Shupe appealed, granting her copyright registration for AI Machinations: Tangled Webs and Typed Words, a work of autofiction self-published on Amazon under the pen name Ellen Rae.

The novel draws from Shupe’s eventful life, including her advocacy for more inclusive gender recognition. Its registration provides a glimpse of how the USCO is grappling with artificial intelligence, especially as more people incorporate AI tools into creative work. It is among the first creative works to receive a copyright for the arrangement of AI-generated text.

“We’re seeing the Copyright Office struggling with where to draw the line,” intellectual property lawyer Erica Van Loon, a partner at Nixon Peabody, says. Shupe’s case highlights some of the nuances of that struggle—because the approval of her registration comes with a significant caveat.

The USCO’s notice granting Shupe copyright registration of her book does not recognize her as the author of the whole text, as is conventional for written works. Instead, she is considered the author of the “selection, coordination, and arrangement of text generated by artificial intelligence.” This means no one can copy the book without permission, but the actual sentences and paragraphs themselves are not copyrighted and could theoretically be rearranged and republished as a different book.

The agency backdated the copyright registration to October 10, the day that Shupe originally attempted to register her work. It declined to comment on this story. “The Copyright Office does not comment on specific copyright registrations or pending applications for registration,” Nora Scheland, an agency spokesperson, says. President Biden’s executive order on AI last fall asked the US Patent and Trademark Office to make recommendations on copyright and AI to the White House in consultation with the Copyright Office, including on the “scope of protection for works produced using AI.”

Although Shupe’s limited copyright registration is notable, she originally asked the USCO to open a more significant path to copyright recognition for AI-generated material. “I seek to copyright the AI-assisted and AI-generated material under an ADA exemption for my many disabilities,” she wrote in her original copyright application. Shupe believes fervently that she was only able to complete her book with the assistance of generative AI tools. She says she has been assessed as 100 percent disabled by the Department of Veterans Affairs and struggles to write due to cognitive impairment related to conditions including bipolar disorder, borderline personality disorder, and a brain stem malformation.


She is proud of the finished work and sees working with a text generator as a different but no less worthwhile method of expressing thoughts. “You don't just hit ‘generate’ and get something worthy of publishing. That may come in the future, but we're still far from it,” she says, noting that she spent upwards of 14 hours a day working on her draft.

After her initial registration was refused, Shupe connected with Jonathan Askin, founder of the Brooklyn Law Incubator and Policy Clinic at Brooklyn Law School, which takes pro bono cases centered on emerging tech and policy questions. Askin and Brooklyn Law student Sofia Vescovo began working on Shupe’s case and filed an appeal with the USCO in January.

The appeal built on Shupe’s argument about her disabilities, saying she should be granted copyright because she used ChatGPT as an assistive technology to communicate, comparing her use of OpenAI’s chatbot to an amputee using a prosthetic leg. The appeal claimed that the USCO “discriminated against her because of her disability.”

The Brooklyn Law appeal also claimed that Shupe should be granted copyright for compiling the book—that is, doing the work of selecting and organizing the snippets of AI-generated text. It provided an exhaustive log of how Shupe prompted ChatGPT, showing the custom commands she created and the edits she made.

It includes a side-by-side comparison of the unedited machine output and the final version of Shupe’s book. On a sentence level, she adjusted almost every line in some way, from changes in word choice to structure. One example describing a character in the novel: “Mark eyed her, a complex mix of concern and annoyance evident in his gaze” becomes “Mark studied her, his gaze reflecting both worry and irritation.”

The appeal cites another recent AI copyright decision about the graphic novel Zarya of the Dawn, which incorporates AI-generated images created with Midjourney. In February 2023, author Kris Kashtanova was granted copyright to the selection and arrangement of AI-generated images in the text, even though they were denied copyright on the specific images themselves.

When the USCO granted Shupe’s request for copyright, it did not address the disability argument put forth but agreed with the appeal’s other argument. Shupe could be considered the author of “selection, coordination, and arrangement of text generated by artificial intelligence,” the agency wrote, backdating her copyright registration to October 10, 2023, the day that Shupe had originally attempted to register her work. That gives her authorship of the work overall, prohibiting unauthorized wholecloth reproduction of the entire book, but not copyright protection over the actual sentences of the novel.

“Overall, we are extremely satisfied,” says Vescovo. The team felt that copyrighting the book’s compilation would provide peace of mind against out-and-out reproduction of the work. “We really wanted to make sure we could get her this protection right now.” The Brooklyn Law team hope Shupe’s approval can serve as a blueprint for other people experimenting with AI text generation who want some copyright protection.

“I’m going to take this as a win for now,” Shupe says, even though she knows that “in some ways, it’s a compromise.” She maintains that the way she uses ChatGPT more closely resembles a collaboration than an automated output, and that she should be able to copyright the actual text of the book.

Matthew Sag, a professor of law and artificial intelligence at Emory University, calls what the USCO granted Shupe “thin copyright”—protection against full-fledged duplication of materials that doesn’t stop someone from rearranging the paragraphs into a different story. “This is the same kind of copyright you would get in an anthology of poetry that you didn’t write,” Sag says.

Erica Van Loon agrees. “It’s hard to imagine something more narrow,” she says.

Shupe is part of a larger movement to make copyright law friendlier to AI and the people who use it. The Copyright Office, which both administers the copyright registration system and advises Congress, the judiciary system, and other governmental agencies on copyright matters, plays a central role in determining how works that use AI are treated.

Although it continues to define authorship as an exclusively human endeavor , the USCO has demonstrated openness to registering works that incorporate AI elements. The USCO said in February that it has granted registration to over 100 works with AI incorporated; a search by WIRED found over 200 copyright registration applications explicitly disclosing AI elements, including books, songs, and visual artworks.

One such application came from Tyler Partin, who works for a chemical manufacturer. He recently registered a tongue-in-cheek song he created about a coworker, but excluded lyrics that he spun up using ChatGPT from his registration. Partin sees the text generator as a tool, but ultimately doesn’t think he should take credit for its output. Instead, he applied only for the music rather than the accompanying words. “I didn’t do that work,” he says.

But there are others who share Shupe’s perspective and agree with her mission, and believe that AI-generated materials should be registrable. Some high-profile attempts to register AI-generated artworks have resulted in USCO refusals, like artist Jason M. Allen’s effort to get his award-winning artwork Théâtre D’opéra Spatial copyrighted last year. AI researcher Stephen Thaler has been on a mission for years to prove that he should be entitled to copyright protections for a work made by the AI system he developed.

Thaler is currently appealing a ruling in the US last year that rebuffed his attempt to obtain copyright. Ryan Abbott, the lead attorney on the case, founded the Artificial Inventor Project , a group of intellectual property lawyers who file test cases seeking legal protections for AI-generated works.

Abbott is a supporter of Shupe’s mission, although he’s not a member of her legal team. He isn’t happy that the copyright registration excludes the AI-generated work itself. “We all see it as a very big problem,” he says.

Shupe and her legal helpers don’t have plans to push the ADA argument further by contesting the USCO’s decision, but it’s an issue that is far from settled. “The best path is probably to lobby Congress for an addition to the ADA statute,” says Askin. “There's a potential for us to draft some legislation or testimony to try to move Congress in that direction.”

Shupe’s qualified victory is still a significant marker in how the Copyright Office is grappling with what it means to be an author in the age of AI. She hopes going public with her efforts will reduce what she sees as a stigma against using AI as a creative tool. Her metaphorical nuke didn’t go off, but she has nonetheless advanced her cause. “I haven't been this excited since I unboxed a Commodore 64 back in the 1980s and, after a lot of noise, connected to a distant computer,” she says.

Updated 17-4-2024, 4:35 pm EDT: President Biden's executive order on AI last year asked the US Patent and Trademark Office to make recommendations on copyright and AI in consultation with the Copyright Office; it did not ask the Copyright Office itself to make the recommendations.

Updated 18-4-2024, 9 am EDT: This piece has been updated to clarify Stephen Thaler's position on AI system copyright.




AI’s ability to write for us—and our inability to resist ‘The Button’—will spark a crisis of meaning in creative work

"Co-Intelligence: Living and Working with AI," by Ethan Mollick.

Soon, every major office application and email client will include a button to help you create a draft of your work. It deserves capital letters: The Button.

When faced with the tyranny of the blank page, people are going to push The Button. It is so much easier to start with something than nothing. Students are going to use it to start essays. Managers will use it to start emails, reports, or documents. Teachers will use it when providing feedback. Scientists will use it to write grants. Concept artists will use it for their first draft. Everyone is going to use The Button.

The implications of having AI write our first drafts (even if we do the work ourselves, which is not a given) are huge. One consequence is that we could lose our creativity and originality. When we use AI to generate our first drafts, we tend to anchor on the first idea that the machine produces, which influences our future work. Even if we rewrite the drafts completely, they will still be tainted by the AI’s influence. We will not be able to explore different perspectives and alternatives, which could lead to better solutions and insights.

Another consequence is that we could reduce the quality and depth of our thinking and reasoning. When we use AI to generate our first drafts, we don’t have to think as hard or as deeply about what we write. We rely on the machine to do the hard work of analysis and synthesis, and we don’t engage in critical and reflective thinking ourselves. We also miss the opportunity to learn from our mistakes and feedback and the chance to develop our own style.

AI can do it

There is already evidence that this is going to be a problem. A recent MIT study found that ChatGPT mostly serves as a substitute for human effort, not a complement to our skills. In fact, the vast majority of participants didn’t even bother editing the AI’s output. This is a problem I see repeatedly when people first use AI: they just paste in the exact question they are asked and let the AI answer it.

A lot of work is time-consuming by design. In a world in which the AI gives an instant, pretty good, near universally accessible shortcut, we’ll soon face a crisis of meaning in creative work of all kinds. This is, in part, because we expect creative work to take careful thought and revision, but also that time often operates as a stand-in for work. Take, for example, the letter of recommendation. Professors are asked to write letters for students all the time, and a good letter takes a long time to write. You have to understand the student and the reason for the letter, decide how to phrase the letter to align with the job requirements and the student’s strengths, and more. The fact that it is time-consuming is somewhat the point. That a professor takes the time to write a good letter is a sign that they support the student’s application. We are setting our time on fire to signal to others that this letter is worth reading.

Or we can push The Button.

And the problem is that the letter the AI generates is going to be good. Not just grammatically correct, but persuasive and insightful to a human reader. It is going to be better than most letters of recommendation that I receive. This means that not only is the quality of the letter no longer a signal of the professor’s interest, but also that you may actually be hurting people by not writing a letter of recommendation with AI, especially if you are not a particularly strong writer. So people now have to consider that the goal of the letter (getting a student a job) is in contrast with the morally correct method of accomplishing the goal (the professor spending a lot of time writing the letter). I am still doing all my letters the old-fashioned way, but I wonder whether that will ultimately do my students a disservice.

Now consider all the other tasks whose final written output is important because it is a signal of the time spent on the task and of the thoughtfulness that went into it—performance reviews, strategic memos, college essays, grant applications, speeches, comments on papers. And so much more.

Reconstructing meaning

Then The Button starts to tempt everyone. Work that was boring to do but meaningful when completed by humans (like performance reviews) becomes easy to outsource—and the apparent quality actually increases. We start to create documents mostly with AI that get sent to AI-powered inboxes, where the recipients respond primarily with AI. Even worse, we still create the reports by hand but realize that no human is actually reading them. This kind of meaningless task, what organizational theorists have called mere ceremony, has always been with us. But AI will make a lot of previously useful tasks meaningless. It will also remove the facade that previously disguised meaningless tasks. We may not have always known if our work mattered in the bigger picture, but in most organizations, the people in your part of the organizational structure felt it did. With AI-generated work sent to other AIs to assess, that sense of meaning disappears.

We are going to need to reconstruct meaning, in art and in the rituals of creative work. This is not an easy process, but we have done it before, many times. Where musicians once made money from records, they now depend on being excellent live performers. When photography made realistic oil paintings obsolete, artists started pushing the bounds of photography as art. When the spreadsheet made adding data by hand unneeded, clerks shifted their responsibilities to bigger-picture issues. This change in meaning is going to have a large effect on work.

Excerpted with permission from Co-Intelligence: Living and Working with AI , by Ethan Mollick, in agreement with Portfolio, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © Ethan Mollick, 2024.

Ethan Mollick is a professor of management at Wharton, specializing in entrepreneurship and innovation. He writes the AI-focused blog One Useful Thing and is the creator of numerous educational games on a variety of topics. 


(Ordinary legislative procedure: first reading)

The European Parliament,

–  having regard to the Commission proposal to Parliament and the Council (COM(2021)0206),

–  having regard to Article 294(2) and Articles 16 and 114 of the Treaty on the Functioning of the European Union, pursuant to which the Commission submitted the proposal to Parliament (C9‑0146/2021),

–  having regard to Article 294(3) of the Treaty on the Functioning of the European Union,

–  having regard to the opinion of the European Central Bank of 29 December 2021 (1) ,

–  having regard to the opinion of the European Economic and Social Committee of 22 September 2021 (2) ,

–  having regard to the provisional agreement approved by the committees responsible under Rule 74(4) of its Rules of Procedure and the undertaking given by the Council representative by letter of 2 February 2024 to approve Parliament’s position, in accordance with Article 294(4) of the Treaty on the Functioning of the European Union,

–  having regard to Rule 59 of its Rules of Procedure,

–  having regard to the joint deliberations of the Committee on Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs under Rule 58 of the Rules of Procedure,

–  having regard to the opinion of the Committee on Industry, Research and Energy, the Committee on Culture and Education, the Committee on Legal Affairs, the Committee on the Environment, Public Health and Food Safety and the Committee on Transport and Tourism,

–  having regard to the report of the Committee on Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs (A9-0188/2023),

1.  Adopts its position at first reading hereinafter set out (3) ;

2.  Calls on the Commission to refer the matter to Parliament again if it replaces, substantially amends or intends to substantially amend its proposal;

3.  Instructs its President to forward its position to the Council, the Commission and the national parliaments.

(Text with EEA relevance)

THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,

Having regard to the Treaty on the Functioning of the European Union, and in particular Articles 16 and 114 thereof,

Having regard to the proposal from the European Commission,

After transmission of the draft legislative act to the national parliaments,

Having regard to the opinion of the European Economic and Social Committee (2) ,

Having regard to the opinion of the European Central Bank (3) ,

Having regard to the opinion of the Committee of the Regions (4) ,

Acting in accordance with the ordinary legislative procedure (5) ,

(1)  The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human-centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.

(2)  This Regulation should be applied in accordance with the values of the Union enshrined in the Charter, facilitating the protection of natural persons, undertakings, democracy, the rule of law and environmental protection, while boosting innovation and employment and making the Union a leader in the uptake of trustworthy AI.

(3)  AI systems can be easily deployed in a large variety of sectors of the economy and many parts of society, including across borders, and can easily circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that AI is trustworthy and safe and is developed and used in accordance with fundamental rights obligations. Diverging national rules may lead to the fragmentation of the internal market and may decrease legal certainty for operators that develop, import or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation, innovation, deployment and the uptake of AI systems and related products and services within the internal market should be prevented by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market on the basis of Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for remote biometric identification for the purpose of law enforcement, of the use of AI systems for risk assessments of natural persons for the purpose of law enforcement and of the use of AI systems of biometric categorisation for the purpose of law enforcement, it is appropriate to base this Regulation, in so far as those specific rules are concerned, on Article 16 TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.

(4)  AI is a fast evolving family of technologies that contributes to a wide array of economic, environmental and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of AI can provide key competitive advantages to undertakings and support socially and environmentally beneficial outcomes, for example in healthcare, agriculture, food safety, education and training, media, sports, culture, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, environmental monitoring, the conservation and restoration of biodiversity and ecosystems and climate change mitigation and adaptation.

(5)  At the same time, depending on the circumstances regarding its specific application, use, and level of technological development, AI may generate risks and cause harm to public interests and fundamental rights that are protected by Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic harm.

(6)   Given the major impact that AI can have on society and the need to build trust, it is vital for AI and its regulatory framework to be developed in accordance with Union values as enshrined in Article 2 of the Treaty on European Union (TEU), the fundamental rights and freedoms enshrined in the Treaties and, pursuant to Article 6 TEU, the Charter. As a pre-requisite, AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.

(7)   In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common rules for high-risk AI systems should be established. Those rules should be consistent with the Charter, non-discriminatory and in line with the Union’s international trade commitments. They should also take into account the European Declaration on Digital Rights and Principles for the Digital Decade and the Ethics guidelines for trustworthy AI of the High-Level Expert Group on Artificial Intelligence (AI HLEG).

(8)  A Union legal framework laying down harmonised rules on AI is therefore needed to foster the development, use and uptake of AI in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, including democracy, the rule of law and environmental protection as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market, the putting into service and the use of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. Those rules should be clear and robust in protecting fundamental rights, supportive of new innovative solutions, enabling a European ecosystem of public and private actors creating AI systems in line with Union values and unlocking the potential of the digital transformation across all regions of the Union. By laying down those rules as well as measures in support of innovation with a particular focus on small and medium enterprises (SMEs), including startups, this Regulation supports the objective of promoting the European human-centric approach to AI and being a global leader in the development of secure, trustworthy and ethical AI as stated by the European Council (6) , and it ensures the protection of ethical principles, as specifically requested by the European Parliament (7) .

(9)   Harmonised rules applicable to the placing on the market, the putting into service and the use of high-risk AI systems should be laid down consistently with Regulation (EC) No 765/2008 of the European Parliament and of the Council (8) , Decision No 768/2008/EC of the European Parliament and of the Council (9) and Regulation (EU) 2019/1020 of the European Parliament and of the Council (10) (‘New Legislative Framework’). The harmonised rules laid down in this Regulation should apply across sectors and, in line with the New Legislative Framework, should be without prejudice to existing Union law, in particular on data protection, consumer protection, fundamental rights, employment, and protection of workers, and product safety, to which this Regulation is complementary.

As a consequence all rights and remedies provided for by such Union law to consumers, and other persons on whom AI systems may have a negative impact, including as regards the compensation of possible damages pursuant to Council Directive 85/374/EEC (11) remain unaffected and fully applicable. Furthermore, in the context of employment and protection of workers, this Regulation should therefore not affect Union law on social policy and national labour law, in accordance with Union law, concerning employment and working conditions, including health and safety at work and the relationship between employers and workers. This Regulation should also not affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States as well as the right to negotiate, to conclude and enforce collective agreements or to take collective action in accordance with national law.

This Regulation should not affect the provisions aiming to improve working conditions in platform work laid down in Directive (EU) 2024/… of the European Parliament and of the Council (12) (13). Moreover, this Regulation aims to strengthen the effectiveness of such existing rights and remedies by establishing specific requirements and obligations, including in respect of transparency, technical documentation and record-keeping of AI systems. Furthermore, the obligations placed on various operators involved in the AI value chain under this Regulation should apply without prejudice to national law, in accordance with Union law, having the effect of limiting the use of certain AI systems where such law falls outside the scope of this Regulation or pursues other legitimate public interest objectives than those pursued by this Regulation. For example, national labour law and law on the protection of minors, namely persons below the age of 18, taking into account the United Nations General Comment No 25 (2021) on children’s rights in relation to the digital environment, insofar as they are not specific to AI systems and pursue other legitimate public interest objectives, should not be affected by this Regulation.

(10)   The fundamental right to the protection of personal data is safeguarded in particular by Regulations (EU) 2016/679 (14) and (EU) 2018/1725 (15) of the European Parliament and of the Council and Directive (EU) 2016/680 of the European Parliament and of the Council (16). Directive 2002/58/EC of the European Parliament and of the Council (17) additionally protects private life and the confidentiality of communications, including by way of providing conditions for any storing of personal and non-personal data in and access from terminal equipment. Those Union legal acts provide the basis for sustainable and responsible data processing, including where data sets include a mix of personal and non-personal data. This Regulation does not seek to affect the application of existing Union law governing the processing of personal data, including the tasks and powers of the independent supervisory authorities competent to monitor compliance with those instruments.

It also does not affect the obligations of providers and deployers of AI systems in their role as data controllers or processors stemming from Union or national law on the protection of personal data in so far as the design, the development or the use of AI systems involves the processing of personal data. It is also appropriate to clarify that data subjects continue to enjoy all the rights and guarantees awarded to them by such Union law, including the rights related to solely automated individual decision-making, including profiling. Harmonised rules for the placing on the market, the putting into service and the use of AI systems established under this Regulation should facilitate the effective implementation and enable the exercise of the data subjects’ rights and other remedies guaranteed under Union law on the protection of personal data and of other fundamental rights.

(11)   This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council (18).

(12)  The notion of ‘AI system’ in this Regulation should be clearly defined and should be closely aligned with the work of international organisations working on AI to ensure legal certainty, facilitate international convergence and wide acceptance, while providing the flexibility to accommodate the rapid technological developments in this field. Moreover, it should be based on key characteristics of AI systems that distinguish them from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations. A key characteristic of AI systems is their capability to infer. This capability to infer refers to the process of obtaining the outputs, such as predictions, content, recommendations or decisions, which can influence physical and virtual environments, and to a capability of AI systems to derive models or algorithms from inputs or data. The techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved. The capacity of an AI system to infer transcends basic data processing by enabling learning, reasoning or modelling. The term ‘machine-based’ refers to the fact that AI systems run on machines.

The reference to explicit or implicit objectives underscores that AI systems can operate according to explicitly defined objectives or to implicit objectives. The objectives of the AI system may be different from the intended purpose of the AI system in a specific context. For the purposes of this Regulation, environments should be understood to be the contexts in which the AI systems operate, whereas outputs generated by the AI system reflect different functions performed by AI systems and include predictions, content, recommendations or decisions. AI systems are designed to operate with varying levels of autonomy, meaning that they have some degree of independence of actions from human involvement and of capabilities to operate without human intervention. The adaptiveness that an AI system could exhibit after deployment refers to self-learning capabilities, allowing the system to change while in use. AI systems can be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded).
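
By way of illustration only, and not as part of the text of the Regulation, the distinction drawn in recital (12) between systems operating on rules defined solely by natural persons and systems with a capability to infer can be sketched roughly in code; all function names, data and thresholds below are hypothetical placeholders.

```python
# Illustrative sketch only: contrasts a system whose behaviour is fixed by
# human-authored rules with one that derives its decision rule from data,
# mirroring the "capability to infer" described in recital (12).
# All data and thresholds here are hypothetical.

def rule_based_decision(transaction_amount: float) -> bool:
    # Rules defined solely by natural persons: the threshold is hard-coded
    # and nothing is ever derived from data.
    return transaction_amount > 10_000


def fit_threshold(flagged: list[float], not_flagged: list[float]) -> float:
    # A minimal "learning" step: the decision threshold is inferred from
    # example data rather than written by hand.
    return (sum(flagged) / len(flagged) + sum(not_flagged) / len(not_flagged)) / 2


def learned_decision(transaction_amount: float, threshold: float) -> bool:
    # The derived model (here just a threshold) produces an output that can
    # influence a real or virtual environment, e.g. blocking a payment.
    return transaction_amount > threshold


if __name__ == "__main__":
    threshold = fit_threshold(flagged=[12_000.0, 15_500.0], not_flagged=[800.0, 2_300.0])
    print(rule_based_decision(11_000.0))          # fixed, human-authored rule
    print(learned_decision(11_000.0, threshold))  # rule inferred from data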

(13)   The notion of ‘deployer’ referred to in this Regulation should be interpreted as any natural or legal person, including a public authority, agency or other body, using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. Depending on the type of AI system, the use of the system may affect persons other than the deployer.

(14)  The notion of ‘biometric data’ used in this Regulation should be interpreted in light of the notion of biometric data as defined in Article 4, point (14) of Regulation (EU) 2016/679, Article 3, point (18) of Regulation (EU) 2018/1725 and Article 3, point (13) of Directive (EU) 2016/680. Biometric data can allow for the authentication, identification or categorisation of natural persons and for the recognition of emotions of natural persons.

(15)   The notion of ‘biometric identification’ referred to in this Regulation should be defined as the automated recognition of physical, physiological and behavioural human features such as the face, eye movement, body shape, voice, prosody, gait, posture, heart rate, blood pressure, odour, keystroke characteristics, for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a reference database, irrespective of whether the individual has given its consent or not. This excludes AI systems intended to be used for biometric verification, which includes authentication, whose sole purpose is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, unlocking a device or having security access to premises.

(16)   The notion of ‘biometric categorisation’ referred to in this Regulation should be defined as assigning natural persons to specific categories on the basis of their biometric data. Such specific categories can relate to aspects such as sex, age, hair colour, eye colour, tattoos, behavioural or personality traits, language, religion, membership of a national minority, sexual or political orientation. This does not include biometric categorisation systems that are a purely ancillary feature intrinsically linked to another commercial service meaning that the feature cannot, for objective technical reasons, be used without the principal service and the integration of that feature or functionality is not a means to circumvent the applicability of the rules of this Regulation. For example, filters categorising facial or body features used on online marketplaces could constitute such an ancillary feature as they can be used only in relation to the principal service which consists in selling a product by allowing the consumer to preview the display of the product on him or herself and helping the consumer to make a purchase decision. Filters used on online social network services which categorise facial or body features to allow users to add or modify pictures or videos could also be considered to be an ancillary feature, as such a filter cannot be used without the principal service of the social network services consisting in the sharing of content online.

(17)  The notion of ‘remote biometric identification system’ referred to in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons without their active involvement, typically at a distance, through the comparison of a person’s biometric data with the biometric data contained in a reference database, irrespective of the particular technology, processes or types of biometric data used. Such remote biometric identification systems are typically used to perceive multiple persons or their behaviour simultaneously in order to facilitate significantly the identification of natural persons without their active involvement. This excludes AI systems intended to be used for biometric verification, which includes authentication, the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, unlocking a device or having security access to premises. That exclusion is justified by the fact that such systems are likely to have a minor impact on fundamental rights of natural persons compared to the remote biometric identification systems which may be used for the processing of the biometric data of a large number of persons without their active involvement. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems concerned by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.
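
Purely as an illustrative aid to the functional definition in recital (17), and not as part of the text of the Regulation, the one-to-many comparison against a reference database and the ‘real-time’ versus ‘post’ distinction might be sketched as follows; the templates, similarity measure, identifiers and delay threshold are hypothetical placeholders rather than a real biometric pipeline.

```python
# Illustrative sketch only: a functional reading of recital (17).
# Identification compares a captured biometric template against a reference
# database of many enrolled persons (one-to-many), whereas verification,
# which is excluded from the notion, compares against a single claimed
# identity (one-to-one). All values here are hypothetical.
from __future__ import annotations

from datetime import datetime, timedelta

REFERENCE_DB = {
    "person_a": [0.10, 0.80, 0.30],
    "person_b": [0.90, 0.20, 0.40],
}


def similarity(a: list[float], b: list[float]) -> float:
    # Toy similarity score: negative Euclidean distance between templates.
    return -sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def identify(probe: list[float], threshold: float = -0.5) -> str | None:
    # One-to-many comparison against the whole reference database.
    best_id, best_score = max(
        ((pid, similarity(probe, template)) for pid, template in REFERENCE_DB.items()),
        key=lambda item: item[1],
    )
    return best_id if best_score >= threshold else None


def is_real_time(captured_at: datetime, compared_at: datetime,
                 max_delay: timedelta = timedelta(seconds=5)) -> bool:
    # 'Real-time' use: capture, comparison and identification occur without
    # a significant delay; 'post' use works on previously captured material.
    return compared_at - captured_at <= max_delay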

(18)   The notion of ‘emotion recognition system’ referred to in this Regulation should be defined as an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. The notion refers to emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement. It does not include physical states, such as pain or fatigue; this refers for example to systems used in detecting the state of fatigue of professional pilots or drivers for the purpose of preventing accidents. It also does not include the mere detection of readily apparent expressions, gestures or movements, unless they are used for identifying or inferring emotions. Those expressions can be basic facial expressions, such as a frown or a smile, or gestures such as the movement of hands, arms or head, or characteristics of a person’s voice, such as a raised voice or whispering.

(19)  For the purposes of this Regulation the notion of ‘publicly accessible space’ should be understood as referring to any physical place that is accessible to an undetermined number of natural persons, and irrespective of whether the place in question is privately or publicly owned, irrespective of the activity for which the place may be used, such as commerce (for instance, shops, restaurants, cafés), services (for instance, banks, professional activities, hospitality), sport (for instance, swimming pools, gyms, stadiums), transport (for instance, bus, metro and railway stations, airports, means of transport), entertainment (for instance, cinemas, theatres, museums, concert and conference halls), or leisure or otherwise (for instance, public roads and squares, parks, forests, playgrounds). A place should be classified as publicly accessible also if, regardless of potential capacity or security restrictions, access is subject to certain predetermined conditions, which can be fulfilled by an undetermined number of persons, such as purchase of a ticket or title of transport, prior registration or having a certain age. In contrast, a place should not be considered to be publicly accessible if access is limited to specific and defined natural persons through either Union or national law directly related to public safety or security or through the clear manifestation of will by the person having the relevant authority on the place. The factual possibility of access alone (such as an unlocked door or an open gate in a fence) does not imply that the place is publicly accessible in the presence of indications or circumstances suggesting the contrary (such as signs prohibiting or restricting access). Company and factory premises, as well as offices and workplaces that are intended to be accessed only by relevant employees and service providers, are places that are not publicly accessible. Publicly accessible spaces should not include prisons or border control. Some other areas may be composed of both not publicly accessible and publicly accessible areas, such as the hallway of a private residential building necessary to access a doctor's office or an airport. Online spaces are not covered either, as they are not physical spaces. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.

(20)   In order to obtain the greatest benefits from AI systems while protecting fundamental rights, health and safety and to enable democratic control, AI literacy should equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems. Those notions may vary with regard to the relevant context and can include understanding the correct application of technical elements during the AI system’s development phase, the measures to be applied during its use, the suitable ways in which to interpret the AI system’s output, and, in the case of affected persons, the knowledge necessary to understand how decisions taken with the assistance of AI will have an impact on them. In the context of the application of this Regulation, AI literacy should provide all relevant actors in the AI value chain with the insights required to ensure the appropriate compliance and its correct enforcement. Furthermore, the wide implementation of AI literacy measures and the introduction of appropriate follow-up actions could contribute to improving working conditions and ultimately sustain the consolidation and innovation path of trustworthy AI in the Union. The European Artificial Intelligence Board (the ‘Board’) should support the Commission to promote AI literacy tools, public awareness and understanding of the benefits, risks, safeguards, rights and obligations in relation to the use of AI systems. In cooperation with the relevant stakeholders, the Commission and the Member States should facilitate the drawing up of voluntary codes of conduct to advance AI literacy among persons dealing with the development, operation and use of AI.

(21)  In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to deployers of AI systems established within the Union.

(22)  In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are not placed on the market, put into service, or used in the Union. This is the case, for example, where an operator established in the Union contracts certain services to an operator established in a third country in relation to an activity to be performed by an AI system that would qualify as high-risk. In those circumstances, the AI system used in a third country by the operator could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and deployers of AI systems that are established in a third country, to the extent the output produced by those systems is intended to be used in the Union.

Nonetheless, to take into account existing arrangements and special needs for future cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of cooperation or international agreements concluded at Union or national level for law enforcement and judicial cooperation with the Union or the Member States, provided that the relevant third country or international organisation provides adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals. Where relevant, this may cover activities of entities entrusted by the third countries to carry out specific tasks in support of such law enforcement and judicial cooperation. Such frameworks for cooperation or agreements have been established bilaterally between Member States and third countries or between the European Union, Europol and other Union agencies and third countries and international organisations. The authorities competent for supervision of the law enforcement and judicial authorities under this Regulation should assess whether those frameworks for cooperation or international agreements include adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals. Recipient Member State authorities and Union institutions, bodies, offices and agencies making use of such outputs in the Union remain accountable to ensure their use complies with Union law. When those international agreements are revised or new ones are concluded in the future, the contracting parties should undertake the utmost effort to align those agreements with the requirements of this Regulation.

(23)  This Regulation should also apply to Union institutions, bodies, offices and agencies when acting as a provider or deployer of an AI system.

(24)   If and insofar as AI systems are placed on the market, put into service, or used with or without modification of such systems for military, defence or national security purposes, those should be excluded from the scope of this Regulation regardless of which type of entity is carrying out those activities, such as whether it is a public or private entity. As regards military and defence purposes, such exclusion is justified both by Article 4(2) TEU and by the specificities of the Member States’ and the common Union defence policy covered by Chapter 2 of Title V TEU that are subject to public international law, which is therefore the more appropriate legal framework for the regulation of AI systems in the context of the use of lethal force and other AI systems in the context of military and defence activities. As regards national security purposes, the exclusion is justified both by the fact that national security remains the sole responsibility of Member States in accordance with Article 4(2) TEU and by the specific nature and operational needs of national security activities and specific national rules applicable to those activities. Nonetheless, if an AI system developed, placed on the market, put into service or used for military, defence or national security purposes is used, temporarily or permanently, outside those purposes, for example, for civilian or humanitarian purposes, law enforcement or public security purposes, such a system would fall within the scope of this Regulation.

In that case, the entity using the system for other than military, defence or national security purposes should ensure compliance of the system with this Regulation, unless the system is already compliant with this Regulation. AI systems placed on the market or put into service for an excluded purpose, namely military, defence or national security, and one or more non-excluded purposes, such as civilian purposes or law enforcement, fall within the scope of this Regulation and providers of those systems should ensure compliance with this Regulation. In those cases, the fact that an AI system may fall within the scope of this Regulation should not affect the possibility of entities carrying out national security, defence and military activities, regardless of the type of entity carrying out those activities, to use AI systems for national security, military and defence purposes, the use of which is excluded from the scope of this Regulation. An AI system placed on the market for civilian or law enforcement purposes which is used with or without modification for military, defence or national security purposes should not fall within the scope of this Regulation, regardless of the type of entity carrying out those activities.

(25)   This Regulation should support innovation, respect freedom of science, and should not undermine research and development activity. It is therefore necessary to exclude from its scope AI systems and models specifically developed and put into service for the sole purpose of scientific research and development. Moreover, it is necessary to ensure that this Regulation does not otherwise affect scientific research and development activity on AI systems or models prior to being placed on the market or put into service. As regards product-oriented research, testing and development activity regarding AI systems or models, the provisions of this Regulation should also not apply prior to those systems and models being put into service or placed on the market. That exclusion is without prejudice to the obligation to comply with this Regulation where an AI system falling into the scope of this Regulation is placed on the market or put into service as a result of such research and development activity and to the application of provisions on regulatory sandboxes and testing in real world conditions. Furthermore, without prejudice to the exclusion regarding AI systems specifically developed and put into service for the sole purpose of scientific research and development, any other AI system that may be used for the conduct of any research and development activity should remain subject to the provisions of this Regulation. In any event, any research and development activity should be carried out in accordance with recognised ethical and professional standards for scientific research and should be conducted in accordance with applicable Union law.

(26)  In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain unacceptable AI practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.

(27)   While the risk-based approach is the basis for a proportionate and effective set of binding rules, it is important to recall the 2019 Ethics guidelines for trustworthy AI developed by the independent AI HLEG appointed by the Commission. In those guidelines, the AI HLEG developed seven non-binding ethical principles for AI which are intended to help ensure that AI is trustworthy and ethically sound. The seven principles include human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. Without prejudice to the legally binding requirements of this Regulation and any other applicable Union law, those guidelines contribute to the design of a coherent, trustworthy and human-centric AI, in line with the Charter and with the values on which the Union is founded. According to the guidelines of the AI HLEG, human agency and oversight means that AI systems are developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans.

Technical robustness and safety means that AI systems are developed and used in a way that allows robustness in the case of problems and resilience against attempts to alter the use or performance of the AI system so as to allow unlawful use by third parties, and minimise unintended harm. Privacy and data governance means that AI systems are developed and used in accordance with privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity. Transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights. Diversity, non-discrimination and fairness means that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law. Social and environmental well-being means that AI systems are developed and used in a sustainable and environmentally friendly manner as well as in a way to benefit all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy. The application of those principles should be translated, when possible, into the design and use of AI models. They should in any case serve as a basis for the drafting of codes of conduct under this Regulation. All stakeholders, including industry, academia, civil society and standardisation organisations, are encouraged to take into account as appropriate the ethical principles for the development of voluntary best practices and standards.

(28)  Aside from the many beneficial uses of AI, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and fundamental rights enshrined in the Charter, including the right to non-discrimination, to data protection and to privacy and the rights of the child.

(29)   AI-enabled manipulative techniques can be used to persuade persons to engage in unwanted behaviours, or to deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making and free choices. The placing on the market, the putting into service or the use of certain AI systems with the objective to or the effect of materially distorting human behaviour, whereby significant harms, in particular having sufficiently important adverse impacts on physical, psychological health or financial interests are likely to occur, are particularly dangerous and should therefore be forbidden. Such AI systems deploy subliminal components such as audio, image, video stimuli that persons cannot perceive as those stimuli are beyond human perception or other manipulative or deceptive techniques that subvert or impair a person’s autonomy, decision-making or free choice in ways that people are not consciously aware of or, where they are aware, they are still deceived or are not able to control or resist. This could be facilitated, for example, by machine-brain interfaces or virtual reality as they allow for a higher degree of control of what stimuli are presented to persons, insofar as they may materially distort their behaviour in a significantly harmful manner. In addition, AI systems may also otherwise exploit the vulnerabilities of a person or a specific group of persons due to their age, disability within the meaning of Directive (EU) 2019/882 of the European Parliament and of the Council (19), or a specific social or economic situation that is likely to make those persons more vulnerable to exploitation such as persons living in extreme poverty, ethnic or religious minorities.

Such AI systems can be placed on the market, put into service or used with the objective to or the effect of materially distorting the behaviour of a person and in a manner that causes or is reasonably likely to cause significant harm to that or another person or groups of persons, including harms that may be accumulated over time and should therefore be prohibited. It may not be possible to assume that there is an intention to distort behaviour where the distortion results from factors external to the AI system which are outside the control of the provider or the deployer, namely factors that may not be reasonably foreseeable and therefore not possible for the provider or the deployer of the AI system to mitigate. In any case, it is not necessary for the provider or the deployer to have the intention to cause significant harm, provided that such harm results from the manipulative or exploitative AI-enabled practices. The prohibitions for such AI practices are complementary to the provisions contained in Directive 2005/29/EC of the European Parliament and of the Council (20), in particular unfair commercial practices leading to economic or financial harms to consumers are prohibited under all circumstances, irrespective of whether they are put in place through AI systems or otherwise. The prohibitions of manipulative and exploitative practices in this Regulation should not affect lawful practices in the context of medical treatment such as psychological treatment of a mental disease or physical rehabilitation, when those practices are carried out in accordance with the applicable law and medical standards, for example explicit consent of the individuals or their legal representatives. In addition, common and legitimate commercial practices, for example in the field of advertising, that comply with the applicable law should not, in themselves, be regarded as constituting harmful manipulative AI practices.

(30)   Biometric categorisation systems that are based on natural persons’ biometric data, such as an individual person’s face or fingerprint, to deduce or infer individuals’ political opinions, trade union membership, religious or philosophical beliefs, race, sex life or sexual orientation should be prohibited. That prohibition should not cover the lawful labelling, filtering or categorisation of biometric data sets acquired in line with Union or national law according to biometric data, such as the sorting of images according to hair colour or eye colour, which can for example be used in the area of law enforcement.

(31)  AI systems providing social scoring of natural persons by public or private actors may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify natural persons or groups thereof on the basis of multiple data points related to their social behaviour in multiple contexts or known, inferred or predicted personal or personality characteristics over certain periods of time. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. AI systems entailing such unacceptable scoring practices and leading to such detrimental or unfavourable outcomes should therefore be prohibited. That prohibition should not affect lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law.

(32)  The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is particularly intrusive to the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. Such possible biased results and discriminatory effects are particularly relevant with regard to age, ethnicity, race, sex or disabilities. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in real-time carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.

(33)  The use of those systems for the purpose of law enforcement should therefore be prohibited, except in exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for certain victims of crime, including missing people; certain threats to the life or to the physical safety of natural persons or of a terrorist attack; and the localisation or identification of perpetrators or suspects of the criminal offences listed in an annex to this Regulation, where those criminal offences are punishable by a custodial sentence or a detention order for a maximum period of at least four years in the Member State concerned in accordance with the law of that Member State. Such a threshold for the custodial sentence or detention order in accordance with national law contributes to ensuring that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems.

Moreover, those criminal offences are based on the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA (21), taking into account that some of those offences are, in practice, likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification is, foreseeably, necessary and proportionate to highly varying degrees for the practical pursuit of the localisation or identification of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. An imminent threat to life or the physical safety of natural persons could also result from a serious disruption of critical infrastructure, as defined in Article 2, point (4) of Directive (EU) 2022/2557 of the European Parliament and of the Council (22), where the disruption or destruction of such critical infrastructure would result in an imminent threat to life or the physical safety of a person, including through serious harm to the provision of basic supplies to the population or to the exercise of the core function of the State. In addition, this Regulation should preserve the ability for law enforcement, border control, immigration or asylum authorities to carry out identity checks in the presence of the person that is concerned in accordance with the conditions set out in Union and national law for such checks. In particular, law enforcement, border control, immigration or asylum authorities should be able to use information systems, in accordance with Union or national law, to identify persons who, during an identity check, either refuse to be identified or are unable to state or prove their identity, without being required by this Regulation to obtain prior authorisation. This could be, for example, a person involved in a crime, being unwilling, or unable due to an accident or a medical condition, to disclose their identity to law enforcement authorities.

(34)  In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be deployed only to confirm the specifically targeted individual’s identity and should be limited to what is strictly necessary concerning the period of time as well as geographic and personal scope, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The use of the real-time remote biometric identification system in publicly accessible spaces should be authorised only if the relevant law enforcement authority has completed a fundamental rights impact assessment and, unless provided otherwise in this Regulation, has registered the system in the database as set out in this Regulation. The reference database of persons should be appropriate for each use case in each of the situations mentioned above.

(35)  Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority whose decision is binding of a Member State. Such authorisation should, in principle, be obtained prior to the use of the AI system with a view to identifying a person or persons. Exceptions to that rule should be allowed in duly justified situations on grounds of urgency, namely, in situations where the need to use the systems concerned is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use of the AI system. In such situations of urgency, the use of the AI system should be restricted to the absolute minimum necessary and should be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations request such authorisation, while providing the reasons for not having been able to request it earlier, without undue delay and, at the latest, within 24 hours. If such an authorisation is rejected, the use of real-time biometric identification systems linked to that authorisation should cease with immediate effect and all the data related to such use should be discarded and deleted. Such data includes input data directly acquired by an AI system in the course of the use of such system as well as the results and outputs of the use linked to that authorisation. It should not include input that is legally acquired in accordance with another Union or national law. In any case, no decision producing an adverse legal effect on a person should be taken based solely on the output of the remote biometric identification system.

(36)   In order to carry out their tasks in accordance with the requirements set out in this Regulation as well as in national rules, the relevant market surveillance authority and the national data protection authority should be notified of each use of the real-time biometric identification system. National market surveillance authorities and the national data protection authorities that have been notified should submit to the Commission an annual report on the use of real-time biometric identification systems.

(37)  Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State concerned has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation. Such national rules should be notified to the Commission within 30 days of their adoption.

(38)  The use of AI systems for real-time remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should be possible only in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for the purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In that context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive (EU) 2016/680. However, the use of real-time remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to that authorisation.

(39)  Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, should continue to comply with all requirements resulting from Article 10 of Directive (EU) 2016/680. For purposes other than law enforcement, Article 9(1) of Regulation (EU) 2016/679 and Article 10(1) of Regulation (EU) 2018/1725 prohibit the processing of biometric data subject to limited exceptions as provided in those Articles. In the application of Article 9(1) of Regulation (EU) 2016/679, the use of remote biometric identification for purposes other than law enforcement has already been subject to prohibition decisions by national data protection authorities.

(40)  In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not bound by the rules laid down in Article 5(1), point (c) to the extent it applies to the use of biometric categorisation systems for activities in the field of police cooperation and judicial cooperation in criminal matters, Article 5(1), points (e) and (f) to the extent they apply to the use of AI systems covered by that provision, Article 5(3) to (8) and Article 26(10) of this Regulation adopted on the basis of Article 16 of the TFEU which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing the forms of judicial cooperation in criminal matters or police cooperation which require compliance with the provisions laid down on the basis of Article 16 TFEU.

(41)  In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and to the TFEU, Denmark is not bound by rules laid down in Article 5(1), point (c) to the extent it applies to the use of biometric categorisation systems for activities in the field of police cooperation and judicial cooperation in criminal matters, Article 5(1), points (e) and (f) to the extent they apply to the use of AI systems covered by that provision, Article 5(3) to (8) and Article 26(10) of this Regulation adopted on the basis of Article 16 of the TFEU, or subject to their application, which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU.

(42)   In line with the presumption of innocence, natural persons in the Union should always be judged on their actual behaviour. Natural persons should never be judged on AI-predicted behaviour based solely on their profiling, personality traits or characteristics, such as nationality, place of birth, place of residence, number of children, level of debt or type of car, without a reasonable suspicion of that person being involved in a criminal activity based on objective verifiable facts and without human assessment thereof. Therefore, risk assessments carried out with regard to natural persons in order to assess the risk of their offending or to predict the occurrence of an actual or potential criminal offence based solely on profiling them or on assessing their personality traits and characteristics should be prohibited. In any case, that prohibition does not refer to or touch upon risk analytics that are not based on the profiling of individuals or on the personality traits and characteristics of individuals, such as AI systems using risk analytics to assess the risk of financial fraud by undertakings on the basis of suspicious transactions or risk analytic tools to predict the likelihood of the localisation of narcotics or illicit goods by customs authorities, for example on the basis of known trafficking routes.

(43)   The placing on the market, the putting into service for this specific purpose, or use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage, should be prohibited because that practice adds to the feeling of mass surveillance and can lead to gross violations of fundamental rights, including the right to privacy.

(44)   There are serious concerns about the scientific basis of AI systems aiming to identify or infer emotions, particularly as expressions of emotions vary considerably across cultures and situations, and even within a single individual. Among the key shortcomings of such systems are the limited reliability, the lack of specificity and the limited generalisability. Therefore, AI systems identifying or inferring emotions or intentions of natural persons on the basis of their biometric data may lead to discriminatory outcomes and can be intrusive to the rights and freedoms of the concerned persons. Considering the imbalance of power in the context of work or education, combined with the intrusive nature of these systems, such systems could lead to detrimental or unfavourable treatment of certain natural persons or whole groups thereof. Therefore, the placing on the market, the putting into service, or the use of AI systems intended to be used to detect the emotional state of individuals in situations related to the workplace and education should be prohibited. That prohibition should not cover AI systems placed on the market strictly for medical or safety reasons, such as systems intended for therapeutical use.

(45)   Practices that are prohibited by Union law, including data protection law, non-discrimination law, consumer protection law, and competition law, should not be affected by this Regulation.

(46)  High-risk AI systems should only be placed on the Union market, put into service or used if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. Based on the New Legislative Framework, as clarified in the Commission notice “The ‘Blue Guide’ on the implementation of EU product rules 2022” (23), the general rule is that Union harmonisation legislation, such as Regulations (EU) 2017/745 (24) and (EU) 2017/746 (25) of the European Parliament and of the Council and Directive 2006/42/EC of the European Parliament and of the Council (26), may be applicable to one product, since the making available or putting into service can take place only when the product complies with all applicable Union harmonisation legislation. To ensure consistency and avoid an unnecessary administrative burden or unnecessary costs, providers of a product that contains one or more high-risk AI systems, to which the requirements of this Regulation or of the Union harmonisation legislation listed in an annex to this Regulation apply, should be flexible with regard to operational decisions on how to ensure compliance of a product that contains one or more AI systems with all applicable requirements of the Union harmonisation legislation in an optimal manner. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade.

(47)  AI systems could have an adverse impact on the health and safety of persons, in particular when such systems operate as safety components. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate.

(48)   The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, right to education, consumer protection, workers’ rights, rights of persons with disabilities, gender equality, intellectual property rights, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, and right to good administration. In addition to those rights, it is important to highlight the fact that children have specific rights as enshrined in Article 24 of the Charter and in the United Nations Convention on the Rights of the Child, further developed in the UNCRC General Comment No 25 as regards the digital environment, both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons.

(49)  As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council (27), Regulation (EU) No 167/2013 of the European Parliament and of the Council (28), Regulation (EU) No 168/2013 of the European Parliament and of the Council (29), Directive 2014/90/EU of the European Parliament and of the Council (30), Directive (EU) 2016/797 of the European Parliament and of the Council (31), Regulation (EU) 2018/858 of the European Parliament and of the Council (32), Regulation (EU) 2018/1139 of the European Parliament and of the Council (33), and Regulation (EU) 2019/2144 of the European Parliament and of the Council (34), it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant delegated or implementing acts on the basis of those acts.

(50)  As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation, it is appropriate to classify them as high-risk under this Regulation if the product concerned undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.

(51)  The classification of an AI system as high-risk pursuant to this Regulation should not necessarily mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered to be high-risk under the criteria established in the relevant Union harmonisation legislation that applies to the product. This is, in particular, the case for Regulations (EU) 2017/745 and (EU) 2017/746, where a third-party conformity assessment is provided for medium-risk and high-risk products.

(52)  As regards stand-alone AI systems, namely high-risk AI systems other than those that are safety components, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence, and they are used in a number of specifically pre-defined areas specified in this Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems that the Commission should be empowered to adopt, via delegated acts, to take into account the rapid pace of technological development, as well as the potential changes in the use of AI systems.

(53)  It is also important to clarify that there may be specific cases in which AI systems referred to in pre-defined areas specified in this Regulation do not lead to a significant risk of harm to the legal interests protected under those areas because they do not materially influence the decision-making or do not harm those interests substantially. For the purposes of this Regulation, an AI system that does not materially influence the outcome of decision-making should be understood to be an AI system that does not have an impact on the substance, and thereby the outcome, of decision-making, whether human or automated. An AI system that does not materially influence the outcome of decision-making could include situations in which one or more of the following conditions are fulfilled. The first such condition should be that the AI system is intended to perform a narrow procedural task, such as an AI system that transforms unstructured data into structured data, an AI system that classifies incoming documents into categories or an AI system that is used to detect duplicates among a large number of applications. Those tasks are of such narrow and limited nature that they pose only limited risks which are not increased through the use in a context that is listed as a high-risk use in an annex to this Regulation. The second condition should be that the task performed by the AI system is intended to improve the result of a previously completed human activity that may be relevant for the purposes of that list. Considering those characteristics, the AI system provides only an additional layer to a human activity with consequently lowered risk. That condition would, for example, apply to AI systems that are intended to improve the language used in previously drafted documents, for instance in relation to professional tone, academic style of language or by aligning text to a certain brand messaging.

The third condition should be that the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns. The risk would be lowered because the use of the AI system follows a previously completed human assessment which it is not meant to replace or influence, without proper human review. Such AI systems include, for instance, those that, given a certain grading pattern of a teacher, can be used to check ex post whether the teacher may have deviated from the grading pattern so as to flag potential inconsistencies or anomalies. The fourth condition should be that the AI system is intended to perform a task that is only preparatory to an assessment relevant for the purposes of the AI systems listed in an annex to this Regulation, thus making the possible impact of the output of the system very low in terms of representing a risk for the assessment to follow. That condition covers, inter alia, smart solutions for file handling, which include various functions from indexing, searching, text and speech processing or linking data to other data sources, or AI systems used for translation of initial documents. In any case, those high-risk AI systems should be considered to pose significant risks of harm to the health, safety or fundamental rights of natural persons if the AI system implies profiling within the meaning of Article 4, point (4) of Regulation (EU) 2016/679 or Article 3, point (4) of Directive (EU) 2016/680 or Article 3, point (5) of Regulation (EU) 2018/1725. To ensure traceability and transparency, a provider who considers that an AI system is not high-risk on the basis of those conditions should draw up documentation of the assessment before that system is placed on the market or put into service and should provide this documentation to national competent authorities upon request. Such a provider should be obliged to register the system in the EU database established under this Regulation. With a view to providing further guidance for the practical implementation of the conditions under which the high-risk AI systems listed in the annex are, on an exceptional basis, non-high-risk, the Commission should, after consulting the Board, provide guidelines specifying that practical implementation, completed by a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.

(54)  As biometric data constitutes a special category of sensitive personal data, it is appropriate to classify as high-risk several critical-use cases of biometric systems, insofar as their use is permitted under relevant Union and national law. Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. The risk of such biased results and discriminatory effects is particularly relevant with regard to age, ethnicity, race, sex or disabilities. Remote biometric identification systems should therefore be classified as high-risk in view of the risks that they pose. Such classification excludes AI systems intended to be used for biometric verification, including authentication, the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, unlocking a device or having secure access to premises. In addition, AI systems intended to be used for biometric categorisation according to sensitive attributes or characteristics protected under Article 9(1) of Regulation (EU) 2016/679 on the basis of biometric data, in so far as these are not prohibited under this Regulation, and emotion recognition systems that are not prohibited under this Regulation, should be classified as high-risk. Biometric systems which are intended to be used solely for the purpose of enabling cybersecurity and personal data protection measures should not be considered to be high-risk systems.

(55)  As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of critical digital infrastructure as listed in Annex I, point (8), to Directive (EU) 2022/2557, road traffic and the supply of water, gas, heating and electricity, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities. Safety components of critical infrastructure, including critical digital infrastructure, are systems used to directly protect the physical integrity of critical infrastructure or health and safety of persons and property but which are not necessary in order for the system to function. The failure or malfunctioning of such components might directly lead to risks to the physical integrity of critical infrastructure and thus to risks to health and safety of persons and property. Components intended to be used solely for cybersecurity purposes should not qualify as safety components. Examples of safety components of such critical infrastructure may include systems for monitoring water pressure or fire alarm controlling systems in cloud computing centres.

(56)  The deployment of AI systems in education is important to promote high-quality digital education and training and to allow all learners and teachers to acquire and share the necessary digital skills and competences, including media literacy, and critical thinking, to take an active part in the economy, society, and in democratic processes. However, AI systems used in education or vocational training, in particular for determining access or admission, for assigning persons to educational and vocational training institutions or programmes at all levels, for evaluating learning outcomes of persons, for assessing the appropriate level of education for an individual and materially influencing the level of education and training that individuals will receive or will be able to access or for monitoring and detecting prohibited behaviour of students during tests should be classified as high-risk AI systems, since they may determine the educational and professional course of a person's life and therefore affect that person's ability to secure a livelihood. When improperly designed and used, such systems may be particularly intrusive and may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation.

(57)  AI systems used in employment, workers management and access to self-employment, in particular for the recruitment and selection of persons, for making decisions affecting terms of the work-related relationship, the promotion and termination of work-related contractual relationships, for allocating tasks on the basis of individual behaviour, personal traits or characteristics and for monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may have an appreciable impact on future career prospects, livelihoods of those persons and workers' rights. Relevant work-related contractual relationships should, in a meaningful manner, involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of such persons may also undermine their fundamental rights to data protection and privacy.

(58)  Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one's standard of living. In particular, natural persons applying for or receiving essential public assistance benefits and services from public authorities, namely healthcare services, social security benefits, social services providing protection in cases such as maternity, illness, industrial accidents, dependency or old age and loss of employment and social and housing assistance, are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be granted, denied, reduced, revoked or reclaimed by authorities, including whether beneficiaries are legitimately entitled to such benefits or services, those systems may have a significant impact on persons' livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy and should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons.

In addition, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons' access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for those purposes may lead to discrimination between persons or groups and may perpetuate historical patterns of discrimination, such as that based on racial or ethnic origins, gender, disabilities, age or sexual orientation, or may create new forms of discriminatory impacts. However, AI systems provided for by Union law for the purpose of detecting fraud in the offering of financial services and for prudential purposes to calculate credit institutions' and insurance undertakings' capital requirements should not be considered to be high-risk under this Regulation. Moreover, AI systems intended to be used for risk assessment and pricing in relation to natural persons for health and life insurance can also have a significant impact on persons' livelihood and if not duly designed, developed and used, can infringe their fundamental rights and can lead to serious consequences for people's life and health, including financial exclusion and discrimination. Finally, AI systems used to evaluate and classify emergency calls by natural persons or to dispatch or establish priority in the dispatching of emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems, should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.

(59)  Given their role and responsibility, actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person's liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high-quality data, does not meet adequate requirements in terms of its performance, its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk, insofar as their use is permitted under relevant Union and national law, a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency are particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress.

In view of the nature of the activities and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by or on behalf of law enforcement authorities or by Union bodies, offices, or agencies in support of law enforcement authorities for assessing the risk of a natural person to become a victim of criminal offences, as polygraphs and similar tools, for the evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences, and, insofar as not prohibited under this Regulation, for assessing the risk of a natural person offending or reoffending not solely on the basis of the profiling of natural persons or the assessment of personality traits and characteristics or the past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities as well as by financial intelligence units carrying out administrative tasks analysing information pursuant to Union anti-money laundering law should not be classified as high-risk AI systems used by law enforcement authorities for the purpose of prevention, detection, investigation and prosecution of criminal offences. The use of AI tools by law enforcement authorities should not become a factor of inequality or exclusion. The impact of the use of AI tools on the defence rights of suspects should not be ignored, in particular the difficulty in obtaining meaningful information on the functioning of those systems and the resulting difficulty in challenging their results in court, in particular by natural persons under investigation.

(60)  AI systems used in migration, asylum and border control management affect people who are often in a particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee respect for the fundamental rights of the affected persons, in particular their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk, insofar as their use is permitted under relevant Union and national law, AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies charged with tasks in the fields of migration, asylum and border control management, as polygraphs and similar tools, for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum, for assisting competent public authorities for the examination, including related assessment of the reliability of evidence, of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status, and for the purpose of detecting, recognising or identifying natural persons in the context of migration, asylum and border control management, with the exception of verification of travel documents.

AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by Regulation (EC) No 810/2009 of the European Parliament and of the Council (35), Directive 2013/32/EU of the European Parliament and of the Council (36), and other relevant Union law. AI systems in migration, asylum and border control management should in no circumstances be used by Member States or Union institutions, bodies, offices or agencies as a means to circumvent their international obligations under the UN Convention relating to the Status of Refugees done at Geneva on 28 July 1951 as amended by the Protocol of 31 January 1967. Nor should they be used in any way to infringe on the principle of non-refoulement, or to deny safe and effective legal avenues into the territory of the Union, including the right to international protection.

(61)  Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, the rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to be used by a judicial authority or on its behalf to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts. AI systems intended to be used by alternative dispute resolution bodies for those purposes should also be considered to be high-risk when the outcomes of the alternative dispute resolution proceedings produce legal effects for the parties. The use of AI tools can support the decision-making power of judges or judicial independence, but should not replace it: the final decision-making must remain a human-driven activity. The classification of AI systems as high-risk should not, however, extend to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel or administrative tasks.

(62)   Without prejudice to the rules provided for in Regulation (EU) 2024/… of the European Parliament and of the Council (37) (38) , and in order to address the risks of undue external interference to the right to vote enshrined in Article 39 of the Charter, and of adverse effects on democracy and the rule of law, AI systems intended to be used to influence the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda should be classified as high-risk AI systems with the exception of AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistical point of view.

(63)  The fact that an AI system is classified as a high-risk AI system under this Regulation should not be interpreted as indicating that the use of the system is lawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant, unless it is specifically otherwise provided for in this Regulation.

(64)  To mitigate the risks from high-risk AI systems placed on the market or put into service and to ensure a high level of trustworthiness, certain mandatory requirements should apply to high-risk AI systems, taking into account the intended purpose and the context of use of the AI system and according to the risk-management system to be established by the provider. The measures adopted by the providers to comply with the mandatory requirements of this Regulation should take into account the generally acknowledged state of the art on AI, be proportionate and effective to meet the objectives of this Regulation. Based on the New Legislative Framework, as clarified in Commission notice “The ‘Blue Guide’ on the implementation of EU product rules 2022”, the general rule is that more than one legal act of Union harmonisation legislation may be applicable to one product, since the making available or putting into service can take place only when the product complies with all applicable Union harmonisation legislation. The hazards of AI systems covered by the requirements of this Regulation concern different aspects than the existing Union harmonisation acts and therefore the requirements of this Regulation would complement the existing body of the Union harmonisation acts. For example, machinery or medical device products incorporating an AI system might present risks not addressed by the essential health and safety requirements set out in the relevant Union harmonised legislation, as that sectoral law does not deal with risks specific to AI systems.

This calls for a simultaneous and complementary application of the various legislative acts. To ensure consistency and to avoid an unnecessary administrative burden and unnecessary costs, providers of a product that contains one or more high-risk AI systems, to which the requirements of this Regulation and of the Union harmonisation legislation based on the New Legislative Framework listed in an annex to this Regulation apply, should have flexibility with regard to operational decisions on how to ensure compliance of a product that contains one or more AI systems with all the applicable requirements of that Union harmonised legislation in an optimal manner. That flexibility could mean, for example, a decision by the provider to integrate a part of the necessary testing and reporting processes, information and documentation required under this Regulation into already existing documentation and procedures required under existing Union harmonisation legislation based on the New Legislative Framework listed in an annex to this Regulation. This should not, in any way, undermine the obligation of the provider to comply with all the applicable requirements.

(65)  The risk-management system should consist of a continuous, iterative process that is planned and run throughout the entire lifecycle of a high-risk AI system. This process should be aimed at identifying and mitigating the relevant risks of AI systems on health, safety and fundamental rights. The risk-management system should be regularly reviewed and updated to ensure its continuing effectiveness, as well as justification and documentation of any significant decisions and actions taken subject to this Regulation. This process should ensure that the provider identifies risks or adverse impacts and implements mitigation measures for the known and reasonably foreseeable risks of AI systems to the health, safety and fundamental rights in light of its intended purpose and reasonably foreseeable misuse, including the possible risks arising from the interaction between the AI system and the environment within which it operates. The risk-management system should adopt the most appropriate risk-management measures in the light of the state of the art in AI. When identifying the most appropriate risk-management measures, the provider should document and explain the choices made and, when relevant, involve experts and external stakeholders. In identifying the reasonably foreseeable misuse of high-risk AI systems, the provider should cover uses of AI systems which, while not directly covered by the intended purpose and provided for in the instructions for use, may nevertheless be reasonably expected to result from readily predictable human behaviour in the context of the specific characteristics and use of a particular AI system.

Any known or foreseeable circumstances related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights, should be included in the instructions for use provided by the provider. This is to ensure that the deployer is aware of them and takes them into account when using the high-risk AI system. Identifying and implementing risk mitigation measures for foreseeable misuse under this Regulation should not require specific additional training measures for the high-risk AI system by the provider to address them. Providers are, however, encouraged to consider such additional training measures to mitigate reasonably foreseeable misuse as necessary and appropriate.

(66)  Requirements should apply to high-risk AI systems as regards risk management, the quality and relevance of data sets used, technical documentation and record-keeping, transparency and the provision of information to deployers, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, and no other less trade-restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.

(67)  High-quality data and access to high-quality data plays a vital role in providing structure and in ensuring the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensuring that the high-risk AI system performs as intended and safely and that it does not become a source of discrimination prohibited by Union law. High-quality data sets for training, validation and testing require the implementation of appropriate data governance and management practices. Data sets for training, validation and testing, including the labels, should be relevant, sufficiently representative, and to the best extent possible free of errors and complete in view of the intended purpose of the system. In order to facilitate compliance with Union data protection law, such as Regulation (EU) 2016/679, data governance and management practices should include, in the case of personal data, transparency about the original purpose of the data collection. The data sets should also have the appropriate statistical properties, including as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used, with specific attention to the mitigation of possible biases in the data sets, that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations (feedback loops). Biases can for example be inherent in underlying data sets, especially when historical data is being used, or generated when the systems are implemented in real world settings.

Results provided by AI systems could be influenced by such inherent biases that are inclined to gradually increase and thereby perpetuate and amplify existing discrimination, in particular for vulnerable persons belonging to certain groups, including racial or ethnic groups. The requirement for the data sets to be to the best extent possible complete and free of errors should not affect the use of privacy-preserving techniques in the context of the development and testing of AI systems. In particular, data sets should take into account, to the extent required by their intended purpose, the features, characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the AI system is intended to be used. The requirements related to data governance can be complied with by having recourse to third parties that offer certified compliance services including verification of data governance, data set integrity, and data training, validation and testing practices, as far as compliance with the data requirements of this Regulation is ensured.
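Neither this recital nor the Regulation prescribes a specific statistical test for representativeness or bias. Purely as an illustrative, non-normative sketch of the kind of check a provider might run as part of such data governance practices, the Python fragment below compares the share of each group in a training set against assumed reference shares for the intended context of use; the record structure, the attribute name and the 5-percentage-point threshold are all hypothetical.

```python
# Illustrative, non-normative sketch: compare group shares in a data set with
# assumed reference shares to flag potential under-representation.
from collections import Counter

def representation_gaps(records, attribute, reference_shares):
    """Return observed-minus-expected share for each group of `attribute`."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gaps[group] = observed - expected
    return gaps

# Hypothetical usage: flag groups under-represented by more than 5 percentage points.
training_records = [{"sex": "female"}, {"sex": "male"}, {"sex": "male"}]
gaps = representation_gaps(training_records, "sex", {"female": 0.5, "male": 0.5})
underrepresented = [g for g, gap in gaps.items() if gap < -0.05]
print(gaps, underrepresented)
```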

(68)  For the development and assessment of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as European Digital Innovation Hubs, testing and experimentation facilities and researchers, should be able to access and use high-quality data sets within the fields of activities of those actors which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high-quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of AI algorithms on those data sets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems.

(69)   The right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle of the AI system. In this regard, the principles of data minimisation and data protection by design and by default, as set out in Union data protection law, are applicable when personal data are processed. Measures taken by providers to ensure compliance with those principles may include not only anonymisation and encryption, but also the use of technology that permits algorithms to be brought to the data and allows training of AI systems without the transmission between parties or copying of the raw or structured data themselves, without prejudice to the requirements on data governance provided for in this Regulation.

(70)   In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should, exceptionally, to the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons and following the application of all applicable conditions laid down under this Regulation in addition to the conditions laid down in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, be able to process also special categories of personal data, as a matter of substantial public interest within the meaning of Article 9(2), point (g) of Regulation (EU) 2016/679 and Article 10(2), point (g) of Regulation (EU) 2018/1725.

(71)  Having comprehensible information on how high-risk AI systems have been developed and how they perform throughout their lifetime is essential to enable traceability of those systems, verify compliance with the requirements under this Regulation, as well as monitoring of their operations and post-market monitoring. This requires keeping records and the availability of technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements and facilitate post-market monitoring. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk-management system, and should be drawn up in a clear and comprehensive form. The technical documentation should be kept up to date, appropriately, throughout the lifetime of the AI system. Furthermore, high-risk AI systems should technically allow for the automatic recording of events, by means of logs, over the duration of the lifetime of the system.
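The recital does not mandate any particular logging mechanism or schema. Purely as an illustrative, non-normative sketch of what automatic recording of events over a system's lifetime could look like in practice, the Python fragment below appends one timestamped record per use of the system to an append-only log file; the wrapper class, field names and file path are hypothetical.

```python
# Illustrative, non-normative sketch: append one log record per use of an AI
# component, to support traceability and post-market monitoring.
import json, time, uuid

class LoggedSystem:
    def __init__(self, model, log_path="ai_system_events.jsonl"):
        self.model = model            # any callable producing an output
        self.log_path = log_path

    def predict(self, features):
        output = self.model(features)
        event = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),          # when the system was used
            "input_reference": repr(features), # a reference, not raw personal data
            "output": output,
        }
        with open(self.log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(event) + "\n")
        return output

# Hypothetical usage with a trivial stand-in model.
system = LoggedSystem(model=lambda x: sum(x) > 1.0)
system.predict([0.4, 0.9])
```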

(72)  To address concerns related to opacity and complexity of certain AI systems and help deployers to fulfil their obligations under this Regulation, transparency should be required for high-risk AI systems before they are placed on the market or put into service. High-risk AI systems should be designed in a manner to enable deployers to understand how the AI system works, evaluate its functionality, and comprehend its strengths and limitations. High-risk AI systems should be accompanied by appropriate information in the form of instructions for use. Such information should include the characteristics, capabilities and limitations of performance of the AI system. Such elements would cover information on possible known or foreseeable circumstances related to the use of the high-risk AI system, including deployer action that may influence system behaviour and performance, under which the AI system can lead to risks to health, safety, and fundamental rights, on the changes that have been pre-determined and assessed for conformity by the provider and on the relevant human oversight measures, including the measures to facilitate the interpretation of the outputs of the AI system by the deployers. Transparency, including the accompanying instructions for use, should assist deployers in the use of the system and support informed decision making by them. Among others, deployers should be in a better position to make the correct choice of the system they intend to use in the light of the obligations applicable to them, be educated about the intended and precluded uses, and use the AI system correctly and as appropriate. In order to enhance legibility and accessibility of the information included in the instructions for use, where appropriate, illustrative examples, for instance on the limitations and on the intended and precluded uses of the AI system, should be included. Providers should ensure that all documentation, including the instructions for use, contains meaningful, comprehensive, accessible and understandable information, taking into account the needs and foreseeable knowledge of the target deployers. Instructions for use should be made available in a language which can be easily understood by target deployers, as determined by the Member State concerned.

(73)  High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning, ensure that they are used as intended and that their impacts are addressed over the system's lifecycle. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role. It is also essential, as appropriate, to ensure that high-risk AI systems include mechanisms to guide and inform a natural person to whom human oversight has been assigned to make informed decisions if, when and how to intervene in order to avoid negative consequences or risks, or stop the system if it does not perform as intended. Considering the significant consequences for persons in the case of an incorrect match by certain biometric identification systems, it is appropriate to provide for an enhanced human oversight requirement for those systems so that no action or decision may be taken by the deployer on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons. Those persons could be from one or more entities and include the person operating or using the system. This requirement should not pose unnecessary burden or delays and it could be sufficient that the separate verifications by the different persons are automatically recorded in the logs generated by the system. Given the specificities of the areas of law enforcement, migration, border control and asylum, this requirement should not apply where Union or national law considers the application of that requirement to be disproportionate.

(74)  High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity, in the light of their intended purpose and in accordance with the generally acknowledged state of the art. The Commission and relevant organisations and stakeholders are encouraged to take due consideration of the mitigation of risks and the negative impacts of the AI system. The expected level of performance metrics should be declared in the accompanying instructions for use. Providers are urged to communicate that information to deployers in a clear and easily understandable way, free of misunderstandings or misleading statements. Union law on legal metrology, including Directives 2014/31/EU (39) and 2014/32/EU (40) of the European Parliament and of the Council, aims to ensure the accuracy of measurements and to help the transparency and fairness of commercial transactions. In that context, in cooperation with relevant stakeholders and organisations, such as metrology and benchmarking authorities, the Commission should encourage, as appropriate, the development of benchmarks and measurement methodologies for AI systems. In doing so, the Commission should take note of and collaborate with international partners working on metrology and relevant measurement indicators relating to AI.

(75)  Technical robustness is a key requirement for high-risk AI systems. They should be resilient in relation to harmful or otherwise undesirable behaviour that may result from limitations within the systems or the environment in which the systems operate (e.g. errors, faults, inconsistencies, unexpected situations). Therefore, technical and organisational measures should be taken to ensure robustness of high-risk AI systems, for example by designing and developing appropriate technical solutions to prevent or minimise harmful or otherwise undesirable behaviour. Those technical solutions may include, for instance, mechanisms enabling the system to safely interrupt its operation (fail-safe plans) in the presence of certain anomalies or when operation takes place outside certain predetermined boundaries. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system.
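The recital leaves the design of such fail-safe mechanisms entirely to providers. Purely as an illustrative, non-normative sketch, the Python fragment below shows one way a wrapper could interrupt operation when inputs or output confidence fall outside predetermined boundaries instead of returning a potentially unsafe result; the boundary values, exception type and model interface are all hypothetical.

```python
# Illustrative, non-normative sketch: a "fail-safe" wrapper that interrupts
# operation outside a predetermined safe envelope.
class OutOfBoundsError(RuntimeError):
    """Raised when operation leaves the predetermined safe envelope."""

def failsafe_predict(model, features, input_range=(0.0, 1.0), min_confidence=0.7):
    # Check that every input feature lies within the predetermined boundaries.
    lo, hi = input_range
    if any(not (lo <= x <= hi) for x in features):
        raise OutOfBoundsError("Input outside the validated operating range")
    label, confidence = model(features)
    # Interrupt rather than act on a low-confidence (anomalous) output.
    if confidence < min_confidence:
        raise OutOfBoundsError("Confidence below the predetermined threshold")
    return label

# Hypothetical usage with a trivial stand-in model returning (label, confidence).
try:
    failsafe_predict(lambda xs: ("accept", 0.55), [0.2, 0.8])
except OutOfBoundsError as exc:
    print("Operation safely interrupted:", exc)
```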

(76)  Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system's vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks or membership inference), or exploit vulnerabilities in the AI system's digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures, such as security controls, should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.

(77)  Without prejudice to the requirements related to robustness and accuracy set out in this Regulation, high-risk AI systems which fall within the scope of Regulation (EU) 2024/… of the European Parliament and of the Council (41) (42), in accordance with Article 8 of that Regulation may demonstrate compliance with the cybersecurity requirements of this Regulation by fulfilling the essential cybersecurity requirements set out in Article 10 of, and Annex I to, Regulation (EU) 2024/… ++. When high-risk AI systems fulfil the essential requirements of Regulation (EU) 2024/… ++, they should be deemed compliant with the cybersecurity requirements set out in this Regulation in so far as the achievement of those requirements is demonstrated in the EU declaration of conformity or parts thereof issued under Regulation (EU) 2024/… ++. For this purpose, the assessment of the cybersecurity risks associated with a product with digital elements classified as a high-risk AI system according to this Regulation, carried out under Regulation (EU) 2024/… ++, should consider risks to the cyber resilience of an AI system as regards attempts by unauthorised third parties to alter its use, behaviour or performance, including AI specific vulnerabilities such as data poisoning or adversarial attacks, as well as, as relevant, risks to fundamental rights as required by this Regulation.

(78)  The conformity assessment procedure provided by this Regulation should apply in relation to the essential cybersecurity requirements of a product with digital elements covered by Regulation (EU) 2024/… (43) and classified as a high-risk AI system under this Regulation. However, this rule should not result in reducing the necessary level of assurance for critical products with digital elements covered by Regulation (EU) 2024/… + . Therefore, by way of derogation from this rule, high-risk AI systems that fall within the scope of this Regulation and are also qualified as important and critical products with digital elements pursuant to Regulation (EU) 2024/… + and to which the conformity assessment procedure based on internal control set out in an annex to this Regulation applies, are subject to the conformity assessment provisions of Regulation (EU) 2024/… + insofar as the essential cybersecurity requirements of that Regulation are concerned. In this case, for all the other aspects covered by this Regulation the respective provisions on conformity assessment based on internal control set out in an annex to this Regulation should apply. Building on the knowledge and expertise of ENISA on cybersecurity policy and the tasks assigned to ENISA under Regulation (EU) 2019/881, the Commission should cooperate with ENISA on issues related to cybersecurity of AI systems.

(79)  It is appropriate that a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or the putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system.

(80)   As signatories to the United Nations Convention on the Rights of Persons with Disabilities, the Union and the Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality, to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems, and to ensure respect for privacy for persons with disabilities. Given the growing importance and use of AI systems, the application of universal design principles to all new technologies and services should ensure full and equal access for everyone potentially affected by or using AI technologies, including persons with disabilities, in a way that takes full account of their inherent dignity and diversity. It is therefore essential that providers ensure full compliance with accessibility requirements, including Directive (EU) 2016/2102 of the European Parliament and of the Council (44) and Directive (EU) 2019/882. Providers should ensure compliance with these requirements by design. Therefore, the necessary measures should be integrated as much as possible into the design of the high-risk AI system.

(81)  The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. Providers of high-risk AI systems that are subject to obligations regarding quality management systems under relevant sectoral Union law should have the possibility to include the elements of the quality management system provided for in this Regulation as part of the existing quality management system provided for in that other sectoral Union law. The complementarity between this Regulation and existing sectoral Union law should also be taken into account in future standardisation activities or guidance adopted by the Commission. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority concerned.

(82)  To enable enforcement of this Regulation and create a level playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, providers established in third countries should, by written mandate, appoint an authorised representative established in the Union. This authorised representative plays a pivotal role in ensuring the compliance of the high-risk AI systems placed on the market or put into service in the Union by those providers who are not established in the Union and in serving as their contact person established in the Union.

(83)  In light of the nature and complexity of the value chain for AI systems and in line with the New Legislative Framework, it is essential to ensure legal certainty and facilitate compliance with this Regulation. Therefore, it is necessary to clarify the role and the specific obligations of relevant operators along the value chain, such as importers and distributors who may contribute to the development of AI systems. In certain situations, those operators could act in more than one role at the same time and should therefore fulfil cumulatively all relevant obligations associated with those roles. For example, an operator could act as a distributor and an importer at the same time.

(84)  To ensure legal certainty, it is necessary to clarify that, under certain specific conditions, any distributor, importer, deployer or other third party should be considered to be a provider of a high-risk AI system and therefore assume all the relevant obligations. This would be the case if that party puts its name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating that the obligations are allocated otherwise, or if that party makes a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service and in a way that it remains a high-risk AI system in accordance with this Regulation, or if it modifies the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service, in a way that the AI system becomes a high-risk AI system in accordance with this Regulation. Those provisions should apply without prejudice to more specific provisions established in certain Union harmonisation legislation based on the New Legislative Framework, together with which this Regulation should apply. For example, Article 16(2) of Regulation (EU) 2017/745, establishing that certain changes should not be considered to be modifications of a device that could affect its compliance with the applicable requirements, should continue to apply to high-risk AI systems that are medical devices within the meaning of that Regulation.

(85)   General-purpose AI systems may be used as high-risk AI systems by themselves or be components of other high-risk AI systems. Therefore, due to their particular nature and in order to ensure a fair sharing of responsibilities along the AI value chain, the providers of such systems should, irrespective of whether they may be used as high-risk AI systems as such by other providers or as components of high-risk AI systems and unless provided otherwise under this Regulation, closely cooperate with the providers of the relevant high-risk AI systems to enable their compliance with the relevant obligations under this Regulation and with the competent authorities established under this Regulation.

(86)   Where, under the conditions laid down in this Regulation, the provider that initially placed the AI system on the market or put it into service should no longer be considered to be the provider for the purposes of this Regulation, and when that provider has not expressly excluded the change of the AI system into a high-risk AI system, the former provider should nonetheless closely cooperate and make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI systems.

(87)   In addition, where a high-risk AI system that is a safety component of a product which falls within the scope of Union harmonisation legislation based on the New Legislative Framework is not placed on the market or put into service independently from the product, the product manufacturer defined in that legislation should comply with the obligations of the provider established in this Regulation and should, in particular, ensure that the AI system embedded in the final product complies with the requirements of this Regulation.

(88)   Along the AI value chain multiple parties often supply AI systems, tools and services but also components or processes that are incorporated by the provider into the AI system with various objectives, including the model training, model retraining, model testing and evaluation, integration into software, or other aspects of model development. Those parties have an important role to play in the value chain towards the provider of the high-risk AI system into which their AI systems, tools, services, components or processes are integrated, and should provide by written agreement this provider with the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider to fully comply with the obligations set out in this Regulation, without compromising their own intellectual property rights or trade secrets.

(89)  Third parties making accessible to the public tools, services, processes, or AI components other than general-purpose AI models, should not be mandated to comply with requirements targeting the responsibilities along the AI value chain, in particular towards the provider that has used or integrated them, when those tools, services, processes, or AI components are made accessible under a free and open licence. Developers of free and open-source tools, services, processes, or AI components other than general-purpose AI models should be encouraged to implement widely adopted documentation practices, such as model cards and data sheets, as a way to accelerate information sharing along the AI value chain, allowing the promotion of trustworthy AI systems in the Union.

(90)   The Commission could develop and recommend voluntary model contractual terms between providers of high-risk AI systems and third parties that supply tools, services, components or processes that are used or integrated in high-risk AI systems, to facilitate the cooperation along the value chain. When developing voluntary model contractual terms, the Commission should also take into account possible contractual requirements applicable in specific sectors or business cases.

(91)  Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for deployers. Deployers should in particular take appropriate technical and organisational measures to ensure they use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate. Furthermore, deployers should ensure that the persons assigned to implement the instructions for use and human oversight as set out in this Regulation have the necessary competence, in particular an adequate level of AI literacy, training and authority to properly fulfil those tasks. These obligations should be without prejudice to other deployer obligations in relation to high-risk AI systems under Union or national law.

(92)   This Regulation is without prejudice to obligations for employers to inform or to inform and consult workers or their representatives under Union or national law and practice, including Directive 2002/14/EC of the European Parliament and of the Council (45) on a general framework for informing and consulting employees, on decisions to put into service or use AI systems. It remains necessary to ensure information of workers and their representatives on the planned deployment of high-risk AI systems at the workplace where the conditions for those information or information and consultation obligations in other legal instruments are not fulfilled. Moreover, such information right is ancillary and necessary to the objective of protecting fundamental rights that underlies this Regulation. Therefore, an information requirement to that effect should be laid down in this Regulation, without affecting any existing rights of workers.

(93)   Whilst risks related to AI systems can result from the way such systems are designed, risks can as well stem from how such AI systems are used. Deployers of high-risk AI systems therefore play a critical role in ensuring that fundamental rights are protected, complementing the obligations of the provider when developing the AI system. Deployers are best placed to understand how the high-risk AI system will be used concretely and can therefore identify potential significant risks that were not foreseen in the development phase, due to a more precise knowledge of the context of use, the persons or groups of persons likely to be affected, including groups of vulnerable persons. Deployers of high-risk AI systems listed in an annex to this Regulation also play a critical role in informing natural persons and should, when they make decisions or assist in making decisions related to natural persons, where applicable, inform the natural persons that they are subject to the use of the high-risk AI system. This information should include the intended purpose and the type of decisions it makes. The deployer should also inform the natural person about its right to an explanation provided under this Regulation. With regard to high-risk AI systems used for law enforcement purposes, that obligation should be implemented in accordance with Article 13 of Directive (EU) 2016/680.

(94)   Any processing of biometric data involved in the use of AI systems for biometric identification for the purpose of law enforcement needs to comply with Article 10 of Directive (EU) 2016/680, that allows such processing only where strictly necessary, subject to appropriate safeguards for the rights and freedoms of the data subject, and where authorised by Union or Member State law. Such use, when authorised, also needs to respect the principles laid down in Article 4(1) of Directive (EU) 2016/680 including lawfulness, fairness and transparency, purpose limitation, accuracy and storage limitation.

(95)   Without prejudice to applicable Union law, in particular Regulation (EU) 2016/679 and Directive (EU) 2016/680, considering the intrusive nature of post remote biometric identification systems, the use of post remote biometric identification systems shall be subject to safeguards. Post remote biometric identification systems should always be used in a way that is proportionate, legitimate and strictly necessary, and thus targeted, in terms of the individuals to be identified, the location, temporal scope and based on a closed data set of legally acquired video footage. In any case, post remote biometric identification systems should not be used in the framework of law enforcement to lead to indiscriminate surveillance. The conditions for post remote biometric identification should in any case not provide a basis to circumvent the conditions of the prohibition and strict exceptions for real time remote biometric identification.

(96)   In order to efficiently ensure that fundamental rights are protected, deployers of high-risk AI systems that are bodies governed by public law, or private operators providing public services and operators deploying certain high-risk AI systems listed in an annex to this Regulation, such as banking or insurance entities, should carry out a fundamental rights impact assessment prior to putting it into use. Services important for individuals that are of public nature may also be provided by private entities. Private operators providing such services of public nature are linked to tasks in the public interest such as in the area of education, healthcare, social services, housing, administration of justice. The aim of the fundamental rights impact assessment is for the deployer to identify the specific risks to the rights of individuals or groups of individuals likely to be affected and identify the measures to be taken in the case of a materialisation of those risks. The impact assessment should apply to the first use of the high-risk AI system, and should be updated when the deployer considers that any of the relevant factors have changed. The impact assessment should identify the deployer’s relevant processes in which the high-risk AI system will be used in line with its intended purpose, and should include a description of the period of time and frequency in which the system is intended to be used as well as of specific categories of natural persons and groups who are likely to be affected in the specific context of use.

The assessment should also include the identification of specific risks of harm likely to have an impact on the fundamental rights of those persons or groups. While performing this assessment, the deployer should take into account information relevant to a proper assessment of the impact, including but not limited to the information given by the provider of the high-risk AI system in the instructions for use. In light of the risks identified, deployers should determine measures to be taken in the case of a materialisation of those risks, including for example governance arrangements in that specific context of use, such as arrangements for human oversight according to the instructions of use, or complaint handling and redress procedures, as they could be instrumental in mitigating risks to fundamental rights in concrete use-cases. After performing that impact assessment, the deployer should notify the relevant market surveillance authority. Where appropriate, to collect relevant information necessary to perform the impact assessment, deployers of high-risk AI systems, in particular when AI systems are used in the public sector, could involve relevant stakeholders, including the representatives of groups of persons likely to be affected by the AI system, independent experts, and civil society organisations in conducting such impact assessments and designing measures to be taken in the case of materialisation of the risks. The European Artificial Intelligence Office (‘AI Office’) should develop a template for a questionnaire in order to facilitate compliance and reduce the administrative burden for deployers.

(97)   The notion of general-purpose AI models should be clearly defined and set apart from the notion of AI systems to enable legal certainty. The definition should be based on the key functional characteristics of a general-purpose AI model, in particular the generality and the capability to competently perform a wide range of distinct tasks. These models are typically trained on large amounts of data, through various methods, such as self-supervised, unsupervised or reinforcement learning. General-purpose AI models may be placed on the market in various ways, including through libraries, application programming interfaces (APIs), as direct download, or as physical copy. These models may be further modified or fine-tuned into new models. Although AI models are essential components of AI systems, they do not constitute AI systems on their own. AI models require the addition of further components, such as for example a user interface, to become AI systems. AI models are typically integrated into and form part of AI systems. This Regulation provides specific rules for general-purpose AI models and for general-purpose AI models that pose systemic risks, which should apply also when these models are integrated or form part of an AI system. It should be understood that the obligations for the providers of general-purpose AI models should apply once the general-purpose AI models are placed on the market.

When the provider of a general-purpose AI model integrates an own model into its own AI system that is made available on the market or put into service, that model should be considered to be placed on the market and, therefore, the obligations in this Regulation for models should continue to apply in addition to those for AI systems. The obligations laid down for models should in any case not apply when an own model is used for purely internal processes that are not essential for providing a product or a service to third parties and the rights of natural persons are not affected. Considering their potential significantly negative effects, the general-purpose AI models with systemic risk should always be subject to the relevant obligations under this Regulation. The definition should not cover AI models used before their placing on the market for the sole purpose of research, development and prototyping activities. This is without prejudice to the obligation to comply with this Regulation when, following such activities, a model is placed on the market.

(98)   Whereas the generality of a model could, among other criteria, also be determined by a number of parameters, models with at least a billion parameters and trained with a large amount of data using self-supervision at scale should be considered to display significant generality and to competently perform a wide range of distinctive tasks.
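Purely for illustration, and not as a test prescribed by this Regulation, a provider can roughly gauge whether a decoder-only transformer crosses the billion-parameter mark from its configuration alone, using the common approximation of about 12 × layers × hidden_size² non-embedding parameters plus the token-embedding matrix; the configuration values below are hypothetical.

```python
# Rough, illustrative parameter estimate for a decoder-only transformer.
# Approximation: ~12 * layers * hidden**2 non-embedding parameters,
# plus vocab * hidden for the token embeddings. All values are hypothetical.

def approx_parameter_count(layers: int, hidden: int, vocab: int) -> int:
    non_embedding = 12 * layers * hidden * hidden
    embeddings = vocab * hidden
    return non_embedding + embeddings

params = approx_parameter_count(layers=24, hidden=2048, vocab=50_000)
print(f"~{params / 1e9:.2f} billion parameters")  # ≈ 1.31 billion in this example
```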

(99)   Large generative AI models are a typical example of a general-purpose AI model, given that they allow for flexible generation of content, such as in the form of text, audio, images or video, that can readily accommodate a wide range of distinctive tasks.

(100)   When a general-purpose AI model is integrated into or forms part of an AI system, this system should be considered to be a general-purpose AI system when, due to this integration, this system has the capability to serve a variety of purposes. A general-purpose AI system can be used directly, or it may be integrated into other AI systems.

(101)   Providers of general-purpose AI models have a particular role and responsibility along the AI value chain, as the models they provide may form the basis for a range of downstream systems, often provided by downstream providers, who need a good understanding of the models and their capabilities, both to enable the integration of such models into their products, and to fulfil their obligations under this or other regulations. Therefore, proportionate transparency measures should be laid down, including the drawing up and keeping up to date of documentation, and the provision of information on the general-purpose AI model for its usage by the downstream providers. Technical documentation should be prepared and kept up to date by the general-purpose AI model provider for the purpose of making it available, upon request, to the AI Office and the national competent authorities. The minimal set of elements to be included in such documentation should be set out in annexes to this Regulation. The Commission should be empowered to amend those annexes by means of delegated acts in the light of evolving technological developments.

(102)   Software and data, including models, released under a free and open-source licence that allows them to be openly shared and where users can freely access, use, modify and redistribute them or modified versions thereof, can contribute to research and innovation in the market and can provide significant growth opportunities for the Union economy. General-purpose AI models released under free and open-source licences should be considered to ensure high levels of transparency and openness if their parameters, including the weights, the information on the model architecture, and the information on model usage are made publicly available. The licence should be considered to be free and open-source also when it allows users to run, copy, distribute, study, change and improve software and data, including models, under the condition that the original provider of the model is credited and that the identical or comparable terms of distribution are respected.

(103)   Free and open-source AI components cover the software and data, including models and general-purpose AI models, tools, services or processes of an AI system. Free and open-source AI components can be provided through different channels, including their development on open repositories. For the purposes of this Regulation, AI components that are provided against a price or otherwise monetised, including through the provision of technical support or other services, including through a software platform, related to the AI component, or the use of personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the software, with the exception of transactions between microenterprises, should not benefit from the exceptions provided to free and open-source AI components. The fact of making AI components available through open repositories should not, in itself, constitute a monetisation.

(104)   The providers of general-purpose AI models that are released under a free and open-source licence, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available should be subject to exceptions as regards the transparency-related requirements imposed on general-purpose AI models, unless they can be considered to present a systemic risk, in which case the circumstance that the model is transparent and accompanied by an open-source licence should not be considered to be a sufficient reason to exclude compliance with the obligations under this Regulation. In any case, given that the release of general-purpose AI models under a free and open-source licence does not necessarily reveal substantial information on the data set used for the training or fine-tuning of the model and on how compliance with copyright law was thereby ensured, the exception provided for general-purpose AI models from compliance with the transparency-related requirements should not concern the obligation to produce a summary about the content used for model training and the obligation to put in place a policy to comply with Union copyright law, in particular to identify and comply with the reservation of rights pursuant to Article 4(3) of Directive (EU) 2019/790 of the European Parliament and of the Council (46).

(105)   General-purpose models, in particular large generative models, capable of generating text, images, and other content, present unique innovation opportunities but also challenges to artists, authors, and other creators and the way their creative content is created, distributed, used and consumed. The development and training of such models require access to vast amounts of text, images, videos, and other data. Text and data mining techniques may be used extensively in this context for the retrieval and analysis of such content, which may be protected by copyright and related rights. Any use of copyright protected content requires the authorisation of the rightsholder concerned unless relevant copyright exceptions and limitations apply. Directive (EU) 2019/790 introduced exceptions and limitations allowing reproductions and extractions of works or other subject matter, for the purpose of text and data mining, under certain conditions. Under these rules, rightsholders may choose to reserve their rights over their works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific research. Where the right to opt out has been expressly reserved in an appropriate manner, providers of general-purpose AI models need to obtain an authorisation from rightsholders if they want to carry out text and data mining over such works.

(106)   Providers that place general-purpose AI models on the Union market should ensure compliance with the relevant obligations in this Regulation. To that end, providers of general-purpose AI models should put in place a policy to comply with Union law on copyright and related rights, in particular to identify and comply with the reservations of rights expressed by rightsholders pursuant to Article 4(3) of Directive (EU) 2019/790. Any provider placing a general-purpose AI model on the Union market should comply with this obligation, regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of those general-purpose AI models take place. This is necessary to ensure a level playing field among providers of general-purpose AI models where no provider should be able to gain a competitive advantage in the Union market by applying lower copyright standards than those provided in the Union.

(107)   In order to increase transparency on the data that is used in the pre-training and training of general-purpose AI models, including text and data protected by copyright law, it is adequate that providers of such models draw up and make publicly available a sufficiently detailed summary of the content used for training the general-purpose model. While taking into due account the need to protect trade secrets and confidential business information, this summary should be generally comprehensive in its scope instead of technically detailed to facilitate parties with legitimate interests, including copyright holders, to exercise and enforce their rights under Union law, for example by listing the main data collections or sets that went into training the model, such as large private or public databases or data archives, and by providing a narrative explanation about other data sources used. It is appropriate for the AI Office to provide a template for the summary, which should be simple, effective, and allow the provider to provide the required summary in narrative form.

(108)   With regard to the obligations imposed on providers of general-purpose AI models to put in place a policy to comply with Union copyright law and make publicly available a summary of the content used for the training, the AI Office should monitor whether the provider has fulfilled those obligations without verifying or proceeding to a work-by-work assessment of the training data in terms of copyright compliance. This Regulation does not affect the enforcement of copyright rules as provided for under Union law.

(109)   Compliance with the obligations applicable to the providers of general-purpose AI models should be commensurate and proportionate to the type of model provider, excluding the need for compliance for persons who develop or use models for non-professional or scientific research purposes, who should nevertheless be encouraged to voluntarily comply with these requirements. Without prejudice to Union copyright law, compliance with these obligations should take due account of the size of the provider and allow simplified ways of compliance for SMEs, including start-ups, that should not represent an excessive cost and not discourage the use of such models. In the case of a modification or fine-tuning of a model, the obligations for providers should be limited to that modification or fine-tuning, for example by complementing the already existing technical documentation with information on the modifications, including new training data sources, as a means to comply with the value chain obligations provided in this Regulation.

(110)   General-purpose AI models could pose systemic risks which include, but are not limited to, any actual or reasonably foreseeable negative effects in relation to major accidents, disruptions of critical sectors and serious consequences to public health and safety; any actual or reasonably foreseeable negative effects on democratic processes, public and economic security; the dissemination of illegal, false, or discriminatory content. Systemic risks should be understood to increase with model capabilities and model reach, can arise along the entire lifecycle of the model, and are influenced by conditions of misuse, model reliability, model fairness and model security, the degree of autonomy of the model, its access to tools, novel or combined modalities, release and distribution strategies, the potential to remove guardrails and other factors. In particular, international approaches have so far identified the need to devote attention to risks from potential intentional misuse or unintended issues of control relating to alignment with human intent; chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be lowered, including for weapons development, design acquisition, or use; offensive cyber capabilities, such as the ways in which vulnerability discovery, exploitation, or operational use can be enabled; the effects of interaction and tool use, including for example the capacity to control physical systems and interfere with critical infrastructure; risks from models making copies of themselves or ‘self-replicating’ or training other models; the ways in which models can give rise to harmful bias and discrimination with risks to individuals, communities or societies; the facilitation of disinformation or harming privacy with threats to democratic values and human rights; risk that a particular event could lead to a chain reaction with considerable negative effects that could affect up to an entire city, an entire domain activity or an entire community.

(111)   It is appropriate to establish a methodology for the classification of general-purpose AI models as general-purpose AI model with systemic risks. Since systemic risks result from particularly high capabilities, a general-purpose AI model should be considered to present systemic risks if it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, or significant impact on the internal market due to its reach. High-impact capabilities in general-purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models. The full range of capabilities in a model could be better understood after its release on the market or when users interact with the model. According to the state of the art at the time of entry into force of this Regulation, the cumulative amount of compute used for the training of the general-purpose AI model measured in floating point operations (‘FLOPs’) is one of the relevant approximations for model capabilities. The amount of compute used for training cumulates the compute used across the activities and methods that are intended to enhance the capabilities of the model prior to deployment, such as pre-training, synthetic data generation and fine-tuning. Therefore, an initial threshold of FLOPs should be set, which, if met by a general-purpose AI model, leads to a presumption that the model is a general-purpose AI model with systemic risks. This threshold should be adjusted over time to reflect technological and industrial changes, such as algorithmic improvements or increased hardware efficiency, and should be supplemented with benchmarks and indicators for model capability.

To inform this, the AI Office should engage with the scientific community, industry, civil society and other experts. Thresholds, as well as tools and benchmarks for the assessment of high-impact capabilities, should be strong predictors of generality, its capabilities and associated systemic risk of general-purpose AI models, and could take into account the way the model will be placed on the market or the number of users it may affect. To complement this system, there should be a possibility for the Commission to take individual decisions designating a general-purpose AI model as a general-purpose AI model with systemic risk if it is found that such model has capabilities or an impact equivalent to those captured by the set threshold. That decision should be taken on the basis of an overall assessment of the criteria for the designation of general-purpose AI models with systemic risk set out in an annex to this Regulation, such as quality or size of the training data set, number of business and end users, its input and output modalities, its degree of autonomy and scalability, or the tools it has access to. Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk, the Commission should take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks.
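To make the compute cumulation described in recital (111) concrete, the sketch below is a purely illustrative back-of-envelope estimate and not part of the Regulation: it assumes the commonly used approximation that training compute is roughly 6 × parameters × training tokens per training run, sums the pre-deployment activities intended to enhance capabilities (pre-training, synthetic data generation, fine-tuning), and compares the total against an example presumption threshold of 10^25 FLOPs, the initial value used in the Regulation's operative provisions; all other figures are hypothetical.

```python
# Illustrative only: a rough pre-deployment estimate of cumulative training
# compute, using the common heuristic FLOPs ≈ 6 * parameters * tokens.
# All figures are hypothetical; the threshold mirrors the initial value set
# in the Regulation's operative provisions for the systemic-risk presumption.

PRESUMPTION_THRESHOLD_FLOPS = 1e25  # example threshold triggering the presumption

def training_run_flops(parameters: float, tokens: float) -> float:
    """Approximate FLOPs for one training run (6ND heuristic)."""
    return 6.0 * parameters * tokens

# Hypothetical compute-enhancing activities planned prior to deployment.
planned_runs = {
    "pre-training":              training_run_flops(parameters=7e10, tokens=1.5e13),
    "synthetic data generation": 2.0e23,  # estimated separately, hypothetical figure
    "fine-tuning":               training_run_flops(parameters=7e10, tokens=5e10),
}

cumulative = sum(planned_runs.values())
print(f"Estimated cumulative training compute: {cumulative:.2e} FLOPs")

if cumulative > PRESUMPTION_THRESHOLD_FLOPS:
    # Recital (112): providers can anticipate this before training completes
    # and should notify the AI Office within two weeks of meeting the criteria.
    print("Presumed general-purpose AI model with systemic risk: notify the AI Office.")
else:
    print("Below the example presumption threshold.")
```

Because such an estimate can be made from the planned training configuration alone, it also illustrates why, as noted in recital (112), providers are able to know whether their model would meet the threshold before training is completed.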

(112)   It is also necessary to clarify a procedure for the classification of a general-purpose AI model with systemic risks. A general-purpose AI model that meets the applicable threshold for high-impact capabilities should be presumed to be a general-purpose AI model with systemic risk. The provider should notify the AI Office at the latest two weeks after the requirements are met or it becomes known that a general-purpose AI model will meet the requirements that lead to the presumption. This is especially relevant in relation to the FLOP threshold because training of general-purpose AI models takes considerable planning which includes the upfront allocation of compute resources and, therefore, providers of general-purpose AI models are able to know if their model would meet the threshold before the training is completed. In the context of that notification, the provider should be able to demonstrate that, because of its specific characteristics, a general-purpose AI model exceptionally does not present systemic risks, and that it thus should not be classified as a general-purpose AI model with systemic risks. That information is valuable for the AI Office to anticipate the placing on the market of general-purpose AI models with systemic risks and the providers can start to engage with the AI Office early on. That information is especially important with regard to general-purpose AI models that are planned to be released as open-source, given that, after open-source model release, necessary measures to ensure compliance with the obligations under this Regulation may be more difficult to implement.

(113)   If the Commission becomes aware of the fact that a general-purpose AI model meets the requirements to be classified as a general-purpose AI model with systemic risk, which previously had either not been known or of which the relevant provider has failed to notify the Commission, the Commission should be empowered to designate it so. A system of qualified alerts should ensure that the AI Office is made aware by the scientific panel of general-purpose AI models that should possibly be classified as general-purpose AI models with systemic risk, in addition to the monitoring activities of the AI Office.

(114)   The providers of general-purpose AI models presenting systemic risks should be subject, in addition to the obligations provided for providers of general-purpose AI models, to obligations aimed at identifying and mitigating those risks and ensuring an adequate level of cybersecurity protection, regardless of whether it is provided as a standalone model or embedded in an AI system or a product. To achieve those objectives, this Regulation should require providers to perform the necessary model evaluations, in particular prior to its first placing on the market, including conducting and documenting adversarial testing of models, also, as appropriate, through internal or independent external testing. In addition, providers of general-purpose AI models with systemic risks should continuously assess and mitigate systemic risks, including for example by putting in place risk-management policies, such as accountability and governance processes, implementing post-market monitoring, taking appropriate measures along the entire model’s lifecycle and cooperating with relevant actors along the AI value chain.

(115)   Providers of general-purpose AI models with systemic risks should assess and mitigate possible systemic risks. If, despite efforts to identify and prevent risks related to a general-purpose AI model that may present systemic risks, the development or use of the model causes a serious incident, the general-purpose AI model provider should without undue delay keep track of the incident and report any relevant information and possible corrective measures to the Commission and national competent authorities. Furthermore, providers should ensure an adequate level of cybersecurity protection for the model and its physical infrastructure, if appropriate, along the entire model lifecycle. Cybersecurity protection related to systemic risks associated with malicious use or attacks should duly consider accidental model leakage, unauthorised releases, circumvention of safety measures, and defence against cyberattacks, unauthorised access or model theft. That protection could be facilitated by securing model weights, algorithms, servers, and data sets, such as through operational security measures for information security, specific cybersecurity policies, adequate technical and established solutions, and cyber and physical access controls, appropriate to the relevant circumstances and the risks involved.

(116)   The AI Office should encourage and facilitate the drawing up, review and adaptation of codes of practice, taking into account international approaches. All providers of general-purpose AI models could be invited to participate. To ensure that the codes of practice reflect the state of the art and duly take into account a diverse set of perspectives, the AI Office should collaborate with relevant national competent authorities, and could, where appropriate, consult with civil society organisations and other relevant stakeholders and experts, including the Scientific Panel, for the drawing up of such codes. Codes of practice should cover obligations for providers of general-purpose AI models and of general-purpose models presenting systemic risks. In addition, as regards systemic risks, codes of practice should help to establish a risk taxonomy of the type and nature of the systemic risks at Union level, including their sources. Codes of practice should also be focused on specific risk assessment and mitigation measures.

(117)   The codes of practice should represent a central tool for the proper compliance with the obligations provided for under this Regulation for providers of general-purpose AI models. Providers should be able to rely on codes of practice to demonstrate compliance with the obligations. By means of implementing acts, the Commission may decide to approve a code of practice and give it a general validity within the Union, or, alternatively, to provide common rules for the implementation of the relevant obligations, if, by the time this Regulation becomes applicable, a code of practice cannot be finalised or is not deemed adequate by the AI Office. Once a harmonised standard is published and assessed as suitable to cover the relevant obligations by the AI Office, the compliance with a European harmonised standard should grant providers the presumption of conformity. Providers of general-purpose AI models should furthermore be able to demonstrate compliance using alternative adequate means, if codes of practice or harmonised standards are not available, or they choose not to rely on those.

(118)   This Regulation regulates AI systems and models by imposing certain requirements and obligations for relevant market actors that are placing them on the market, putting into service or use in the Union, thereby complementing obligations for providers of intermediary services that embed such systems or models into their services regulated by Regulation (EU) 2022/2065 of the European Parliament and of the Council (47) . To the extent that such systems or models are embedded into designated very large online platforms or very large online search engines, they are subject to the risk-management framework provided for in Regulation (EU) 2022/2065. Consequently, the corresponding obligations of this Regulation should be presumed to be fulfilled, unless significant systemic risks not covered by Regulation (EU) 2022/2065 emerge and are identified in such models. Within this framework, providers of very large online platforms and very large online search engines are obliged to assess potential systemic risks stemming from the design, functioning and use of their services, including how the design of algorithmic systems used in the service may contribute to such risks, as well as systemic risks stemming from potential misuses. Those providers are also obliged to take appropriate mitigating measures in observance of fundamental rights.

(119)   Considering the quick pace of innovation and the technological evolution of digital services in scope of different instruments of Union law in particular having in mind the usage and the perception of their recipients, the AI systems subject to this Regulation may be provided as intermediary services or parts thereof within the meaning of Regulation (EU) 2022/2065, which should be interpreted in a technology-neutral manner. For example, AI systems may be used to provide online search engines, in particular, to the extent that an AI system such as an online chatbot performs searches of, in principle, all websites, then incorporates the results into its existing knowledge and uses the updated knowledge to generate a single output that combines different sources of information.

(120)   Furthermore, obligations placed on providers and deployers of certain AI systems in this Regulation to enable the detection and disclosure that the outputs of those systems are artificially generated or manipulated are particularly relevant to facilitate the effective implementation of Regulation (EU) 2022/2065. This applies in particular as regards the obligations of providers of very large online platforms or very large online search engines to identify and mitigate systemic risks that may arise from the dissemination of content that has been artificially generated or manipulated, in particular the risk of actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, including through disinformation.

(121)  Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation, in line with the state of the art, to promote innovation as well as competitiveness and growth in the single market. Compliance with harmonised standards as defined in Article 2, point (1)(c), of Regulation (EU) No 1025/2012 of the European Parliament and of the Council (48), which are normally expected to reflect the state of the art, should be a means for providers to demonstrate conformity with the requirements of this Regulation. A balanced representation of interests involving all relevant stakeholders in the development of standards, in particular SMEs, consumer organisations and environmental and social stakeholders in accordance with Articles 5 and 6 of Regulation (EU) No 1025/2012 should therefore be encouraged. In order to facilitate compliance, the standardisation requests should be issued by the Commission without undue delay. When preparing the standardisation request, the Commission should consult the advisory forum and the Board in order to collect relevant expertise. However, in the absence of relevant references to harmonised standards, the Commission should be able to establish, via implementing acts, and after consultation of the advisory forum, common specifications for certain requirements under this Regulation.

The common specification should be an exceptional fall back solution to facilitate the provider’s obligation to comply with the requirements of this Regulation, when the standardisation request has not been accepted by any of the European standardisation organisations, or when the relevant harmonised standards insufficiently address fundamental rights concerns, or when the harmonised standards do not comply with the request, or when there are delays in the adoption of an appropriate harmonised standard. Where such a delay in the adoption of a harmonised standard is due to the technical complexity of that standard, this should be considered by the Commission before contemplating the establishment of common specifications. When developing common specifications, the Commission is encouraged to cooperate with international partners and international standardisation bodies.

(122)   It is appropriate that, without prejudice to the use of harmonised standards and common specifications, providers of a high-risk AI system that has been trained and tested on data reflecting the specific geographical, behavioural, contextual or functional setting within which the AI system is intended to be used, should be presumed to comply with the relevant measure provided for under the requirement on data governance set out in this Regulation. Without prejudice to the requirements related to robustness and accuracy set out in this Regulation, in accordance with Article 54(3) of Regulation (EU) 2019/881 of the European Parliament and of the Council (49), high-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to that Regulation and the references of which have been published in the Official Journal of the European Union should be presumed to comply with the cybersecurity requirement of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover the cybersecurity requirement of this Regulation. This remains without prejudice to the voluntary nature of that cybersecurity scheme.

(123)  In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service.

(124)  It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for high-risk AI systems related to products which are covered by existing Union harmonisation legislation based on the New Legislative Framework, the compliance of those AI systems with the requirements of this Regulation should be assessed as part of the conformity assessment already provided for in that law. The applicability of the requirements of this Regulation should thus not affect the specific logic, methodology or general structure of conformity assessment under the relevant Union harmonisation legislation.

(125)  Given the complexity of high-risk AI systems and the risks that are associated with them, it is important to develop an adequate system of conformity assessment procedure for high-risk AI systems involving notified bodies, so-called third-party conformity assessment. However, given the current experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for biometrics.

(126)  In order to carry out third-party conformity assessments when so required, notified bodies should be notified under this Regulation by the national competent authorities, provided that they comply with a set of requirements, in particular on independence, competence, absence of conflicts of interests and suitable cybersecurity requirements. Notification of those bodies should be sent by national competent authorities to the Commission and the other Member States by means of the electronic notification tool developed and managed by the Commission pursuant to Article R23 of Annex I to Decision No 768/2008/EC.

(127)   In line with Union commitments under the World Trade Organization Agreement on Technical Barriers to Trade, it is adequate to facilitate the mutual recognition of conformity assessment results produced by competent conformity assessment bodies, independent of the territory in which they are established, provided that those conformity assessment bodies established under the law of a third country meet the applicable requirements of this Regulation and the Union has concluded an agreement to that extent. In this context, the Commission should actively explore possible international instruments for that purpose and in particular pursue the conclusion of mutual recognition agreements with third countries.

(128)  In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that whenever a change occurs which may affect the compliance of a high-risk AI system with this Regulation (e.g. change of operating system or software architecture), or when the intended purpose of the system changes, that AI system should be considered to be a new AI system which should undergo a new conformity assessment. However, changes occurring to the algorithm and the performance of AI systems which continue to ‘learn’ after being placed on the market or put into service, namely automatically adapting how functions are carried out, should not constitute a substantial modification, provided that those changes have been pre-determined by the provider and assessed at the moment of the conformity assessment.

(129)  High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market. For high-risk AI systems embedded in a product, a physical CE marking should be affixed, and may be complemented by a digital CE marking. For high-risk AI systems only provided digitally, a digital CE marking should be used. Member States should not create unjustified obstacles to the placing on the market or the putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking.

(130)  Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons, the protection of the environment and climate change and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons, environmental protection and the protection of key industrial and infrastructural assets, market surveillance authorities could authorise the placing on the market or the putting into service of AI systems which have not undergone a conformity assessment. In duly justified situations as provided under this Regulation, law enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation of the market surveillance authority, provided that such authorisation is requested during or after the use without undue delay.

(131)  In order to facilitate the work of the Commission and the Member States in the AI field as well as to increase the transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, as well as providers who consider that the high-risk AI system listed in an annex to this Regulation is not high-risk on the basis of a derogation, should be required to register themselves and information about their AI system in an EU database, to be established and managed by the Commission. Before using such a high-risk AI system, deployers of high-risk AI systems that are public authorities, agencies or bodies, should register themselves in such database and select the system that they envisage to use.

Other deployers should be entitled to do so voluntarily. This section of the database should be publicly accessible, free of charge, and the information should be easily navigable, understandable and machine-readable. The database should also be user-friendly, for example by providing search functionalities, including through keywords, allowing the general public to find relevant information submitted upon the registration of high-risk AI systems and on the high-risk AI systems, set out in annexes to this Regulation, to which the high-risk AI systems correspond. Any substantial modification of high-risk AI systems should also be registered in the EU database. For high-risk AI systems in the area of law enforcement, migration, asylum and border control management, the registration obligations should be fulfilled in a secure non-public section of the database. Access to the secure non-public section should be strictly limited to the Commission as well as to market surveillance authorities with regard to their national section of that database. High-risk AI systems in the area of critical infrastructure should only be registered at national level. The Commission should be the controller of the EU database, in accordance with Regulation (EU) 2018/1725. In order to ensure the full functionality of the database, when deployed, the procedure for setting up the database should include the elaboration of functional specifications by the Commission and an independent audit report. The Commission should take into account cybersecurity and hazard-related risks when carrying out its tasks as data controller on the EU database. In order to maximise the availability and use of the database by the public, the database, including the information made available through it, should comply with requirements under Directive (EU) 2019/882.

(132)  Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems and subject to targeted exceptions to take into account the special need of law enforcement. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect taking into account the circumstances and the context of use. When implementing such obligation, the characteristics of individuals belonging to groups of vulnerable persons due to their age or disability should be taken into account to the extent the AI system is intended to interact with those groups as well. Moreover, natural persons should be notified when they are exposed to systems that, by processing their biometric data, can identify or infer the emotions or intentions of those persons or assign them to specific categories. Such specific categories can relate to aspects such as sex, age, hair colour, eye colour, tattoos, personal traits, ethnic origin, personal preferences and interests. Such information and notifications should be provided in accessible formats for persons with disabilities.

(133)   A variety of AI systems can generate large quantities of synthetic content that becomes increasingly hard for humans to distinguish from human-generated and authentic content. The wide availability and increasing capabilities of those systems have a significant impact on the integrity and trust in the information ecosystem, raising new risks of misinformation and manipulation at scale, fraud, impersonation and consumer deception. In light of those impacts, the fast technological pace and the need for new methods and techniques to trace the origin of information, it is appropriate to require providers of those systems to embed technical solutions that enable marking in a machine-readable format and detection that the output has been generated or manipulated by an AI system and not a human. Such techniques and methods should be sufficiently reliable, interoperable, effective and robust as far as this is technically feasible, taking into account available techniques or a combination of such techniques, such as watermarks, metadata identifications, cryptographic methods for proving provenance and authenticity of content, logging methods, fingerprints or other techniques, as may be appropriate. When implementing this obligation, providers should also take into account the specificities and the limitations of the different types of content and the relevant technological and market developments in the field, as reflected in the generally acknowledged state of the art. Such techniques and methods can be implemented at the level of the system or at the level of the model, including general-purpose AI models generating content, thereby facilitating fulfilment of this obligation by the downstream provider of the AI system. To remain proportionate, it is appropriate to envisage that this marking obligation should not cover AI systems performing primarily an assistive function for standard editing or AI systems not substantially altering the input data provided by the deployer or the semantics thereof.
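As a purely illustrative sketch of one of the simpler techniques listed in recital (133), metadata identification combined with a cryptographic hash, and not the specific solution the Regulation mandates, a provider could attach a machine-readable provenance record to generated output; the field names and system identifier below are hypothetical, and production systems would typically rely on established provenance standards and robust watermarking rather than this minimal example.

```python
import hashlib
import json
from datetime import datetime, timezone

def mark_ai_output(content: str, system_name: str) -> dict:
    """Attach a minimal, machine-readable marker to AI-generated content.

    Illustrative only: real deployments would combine robust watermarking,
    established provenance standards and, where appropriate, cryptographic
    signatures rather than a bare hash.
    """
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,                        # disclosure flag
            "generating_system": system_name,            # hypothetical identifier
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "sha256": digest,                            # binds the marker to this exact output
        },
    }

record = mark_ai_output("Example synthetic paragraph.", system_name="example-gpai-v1")
print(json.dumps(record, indent=2))

# A downstream verifier can recompute the hash to detect tampering with the
# marked content, supporting detection that the output was AI-generated.
recomputed = hashlib.sha256(record["content"].encode("utf-8")).hexdigest()
assert recomputed == record["provenance"]["sha256"]
```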

(134)   Further to the technical solutions employed by the providers of the system, deployers who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic (deep fakes), should also clearly and distinguishably disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin. Compliance with this transparency obligation should not be interpreted as indicating that the use of the system or its output impedes the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter, in particular where the content is part of an evidently creative, satirical, artistic or fictional work or programme, subject to appropriate safeguards for the rights and freedoms of third parties. In those cases, the transparency obligation for deep fakes set out in this Regulation is limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work, including its normal exploitation and use, while maintaining the utility and quality of the work. In addition, it is also appropriate to envisage a similar disclosure obligation in relation to AI-generated or manipulated text to the extent it is published with the purpose of informing the public on matters of public interest unless the AI-generated content has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for the publication of the content.

(135)   To ensure consistent implementation, it is appropriate to empower the Commission to adopt implementing acts on the application of the provisions on the labelling and detection of artificially generated or manipulated content. Without prejudice to the mandatory nature and full applicability of the transparency obligations, the Commission may also encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content, including to support practical arrangements for making, as appropriate, the detection mechanisms accessible and facilitating cooperation with other actors along the value chain, disseminating content or checking its authenticity and provenance to enable the public to effectively distinguish AI-generated content.

(136)   The obligations placed on providers and deployers of certain AI systems in this Regulation to enable the detection and disclosure that the outputs of those systems are artificially generated or manipulated are particularly relevant to facilitate the effective implementation of Regulation (EU) 2022/2065. This applies in particular as regards the obligations of providers of very large online platforms or very large online search engines to identify and mitigate systemic risks that may arise from the dissemination of content that has been artificially generated or manipulated, in particular the risk of actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, including through disinformation. The requirement to label content generated by AI systems under this Regulation is without prejudice to the obligation in Article 16(6) of Regulation (EU) 2022/2065 for providers of hosting services to process notices on illegal content received pursuant to Article 16(1) of that Regulation and should not influence the assessment and the decision on the illegality of the specific content. That assessment should be performed solely with reference to the rules governing the legality of the content.

(137)   Compliance with the transparency obligations for the AI systems covered by this Regulation should not be interpreted as indicating that the use of the system or its output is lawful under this Regulation or other Union and Member State law and should be without prejudice to other transparency obligations for deployers of AI systems laid down in Union or national law.

(138)  AI is a rapidly developing family of technologies that requires regulatory oversight and a safe and controlled space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that promotes innovation, is future-proof and resilient to disruption, Member States should ensure that their national competent authorities establish at least one AI regulatory sandbox at national level to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. Member States could also fulfil this obligation through participating in already existing regulatory sandboxes or establishing jointly a sandbox with one or more Member States’ competent authorities, insofar as this participation provides an equivalent level of national coverage for the participating Member States. Regulatory sandboxes could be established in physical, digital or hybrid form and may accommodate physical as well as digital products. Establishing authorities should also ensure that the regulatory sandboxes have the adequate resources for their functioning, including financial and human resources.

(139)  The objectives of the AI regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and national law, to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, to facilitate regulatory learning for authorities and undertakings, including with a view to future adaptations of the legal framework, to support cooperation and the sharing of best practices with the authorities involved in the AI regulatory sandbox, and to accelerate access to markets, including by removing barriers for SMEs, including start-ups. Regulatory sandboxes should be widely available throughout the Union, and particular attention should be given to their accessibility for SMEs, including start-ups. The participation in the AI regulatory sandbox should focus on issues that raise legal uncertainty for providers and prospective providers to innovate, experiment with AI in the Union and contribute to evidence-based regulatory learning. The supervision of the AI systems in the AI regulatory sandbox should therefore cover their development, training, testing and validation before the systems are placed on the market or put into service, as well as the notion and occurrence of substantial modification that may require a new conformity assessment procedure. Any significant risks identified during the development and testing of such AI systems should result in adequate mitigation and, failing that, in the suspension of the development and testing process.

Where appropriate, national competent authorities establishing AI regulatory sandboxes should cooperate with other relevant authorities, including those supervising the protection of fundamental rights, and could allow for the involvement of other actors within the AI ecosystem such as national or European standardisation organisations, notified bodies, testing and experimentation facilities, research and experimentation labs, European Digital Innovation Hubs and relevant stakeholder and civil society organisations. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. AI regulatory sandboxes established under this Regulation should be without prejudice to other law allowing for the establishment of other sandboxes aiming to ensure compliance with Union law other than this Regulation. Where appropriate, relevant competent authorities in charge of those other regulatory sandboxes should consider the benefits of using those sandboxes also for the purpose of ensuring compliance of AI systems with this Regulation. Upon agreement between the national competent authorities and the participants in the AI regulatory sandbox, testing in real world conditions may also be operated and supervised in the framework of the AI regulatory sandbox.

(140)   This Regulation should provide the legal basis for the providers and prospective providers in the AI regulatory sandbox to use personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, only under specified conditions, in accordance with Article 6(4) and Article 9(2), point (g), of Regulation (EU) 2016/679, and Articles 5, 6 and 10 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) and Article 10 of Directive (EU) 2016/680. All other obligations of data controllers and rights of data subjects under Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 remain applicable. In particular, this Regulation should not provide a legal basis in the meaning of Article 22(2), point (b) of Regulation (EU) 2016/679 and Article 24(2), point (b) of Regulation (EU) 2018/1725. Providers and prospective providers in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to adequately mitigate any identified significant risks to safety, health, and fundamental rights that may arise during the development, testing and experimentation in the sandbox.

(141)   In order to accelerate the process of development and the placing on the market of the high-risk AI systems listed in an annex to this Regulation, it is important that providers or prospective providers of such systems may also benefit from a specific regime for testing those systems in real world conditions, without participating in an AI regulatory sandbox. However, in such cases and taking into account the possible consequences of such testing on individuals, it should be ensured that appropriate and sufficient guarantees and conditions are introduced by this Regulation for providers or prospective providers. Such guarantees should include, among others, requesting informed consent of natural persons to participate in testing in real world conditions, with the exception of law enforcement where the seeking of informed consent would prevent the AI system from being tested. Consent of subjects to participate in such testing under this Regulation is distinct from and without prejudice to consent of data subjects for the processing of their personal data under the relevant data protection law.

It is also important to minimise the risks and enable oversight by competent authorities and therefore require prospective providers to have a real-world testing plan submitted to the competent market surveillance authority, register the testing in dedicated sections in the EU database subject to some limited exceptions, set limitations on the period for which the testing can be done and require additional safeguards for vulnerable persons, including groups of vulnerable persons, as well as a written agreement defining the roles and responsibilities of prospective providers and deployers and effective oversight by competent personnel involved in the real world testing. Furthermore, it is appropriate to envisage additional safeguards to ensure that the predictions, recommendations or decisions of the AI system can be effectively reversed and disregarded and that personal data is protected and is deleted when the subjects have withdrawn their consent to participate in the testing without prejudice to their rights as data subjects under the Union data protection law. As regards transfer of data, it is also appropriate to envisage that data collected and processed for the purpose of testing in real-world conditions should be transferred to third countries only where appropriate and applicable safeguards under Union law are implemented, in particular in accordance with bases for transfer of personal data under Union law on data protection, while for non-personal data appropriate safeguards are put in place in accordance with Union law, such as Regulations (EU) 2022/868 (50) and (EU) 2023/2854 (51) of the European Parliament and of the Council.

(142)   To ensure that AI leads to socially and environmentally beneficial outcomes, Member States are encouraged to support and promote research and development of AI solutions in support of socially and environmentally beneficial outcomes, such as AI-based solutions to increase accessibility for persons with disabilities, tackle socio-economic inequalities, or meet environmental targets, by allocating sufficient resources, including public and Union funding, and, where appropriate and provided that the eligibility and selection criteria are fulfilled, considering in particular projects which pursue such objectives. Such projects should be based on the principle of interdisciplinary cooperation between AI developers, experts on inequality and non-discrimination, accessibility, consumer, environmental, and digital rights, as well as academics.

(143)  In order to promote and protect innovation, it is important that the interests of SMEs, including start-ups, that are providers or deployers of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication. Member States shall provide SMEs, including start-ups, having a registered office or a branch in the Union, with priority access to the AI regulatory sandboxes provided that they fulfil the eligibility conditions and selection criteria and without precluding other providers and prospective providers to access the sandboxes provided the same conditions and criteria are fulfilled. Member States shall utilise existing channels and where appropriate, establish new dedicated channels for communication with SMEs, start-ups, deployers, other innovators and, as appropriate, local public authorities, to support SMEs throughout their development path by providing guidance and responding to queries about the implementation of this Regulation. Where appropriate, these channels shall work together to create synergies and ensure homogeneity in their guidance to SMEs, including start-ups, and deployers. Additionally, Member States should facilitate the participation of SMEs and other relevant stakeholders in the standardisation development processes. Moreover, the specific interests and needs of SMEs, including start-ups, that are providers should be taken into account when notified bodies set conformity assessment fees. The Commission should regularly assess the certification and compliance costs for SMEs, including start-ups, through transparent consultations and should work with Member States to lower such costs.

For example, translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, in particular those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border deployers. In order to address the specific needs of SMEs, including start-ups, the Commission should provide standardised templates for the areas covered by this Regulation upon request of the Board. Additionally, the Commission should complement Member States’ efforts by providing a single information platform with easy-to-use information with regards to this Regulation for all providers and deployers, by organising appropriate communication campaigns to raise awareness about the obligations arising from this Regulation, and by evaluating and promoting the convergence of best practices in public procurement procedures in relation to AI systems. Medium-sized enterprises which were recently small enterprises within the meaning of the Annex to Commission Recommendation 2003/361/EC (52) should have access to those support measures, as those new medium-sized enterprises may sometimes lack the legal resources and training necessary to ensure proper understanding of, and compliance with, this Regulation .

(144)   In order to promote and protect innovation, the AI-on-demand platform, all relevant Union funding programmes and projects, such as Digital Europe Programme, Horizon Europe, implemented by the Commission and the Member States at Union or national level should, as appropriate, contribute to the achievement of the objectives of this Regulation.

(145)   In particular, in order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers, in particular SMEs, including start-ups, and notified bodies with their obligations under this Regulation, the AI-on-demand platform, the European Digital Innovation Hubs and the testing and experimentation facilities established by the Commission and the Member States at Union or national level should contribute to the implementation of this Regulation. Within their respective mission and fields of competence, the AI-on-demand platform, the European Digital Innovation Hubs and the testing and experimentation facilities are able to provide in particular technical and scientific support to providers and notified bodies.

(146)   Moreover, in light of the very small size of some operators and in order to ensure proportionality regarding costs of innovation, it is appropriate to allow microenterprises to fulfil one of the most costly obligations, namely to establish a quality management system, in a simplified manner which would reduce the administrative burden and the costs for those enterprises without affecting the level of protection and the need for compliance with the requirements for high-risk AI systems. The Commission should develop guidelines to specify the elements of the quality management system to be fulfilled in this simplified manner by microenterprises.

(147)  It is appropriate that the Commission facilitates, to the extent possible, access to testing and experimentation facilities to bodies, groups or laboratories established or accredited pursuant to any relevant Union harmonisation legislation and which fulfil tasks in the context of conformity assessment of products or devices covered by that Union harmonisation legislation. This is, in particular, the case as regards expert panels, expert laboratories and reference laboratories in the field of medical devices pursuant to Regulations (EU) 2017/745 and (EU) 2017/746.

(148)   This Regulation should establish a governance framework that both allows to coordinate and support the application of this Regulation at national level, as well as build capabilities at Union level and integrate stakeholders in the field of AI. The effective implementation and enforcement of this Regulation require a governance framework that allows to coordinate and build up central expertise at Union level. The AI Office was established by Commission Decision (53) and has as its mission to develop Union expertise and capabilities in the field of AI and to contribute to the implementation of Union law on AI. Member States should facilitate the tasks of the AI Office with a view to support the development of Union expertise and capabilities at Union level and to strengthen the functioning of the digital single market. Furthermore, a Board composed of representatives of the Member States, a scientific panel to integrate the scientific community and an advisory forum to contribute stakeholder input to the implementation of this Regulation, at Union and national level, should be established. The development of Union expertise and capabilities should also include making use of existing resources and expertise, in particular through synergies with structures built up in the context of the Union level enforcement of other law and synergies with related initiatives at Union level, such as the EuroHPC Joint Undertaking and the AI testing and experimentation facilities under the Digital Europe Programme.

(149)  In order to facilitate a smooth, effective and harmonised implementation of this Regulation a Board should be established. The Board should reflect the various interests of the AI eco-system and be composed of representatives of the Member States. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or contributing to guidance on matters related to the implementation of this Regulation, including on enforcement matters, technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to the Commission and the Member States and their national competent authorities on specific questions related to AI. In order to give some flexibility to Member States in the designation of their representatives in the Board, such representatives may be any persons belonging to public entities who should have the relevant competences and powers to facilitate coordination at national level and contribute to the achievement of the Board's tasks. The Board should establish two standing sub-groups to provide a platform for cooperation and exchange among market surveillance authorities and notifying authorities on issues related, respectively, to market surveillance and notified bodies. The standing sub-group for market surveillance should act as the administrative cooperation group (ADCO) for this Regulation within the meaning of Article 30 of Regulation (EU) 2019/1020. In accordance with Article 33 of that Regulation, the Commission should support the activities of the standing sub-group for market surveillance by undertaking market evaluations or studies, in particular with a view to identifying aspects of this Regulation requiring specific and urgent coordination among market surveillance authorities. The Board may establish other standing or temporary sub-groups as appropriate for the purpose of examining specific issues. The Board should also cooperate, as appropriate, with relevant Union bodies, expert groups and networks active in the context of relevant Union law, including in particular those active under relevant Union law on data, digital products and services.

(150)   With a view to ensure the involvement of stakeholders in the implementation and application of this Regulation, an advisory forum should be established to advise and provide technical expertise to the Board and the Commission. To ensure a varied and balanced stakeholder representation between commercial and non-commercial interest and, within the category of commercial interests, with regards to SMEs and other undertakings, the advisory forum should comprise inter alia industry, start-ups, SMEs, academia, civil society, including the social partners, as well as the Fundamental Rights Agency, ENISA, the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC) and the European Telecommunications Standards Institute (ETSI).

(151)   To support the implementation and enforcement of this Regulation, in particular the monitoring activities of the AI Office as regards general-purpose AI models, a scientific panel of independent experts should be established. The independent experts constituting the scientific panel should be selected on the basis of up-to-date scientific or technical expertise in the field of AI and should perform their tasks with impartiality, objectivity and ensure the confidentiality of information and data obtained in carrying out their tasks and activities. To allow reinforcing national capacities necessary for the effective enforcement of this Regulation, Member States should be able to request support from the pool of experts constituting the scientific panel for their enforcement activities.

(152)   In order to support adequate enforcement as regards AI systems and reinforce the capacities of the Member States, Union AI testing support structures should be established and made available to the Member States.

(153)  Member States hold a key role in the application and enforcement of this Regulation. In that respect, each Member State should designate at least one notifying authority and at least one market surveillance authority as national competent authorities for the purpose of supervising the application and implementation of this Regulation. Member States may decide to appoint any kind of public entity to perform the tasks of the national competent authorities within the meaning of this Regulation, in accordance with their specific national organisational characteristics and needs. In order to increase organisation efficiency on the side of Member States and to set a single point of contact vis-à-vis the public and other counterparts at Member State and Union levels, each Member State should designate a market surveillance authority to act as a single point of contact.

(154)   The national competent authorities should exercise their powers independently, impartially and without bias, so as to safeguard the principles of objectivity of their activities and tasks and to ensure the application and implementation of this Regulation. The members of these authorities should refrain from any action incompatible with their duties and should be subject to confidentiality rules under this Regulation.

(155)  In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. Where relevant, post-market monitoring should include an analysis of the interaction with other AI systems including other devices and software. Post-market monitoring should not cover sensitive operational data of deployers which are law enforcement authorities. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or put into service can be more efficiently and timely addressed. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents resulting from the use of their AI systems, meaning incidents or malfunctioning leading to death or serious damage to health, serious and irreversible disruption of the management and operation of critical infrastructure, infringements of obligations under Union law intended to protect fundamental rights or serious damage to property or the environment.

(156)  In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. Market surveillance authorities designated pursuant to this Regulation should have all enforcement powers laid down in this Regulation and in Regulation (EU) 2019/1020 and should exercise their powers and carry out their duties independently, impartially and without bias. Although the majority of AI systems are not subject to specific requirements and obligations under this Regulation, market surveillance authorities may take measures in relation to all AI systems when they present a risk in accordance with this Regulation. Due to the specific nature of Union institutions, agencies and bodies falling within the scope of this Regulation, it is appropriate to designate the European Data Protection Supervisor as a competent market surveillance authority for them. This should be without prejudice to the designation of national competent authorities by the Member States. Market surveillance activities should not affect the ability of the supervised entities to carry out their tasks independently, when such independence is required by Union law .

(157)   This Regulation is without prejudice to the competences, tasks, powers and independence of relevant national public authorities or bodies which supervise the application of Union law protecting fundamental rights, including equality bodies and data protection authorities. Where necessary for their mandate, those national public authorities or bodies should also have access to any documentation created under this Regulation. A specific safeguard procedure should be set for ensuring adequate and timely enforcement against AI systems presenting a risk to health, safety and fundamental rights. The procedure for such AI systems presenting a risk should be applied to high-risk AI systems presenting a risk, prohibited systems which have been placed on the market, put into service or used in violation of the prohibited practices laid down in this Regulation and AI systems which have been made available in violation of the transparency requirements laid down in this Regulation and present a risk.

(158)  Union financial services law includes internal governance and risk-management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legal acts, the competent authorities for the supervision and enforcement of those legal acts, in particular competent authorities as defined in Regulation (EU) No 575/2013 of the European Parliament and of the Council (54) and Directives 2008/48/EC (55) , 2009/138/EC (56) , 2013/36/EU (57) , 2014/17/EU (58) and (EU) 2016/97 (59) of the European Parliament and of the Council , should be designated , within their respective competences, as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions unless Member States decide to designate another authority to fulfil these market surveillance tasks.

Those competent authorities should have all powers under this Regulation and Regulation (EU) 2019/1020 to enforce the requirements and obligations of this Regulation, including powers to carry out ex post market surveillance activities that can be integrated, as appropriate, into their existing supervisory mechanisms and procedures under the relevant Union financial services law. It is appropriate to envisage that, when acting as market surveillance authorities under this Regulation, the national authorities responsible for the supervision of credit institutions regulated under Directive 2013/36/EU, which are participating in the Single Supervisory Mechanism established by Council Regulation (EU) No 1024/2013 (60), should report, without delay, to the European Central Bank any information identified in the course of their market surveillance activities that may be of potential interest for the European Central Bank’s prudential supervisory tasks as specified in that Regulation.

To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU, it is also appropriate to integrate some of the providers’ procedural obligations in relation to risk management, post-market monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on deployers of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU. The same regime should apply to insurance and re-insurance undertakings and insurance holding companies under Directive 2009/138/EC and the insurance intermediaries under Directive (EU) 2016/97 and other types of financial institutions subject to requirements regarding internal governance, arrangements or processes established pursuant to the relevant Union financial services law to ensure consistency and equal treatment in the financial sector.

(159)   Each market surveillance authority for high-risk AI systems in the area of biometrics, as listed in an annex to this Regulation insofar as those systems are used for the purposes of law enforcement, migration, asylum and border control management, or the administration of justice and democratic processes, should have effective investigative and corrective powers, including at least the power to obtain access to all personal data that are being processed and to all information necessary for the performance of its tasks. The market surveillance authorities should be able to exercise their powers by acting with complete independence. Any limitations of their access to sensitive operational data under this Regulation should be without prejudice to the powers conferred to them by Directive (EU) 2016/680. No exclusion on disclosing data to national data protection authorities under this Regulation should affect the current or future powers of those authorities beyond the scope of this Regulation.

(160)   The market surveillance authorities of the Member States and the Commission should be able to propose joint activities, including joint investigations, to be conducted by market surveillance authorities or market surveillance authorities jointly with the Commission, that have the aim of promoting compliance, identifying non-compliance, raising awareness and providing guidance in relation to this Regulation with respect to specific categories of high-risk AI systems that are found to present a serious risk across two or more Member States. Joint activities to promote compliance should be carried out in accordance with Article 9 of Regulation (EU) 2019/1020. The AI Office should provide coordination support for joint investigations.

(161)   It is necessary to clarify the responsibilities and competences at Union and national level as regards AI systems that are built on general-purpose AI models. To avoid overlapping competences, where an AI system is based on a general-purpose AI model and the model and system are provided by the same provider, the supervision should take place at Union level through the AI Office, which should have the powers of a market surveillance authority within the meaning of Regulation (EU) 2019/1020 for this purpose. In all other cases, national market surveillance authorities remain responsible for the supervision of AI systems. However, for general-purpose AI systems that can be used directly by deployers for at least one purpose that is classified as high-risk, market surveillance authorities should cooperate with the AI Office to carry out evaluations of compliance and inform the Board and other market surveillance authorities accordingly. Furthermore, market surveillance authorities should be able to request assistance from the AI Office where the market surveillance authority is unable to conclude an investigation on a high-risk AI system because of its inability to access certain information related to the general-purpose AI model on which the high-risk AI system is built. In such cases, the procedure regarding mutual assistance in cross-border cases in Chapter VI of Regulation (EU) 2019/1020 should apply mutatis mutandis.

(162)   To make best use of the centralised Union expertise and synergies at Union level, the powers of supervision and enforcement of the obligations on providers of general-purpose AI models should be a competence of the Commission. The Commission should entrust the implementation of these tasks to the AI Office, without prejudice to the powers of organisation of the Commission and the division of competences between Member States and the Union based on the Treaties. The AI Office should be able to carry out all necessary actions to monitor the effective implementation of this Regulation as regards general-purpose AI models. It should be able to investigate possible infringements of the rules on providers of general-purpose AI models both on its own initiative, following the results of its monitoring activities, or upon request from market surveillance authorities in line with the conditions set out in this Regulation. To support effective monitoring of the AI Office, it should provide for the possibility that downstream providers lodge complaints about possible infringements of the rules on providers of general-purpose AI models.

(163)   With a view to complement the governance systems for general-purpose AI models, the scientific panel should support the monitoring activities of the AI Office and may, in certain cases, provide qualified alerts to the AI Office which trigger follow-ups such as investigations. This should be the case where the scientific panel has reason to suspect that a general-purpose AI model poses a concrete and identifiable risk at Union level. Furthermore, this should be the case where the scientific panel has reason to suspect that a general-purpose AI model meets the criteria that would lead to a classification as general-purpose AI model with systemic risk. To equip the scientific panel with the information necessary for the performance of those tasks, there should be a mechanism whereby the scientific panel can request the Commission to require documentation or information from a provider.

(164)   The AI Office should be able to take the necessary actions to monitor the effective implementation of and compliance with the obligations for providers of general-purpose AI models laid down in this Regulation. The AI Office should be able to investigate possible infringements in accordance with the powers provided for in this Regulation, including by requesting documentation and information, by conducting evaluations, as well as by requesting measures from providers of general-purpose AI models. In the conduct of evaluations, in order to make use of independent expertise, the AI Office should be able to involve independent experts to carry out the evaluations on its behalf. Compliance with the obligations should be enforceable, inter alia, through requests to take appropriate measures, including risk mitigation measures in the case of identified systemic risks as well as restricting the making available on the market, withdrawing or recalling the model. As a safeguard, where needed beyond the procedural rights provided for in this Regulation, providers of general-purpose AI models should have the procedural rights provided for in Article 18 of Regulation (EU) 2019/1020, which should apply mutatis mutandis, without prejudice to more specific procedural rights provided for by this Regulation.

(165)  The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of ethical and trustworthy AI in the Union. Providers of AI systems that are not high-risk should be encouraged to create codes of conduct, including related governance mechanisms, intended to foster the voluntary application of some or all of the mandatory requirements applicable to high-risk AI systems, adapted in light of the intended purpose of the systems and the lower risk involved and taking into account the available technical solutions and industry best practices such as model and data cards. Providers and, as appropriate, deployers of all AI systems, high-risk or not, and AI models should also be encouraged to apply on a voluntary basis additional requirements related, for example, to the elements of the Union’s Ethics Guidelines for Trustworthy AI, environmental sustainability, AI literacy measures, inclusive and diverse design and development of AI systems, including attention to vulnerable persons and accessibility to persons with disability, stakeholders’ participation with the involvement, as appropriate, of relevant stakeholders such as business and civil society organisations, academia, research organisations, trade unions and consumer protection organisations in the design and development of AI systems, and diversity of the development teams, including gender balance. To ensure that the voluntary codes of conduct are effective, they should be based on clear objectives and key performance indicators to measure the achievement of those objectives. They should also be developed in an inclusive way, as appropriate, with the involvement of relevant stakeholders such as business and civil society organisations, academia, research organisations, trade unions and consumer protection organisations. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.

(166)  It is important that AI systems related to products that are not high-risk in accordance with this Regulation and thus are not required to comply with the requirements set out for high-risk AI systems are nevertheless safe when placed on the market or put into service. To contribute to this objective, Regulation (EU) 2023/988 of the European Parliament and of the Council (61) would apply as a safety net.

(167)  In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should respect the confidentiality of information and data obtained in carrying out their tasks, in accordance with Union or national law. They should carry out their tasks and activities in such a manner as to protect, in particular, intellectual property rights, confidential business information and trade secrets, the effective implementation of this Regulation, public and national security interests, the integrity of criminal and administrative proceedings, and the integrity of classified information.

(168)   Compliance with this Regulation should be enforceable by means of the imposition of penalties and other enforcement measures. Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement, including in respect of the ne bis in idem principle. In order to strengthen and harmonise administrative penalties for infringement of this Regulation, the upper limits for setting the administrative fines for certain specific infringements should be laid down. When assessing the amount of the fines, Member States should, in each individual case, take into account all relevant circumstances of the specific situation, with due regard in particular to the nature, gravity and duration of the infringement and of its consequences and to the size of the provider, in particular if the provider is an SME, including a start-up. The European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation.

(169)   Compliance with the obligations on providers of general-purpose AI models imposed under this Regulation should be enforceable among others by means of fines. To that end, appropriate levels of fines should also be laid down for infringement of those obligations, including the failure to comply with measures requested by the Commission in accordance with this Regulation, subject to appropriate limitation periods in accordance with the principle of proportionality. All decisions taken by the Commission under this Regulation are subject to review by the Court of Justice of the European Union in accordance with the TFEU.

(170)   Union and national law already provide effective remedies to natural and legal persons whose rights and freedoms are adversely affected by the use of AI systems. Without prejudice to those remedies, any natural or legal person that has grounds to consider that there has been an infringement of this Regulation should be entitled to lodge a complaint with the relevant market surveillance authority.

(171)   Affected persons should have the right to obtain an explanation where a deployer’s decision is based mainly upon the output from certain high-risk systems that fall within the scope of this Regulation and where that decision produces legal effects or similarly significantly affects those persons in a way that they consider to have an adverse impact on their health, safety or fundamental rights. That explanation should be clear and meaningful and should provide a basis on which the affected persons are able to exercise their rights. The right to obtain an explanation should not apply to the use of AI systems for which exceptions or restrictions follow from Union or national law and should apply only to the extent this right is not already provided for under Union law.

(172)   Persons acting as whistleblowers on the infringements of this Regulation should be protected under the Union law. Directive (EU) 2019/1937 of the European Parliament and of the Council (62) should therefore apply to the reporting of infringements of this Regulation and the protection of persons reporting such infringements.

(173)  In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the conditions under which an AI system shall not be considered as high-risk, the list of high-risk AI systems, the provisions regarding technical documentation, the content of the EU declaration of conformity, the provisions regarding the conformity assessment procedures, the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply, the threshold, benchmarks and indicators, including by supplementing those benchmarks and indicators, in the rules for the classification of general-purpose AI models with systemic risk, the criteria for the designation of general-purpose AI models with systemic risk, the technical documentation for providers of general-purpose AI models and the transparency information for providers of general-purpose AI models. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making (63). In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts.

(174)   Given the rapid technological developments and the required technical expertise in the effective application of this Regulation, the Commission should evaluate and review this Regulation by ... [five years from the date of entry into force of this Regulation] and every four years thereafter and report to the European Parliament and the Council. In addition, taking into account the implications for the scope of this Regulation, the Commission should carry out an assessment of the need to amend the list of high-risk AI systems and the list of prohibited practices once a year. Moreover, by two years after entry into application and every four years thereafter, the Commission should evaluate and report to the European Parliament and to the Council on the need to amend the list of high-risk areas in the annex to this Regulation, the AI systems within the scope of the transparency obligations, the effectiveness of the supervision and governance system and the progress on the development of standardisation deliverables on energy efficient development of general-purpose AI models, including the need for further measures or actions. Finally, by ... [four years from the entry into force of this Regulation] and every three years thereafter, the Commission should evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of the requirements provided for high-risk AI systems in the case of AI systems other than high-risk AI systems and possibly other additional requirements for such AI systems.

(175)  In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be conferred on the Commission. Those powers should be exercised in accordance with Regulation (EU) No 182/2011 of the European Parliament and of the Council (64) .

(176)  Since the objective of this Regulation, namely to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy AI, while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection against harmful effects of AI systems in the Union and supporting innovation, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective.

(177)   In order to ensure legal certainty, ensure an appropriate adaptation period for operators and avoid disruption to the market, including by ensuring continuity of the use of AI systems, it is appropriate that this Regulation applies to the high-risk AI systems that have been placed on the market or put into service before the general date of application thereof, only if, from that date, those systems are subject to significant changes in their design or intended purpose. It is appropriate to clarify that, in this respect, the concept of significant change should be understood as equivalent in substance to the notion of substantial modification, which is used with regard only to high-risk AI systems pursuant to this Regulation. On an exceptional basis and in light of public accountability, operators of AI systems which are components of the large-scale IT systems established by the legal acts listed in an annex to this Regulation and operators of high-risk AI systems that are intended to be used by public authorities should, respectively, take the necessary steps to comply with the requirements of this Regulation by the end of 2030 and by six years after the entry into force.

(178)   Providers of high-risk AI systems are encouraged to start to comply, on a voluntary basis, with the relevant obligations of this Regulation already during the transitional period.

(179)  This Regulation should apply from … [two years from the date of entry into force of this Regulation]. However, taking into account the unacceptable risk associated with the use of AI in certain ways, the prohibitions should apply already from … [six months from the date of entry into force of this Regulation]. While the full effect of those prohibitions follows with the establishment of the governance and enforcement of this Regulation, anticipating the application of the prohibitions is important to take account of unacceptable risks and to have an effect on other procedures, such as in civil law. Moreover, the infrastructure related to the governance and the conformity assessment system should be operational before that date, therefore the provisions on notified bodies and governance structure should apply from … [12 months from the date of entry into force of this Regulation]. Given the rapid pace of technological advancements and adoption of general-purpose AI models, obligations for providers of general-purpose AI models should apply from ... [12 months from the date of entry into force of this Regulation]. Codes of practice should be ready by ... [nine months from the date of entry into force of this Regulation] in view of enabling providers to demonstrate compliance on time. The AI Office should ensure that classification rules and procedures are up to date in light of technological developments. In addition, Member States should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of this Regulation. Therefore the provisions on penalties should apply from … [12 months from the date of entry into force of this Regulation].
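
As an illustration of the staggered timeline described in recital (179), the short sketch below computes the main application dates from a hypothetical entry-into-force date. The date chosen and the helper function are assumptions for illustration only and are not part of the Regulation.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months, clamping the day."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    days_in_month = [31, 29 if (year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)) else 28,
                     31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

# Hypothetical entry-into-force date, used purely for illustration.
entry_into_force = date(2024, 8, 1)

milestones = {
    "prohibitions apply": 6,                                          # six months
    "codes of practice ready": 9,                                     # nine months
    "notified bodies, governance, GPAI obligations, penalties": 12,   # 12 months
    "general date of application": 24,                                # two years
}

for label, offset in milestones.items():
    print(f"{label}: {add_months(entry_into_force, offset)}")
```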

(180)  The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(1) and (2) of Regulation (EU) 2018/1725 and delivered their joint opinion on 18 June 2021,

HAVE ADOPTED THIS REGULATION:

GENERAL PROVISIONS

Subject matter

1.   The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter of Fundamental Rights, including democracy, the rule of law and environmental protection, against the harmful effects of artificial intelligence systems (AI systems) in the Union, and to support innovation.

2.  This Regulation lays down:

(a)  harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union;

(b)  prohibitions of certain AI practices;

(c)  specific requirements for high-risk AI systems and obligations for operators of such systems;

(d)  harmonised transparency rules for certain AI systems;

(e)   harmonised rules for the placing on the market of general-purpose AI models;

(f)  rules on market monitoring, market surveillance, governance and enforcement;

(g)   measures to support innovation, with a particular focus on SMEs, including start-ups.

Scope

1.  This Regulation applies to:

(a)  providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country;

(b)   deployers of AI systems that have their place of establishment or are located within the Union;

(c)  providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union;

(d)   importers and distributors of AI systems;

(e)   product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;

(f)   authorised representatives of providers, which are not established in the Union;

(g)   affected persons that are located in the Union.

2.  For AI systems classified as high-risk AI systems in accordance with Article 6(1) and (2) related to products covered by the Union harmonisation legislation listed in Section B of Annex I, only Article 112 applies. Article 57 applies only in so far as the requirements for high-risk AI systems under this Regulation have been integrated in that Union harmonisation legislation.

3.   This Regulation does not apply to areas outside the scope of Union law, and shall not, in any event, affect the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States with carrying out tasks in relation to those competences.

This Regulation does not apply to AI systems where and in so far they are placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.

This Regulation does not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.

4.  This Regulation applies neither to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States , provided that such a third country or international organisation provides adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals.

5.  This Regulation shall not affect the application of the provisions on the liability of providers of intermediary services as set out in Chapter II of Regulation (EU) 2022/2065.

6.   This Regulation does not apply to AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.

7.   Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processed in connection with the rights and obligations laid down in this Regulation. This Regulation shall not affect Regulation (EU) 2016/679 or (EU) 2018/1725, or Directive 2002/58/EC or (EU) 2016/680, without prejudice to the arrangements provided for in Article 10(5) and Article 59 of this Regulation.

8.   This Regulation does not apply to any research, testing or development activity regarding AI systems or models prior to their being placed on the market or put into service. Such activities shall be conducted in accordance with applicable Union law. Testing in real world conditions shall not be covered by that exclusion.

9.   This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety.

10.   This Regulation does not apply to obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity.

11.   This Regulation does not preclude the Union or Member States from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or from encouraging or allowing the application of collective agreements which are more favourable to workers.

12.   This Regulation applies to AI systems released under free and open source licences, unless they are placed on the market or put into service as high-risk AI systems or as an AI system that falls under Article 5 or 50.

Definitions

For the purposes of this Regulation, the following definitions apply:

(1)  ‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;

(2)   ‘risk’ means the combination of the probability of an occurrence of harm and the severity of that harm;
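
The Regulation defines risk qualitatively as the combination of probability and severity. One common, non-binding way of operationalising that combination in risk-management practice is as the product of the two factors; the scales used below are assumptions for illustration only:

$$
\text{risk} \;=\; P(\text{harm}) \times S(\text{harm})
$$

For instance, an estimated probability of harm of 0.01 per year of operation combined with a severity rated 8 on a 10-point scale would give an indicative risk score of 0.08, which an organisation could then rank against its other risks.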

(3)  ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;

(4)  ‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity;

(5)  ‘authorised representative’ means a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation;

(6)  ‘importer’ means a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country;

(7)  ‘distributor’ means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market;

(8)  ‘operator’ means a provider, product manufacturer, deployer, authorised representative, importer or distributor;

(9)  ‘placing on the market’ means the first making available of an AI system or a general-purpose AI model on the Union market;

(10)  ‘making available on the market’ means the supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge;

(11)  ‘putting into service’ means the supply of an AI system by the provider for first use directly to the deployer or for own use in the Union for its intended purpose;

(12)  ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation;

(13)  ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems;

(14)  ‘safety component’ means a component of a product or of a system which fulfils a safety function for that product or system, or the failure or malfunctioning of which endangers the health and safety of persons or property;

(15)  ‘instructions for use’ means the information provided by the provider to inform the deployer of in particular an AI system’s intended purpose and proper use;

(16)  ‘recall of an AI system’ means any measure aiming to achieve the return to the provider or taking out of service or disabling the use of an AI system made available to deployers;

(17)  ‘withdrawal of an AI system’ means any measure aiming to prevent an AI system in the supply chain being made available on the market;

(18)  ‘performance of an AI system’ means the ability of an AI system to achieve its intended purpose;

(19)  ‘notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring;

(20)  ‘conformity assessment’ means the process of demonstrating whether the requirements set out in Chapter II, Section 2 relating to a high-risk AI system have been fulfilled;

(21)  ‘conformity assessment body’ means a body that performs third-party conformity assessment activities, including testing, certification and inspection;

(22)  ‘notified body’ means a conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation as listed in Section B of Annex I;

(23)  ‘substantial modification’ means a change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter II, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed;

(24)  ‘CE marking’ means a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Chapter II, Section 2 and other applicable Union harmonisation legislation listed in Annex I, providing for its affixing;

(25)  ‘post-market monitoring system’ means all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions;

(26)  ‘market surveillance authority’ means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020;

(27)  ‘harmonised standard’ means a harmonised standard as defined in Article 2(1), point (c), of Regulation (EU) No 1025/2012;

(28)  ‘common specification’ means a set of technical specifications as defined in Article 2, point (4) of Regulation (EU) No 1025/2012, providing means to comply with certain requirements established under this Regulation;

(29)  ‘training data’ means data used for training an AI system through fitting its learnable parameters;

(30)  ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process in order, inter alia, to prevent underfitting or overfitting;

(31)  ‘validation data set’ means a separate data set or part of the training data set, either as a fixed or variable split;

(32)  ‘testing data’ means data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service;

(33)  ‘input data’ means data provided to or directly acquired by an AI system on the basis of which the system produces an output;

(34)  ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data;

(35)   ‘biometric identification’ means the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database;

(36)   ‘biometric verification’ means the automated, one-to-one verification, including authentication, of the identity of natural persons by comparing their biometric data to previously provided biometric data;

(37)   ‘special categories of personal data’ means the categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725;

(38)   ‘sensitive operational data’ means operational data related to activities of prevention, detection, investigation or prosecution of criminal offences, the disclosure of which could jeopardise the integrity of criminal proceedings;

(39)  ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data;

(40)  ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly necessary for objective technical reasons;

(41)  ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database;

(42)  ‘real-time remote biometric identification system’ means a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay and comprises not only instant identification, but also limited short delays in order to avoid circumvention;

(43)  ‘post remote biometric identification system’ means a remote biometric identification system other than a real-time remote biometric identification system;

(44)  ‘publicly accessible space’ means any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions;

(45)  ‘law enforcement authority’ means:

(a)  any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or

(b)  any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;

(46)  ‘law enforcement’ means activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security;

(47)  ‘AI Office’ means the Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and AI governance carried out by the European Artificial Intelligence Office established by Commission Decision of 24.1.2024; references in this Regulation to the AI Office shall be construed as references to the Commission;

(48)  ‘national competent authority’ means a notifying authority or a market surveillance authority;

(49)  ‘serious incident’ means an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

(a)  the death of a person, or serious harm to a person’s health;

(b)  a serious and irreversible disruption of the management or operation of critical infrastructure;

(c)   the infringement of obligations under Union law intended to protect fundamental rights;

(d)   serious harm to property or the environment;

(50)   ‘personal data’ means personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679;

(51)   ‘non-personal data’ means data other than personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679;

(52)   ‘profiling’ means profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679 or, in the case of law enforcement authorities, as defined in Article 3, point (4) of Directive (EU) 2016/680 or, in the case of Union institutions, bodies, offices or agencies, as defined in Article 3, point (5) of Regulation (EU) 2018/1725;

(53)   ‘real-world testing plan’ means a document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real-world conditions;

(54)   ‘sandbox plan’ means a document agreed between the participating provider and the competent authority describing the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox;

(55)   ‘AI regulatory sandbox’ means a controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision;

(56)   ‘AI literacy’ means skills, knowledge and understanding that allows providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause;

(57)   ‘testing in real-world conditions’ means the temporary testing of an AI system for its intended purpose in real-world conditions outside a laboratory or otherwise simulated environment, with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation and it is not considered to be placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all the conditions laid down in Article 57 or 60 are fulfilled;

(58)   ‘subject’, for the purpose of real-world testing, means a natural person who participates in testing in real-world conditions;

(59)   ‘informed consent’ means a subject's freely given, specific, unambiguous and voluntary expression of his or her willingness to participate in a particular testing in real-world conditions, after having been informed of all aspects of the testing that are relevant to the subject's decision to participate;

(60)   ‘deep fake’ means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful;

(61)   ‘widespread infringement’ means any act or omission contrary to Union law protecting the interest of individuals, which:

(a)   has harmed or is likely to harm the collective interests of individuals residing in at least two Member States other than the Member State in which:

(i)   the act or omission originated or took place;

(ii)   the provider concerned, or, where applicable, its authorised representative is located or established; or

(iii)   the deployer is established, when the infringement is committed by the deployer;

(b)   has caused, causes or is likely to cause harm to the collective interests of individuals and has common features, including the same unlawful practice or the same interest being infringed, and is occurring concurrently, committed by the same operator, in at least three Member States;

(62)   ‘critical infrastructure’ means critical infrastructure as defined in Article 2, point (4), of Directive (EU) 2022/2557;

(63)   ‘general-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are released on the market;

(64)   ‘high-impact capabilities’ means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models;

(65)   ‘systemic risk’ means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain;

(66)   ‘general-purpose AI system’ means an AI system which is based on a general-purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems;

(67)   ‘floating-point operation’ or ‘FLOP’ means any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base;

(68)   ‘downstream provider’ means a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.
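
For illustration, the data-set definitions in points (29) to (32) above correspond to the familiar machine-learning practice of partitioning data before development is complete. The following is a minimal sketch, assuming a Python and NumPy workflow; the array names and split ratios are hypothetical choices, not anything prescribed by the Regulation.

```python
# Illustrative only: a fixed split into training, validation and testing data
# in the sense of points (29) to (32). Names and ratios are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 8))        # hypothetical input data (point 33)
y = (X[:, 0] > 0).astype(int)         # hypothetical labels

idx = rng.permutation(len(X))
n_train, n_val = 700, 150             # fixed split: 70 % / 15 % / 15 %
train_idx = idx[:n_train]                      # training data, point (29)
val_idx = idx[n_train:n_train + n_val]         # validation data set, points (30)-(31)
test_idx = idx[n_train + n_val:]               # testing data, point (32)

print(len(train_idx), len(val_idx), len(test_idx))  # 700 150 150
```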

AI literacy

Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.

PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES

Prohibited AI Practices

1.  The following AI practices shall be prohibited:

(a)  the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of, materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing a person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm;

(b)  the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;

(c)  the placing on the market, the putting into service or the use of AI systems for the purpose of the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:

(i)  detrimental or unfavourable treatment of certain natural persons or whole groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected;

(ii)  detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity;

(d)   the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the likelihood of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity;

(e)   the placing on the market, the putting into service for this specific purpose, or use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;

(f)   the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;

(g)   the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorising of biometric data in the area of law enforcement;

(h)   the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:

(i)  the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as searching for missing persons;

(ii)  the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;

(iii)  the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation, prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.

Point (h) of the first subparagraph is without prejudice to Article 9 of Regulation (EU) 2016/679 for the processing of biometric data for purposes other than law enforcement.

2.  The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, point (h), shall be deployed only for the purposes set out in paragraph 1, point (h), to confirm the identity of the specifically targeted individual, and it shall take into account the following elements:

(a)  the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm that would be caused if the system were not used;

(b)  the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.

In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay.

3.  For the purposes of paragraph 1, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted.

The competent judicial authority or an independent administrative authority whose decision is binding shall grant the authorisation only where it is satisfied, on the basis of objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system concerned is necessary for, and proportionate to, achieving one of the objectives specified in paragraph 1, point (h), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as the geographic and personal scope. In deciding on the request, that authority shall take into account the elements referred to in paragraph 2. No decision that produces an adverse legal effect on a person may be taken based solely on the output of the ‘real-time’ remote biometric identification system.

4.   Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority and the national data protection authority in accordance with the national rules referred to in paragraph 5. The notification shall, as a minimum, contain the information specified under paragraph 6 and shall not include sensitive operational data.

5.  A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement within the limits and under the conditions listed in paragraph 1, point (h), and paragraphs 2 and 3. Member States concerned shall lay down in their national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (h), including which of the criminal offences referred to in point (h)(iii) thereof, the competent authorities may be authorised to use those systems for the purposes of law enforcement. Member States shall notify those rules to the Commission at the latest 30 days following the adoption thereof. Member States may introduce, in accordance with Union law, more restrictive laws on the use of remote biometric identification systems.

6.   National market surveillance authorities and the national data protection authorities of Member States that have been notified of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes pursuant to paragraph 4 shall submit to the Commission annual reports on such use. For that purpose, the Commission shall provide Member States and national market surveillance and data protection authorities with a template, including information on the number of the decisions taken by competent judicial authorities or an independent administrative authority whose decision is binding upon requests for authorisations in accordance with paragraph 3 and their result.

7.   The Commission shall publish annual reports on the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, based on aggregated data in Member States on the basis of the annual reports referred to in paragraph 6. Those annual reports shall not include sensitive operational data of the related law enforcement activities.

8.   This Article shall not affect the prohibitions that apply where an AI practice infringes other Union law.

CHAPTER III

HIGH-RISK AI SYSTEMS

Classification of AI systems as high-risk

Classification rules for high-risk AI systems

1.  Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:

(a)  the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;

(b)  the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.

2.  In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall be considered to be high-risk.

3.   By derogation from paragraph 2, an AI system shall not be considered to be high-risk if it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. This shall be the case where one or more of the following conditions are fulfilled:

(a)   the AI system is intended to perform a narrow procedural task;

(b)   the AI system is intended to improve the result of a previously completed human activity;

(c)   the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or

(d)   the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.

Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons.

4.   A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment.

5.   The Commission shall, after consulting the European Artificial Intelligence Board (the ‘Board’), and no later than … [18 months from the date of entry into force of this Regulation], provide guidelines specifying the practical implementation of this Article in line with Article 96 together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.

6.   The Commission shall adopt delegated acts in accordance with Article 97 to amend the conditions laid down in paragraph 3, first subparagraph, of this Article.

The Commission may adopt delegated acts in accordance with Article 97 in order to add new conditions to those laid down in paragraph 3, first subparagraph, or to modify them, only where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III but do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.

The Commission shall adopt delegated acts in accordance with Article 97 in order to delete any of the conditions laid down in paragraph 3, first subparagraph, where there is concrete and reliable evidence that this is necessary for the purpose of maintaining the level of protection of health, safety and fundamental rights in the Union.

Any amendment to the conditions laid down in paragraph 3, first subparagraph, shall not decrease the overall level of protection of health, safety and fundamental rights in the Union.

When adopting the delegated acts, the Commission shall ensure consistency with the delegated acts adopted pursuant to Article 7(1), and shall take account of market and technological developments.
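
For illustration, the classification logic of this Article can be read as a small decision procedure. The sketch below is purely illustrative, written in Python with hypothetical boolean flags that a provider would have to assess and document; it is not an official test or API.

```python
# Illustrative only: the high-risk classification conditions of this Article,
# expressed as a decision function. All flags are hypothetical inputs.
def is_high_risk(
    is_safety_component_of_annex_i_product: bool,
    requires_third_party_conformity_assessment: bool,
    listed_in_annex_iii: bool,
    performs_profiling_of_natural_persons: bool,
    meets_a_derogation_condition: bool,  # any of paragraph 3, points (a)-(d)
) -> bool:
    # Paragraph 1: safety component (or the product itself) under Annex I
    # legislation AND subject to third-party conformity assessment.
    if is_safety_component_of_annex_i_product and requires_third_party_conformity_assessment:
        return True
    # Paragraph 2: Annex III systems are high-risk ...
    if listed_in_annex_iii:
        # ... always when they perform profiling of natural persons
        # (paragraph 3, last subparagraph),
        if performs_profiling_of_natural_persons:
            return True
        # ... otherwise unless a paragraph 3 derogation condition applies.
        return not meets_a_derogation_condition
    return False

# Example: an Annex III system that performs profiling is always high-risk.
print(is_high_risk(False, False, True, True, True))  # True
```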

Amendments to Annex III

1.  The Commission shall adopt delegated acts in accordance with Article 97 to amend Annex III by adding or modifying use-cases of high-risk AI systems where both of the following conditions are fulfilled:

(a)  the AI systems are intended to be used in any of the areas listed in Annex III;

(b)  the AI systems pose a risk of harm to health and safety, or an adverse impact on fundamental rights, and that risk is equivalent to, or greater than, the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.

2.  When assessing the condition under paragraph 1, point (b), the Commission shall take into account the following criteria:

(a)  the intended purpose of the AI system;

(b)  the extent to which an AI system has been used or is likely to be used;

(c)   the nature and amount of the data processed and used by the AI system, in particular whether special categories of personal data are processed;

(d)   the extent to which the AI system acts autonomously and the possibility for a human to override a decision or recommendations that may lead to potential harm;

(e)  the extent to which the use of an AI system has already caused harm to health and safety, has had an adverse impact on fundamental rights or has given rise to significant concerns in relation to the likelihood of such harm or adverse impact, as demonstrated, for example, by reports or documented allegations submitted to national competent authorities or by other reports, as appropriate;

(f)  the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect multiple persons or to disproportionately affect a particular group of persons;

(g)  the extent to which persons who are potentially harmed or suffer an adverse impact are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;

(h)  the extent to which there is an imbalance of power, or the persons who are potentially harmed or suffer an adverse impact are in a vulnerable position in relation to the deployer of an AI system, in particular due to status, authority, knowledge, economic or social circumstances, or age;

(i)  the extent to which the outcome produced involving an AI system is easily corrigible or reversible, taking into account the technical solutions available to correct or reverse it, whereby outcomes having an adverse impact on health, safety or fundamental rights, shall not be considered to be easily corrigible or reversible;

(j)   the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large, including possible improvements in product safety;

(k)  the extent to which existing Union law provides for:

(i)  effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;

(ii)  effective measures to prevent or substantially minimise those risks.

3.   The Commission shall adopt delegated acts in accordance with Article 97 to amend the list in Annex III by removing high-risk AI systems where both of the following conditions are fulfilled:

(a)   the high-risk AI system concerned no longer poses any significant risks to fundamental rights, health or safety, taking into account the criteria listed in paragraph 2;

(b)   the deletion does not decrease the overall level of protection of health, safety and fundamental rights under Union law.

Requirements for high-risk AI systems

Compliance with the requirements

1.  High-risk AI systems shall comply with the requirements laid down in this Section, taking into account their intended purposes as well as the generally acknowledged state of the art on AI and AI-related technologies. The risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.

2.   Where a product contains an AI system, to which the requirements of this Regulation as well as requirements of the Union harmonisation legislation listed in Section A of Annex I apply, providers shall be responsible for ensuring that their product is fully compliant with all applicable requirements under applicable Union harmonisation legislation. In ensuring the compliance of high-risk AI systems referred to in paragraph 1 with the requirements set out in this Section, and in order to ensure consistency, avoid duplications and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into documentation and procedures that already exist and are required under the Union harmonisation legislation listed in Section A of Annex I.

Risk management system

1.  A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.

2.  The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps:

(a)  the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose;

(b)  the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse;

(c)  the evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring system referred to in Article 72;

(d)  the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a).

3.   The risks referred to in this Article shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information.

4.  The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Section, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.

5.  The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems is judged to be acceptable.

In identifying the most appropriate risk management measures, the following shall be ensured:

(a)  elimination or reduction of identified and evaluated risks pursuant to paragraph 2 as far as technically feasible through adequate design and development of the high-risk AI system;

(b)  where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated;

(c)  provision of information required pursuant to Article 13 and, where appropriate, training to deployers.

With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used.

6.  High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section.

7.  Testing procedures may include testing in real-world conditions in accordance with Article 60 .

8.  The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.

9.  When implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other groups of vulnerable persons.

10.  For providers of high-risk AI systems that are subject to requirements regarding internal risk management processes under other relevant provisions of Union law, the aspects provided in paragraphs 1 to 9 may be part of, or combined with, the risk management procedures established pursuant to that law.
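
For illustration, paragraph 8 requires that testing be carried out against prior defined metrics and probabilistic thresholds appropriate to the intended purpose. The sketch below shows one such check, assuming a hypothetical accuracy metric and a threshold chosen by the provider; neither the metric nor the threshold is mandated by the Regulation.

```python
# Illustrative only: comparing a measured metric against a pre-defined
# probabilistic threshold, as part of the testing described in paragraph 8.
import numpy as np

def passes_threshold(y_true: np.ndarray, y_pred: np.ndarray, threshold: float) -> bool:
    """Return True if the measured accuracy meets the pre-defined threshold."""
    accuracy = float(np.mean(y_true == y_pred))
    return accuracy >= threshold

# Hypothetical test-set labels and predictions.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0])
print(passes_threshold(y_true, y_pred, threshold=0.85))  # True (accuracy = 0.875)
```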

Data and data governance

1.  High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such data sets are used.

2.  Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular:

(a)  the relevant design choices;

(b)   data collection processes and the origin of data, and in the case of personal data, the original purpose of the data collection;

(c)  relevant data-preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and aggregation;

(d)  the formulation of assumptions, in particular with respect to the information that the data are supposed to measure and represent;

(e)   an assessment of the availability, quantity and suitability of the data sets that are needed;

(f)  examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations;

(g)   appropriate measures to detect, prevent and mitigate possible biases identified according to point (f);

(h)  the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed.

3.  Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination thereof.

4.   Data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the high-risk AI system is intended to be used.

5.  To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph 2, points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulation (EU) 2016/679, Directive (EU) 2016/680 and Regulation (EU) 2018/1725, all the following conditions shall apply in order for such processing to occur:

(a)   the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymised data;

(b)   the special categories of personal data are subject to technical limitations on the re-use of the personal data, and state of the art security and privacy-preserving measures, including pseudonymisation;

(c)   the special categories of personal data are subject to measures to ensure that the personal data processed are secured, protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and ensure that only authorised persons with appropriate confidentiality obligations have access to those personal data;

(d)   the personal data in the special categories of personal data are not to be transmitted, transferred or otherwise accessed by other parties;

(e)   the personal data in the special categories of personal data are deleted once the bias has been corrected or the personal data has reached the end of its retention period, whichever comes first;

(f)   the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 include the reasons why the processing of special categories of personal data was strictly necessary to detect and correct biases, and why that objective could not be achieved by processing other data.

6.  For the development of high-risk AI systems not using techniques involving the training of AI models, paragraphs 2 to 5 apply only to the testing data sets.
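
For illustration, points (f) and (g) of paragraph 2 concern the examination and mitigation of possible biases in the data sets. The sketch below shows one common, elementary examination, comparing outcome rates across a group attribute; the attribute, the data and the 80 % disparity rule of thumb are hypothetical choices, not thresholds set by the Regulation.

```python
# Illustrative only: a simple per-group disparity check on a labelled data set,
# as one possible examination under paragraph 2, points (f) and (g).
from collections import defaultdict

records = [  # hypothetical (group, label) pairs
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, label in records:
    totals[group] += 1
    positives[group] += label

rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # {'A': 0.75, 'B': 0.25}

# Flag a group for further review if its positive-label rate falls below 80 %
# of the highest rate (an illustrative rule of thumb, not a legal test).
max_rate = max(rates.values())
flagged = [g for g, r in rates.items() if max_rate > 0 and r / max_rate < 0.8]
print(flagged)  # ['B']
```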

Technical documentation

1.  The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up to date.

The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Section and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV. SMEs, including start-ups, may provide the elements of the technical documentation specified in Annex IV in a simplified manner. For this purpose, the Commission shall establish a simplified technical documentation form targeted at the needs of small and microenterprises. Where an SME, including a start-up, opts to provide the information required in Annex IV in a simplified manner, it shall use the form referred to in this paragraph. Notified bodies shall accept the form for the purposes of the conformity assessment.

2.  Where a high-risk AI system related to a product covered by the Union harmonisation legislation listed in Section A of Annex I is placed on the market or put into service, a single set of technical documentation shall be drawn up containing all the information set out in paragraph 1, as well as the information required under those legal acts.

3.  The Commission shall adopt delegated acts in accordance with Article 97 to amend Annex IV where necessary to ensure that, in the light of technical progress, the technical documentation provides all the information necessary to assess the compliance of the system with the requirements set out in this Section.

Record-keeping

1.  High-risk AI systems shall technically allow for the automatic recording of events (‘logs’) over their lifetime.

2.   In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for:

(a)   identifying situations that may result in the high-risk AI system presenting a risk within the meaning of Article 79(1) or in a substantial modification;

(b)   facilitating the post-market monitoring referred to in Article 72; and

(c)   monitoring the operation of high-risk AI systems referred to in Article 26(6).

3.  For high-risk AI systems referred to in point 1(a) of Annex III, the logging capabilities shall provide, at a minimum:

(a)  recording of the period of each use of the system (start date and time and end date and time of each use);

(b)  the reference database against which input data has been checked by the system;

(c)  the input data for which the search has led to a match;

(d)  the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5).
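
For illustration, paragraph 3 lists the minimum content of each log entry for the systems referred to in point 1(a) of Annex III. The sketch below shows one possible record structure; the field names and format are hypothetical, as the Regulation does not prescribe any particular logging format.

```python
# Illustrative only: one possible structure for a log entry containing the
# minimum elements of paragraph 3, points (a) to (d). Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class BiometricIdentificationLogEntry:
    use_start: datetime                 # point (a): start date and time of use
    use_end: datetime                   # point (a): end date and time of use
    reference_database: str             # point (b): database checked against
    matched_input_reference: str        # point (c): input data that led to a match
    verifying_persons: List[str] = field(default_factory=list)  # point (d)

entry = BiometricIdentificationLogEntry(
    use_start=datetime(2025, 1, 1, 9, 0),
    use_end=datetime(2025, 1, 1, 9, 5),
    reference_database="reference-db-001",
    matched_input_reference="input-frame-4711",
    verifying_persons=["verifier-1", "verifier-2"],
)
print(entry)
```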

Transparency and provision of information to deployers

1.  High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Section 3.

2.  High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers.

3.  The instructions for use shall contain at least the following information:

(a)  the identity and the contact details of the provider and, where applicable, of its authorised representative;

(b)  the characteristics, capabilities and limitations of performance of the high-risk AI system, including:

(i)  its intended purpose;

(ii)  the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;

(iii)  any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights referred to in Article 9(2);

(iv)   where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information that is relevant to explain its output;

(v)   when appropriate, its performance regarding specific persons or groups of persons on which the system is intended to be used;

(vi)  when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the high-risk AI system;

(vii)   where applicable, information to enable deployers to interpret the output of the high-risk AI system and use it appropriately;

(c)  the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any;

(d)  the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers;

(e)   the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates;

(f)   where relevant, a description of the mechanisms included within the high-risk AI system that allows deployers to properly collect, store and interpret the logs in accordance with Article 12.
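
For illustration, the information listed in paragraph 3 can also be kept in a machine-readable form alongside the accompanying documentation. The sketch below is a purely illustrative structured summary of selected elements; the field names and values are hypothetical and nothing in the Regulation requires this representation.

```python
# Illustrative only: a structured summary of selected instructions-for-use
# elements from paragraph 3. Field names and values are hypothetical.
instructions_for_use = {
    "provider": {"name": "Example Provider", "contact": "contact@example.org"},    # point (a)
    "intended_purpose": "triage support for incoming service requests",            # point (b)(i)
    "accuracy": {"metric": "balanced accuracy", "validated_level": 0.92},           # point (b)(ii)
    "known_risk_circumstances": ["out-of-distribution inputs", "degraded sensors"], # point (b)(iii)
    "human_oversight_measures": ["second-person review of low-confidence outputs"], # point (d)
    "expected_lifetime_years": 5,                                                   # point (e)
    "log_access_mechanism": "exportable audit log",                                 # point (f)
}
print(instructions_for_use["accuracy"])
```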

Human oversight

1.  High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.

2.  Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section.

3.   The oversight measures shall be commensurate to the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures:

(a)   measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;

(b)   measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer.

4.   For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the user in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate to the following circumstances:

(a)   to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance;

(b)   to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;

(c)  to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available;

(d)  to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system;

(e)  to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.

5.  For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 of this Article shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority.

The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate.
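
For illustration, paragraph 4, points (d) and (e), require that the persons assigned to oversight can disregard or override an output and halt the system in a safe state. The sketch below shows one way such an oversight wrapper could look; the class and function names are hypothetical and the example is not a prescribed design.

```python
# Illustrative only: a wrapper giving a human overseer the ability to override
# an output or stop the system in a safe state (paragraph 4, points (d) and (e)).
class OverseenSystem:
    def __init__(self, model):
        self.model = model          # hypothetical callable returning a prediction
        self.stopped = False

    def stop(self) -> None:
        """'Stop button': bring the system to a halt in a safe state."""
        self.stopped = True

    def decide(self, inputs, human_override=None):
        if self.stopped:
            return None             # safe state: no further outputs are produced
        if human_override is not None:
            return human_override   # point (d): the overseer overrides the output
        return self.model(inputs)

system = OverseenSystem(model=lambda x: "approve" if x > 0.5 else "refer")
print(system.decide(0.7))                           # 'approve'
print(system.decide(0.7, human_override="refer"))   # overseer overrides the output
system.stop()
print(system.decide(0.7))                           # None: system halted
```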

Accuracy, robustness and cybersecurity

1.  High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.

2.   To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in paragraph 1 and any other relevant performance metrics, the Commission shall, in cooperation with relevant stakeholders and organisations such as metrology and benchmarking authorities, encourage, as appropriate, the development of benchmarks and measurement methodologies.

3.  The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions for use.

4.  High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard.

The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans.

High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (‘feedback loops’), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures.

5.  High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities.

The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.

The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (‘data poisoning’), or pre-trained components used in training (‘model poisoning’), inputs designed to cause the AI model to make a mistake (‘adversarial examples’ or ‘model evasion’), confidentiality attacks or model flaws.
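
For illustration, paragraph 4 requires resilience to errors and inconsistencies in the operating environment, and paragraph 5 mentions adversarial examples among AI-specific vulnerabilities. The sketch below shows one elementary robustness test, comparing performance on clean and perturbed inputs; the stand-in model, the perturbation and the acceptance margin are hypothetical choices, not requirements of the Regulation.

```python
# Illustrative only: comparing accuracy on clean versus noise-perturbed inputs
# as one elementary robustness check in the spirit of paragraphs 4 and 5.
import numpy as np

rng = np.random.default_rng(seed=0)

def predict(X: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in model: classify by the sign of the first feature."""
    return (X[:, 0] > 0).astype(int)

X = rng.normal(size=(500, 4))
y = (X[:, 0] > 0).astype(int)

clean_acc = float(np.mean(predict(X) == y))
perturbed_acc = float(np.mean(predict(X + rng.normal(scale=0.1, size=X.shape)) == y))

print(clean_acc, perturbed_acc)
# Hypothetical acceptance rule: the drop under perturbation stays within 5 points.
print(clean_acc - perturbed_acc <= 0.05)
```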

Obligations of providers and deployers of high-risk AI systems and other parties

Obligations of providers of high-risk AI systems

Providers of high-risk AI systems shall:

(a)  ensure that their high-risk AI systems are compliant with the requirements set out in Section 2;

(b)   indicate on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable, their name, registered trade name or registered trade mark, and the address at which they can be contacted;

(c)  have a quality management system in place which complies with Article 17;

(d)   keep the documentation referred to in Article 18;

(e)  when under their control, keep the logs automatically generated by their high-risk AI systems as referred to in Article 19;

(f)  ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, prior to its being placed on the market or put into service;

(g)   draw up an EU declaration of conformity in accordance with Article 47;

(h)   affix the CE marking to the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, to indicate conformity with this Regulation, in accordance with Article 48;

(i)  comply with the registration obligations referred to in Article 49(1);

(j)  take the necessary corrective actions and provide information as required in Article 20;

(k)  upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2;

(l)   ensure that the high-risk AI system complies with accessibility requirements in accordance with Directives (EU) 2016/2102 and (EU) 2019/882.

Quality management system

1.  Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects:

(a)  a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system;

(b)  techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system;

(c)  techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system;

(d)  examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system, and the frequency with which they have to be carried out;

(e)  technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full or do not cover all of the relevant requirements set out in Section 2, the means to be used to ensure that the high-risk AI system complies with those requirements;

(f)  systems and procedures for data management, including data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purpose of the placing on the market or the putting into service of high-risk AI systems;

(g)  the risk management system referred to in Article 9;

(h)  the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with Article 72;

(i)  procedures related to the reporting of a serious incident in accordance with Article 73;

(j)  the handling of communication with national competent authorities, other relevant authorities, including those providing or supporting the access to data, notified bodies, other operators, customers or other interested parties;

(k)  systems and procedures for record-keeping of all relevant documentation and information;

(l)  resource management, including security-of-supply related measures;

(m)  an accountability framework setting out the responsibilities of the management and other staff with regard to all the aspects listed in this paragraph.

2.  The implementation of the aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s organisation. Providers shall in any event comply with the degree of rigour and the level of protection required to ensure the compliance of their high-risk AI systems with this Regulation.

3.   Providers of high-risk AI systems that are subject to obligations regarding quality management systems or an equivalent function under relevant sectorial Union law may include the aspects listed in paragraph 1 as part of the quality management systems pursuant to that law.

4.  For providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the obligation to put in place a quality management system, with the exception of paragraph 1, points (g), (h) and (i) of this Article, shall be deemed to be fulfilled by complying with the rules on internal governance arrangements or processes pursuant to the relevant Union financial services law. For this purpose, any harmonised standards referred to in Article 40 shall be taken into account.
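As a purely illustrative aid, and not part of the Regulation, the aspects listed in paragraph 1, points (a) to (m), could be tracked in internal compliance tooling as a simple checklist. The sketch below assumes hypothetical names and a Python structure chosen for illustration; Article 17 prescribes written policies, procedures and instructions, not any particular data format.

```python
from dataclasses import dataclass, field

# Illustrative only: the aspect labels below paraphrase Article 17(1), points (a)-(m);
# nothing in the Regulation prescribes this structure or these names.
QMS_ASPECTS = {
    "a": "strategy for regulatory compliance",
    "b": "design, design control and design verification procedures",
    "c": "development, quality control and quality assurance procedures",
    "d": "examination, test and validation procedures",
    "e": "technical specifications and standards to be applied",
    "f": "systems and procedures for data management",
    "g": "risk management system (Article 9)",
    "h": "post-market monitoring system (Article 72)",
    "i": "serious-incident reporting procedures (Article 73)",
    "j": "communication with competent authorities and other parties",
    "k": "record-keeping systems and procedures",
    "l": "resource management, including security of supply",
    "m": "accountability framework for management and staff",
}

@dataclass
class QMSChecklist:
    """Hypothetical tracker recording which aspects have documented policies."""
    documented: dict[str, bool] = field(default_factory=dict)

    def missing(self) -> list[str]:
        """Aspects of Article 17(1) not yet covered by written documentation."""
        return [key for key in QMS_ASPECTS if not self.documented.get(key, False)]

checklist = QMSChecklist(documented={key: True for key in "abcdefg"})
print("Undocumented aspects:", checklist.missing())  # ['h', 'i', 'j', 'k', 'l', 'm']
```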

Documentation keeping

1.   The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities:

(a)   the technical documentation referred to in Article 11;

(b)   the documentation concerning the quality management system referred to in Article 17;

(c)   the documentation concerning the changes approved by notified bodies, where applicable;

(d)   the decisions and other documents issued by the notified bodies, where applicable;

(e)   the EU declaration of conformity referred to in Article 47.

2.   Each Member State shall determine conditions under which the documentation referred to in paragraph 1 remains at the disposal of the national competent authorities for the period indicated in that paragraph for the cases when a provider or its authorised representative established on its territory goes bankrupt or ceases its activity prior to the end of that period.

3.  Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the technical documentation as part of the documentation kept under the relevant Union financial services law.

Automatically generated logs

1.  Providers of high-risk AI systems shall keep the logs referred to in Article 12(1), automatically generated by their high-risk AI systems, to the extent such logs are under their control. Without prejudice to applicable Union or national law, the logs shall be kept for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in the applicable Union or national law, in particular in Union law on the protection of personal data.

2.  Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs automatically generated by their high-risk AI systems as part of the documentation kept under the relevant financial services law.
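As a rough, non-authoritative illustration of the six-month minimum in paragraph 1, a provider's log-retention tooling might refuse to purge records younger than the applicable period. The 180-day constant, the function name and the purge logic below are assumptions made for the sketch, not requirements of Article 19.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical helper reflecting the six-month minimum in Article 19(1);
# the 180-day figure is an assumed approximation of "six months".
MIN_RETENTION = timedelta(days=180)

def may_purge(log_timestamp: datetime, retention: timedelta = MIN_RETENTION) -> bool:
    """Allow deletion only once a record is older than the retention period."""
    if retention < MIN_RETENTION:
        raise ValueError("retention period must be at least six months")
    return datetime.now(timezone.utc) - log_timestamp > retention

old_entry = datetime.now(timezone.utc) - timedelta(days=400)
recent_entry = datetime.now(timezone.utc) - timedelta(days=30)
print(may_purge(old_entry))     # True: older than the assumed 180-day minimum
print(may_purge(recent_entry))  # False: must still be retained
```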

Corrective actions and duty of information

1.  Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system that they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system concerned and, where applicable, the deployers, the authorised representative and importers accordingly.

2.   Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities of the Member State or Member States in which they made the high-risk AI system available on the market and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective action taken.

Cooperation with competent authorities

1.  Providers of high-risk AI systems shall, upon a reasoned request by a competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2, in a language which can be easily understood by the authority in one of the official languages of the institutions of the Union as indicated by the Member State concerned.

2.   Upon a reasoned request by a national competent authority, providers shall also give the requesting national competent authority, as applicable, access to the automatically generated logs of the high-risk AI system referred to in Article 12(1), to the extent such logs are under their control.

3.   Any information obtained by a national competent authority pursuant to this Article shall be treated in compliance with the confidentiality obligations set out in Article 78.

Authorised representatives of providers of high-risk AI systems

1.  Prior to making their high-risk AI systems available on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union.

2.   The provider shall enable its authorised representative to perform the tasks specified in the mandate received from the provider.

3.  The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the institutions of the Union, as indicated by the national competent authority. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks:

(a)   verify that the EU declaration of conformity and the technical documentation referred to in Article 11 have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider;

(b)  keep at the disposal of the national competent authorities and national authorities or bodies referred to in Article 74(10), for a period of 10 years after the high-risk AI system has been placed on the market or put into service, the contact details of the provider that appointed the authorised representative, a copy of the EU declaration of conformity, the technical documentation and, if applicable, the certificate issued by the notified body;

(c)  provide a national competent authority, upon a reasoned request, with all the information and documentation, including that referred to in point (b) of this subparagraph, necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2, including access to the logs, as referred to in Article 12(1), automatically generated by the high-risk AI system, to the extent such logs are under the control of the provider;

(d)  cooperate with competent authorities, upon a reasoned request, in any action the latter take in relation to the high-risk AI system, in particular to reduce and mitigate the risks posed by the high-risk AI system;

(e)  where applicable, comply with the registration obligations referred to in Article 49(1), or, if the registration is carried out by the provider itself, ensure that the information referred to in Section A of Annex VIII is correct.

The mandate shall empower the authorised representative to be addressed, in addition to or instead of the provider, by the competent authorities, on all issues related to ensuring compliance with this Regulation.

4.   The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall also immediately inform the market surveillance authority of the Member State in which it is located or established, as well as, where applicable, the relevant notified body, about the termination of the mandate and the reasons therefor.

Obligations of importers

1.  Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that:

(a)  the relevant conformity assessment procedure referred to in Article 43 has been carried out by the provider of the high-risk AI system;

(b)  the provider has drawn up the technical documentation in accordance with Article 11 and Annex IV;

(c)  the system bears the required CE marking and is accompanied by the EU declaration of conformity and instructions for use;

(d)   the provider has appointed an authorised representative in accordance with Article 22(1).

2.  Where an importer has sufficient reason to consider that a high-risk AI system is not in conformity with this Regulation, or is falsified, or accompanied by falsified documentation, it shall not place the system on the market until it has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the importer shall inform the provider of the system, the authorised representatives and the market surveillance authorities to that effect.

3.  Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted in relation to the high-risk AI system on its packaging or its accompanying documentation, where applicable.

4.  Importers shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise its compliance with the requirements set out in Section 2.

5.   Importers shall keep, for a period of 10 years after the high-risk AI system has been placed on the market or put into service, a copy of the certificate issued by the notified body, where applicable, of the instructions for use, and of the EU declaration of conformity.

6.  Importers shall provide national competent authorities, upon a reasoned request, with all the necessary information and documentation, including that kept in accordance with paragraph 5, to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2 in a language which can be easily understood by them. For this purpose, they shall also ensure that the technical documentation can be made available to those authorities.

7.   Importers shall cooperate with national competent authorities in any action those authorities take in relation to a high-risk AI system the importers placed on the market, in particular to reduce and mitigate the risks posed by it.

Obligations of distributors

1.  Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity and instructions for use, and that the provider and the importer of the system, as applicable, have complied with their respective obligations as laid down in Article 16, points (b) and (c), and Article 23(3).

2.  Where a distributor considers or has reason to consider, on the basis of the information in its possession, a high-risk AI system not to be in conformity with the requirements set out in Section 2, it shall not make the high-risk AI system available on the market until the system has been brought into conformity with those requirements. Furthermore, where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect.

3.  Distributors shall ensure that, while a high-risk AI system is under their responsibility, where applicable, storage or transport conditions do not jeopardise the compliance of the system with the requirements set out in Section 2.

4.  A distributor that considers or has reason to consider, on the basis of the information in its possession, a high-risk AI system which it has made available on the market not to be in conformity with the requirements set out in Section 2, shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it, or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall immediately inform the provider or importer of the system and the national competent authorities of the Member States in which it has made the product available to that effect, giving details, in particular, of the non-compliance and of any corrective actions taken.

5.  Upon a reasoned request from a national competent authority, distributors of a high-risk AI system shall provide that authority with all the information and documentation regarding its actions pursuant to paragraphs 1 to 4 necessary to demonstrate the conformity of that system with the requirements set out in Section 2.

6.   Distributors shall cooperate with national competent authorities in any action those authorities take in relation to a high-risk AI system they made available on the market, in particular to reduce or mitigate the risk posed by it.

Responsibilities along the AI value chain

1.  Any distributor, importer, deployer or other third-party shall be considered to be a provider of a high-risk AI system for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances:

(a)  they put their name or trademark on a high-risk AI system already placed on the market or put into service , without prejudice to contractual arrangements stipulating that the obligations therein are allocated otherwise;

(b)  they make a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service in such a way that it remains a high-risk AI system pursuant to Article 6;

(c)   they modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service in such a way that the AI system concerned becomes a high-risk AI system in accordance with Article 6.

2.  Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the AI system on the market or put it into service shall no longer be considered to be a provider of that specific AI system for the purposes of this Regulation. That initial provider shall closely cooperate with new providers and shall make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI systems. This paragraph shall not apply in cases where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system and therefore does not fall under the obligation to hand over the documentation.

3.   In the case of high-risk AI systems that are safety components of products covered by the Union harmonisation legislation listed in Section A of Annex I, the product manufacturer shall be considered to be the provider of the high-risk AI system, and shall be subject to the obligations under Article 16 under either of the following circumstances:

(a)   the high-risk AI system is placed on the market together with the product under the name or trademark of the product manufacturer;

(b)   the high-risk AI system is put into service under the name or trademark of the product manufacturer after the product has been placed on the market.

4.   The provider of a high-risk AI system and the third party that supplies an AI system, tools, services, components, or processes that are used or integrated in a high-risk AI system shall, by written agreement, specify the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider of the high-risk AI system to fully comply with the obligations set out in this Regulation. This paragraph shall not apply to third parties making accessible to the public tools, services, processes, or components, other than general-purpose AI models, under a free and open licence.

The AI Office may develop and recommend voluntary model terms for contracts between providers of high-risk AI systems and third parties that supply tools, services, components or processes that are used for or integrated into high-risk AI systems. When developing those voluntary model terms, the AI Office shall take into account possible contractual requirements applicable in specific sectors or business cases. The voluntary model terms shall be published and be available free of charge in an easily usable electronic format.

5.   Paragraphs 2 and 3 are without prejudice to the need to observe and protect intellectual property rights, confidential business information and trade secrets in accordance with Union and national law.

Obligations of deployers of high-risk AI systems

1.   Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems, pursuant to paragraphs 3 and 6.

2.   Deployers shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support.

3.  The obligations set out in paragraphs 1 and 2 are without prejudice to other deployer obligations under Union or national law and to the deployer’s freedom to organise its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider.

4.  Without prejudice to paragraphs 1 and 2, to the extent the deployer exercises control over the input data, that deployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.

5.  Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may present a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities.

For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial services law.

6.  Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system to the extent such logs are under their control, for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in applicable Union or national law, in particular in Union law on the protection of personal data.

Deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs as part of the documentation kept pursuant to the relevant Union financial services law.

7.   Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives.

8.   Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor.

9.  Where applicable, deployers of high-risk AI systems shall use the information provided under Article 13 of this Regulation to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680.

10.   Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification shall request an authorisation, ex-ante, or without undue delay and no later than 48 hours, by a judicial authority or an administrative authority whose decision is binding and subject to judicial review, for the use of that system, except when it is used for the initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence. Each use shall be limited to what is strictly necessary for the investigation of a specific criminal offence.

If the requested authorisation provided for in the first subparagraph is rejected, the use of the post-remote biometric identification system linked to that requested authorisation shall be stopped with immediate effect and the personal data linked to the use of the high-risk AI system for which the authorisation was requested shall be deleted.

In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement purposes in an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and present or genuine and foreseeable threat of a criminal offence, or the search for a specific missing person. It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the law enforcement authorities based solely on the output of such post-remote biometric identification systems.

This paragraph is without prejudice to Article 9 of Regulation (EU) 2016/679 and Article 10 of Directive (EU) 2016/680 for the processing of biometric data.

Regardless of the purpose or deployer, each use of such high-risk AI systems shall be documented in the relevant police file and shall be made available to the relevant market surveillance authority and the national data protection authority upon request, excluding the disclosure of sensitive operational data related to law enforcement. This subparagraph shall be without prejudice to the powers conferred by Directive (EU) 2016/680 on supervisory authorities.

Deployers shall submit annual reports to the relevant market surveillance and national data protection authorities on their use of post-remote biometric identification systems, excluding the disclosure of sensitive operational data related to law enforcement. The reports may be aggregated to cover more than one deployment.

Member States may introduce, in accordance with Union law, more restrictive laws on the use of post-remote biometric identification systems.

11.   Without prejudice to Article 50 of this Regulation, deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons shall inform the natural persons that they are subject to the use of the high-risk AI system. For high-risk AI systems used for law enforcement purposes Article 13 of Directive (EU) 2016/680 shall apply.

12.   Deployers shall cooperate with the relevant national competent authorities in any action those authorities take in relation to the high-risk AI system in order to implement this Regulation.

Fundamental rights impact assessment for high-risk AI systems

1.  Prior to deploying a high-risk AI system referred to in Article 6(2), with the exception of high-risk AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5(b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights that the use of such system may produce. For that purpose, deployers shall perform an assessment consisting of:

(a)   a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;

(b)   a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;

(c)   the categories of natural persons and groups likely to be affected by its use in the specific context;

(d)  the specific risks of harm likely to have an impact on the categories of persons or groups of persons identified pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to Article 13;

(e)   a description of the implementation of human oversight measures, according to the instructions for use;

(f)   the measures to be taken where those risks materialise, including the arrangements for internal governance and complaint mechanisms.

2.  The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by the provider. If, during the use of the high-risk AI system, the deployer considers that any of the elements listed in paragraph 1 has changed or is no longer up to date, the deployer shall take the necessary steps to update the information.

3.  Once the assessment referred to in paragraph 1 of this Article has been performed, the deployer shall notify the market surveillance authority of its results, including by filling out and submitting the template referred to in paragraph 5 of this Article as part of the notification. In the case referred to in Article 46(1), deployers may be exempt from that obligation to notify.

4.   If any of the obligations laid down in this Article is already complied with as a result of the data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental rights impact assessment referred to in paragraph 1 of this Article shall complement that data protection impact assessment.

5.   The AI Office shall develop a template for a questionnaire, including through an automated tool, to facilitate deployers in complying with their obligations under this Article in a simplified manner.
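Pending the template and automated tool referred to in paragraph 5, a deployer could capture the elements of paragraph 1, points (a) to (f), in a simple structured record. The sketch below is hypothetical; the field names, example values and JSON output are assumptions for illustration, not the official template.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class FundamentalRightsImpactAssessment:
    """Hypothetical record mirroring Article 27(1), points (a) to (f)."""
    process_description: str       # (a) deployer's processes and intended purpose
    period_and_frequency: str      # (b) period and frequency of use
    affected_groups: list[str]     # (c) categories of persons and groups affected
    risks_of_harm: list[str]       # (d) specific risks of harm to those groups
    human_oversight_measures: str  # (e) human oversight per the instructions for use
    mitigation_and_governance: str # (f) measures, governance and complaint mechanisms

assessment = FundamentalRightsImpactAssessment(
    process_description="Triage support for applications in a public benefits office",
    period_and_frequency="Daily, during office hours, for an initial 12 months",
    affected_groups=["benefit applicants"],
    risks_of_harm=["incorrect prioritisation of applications"],
    human_oversight_measures="A case worker reviews every automated ranking",
    mitigation_and_governance="Escalation to a supervisor; internal complaint mechanism",
)
print(json.dumps(asdict(assessment), indent=2))
```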

Notifying authorities and notified bodies

Notifying authorities

1.  Each Member State shall designate or establish at least one notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring. Those procedures shall be developed in cooperation between the notifying authorities of all Member States.

2.  Member States may decide that the assessment and monitoring referred to in paragraph 1 shall be carried out by a national accreditation body within the meaning of, and in accordance with, Regulation (EC) No 765/2008.

3.  Notifying authorities shall be established, organised and operated in such a way that no conflict of interest arises with conformity assessment bodies, and that the objectivity and impartiality of their activities are safeguarded.

4.  Notifying authorities shall be organised in such a way that decisions relating to the notification of conformity assessment bodies are taken by competent persons different from those who carried out the assessment of those bodies.

5.  Notifying authorities shall offer or provide neither any activities that conformity assessment bodies perform, nor any consultancy services on a commercial or competitive basis.

6.  Notifying authorities shall safeguard the confidentiality of the information they obtain, in accordance with Article 78 .

7.  Notifying authorities shall have an adequate number of competent personnel at their disposal for the proper performance of their tasks. Competent personnel shall have the necessary expertise, where applicable, for their function, in fields such as information technologies, AI and law, including the supervision of fundamental rights.

Application of a conformity assessment body for notification

1.  Conformity assessment bodies shall submit an application for notification to the notifying authority of the Member State in which they are established.

2.  The application for notification shall be accompanied by a description of the conformity assessment activities, the conformity assessment module or modules and the types of AI systems for which the conformity assessment body claims to be competent, as well as by an accreditation certificate, where one exists, issued by a national accreditation body attesting that the conformity assessment body fulfils the requirements laid down in Article 31.

Any valid document related to existing designations of the applicant notified body under any other Union harmonisation legislation shall be added.

3.  Where the conformity assessment body concerned cannot provide an accreditation certificate, it shall provide the notifying authority with all the documentary evidence necessary for the verification, recognition and regular monitoring of its compliance with the requirements laid down in Article 31.

4.  For notified bodies which are designated under any other Union harmonisation legislation, all documents and certificates linked to those designations may be used to support their designation procedure under this Regulation, as appropriate. The notified body shall update the documentation referred to in paragraphs 2 and 3 of this Article whenever relevant changes occur, in order to enable the authority responsible for notified bodies to monitor and verify continuous compliance with all the requirements laid down in Article 31.

Notification procedure

1.  Notifying authorities may notify only conformity assessment bodies which have satisfied the requirements laid down in Article 31.

2.  Notifying authorities shall notify the Commission and the other Member States, using the electronic notification tool developed and managed by the Commission, of each conformity assessment body referred to in paragraph 1.

3.  The notification referred to in paragraph 2 of this Article shall include full details of the conformity assessment activities, the conformity assessment module or modules, the types of AI systems concerned, and the relevant attestation of competence. Where a notification is not based on an accreditation certificate as referred to in Article 29(2), the notifying authority shall provide the Commission and the other Member States with documentary evidence which attests to the competence of the conformity assessment body and to the arrangements in place to ensure that that body will be monitored regularly and will continue to satisfy the requirements laid down in Article 31.

4.  The conformity assessment body concerned may perform the activities of a notified body only where no objections are raised by the Commission or the other Member States within two weeks of a notification by a notifying authority where it includes an accreditation certificate referred to in Article 29(2), or within two months of a notification by the notifying authority where it includes documentary evidence referred to in Article 29(3).

5.   Where objections are raised, the Commission shall, without delay, enter into consultations with the relevant Member States and the conformity assessment body. Having regard thereto, the Commission shall decide whether the authorisation is justified. The Commission shall address its decision to the Member State concerned and the relevant conformity assessment body.

Requirements relating to notified bodies

1.  A notified body shall be established under the national law of a Member State and shall have legal personality.

2.  Notified bodies shall satisfy the organisational, quality management, resources and process requirements that are necessary to fulfil their tasks, as well as suitable cybersecurity requirements.

3.  The organisational structure, allocation of responsibilities, reporting lines and operation of notified bodies shall ensure confidence in their performance, and in the results of the conformity assessment activities that the notified bodies conduct.

4.  Notified bodies shall be independent of the provider of a high-risk AI system in relation to which they perform conformity assessment activities. Notified bodies shall also be independent of any other operator having an economic interest in high-risk AI systems assessed, as well as of any competitors of the provider. This shall not preclude the use of assessed high-risk AI systems that are necessary for the operations of the conformity assessment body, or the use of such high-risk AI systems for personal purposes.

5.   Neither a conformity assessment body, its top-level management nor the personnel responsible for carrying out its conformity assessment tasks shall be directly involved in the design, development, marketing or use of high-risk AI systems, nor shall they represent the parties engaged in those activities. They shall not engage in any activity that might conflict with their independence of judgement or integrity in relation to conformity assessment activities for which they are notified. This shall, in particular, apply to consultancy services.

6.  Notified bodies shall be organised and operated so as to safeguard the independence, objectivity and impartiality of their activities. Notified bodies shall document and implement a structure and procedures to safeguard impartiality and to promote and apply the principles of impartiality throughout their organisation, personnel and assessment activities.

7.  Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies maintain, in accordance with Article 78, the confidentiality of the information which comes into their possession during the performance of conformity assessment activities, except when its disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out.

8.  Notified bodies shall have procedures for the performance of activities which take due account of the size of a provider, the sector in which it operates, its structure, and the degree of complexity of the AI system concerned.

9.  Notified bodies shall take out appropriate liability insurance for their conformity assessment activities, unless liability is assumed by the Member State in which they are established in accordance with national law or that Member State is itself directly responsible for the conformity assessment.

10.  Notified bodies shall be capable of carrying out all their tasks under this Regulation with the highest degree of professional integrity and the requisite competence in the specific field, whether those tasks are carried out by notified bodies themselves or on their behalf and under their responsibility.

11.  Notified bodies shall have sufficient internal competences to be able effectively to evaluate the tasks conducted by external parties on their behalf. The notified body shall have permanent availability of sufficient administrative, technical, legal and scientific personnel who possess experience and knowledge relating to the relevant types of AI systems, data and data computing, and relating to the requirements set out in Section 2.

12.  Notified bodies shall participate in coordination activities as referred to in Article 38. They shall also take part directly, or be represented in, European standardisation organisations, or ensure that they are aware and up to date in respect of relevant standards.

Presumption of conformity with requirements relating to notified bodies

Where a conformity assessment body demonstrates its conformity with the criteria laid down in the relevant harmonised standards or parts thereof, the references of which have been published in the Official Journal of the European Union, it shall be presumed to comply with the requirements set out in Article 31 in so far as the applicable harmonised standards cover those requirements.

Subsidiaries of notified bodies and subcontracting

1.  Where a notified body subcontracts specific tasks connected with the conformity assessment or has recourse to a subsidiary, it shall ensure that the subcontractor or the subsidiary meets the requirements laid down in Article 31, and shall inform the notifying authority accordingly.

2.  Notified bodies shall take full responsibility for their tasks performed by subcontractors or subsidiaries.

3.  Activities may be subcontracted or carried out by a subsidiary only with the agreement of the provider. Notified bodies shall make a list of their subsidiaries publicly available.

4.  The relevant documents concerning the assessment of the qualifications of the subcontractor or the subsidiary and the work carried out by them under this Regulation shall be kept at the disposal of the notifying authority for a period of five years from the termination date of the subcontracting activity.

Operational obligations of notified bodies

1.   Notified bodies shall verify the conformity of high-risk AI systems in accordance with the conformity assessment procedures set out in Article 43.

2.  Notified bodies shall avoid unnecessary burdens for providers when performing their activities, and take due account of the size of the provider, the sector in which it operates, its structure and the degree of complexity of the high-risk AI system concerned, in particular in view of minimising administrative burdens and compliance costs for micro- and small enterprises within the meaning of Recommendation 2003/361/EC. The notified body shall, nevertheless, respect the degree of rigour and the level of protection required for the compliance of the high-risk AI system with the requirements of this Regulation.

3.   Notified bodies shall make available and submit upon request all relevant documentation, including the providers’ documentation, to the notifying authority referred to in Article 28 to allow that authority to conduct its assessment, designation, notification and monitoring activities, and to facilitate the assessment outlined in this Section.

Identification numbers and lists of notified bodies

1.  The Commission shall assign a single identification number to each notified body, even where a body is notified under more than one Union act.

2.  The Commission shall make publicly available the list of the bodies notified under this Regulation, including their identification numbers and the activities for which they have been notified. The Commission shall ensure that the list is kept up to date.

Changes to notifications

1.   The notifying authority shall notify the Commission and the other Member States of any relevant changes to the notification of a notified body via the electronic notification tool referred to in Article 30(2).

2.   The procedures laid down in Articles 29 and 30 shall apply to extensions of the scope of the notification.

For changes to the notification other than extensions of its scope, the procedures laid down in the following paragraphs shall apply.

3.  Where a notified body decides to cease its conformity assessment activities, it shall inform the notifying authority and the providers concerned as soon as possible and, in the case of a planned cessation, at least one year before ceasing its activities. The certificates of the notified body may remain valid for a temporary period of nine months after cessation of the notified body’s activities, on condition that another notified body has confirmed in writing that it will assume responsibilities for the high-risk AI systems covered by those certificates. The latter notified body shall complete a full assessment of the AI systems affected by the end of that nine-month period before issuing new certificates for those systems. Where the notified body has ceased its activity, the notifying authority shall withdraw the designation.

4.  Where a notifying authority has sufficient reason to consider that a notified body no longer meets the requirements laid down in Article 31, or that it is failing to fulfil its obligations, the notifying authority shall without delay investigate the matter with the utmost diligence. In that context, it shall inform the notified body concerned about the objections raised and give it the possibility to make its views known. If the notifying authority comes to the conclusion that the notified body no longer meets the requirements laid down in Article 31 or that it is failing to fulfil its obligations, it shall restrict, suspend or withdraw the designation as appropriate, depending on the seriousness of the failure to meet those requirements or fulfil those obligations. It shall immediately inform the Commission and the other Member States accordingly.

5.   Where its designation has been suspended, restricted, or fully or partially withdrawn, the notified body shall inform the providers concerned at the latest within 10 days.

6.   In the event of the restriction, suspension or withdrawal of a designation, the notifying authority shall take appropriate steps to ensure that the files of the notified body concerned are kept, and to make them available to notifying authorities in other Member States and to market surveillance authorities at their request.

7.   In the event of the restriction, suspension or withdrawal of a designation, the notifying authority shall:

(a)   assess the impact on the certificates issued by the notified body;

(b)   submit a report on its findings to the Commission and the other Member States within three months of having notified the changes to the designation;

(c)   require the notified body to suspend or withdraw, within a reasonable period of time determined by the authority, any certificates which were unduly issued, in order to ensure the continuing conformity of AI systems on the market;

(d)   inform the Commission and the Member States about certificates the suspension or withdrawal of which it has required;

(e)   provide the national competent authorities of the Member State in which the provider has its registered place of business with all relevant information about the certificates of which it has required the suspension or withdrawal; that authority shall take the appropriate measures, where necessary, to avoid a potential risk to health, safety or fundamental rights.

8.   With the exception of certificates unduly issued, and where a designation has been suspended or restricted, the certificates shall remain valid in one of the following circumstances:

(a)   the notifying authority has confirmed, within one month of the suspension or restriction, that there is no risk to health, safety or fundamental rights in relation to certificates affected by the suspension or restriction, and the notifying authority has outlined a timeline for actions to remedy the suspension or restriction; or

(b)  the notifying authority has confirmed that no certificates relevant to the suspension will be issued, amended or re-issued during the course of the suspension or restriction, and states whether the notified body has the capability of continuing to monitor and remain responsible for existing certificates issued for the period of the suspension or restriction. In the event that the notifying authority determines that the notified body does not have the capability to support existing certificates issued, the provider of the system covered by the certificate shall confirm in writing to the national competent authorities of the Member State in which it has its registered place of business, within three months of the suspension or restriction, that another qualified notified body is temporarily assuming the functions of the notified body to monitor and remain responsible for the certificates during the period of suspension or restriction.

9.   With the exception of certificates unduly issued, and where a designation has been withdrawn, the certificates shall remain valid for a period of nine months under the following circumstances:

(a)   the national competent authority of the Member State in which the provider of the AI system covered by the certificate has its registered place of business has confirmed that there is no risk to health, safety or fundamental rights associated with the high-risk AI systems concerned; and

(b)   another notified body has confirmed in writing that it will assume immediate responsibilities for assessing those AI systems and completes its assessment within 12 months of the withdrawal of the designation.

In the circumstances referred to in the first subparagraph, the national competent authority of the Member State in which the provider of the system covered by the certificate has its place of business may extend the provisional validity of the certificates for additional periods of three months, which shall not exceed 12 months in total.

The national competent authority or the notified body assuming the functions of the notified body affected by the change of designation shall immediately inform the Commission, the other Member States and the other notified bodies thereof.

Challenge to the competence of notified bodies

1.  The Commission shall, where necessary, investigate all cases where there are reasons to doubt the competence of a notified body or the continued fulfilment by a notified body of the requirements laid down in Article 31 and of its applicable responsibilities.

2.  The notifying authority shall provide the Commission, on request, with all relevant information relating to the notification or the maintenance of the competence of the notified body concerned.

3.  The Commission shall ensure that all sensitive information obtained in the course of its investigations pursuant to this Article is treated confidentially in accordance with Article 78.

4.  Where the Commission ascertains that a notified body does not meet or no longer meets the requirements for its notification, it shall inform the notifying Member State accordingly and request it to take the necessary corrective measures, including the suspension or withdrawal of the notification if necessary. Where the Member State fails to take the necessary corrective measures, the Commission may, by means of an implementing act, suspend, restrict or withdraw the designation. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).

Coordination of notified bodies

1.  The Commission shall ensure that, with regard to high-risk AI systems, appropriate coordination and cooperation between notified bodies active in the conformity assessment procedures pursuant to this Regulation are put in place and properly operated in the form of a sectoral group of notified bodies.

2.  Each notifying authority shall ensure that the bodies notified by it participate in the work of a group referred to in paragraph 1, directly or through designated representatives.

3.   The Commission shall provide for the exchange of knowledge and best practices between the notifying authorities of the Member States.

Conformity assessment bodies of third countries

Conformity assessment bodies established under the law of a third country with which the Union has concluded an agreement may be authorised to carry out the activities of notified bodies under this Regulation, provided that they meet the requirements in Article 31 or they ensure an equivalent level of compliance.

Standards, conformity assessment, certificates, registration

Harmonised standards and standardisation deliverables

1.  High-risk AI systems which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, with the obligations set out in Chapter IV of this Regulation, to the extent that those standards cover those requirements or obligations.

2.  The Commission shall issue standardisation requests covering all requirements set out in Section 2 of this Chapter and, as applicable, obligations set out in Chapter IV of this Regulation, in accordance with Article 10 of Regulation (EU) No 1025/2012, without undue delay. The standardisation request shall also ask for deliverables on reporting and documentation processes to improve AI systems’ resource performance, such as reducing the high-risk AI system’s consumption of energy and of other resources during its lifecycle, and on the energy-efficient development of general-purpose AI models. When preparing a standardisation request, the Commission shall consult the Board and relevant stakeholders, including the advisory forum.

When issuing a standardisation request to European standardisation organisations, the Commission shall specify that standards have to be clear, consistent, including with the standards developed in the various sectors for products covered by the existing Union harmonisation legislation listed in Annex I, and aiming to ensure that AI systems or AI models placed on the market or put into service in the Union meet the relevant requirements laid down in this Regulation.

The Commission shall request the European standardisation organisations to provide evidence of their best efforts to fulfil the objectives referred to in the first and the second subparagraph of this paragraph in accordance with Article 24 of Regulation (EU) No 1025/2012.

3.   The participants in the standardisation process shall seek to promote investment and innovation in AI, including through increasing legal certainty, as well as the competitiveness and growth of the Union market, and shall contribute to strengthening global cooperation on standardisation and taking into account existing international standards in the field of AI that are consistent with Union values, fundamental rights and interests, and shall enhance multi-stakeholder governance ensuring a balanced representation of interests and the effective participation of all relevant stakeholders in accordance with Articles 5, 6, and 7 of Regulation (EU) No 1025/2012.

Common specifications

1.  The Commission is empowered to adopt implementing acts establishing common specifications for the requirements set out in Section 2 of this Chapter or, as applicable, for the obligations set out in Chapter IV where the following conditions have been fulfilled:

(a)   the Commission has requested, pursuant to Article 10(1) of Regulation (EU) No 1025/2012, one or more European standardisation organisations to draft a harmonised standard for the requirements set out in Section 2 of this Chapter, and:

(i)   the request has not been accepted by any of the European standardisation organisations; or

(ii)   the harmonised standards addressing that request are not delivered within the deadline set in accordance with Article 10(1) of Regulation (EU) No 1025/2012; or

(iii)   the relevant harmonised standards insufficiently address fundamental rights concerns; or

(iv)   the harmonised standards do not comply with the request; and

(b)   no reference to harmonised standards covering the requirements referred to in Section 2 of this Title has been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012, and no such reference is expected to be published within a reasonable period.

The implementing acts referred to in the first subparagraph of this paragraph shall be adopted in accordance with the examination procedure referred to in Article 98(2), after consulting the advisory forum referred to in Article 67.

2.   Before preparing a draft implementing act, the Commission shall inform the committee referred to in Article 22 of Regulation (EU) No 1025/2012 that it considers the conditions laid down in paragraph 1 of this Article to be fulfilled.

3.  High-risk AI systems which are in conformity with the common specifications referred to in paragraph 1 , or parts of those specifications, shall be presumed to be in conformity with the requirements set out in Section 2, to the extent those common specifications cover those requirements.

4.   Where a harmonised standard is adopted by a European standardisation organisation and proposed to the Commission for the publication of its reference in the Official Journal of the European Union, the Commission shall assess the harmonised standard in accordance with Regulation (EU) No 1025/2012. When reference to a harmonised standard is published in the Official Journal of the European Union, the Commission shall repeal the implementing acts referred to in paragraph 1, or parts thereof which cover the same requirements set out in Section 2 of this Chapter.

5.  Where providers of high-risk AI systems do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that meet the requirements referred to in Section 2 to a level at least equivalent thereto.

6.   Where a Member State considers that a common specification does not entirely meet the requirements set out in Section 2, it shall inform the Commission thereof with a detailed explanation. The Commission shall assess that information and, if appropriate, amend the implementing act establishing the common specification concerned.

Presumption of conformity with certain requirements

1.  High-risk AI systems that have been trained and tested on data reflecting the specific geographical, behavioural, contextual or functional setting within which they are intended to be used shall be presumed to be in compliance with the relevant requirements laid down in Article 10(4).

2.  High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 and the references of which have been published in the Official Journal of the European Union shall be presumed to be in compliance with the cybersecurity requirements set out in Article 15 of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements.

Conformity assessment

1.  For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following conformity assessment procedures based on:

(a)  the internal control referred to in Annex VI; or

(b)  the assessment of the quality management system and the assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.

In demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider shall follow the conformity assessment procedure set out in Annex VII where:

(a)  harmonised standards referred to in Article 40 do not exist, and common specifications referred to in Article 41 are not available;

(b)   the provider has not applied, or has applied only part of, the harmonised standard;

(c)   the common specifications referred to in point (a) exist, but the provider has not applied them;

(d)   one or more of the harmonised standards referred to in point (a) has been published with a restriction, and only on the part of the standard that was restricted.

For the purposes of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, where the high-risk AI system is intended to be put into service by law enforcement, immigration or asylum authorities or by Union institutions, bodies, offices or agencies, the market surveillance authority referred to in Article 74(8) or (9), as applicable, shall act as a notified body.

2.  For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body.

3.  For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, the provider shall follow the relevant conformity assessment procedure as required under those legal acts. The requirements set out in Section 2 of this Chapter shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply.

For the purposes of that assessment, notified bodies which have been notified under those legal acts shall be entitled to control the conformity of the high-risk AI systems with the requirements set out in Section 2, provided that the compliance of those notified bodies with requirements laid down in Article 31(4), (10) and (11) has been assessed in the context of the notification procedure under those legal acts.

Where a legal act listed in Section A of Annex I enables the product manufacturer to opt out from a third-party conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, that manufacturer may use that option only if it has also applied harmonised standards or, where applicable, common specifications referred to in Article 41, covering the requirements set out in Section 2 of this Chapter.

4.  High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure in the event of a substantial modification, regardless of whether the modified system is intended to be further distributed or continues to be used by the current deployer.

For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification.

5.  The Commission shall adopt delegated acts in accordance with Article 97 to update Annexes VI and VII in light of technical progress.

6.  The Commission shall adopt delegated acts in accordance with Article 97 amending paragraphs 1 and 2 of this Article in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimising the risks to health and safety and protection of fundamental rights posed by such systems, as well as the availability of adequate capacities and resources among notified bodies.

Certificates

1.  Certificates issued by notified bodies in accordance with Annex VII shall be drawn up in a language which can be easily understood by the relevant authorities in the Member State in which the notified body is established.

2.  Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III. On the application of the provider, the validity of a certificate may be extended for further periods, each not exceeding five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment procedures. Any supplement to a certificate shall remain valid, provided that the certificate which it supplements is valid.
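Purely as an illustration of the validity caps above, and not as part of the Regulation, the following sketch computes the latest permissible expiry date of a certificate, or of an extension granted after re-assessment, depending on whether the AI system is covered by Annex I (five years) or Annex III (four years). The function and variable names are ours, not the Regulation's.

```python
# Illustrative sketch of the validity caps in paragraph 2: certificates run for
# at most five years for AI systems covered by Annex I and four years for those
# covered by Annex III; each extension after re-assessment is capped the same way.
# Names and the example issue date are illustrative only.
from datetime import date

def _add_years(d: date, years: int) -> date:
    """Add whole years, clamping 29 February to 28 February when needed."""
    try:
        return d.replace(year=d.year + years)
    except ValueError:  # 29 February in a non-leap target year
        return d.replace(year=d.year + years, day=28)

def max_certificate_validity(issue_date: date, covered_by_annex_i: bool) -> date:
    """Latest expiry allowed: 5 years (Annex I systems) or 4 years (Annex III systems)."""
    return _add_years(issue_date, 5 if covered_by_annex_i else 4)

if __name__ == "__main__":
    issued = date(2026, 3, 1)
    print("Annex I system, latest expiry:  ", max_certificate_validity(issued, True))
    print("Annex III system, latest expiry:", max_certificate_validity(issued, False))
```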

3.  Where a notified body finds that an AI system no longer meets the requirements set out in Section 2, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision.

An appeal procedure against decisions of the notified bodies, including against conformity certificates issued, shall be available.

Information obligations of notified bodies

1.  Notified bodies shall inform the notifying authority of the following:

(a)  any Union technical documentation assessment certificates, any supplements to those certificates, and any quality management system approvals issued in accordance with the requirements of Annex VII;

(b)  any refusal, restriction, suspension or withdrawal of a Union technical documentation assessment certificate or a quality management system approval issued in accordance with the requirements of Annex VII;

(c)  any circumstances affecting the scope of or conditions for notification;

(d)  any request for information which they have received from market surveillance authorities regarding conformity assessment activities;

(e)  on request, conformity assessment activities performed within the scope of their notification and any other activity performed, including cross-border activities and subcontracting.

2.  Each notified body shall inform the other notified bodies of:

(a)  quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of quality system approvals which it has issued;

(b)  Union technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued.

3.  Each notified body shall provide the other notified bodies carrying out similar conformity assessment activities covering the same types of AI systems with relevant information on issues relating to negative and, on request, positive conformity assessment results.

4.   The obligations referred to in paragraphs 1, 2 and 3 of this Article shall be complied with in accordance with Article 78.

Derogation from conformity assessment procedure

1.  By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay.

2.   In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded.

3.  The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Section 2. The market surveillance authority shall inform the Commission and the other Member States of any authorisation issued pursuant to paragraph 1. This obligation shall not cover sensitive operational data in relation to the activities of law-enforcement authorities.

4.  Where, within 15 calendar days of receipt of the information referred to in paragraph 3, no objection has been raised by either a Member State or the Commission in respect of an authorisation issued by a market surveillance authority of a Member State in accordance with paragraph 1, that authorisation shall be deemed justified.

5.  Where, within 15 calendar days of receipt of the notification referred to in paragraph 3, objections are raised by a Member State against an authorisation issued by a market surveillance authority of another Member State, or where the Commission considers the authorisation to be contrary to Union law, or the conclusion of the Member States regarding the compliance of the system as referred to in paragraph 3 to be unfounded, the Commission shall, without delay, enter into consultations with the relevant Member State. The operators concerned shall be consulted and have the possibility to present their views. Having regard thereto, the Commission shall decide whether the authorisation is justified. The Commission shall address its decision to the Member State concerned and to the relevant operators.

6.  Where the Commission considers the authorisation unjustified, it shall be withdrawn by the market surveillance authority of the Member State concerned.

7.  For high-risk AI systems related to products covered by Union harmonisation legislation listed in Section A of Annex I, only the derogations from the conformity assessment established in that Union harmonisation legislation shall apply.

EU declaration of conformity

1.  The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request.

2.  The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available.

3.  Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union law applicable to the high-risk AI system. The declaration shall contain all the information required to identify the Union harmonisation legislation to which the declaration relates.

4.  By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Section 2. The provider shall keep the EU declaration of conformity up-to-date as appropriate.

5.  The Commission shall adopt delegated acts in accordance with Article 97 for the purpose of updating the content of the EU declaration of conformity set out in Annex V, in order to introduce elements that become necessary in light of technical progress.
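By way of illustration only, a provider could render the EU declaration of conformity in a machine-readable form such as JSON. The sketch below uses hypothetical field names to capture the elements this Article mentions: identification of the high-risk AI system, the statement that it meets the requirements set out in Section 2, the responsible provider, and the other Union harmonisation legislation covered by a single declaration. The binding content remains whatever Annex V prescribes, which is not reproduced here.

```python
# Illustrative only: a minimal machine-readable EU declaration of conformity.
# The authoritative content is defined in Annex V; the field names here are
# hypothetical placeholders, not the Annex V schema.
import json
from datetime import date

def draft_declaration(system_name: str, system_version: str, provider: str,
                      union_legislation: list[str]) -> dict:
    return {
        "declaration": "EU declaration of conformity",
        "high_risk_ai_system": {          # identifies the system (paragraph 1 above)
            "name": system_name,
            "version": system_version,
        },
        "provider": provider,             # the operator assuming responsibility (paragraph 4)
        "statement": "The high-risk AI system meets the requirements set out in Section 2.",
        "union_harmonisation_legislation": union_legislation,  # single declaration (paragraph 3)
        "date_of_issue": date.today().isoformat(),
    }

if __name__ == "__main__":
    # All values below are invented for the example.
    decl = draft_declaration("example-credit-scoring-system", "1.0",
                             "Example Provider GmbH", ["this Regulation"])
    print(json.dumps(decl, indent=2))
```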

CE marking

1.  The CE marking shall be subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008.

2.   For high-risk AI systems provided digitally, a digital CE marking shall be used, only if it can easily be accessed via the interface from which that system is accessed or via an easily accessible machine-readable code or other electronic means.

3.  The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate.

4.  Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking.

5.   Where high-risk AI systems are subject to other Union law which also provides for the affixing of the CE marking, the CE marking shall indicate that the high-risk AI system also fulfils the requirements of that other law.

Registration

1.  Before placing on the market or putting into service a high-risk AI system listed in Annex III, with the exception of high-risk AI systems referred to in point 2 of Annex III, the provider or, where applicable, the authorised representative shall register themselves and their system in the EU database referred to in Article 71.

2.   Before placing on the market or putting into service an AI system for which the provider has concluded that it is not high-risk according to Article 6(3), that provider or, where applicable, the authorised representative shall register themselves and that system in the EU database referred to in Article 71.

3.   Before putting into service or using a high-risk AI system listed in Annex III, with the exception of high-risk AI systems listed in point 2 of Annex III, deployers who are public authorities, agencies or bodies or persons acting on their behalf shall register themselves, select the system and register its use in the EU database referred to in Article 71.

4.   For high-risk AI systems referred to in points 1, 6 and 7 of Annex III, in the areas of law enforcement, migration, asylum and border control management, the registration referred to in paragraphs 1, 2 and 3 of this Article shall be in a secure non-public section of the EU database referred to in Article 71 and shall include only the following information, as applicable, referred to in:

(a)   Section A, points 1 to 10, of Annex VIII, with the exception of points 5a, 7 and 8;

(b)   Section C, points 1 to 3, of Annex VIII;

(c)  Section B, points 1 to 5, and points 8 and 9 of Annex VIII;

(d)   points 1 to 3, and point 5, of Annex IX.

Only the Commission and national authorities referred to in Article 74(8) shall have access to the restricted sections of the EU database listed in the first subparagraph of this paragraph.

5.   High-risk AI systems referred to in point 2 of Annex III shall be registered at national level.

TRANSPARENCY OBLIGATIONS FOR PROVIDERS AND DEPLOYERS OF CERTAIN AI SYSTEMS

Transparency obligations for providers and deployers of certain AI systems

1.  Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, unless those systems are available for the public to report a criminal offence.

2.   Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state-of-the-art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences.
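The Regulation does not prescribe a specific marking technique; effectiveness, interoperability and robustness are left to the state of the art. Purely as a minimal sketch of one possible machine-readable marking, the code below attaches a JSON provenance record, with hypothetical field names, to a piece of generated content and binds the record to that content via a SHA-256 hash; real deployments would more likely follow relevant technical standards for content provenance and watermarking.

```python
# Illustrative sketch only: one possible machine-readable marking of synthetic
# content. Field names are hypothetical; this ad-hoc format is not mandated by
# the Regulation or by any technical standard.
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, generator_name: str) -> dict:
    """Build a machine-readable record stating that the content is AI-generated."""
    return {
        "ai_generated": True,                           # explicit machine-readable flag
        "generator": generator_name,                    # hypothetical identifier of the AI system
        "sha256": hashlib.sha256(content).hexdigest(),  # binds the record to the content
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    text = "Example synthetic paragraph produced by an AI system.".encode("utf-8")
    record = provenance_record(text, generator_name="example-gpai-model")
    # Store the record next to the content, e.g. as a JSON sidecar file.
    print(json.dumps(record, indent=2))
```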

3.   Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in compliance with Union law.

4.   Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.

Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.

5.   The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the applicable accessibility requirements.

6.  Paragraphs 1 to 4 shall not affect the requirements and obligations set out in Chapter III, and shall be without prejudice to other transparency obligations laid down in Union or national law for deployers of AI systems.

7.   The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content. The Commission is empowered to adopt implementing acts to approve those codes of practice in accordance with the procedure laid down in Article 56(6), (7) and (8). If it deems the code is not adequate, the Commission is empowered to adopt an implementing act specifying common rules for the implementation of those obligations in accordance with the examination procedure laid down in Article 98(2).

GENERAL-PURPOSE AI MODELS

Classification rules

Classification of general-purpose AI models as general-purpose AI models with systemic risk

1.   A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following requirements:

(a)   it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks;

(b)   based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in Annex XIII.

2.   A general-purpose AI model shall be presumed to have high impact capabilities pursuant to paragraph 1, point (a), when the cumulative amount of computation used for its training measured in FLOPs is greater than 10^25.
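For orientation only, the cumulative training compute of a large model is commonly approximated by a heuristic of roughly six floating-point operations per parameter per training token. The sketch below applies that assumption, which is not part of the Regulation, to check a hypothetical model against the 10^25 FLOP presumption; the parameter and token counts are invented for the example.

```python
# Minimal sketch, not from the Regulation: estimates cumulative training compute
# with the common heuristic FLOPs ~= 6 * parameters * training tokens and
# compares it with the 10^25 FLOP presumption in paragraph 2 above.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of cumulative training compute (forward and backward passes)."""
    return 6.0 * n_parameters * n_training_tokens

def presumed_high_impact(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate exceeds the 10^25 FLOP presumption threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical model: 1.8e12 parameters trained on 13e12 tokens.
    flops = estimated_training_flops(1.8e12, 13e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Presumed high-impact:", presumed_high_impact(1.8e12, 13e12))
```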

3.   The Commission shall adopt delegated acts in accordance with Article 97 to amend the thresholds listed in paragraphs 2 and 3 of this Article, as well as to supplement benchmarks and indicators in light of evolving technological developments, such as algorithmic improvements or increased hardware efficiency, when necessary, for these thresholds to reflect the state of the art.

1.   Where a general-purpose AI model meets the requirement referred to in Article 51(1), point (a), the relevant provider shall notify the Commission without delay and in any event within two weeks after that requirement is met or it becomes known that it will be met. That notification shall include the information necessary to demonstrate that the relevant requirement has been met. If the Commission becomes aware of a general-purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk.

2.   The provider of a general-purpose AI model that meets the requirement referred to in Article 51(1), point (a), may present, with its notification, sufficiently substantiated arguments to demonstrate that, exceptionally, although it meets that requirement, the general-purpose AI model does not present, due to its specific characteristics, systemic risks and therefore should not be classified as a general-purpose AI model with systemic risk.

3.   Where the Commission concludes that the arguments submitted pursuant to paragraph 2 are not sufficiently substantiated and the relevant provider was not able to demonstrate that the general-purpose AI model does not present, due to its specific characteristics, systemic risks, it shall reject those arguments, and the general-purpose AI model shall be considered to be a general-purpose AI model with systemic risk.

4.   The Commission may designate a general-purpose AI model as presenting systemic risks, ex officio or following a qualified alert from the scientific panel pursuant to Article 90(1), point (a), on the basis of criteria set out in Annex XIII.

The Commission shall adopt delegated acts in accordance with Article 97 to specify and update the criteria set out in Annex XIII.

5.   Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk pursuant to paragraph 4, the Commission shall take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII. Such request shall contain objective, detailed and new reasons that have arisen since the designation decision. Providers may request reassessment at the earliest six months after the designation decision. Where the Commission, following its reassessment, decides to maintain the designation as a general-purpose AI model with systemic risk, providers may request reassessment at the earliest six months after that decision.

6.   The Commission shall ensure that a list of general-purpose AI models with systemic risk is published and shall keep that list up to date, without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law.

Obligations for providers of general-purpose AI models

1.   Providers of general-purpose AI models shall:

(a)   draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the elements set out in Annex XI for the purpose of providing it, upon request, to the AI Office and the national competent authorities;

(b)   draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to respect and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall:

(i)   enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to this Regulation; and

(ii)   contain, at a minimum, the elements set out in Annex XII;

(c)   put in place a policy to comply with Union copyright law, and in particular to identify and comply with, including through state of the art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790;

(d)   draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office.

2.   The obligations set out in paragraph 1, points (a) and (b), shall not apply to providers of AI models that are released under a free and open licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available. This exception shall not apply to general-purpose AI models with systemic risks.

3.   Providers of general-purpose AI models shall cooperate as necessary with the Commission and the national competent authorities in the exercise of their competences and powers pursuant to this Regulation.

4.   Providers of general-purpose AI models may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Providers who are in compliance with a European harmonised standard shall be presumed to be in compliance with the obligations set out in paragraph 1 of this Article. Providers of general-purpose AI models who do not adhere to an approved code of practice shall demonstrate alternative adequate means of compliance for approval by the Commission.

5.   For the purpose of facilitating compliance with Annex XI, in particular points 2(d) and (e) thereof, the Commission shall adopt delegated acts in accordance with Article 97 to detail measurement and calculation methodologies with a view to allowing for comparable and verifiable documentation.

6.   The Commission shall adopt delegated acts in accordance with Article 97(2) to amend Annexes XI and XII in the light of evolving technological developments.

7.   Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in compliance with the confidentiality obligations set out in Article 78.

Authorised representatives of providers of general-purpose AI models

1.   Prior to placing a general-purpose AI model on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union.

2.   The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the AI Office upon request, in one of the official languages of the institutions of the Union. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks:

(a)   verify that the technical documentation specified in Annex XI has been drawn up and all obligations referred to in Articles 53 and, where applicable, Article 55 have been fulfilled by the provider;

(b)   keep a copy of the technical documentation specified in Annex XI at the disposal of the AI Office and national competent authorities, for a period of 10 years after the general-purpose AI model has been placed on the market, and keep current the contact details of the provider that appointed the authorised representative;

(c)   provide the AI Office, upon a reasoned request, with all the information and documentation, including that referred to in point (b), necessary to demonstrate its compliance with the obligations in this Chapter;

(d)   cooperate with the AI Office and national competent authorities, upon a reasoned request, in any action the latter take in relation to a general-purpose AI model with systemic risks, including when the model is integrated into AI systems placed on the market or put into service in the Union.

3.   The mandate shall empower the authorised representative to be addressed, in addition to or instead of the provider, by the AI Office or the national competent authorities, on all issues related to ensuring compliance with this Regulation.

4.   The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall also immediately inform the AI Office about the termination of the mandate and the reasons therefor.

5.   The obligation set out in this Article shall not apply to providers of general-purpose AI models that are released under a free and open source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available, unless the general-purpose AI models present systemic risks.

Obligations for providers of general-purpose AI models with systemic risk

1.   In addition to the obligations listed in Article 53, providers of general-purpose AI models with systemic risk shall:

(a)   perform model evaluation in accordance with standardised protocols and tools reflecting the state-of-the-art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risk;

(b)   assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk;

(c)   keep track of, document and report without undue delay to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them;

(d)   ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model.

2.   Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Providers who are in compliance with a European harmonised standard shall be presumed to be in compliance with the obligations set out in paragraph 1 of this Article. Providers of general-purpose AI models with systemic risks who do not adhere to an approved code of practice shall demonstrate alternative adequate means of compliance for approval by the Commission.

3.   Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in compliance with the confidentiality obligations set out in Article 78.

Codes of practice

1.   The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level in order to contribute to the proper application of this Regulation, taking into account international approaches.

2.   The AI Office and the Board shall aim to ensure that the codes of practice cover at least the obligations provided for in Articles 53 and 55, including the following issues:

(a)   means to ensure that the information referred to in Article 53(1), points (a) and (b), is kept up to date in the light of market and technological developments;

(b)   the adequate level of detail for the summary about the content used for training;

(c)   the identification of the type and nature of the systemic risks at Union level, including their sources, where appropriate;

(d)   the measures, procedures and modalities for the assessment and management of the systemic risks at Union level, including the documentation thereof, which shall be proportionate to the risks, take into consideration their severity and probability and take into account the specific challenges of tackling those risks in the light of the possible ways in which such risks may emerge and materialise along the AI value chain.

3.   The AI Office may invite all providers of general-purpose AI models, as well as relevant national competent authorities, to participate in the drawing-up of codes of practice. Civil society organisations, industry, academia and other relevant stakeholders, such as downstream providers and independent experts, may support the process.

4.   The AI Office and the Board shall aim to ensure that the codes of practice clearly set out their specific objectives and contain commitments or measures, including key performance indicators as appropriate, to ensure the achievement of those objectives, and that they take due account of the needs and interests of all interested parties, including affected persons, at Union level.

5.   The AI Office shall aim to ensure that participants to the codes of practice report regularly to the AI Office on the implementation of the commitments and the measures taken and their outcomes, including as measured against the key performance indicators as appropriate. Key performance indicators and reporting commitments shall reflect differences in size and capacity between various participants.

6.   The AI Office and the Board shall regularly monitor and evaluate the achievement of the objectives of the codes of practice by the participants and their contribution to the proper application of this Regulation. The AI Office and the Board shall assess whether the codes of practice cover the obligations provided for in Articles 53 and 55, as well as the issues listed in paragraph 2 of this Article, and shall regularly monitor and evaluate the achievement of their objectives. They shall publish their assessment of the adequacy of the codes of practice.

The Commission may, by way of an implementing act, approve a code of practice and give it a general validity within the Union. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).

7.   The AI Office may invite all providers of general-purpose AI models to adhere to the codes of practice. For providers of general-purpose AI models not presenting systemic risks this adherence may be limited to the obligations provided for in Article 53, unless they declare explicitly their interest to join the full code.

8.   The AI Office shall, as appropriate, also encourage and facilitate the review and adaptation of the codes of practice, in particular in the light of emerging standards. The AI Office shall assist in the assessment of available standards.

9.   Codes of practice shall be ready at the latest by … [nine months from the date of entry into force of this Regulation]. The AI Office shall take the necessary steps, including inviting providers pursuant to paragraph 7.

If, by ... [12 months from the date of entry into force], a code of practice cannot be finalised, or if the AI Office deems it is not adequate following its assessment under paragraph 6 of this Article, the Commission may provide, by means of implementing acts, common rules for the implementation of the obligations provided for in Articles 53 and 55, including the issues set out in paragraph 2 of this Article. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2).

MEASURES IN SUPPORT OF INNOVATION

AI regulatory sandboxes

1.   Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which shall be operational by … [24 months from the date of entry into force of this Regulation]. That sandbox may also be established jointly with the competent authorities of one or more other Member States. The Commission may provide technical support, advice and tools for the establishment and operation of AI regulatory sandboxes.

The obligation under the first subparagraph may also be fulfilled by participating in an existing sandbox in so far as that participation provides an equivalent level of national coverage for the participating Member States.

2.   Additional AI regulatory sandboxes at regional or local level, or established jointly with the competent authorities of other Member States may also be established.

3.   The European Data Protection Supervisor may also establish an AI regulatory sandbox for Union institutions, bodies, offices and agencies, and may exercise the roles and the tasks of national competent authorities in accordance with this Chapter.

4.   Member States shall ensure that the competent authorities referred to in paragraphs 1 and 2 allocate sufficient resources to comply with this Article effectively and in a timely manner. Where appropriate, national competent authorities shall cooperate with other relevant authorities, and may allow for the involvement of other actors within the AI ecosystem. This Article shall not affect other regulatory sandboxes established under Union or national law. Member States shall ensure an appropriate level of cooperation between the authorities supervising those other sandboxes and the national competent authorities.

5.   AI regulatory sandboxes established under paragraph (1) shall provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their being placed on the market or put into service pursuant to a specific sandbox plan agreed between the prospective providers and the competent authority. Such regulatory sandboxes may include testing in real world conditions supervised in the sandbox.

6.   Competent authorities shall provide, as appropriate, guidance, supervision and support within the AI regulatory sandbox with a view to identifying risks, in particular to fundamental rights, health and safety, testing, mitigation measures, and their effectiveness in relation to the obligations and requirements of this Regulation and, where relevant, other Union and Member State law supervised within the sandbox.

7.   Competent authorities shall provide providers and prospective providers using the AI regulatory sandbox with guidance on regulatory expectations and how to fulfil the requirements and obligations set out in this Regulation.

Upon request of the provider or prospective provider of the AI system, the competent authority shall provide a written proof of the activities successfully carried out in the sandbox. The competent authority shall also provide an exit report detailing the activities carried out in the sandbox and the related results and learning outcomes. Providers may use such documentation to demonstrate their compliance with this Regulation through the conformity assessment process or relevant market surveillance activities. In this regard, the exit reports and the written proof provided by the national competent authority shall be taken positively into account by market surveillance authorities and notified bodies, with a view to accelerating conformity assessment procedures to a reasonable extent.

8.   Subject to the confidentiality provisions in Article 78, and with the agreement of the provider or prospective provider, the Commission and the Board shall be authorised to access the exit reports and shall take them into account, as appropriate, when exercising their tasks under this Regulation. If both the provider or prospective provider and the national competent authority explicitly agree, the exit report may be made publicly available through the single information platform referred to in this Article.

9.   The establishment of AI regulatory sandboxes shall aim to contribute to the following objectives:

(a)   improving legal certainty to achieve regulatory compliance with this Regulation or, where relevant, other applicable Union and national law;

(b)   supporting the sharing of best practices through cooperation with the authorities involved in the AI regulatory sandbox;

(c)   fostering innovation and competitiveness and facilitating the development of an AI ecosystem;

(d)   contributing to evidence-based regulatory learning;

(e)   facilitating and accelerating access to the Union market for AI systems, in particular when provided by SMEs, including start-ups.

10.   National competent authorities shall ensure that, to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national or competent authorities are associated with the operation of the AI regulatory sandbox and involved in the supervision of those aspects to the extent of their respective tasks and powers.

11.  The AI regulatory sandboxes shall not affect the supervisory or corrective powers of the competent authorities supervising the sandboxes, including at regional or local level. Any significant risks to health and safety and fundamental rights identified during the development and testing of such AI systems shall result in an adequate mitigation. National competent authorities shall have the power to temporarily or permanently suspend the testing process, or the participation in the sandbox if no effective mitigation is possible, and shall inform the AI Office of such decision. National competent authorities shall exercise their supervisory powers within the limits of the relevant law, using their discretionary powers when implementing legal provisions in respect of a specific AI sandbox project, with the objective of supporting innovation in AI in the Union.

12.   Providers and prospective providers participating in the AI regulatory sandbox shall remain liable under applicable Union and national liability law for any damage inflicted on third parties as a result of the experimentation taking place in the sandbox. However, provided that the prospective providers observe the specific plan and the terms and conditions for their participation and follow in good faith the guidance given by the national competent authority, no administrative fines shall be imposed by the authorities for infringements of this Regulation. To the extent that other competent authorities responsible for other Union and national law were actively involved in the supervision of the AI system in the sandbox and provided guidance for compliance, no administrative fines shall be imposed regarding that law.

13.   The AI regulatory sandboxes shall be designed and implemented in such a way that, where relevant, they facilitate cross-border cooperation between national competent authorities.

14.   National competent authorities shall coordinate their activities and cooperate within the framework of the Board.

15.   National competent authorities shall inform the AI Office and the Board of the establishment of a sandbox, and may ask it for support and guidance. The AI Office shall make publicly available a list of planned and existing AI sandboxes and keep it up to date in order to encourage more interaction in the AI regulatory sandboxes and cross-border cooperation.

16.   National competent authorities shall submit to the AI Office and to the Board annual reports, starting one year after the establishment of the AI regulatory sandbox and every year thereafter until its termination, and a final report. Those reports shall provide information on the progress and results of the implementation of those sandboxes, including best practices, incidents, lessons learnt and recommendations on their setup and, where relevant, on the application and possible revision of this Regulation, including its delegated and implementing acts, and on the application of other Union law supervised by the competent authorities within the sandbox. The national competent authorities shall make those annual reports or abstracts thereof available to the public, online. The Commission shall, where appropriate, take the annual reports into account when exercising its tasks under this Regulation.

17.  The Commission shall develop a single and dedicated interface containing all relevant information related to AI regulatory sandboxes to allow stakeholders to interact with AI regulatory sandboxes and to raise enquiries with competent authorities, and to seek non-binding guidance on the conformity of innovative products, services, business models embedding AI technologies, in accordance with Article 62(1), point (c). The Commission shall proactively coordinate with national competent authorities, where relevant.

Detailed arrangements for and functioning of AI regulatory sandboxes

1.   In order to avoid fragmentation across the Union, the Commission shall adopt implementing acts specifying the detailed arrangements for the establishment, development, implementation, operation and supervision of the AI regulatory sandboxes. The implementing acts shall include common principles on the following issues:

(a)   eligibility and selection criteria for participation in the AI regulatory sandbox;

(b)   procedures for the application, participation, monitoring, exiting from and termination of the AI regulatory sandbox, including the sandbox plan and the exit report;

(c)   the terms and conditions applicable to the participants.

Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2).

2.   The implementing acts referred to in paragraph 1 shall ensure that:

(a)   AI regulatory sandboxes are open to any applying prospective provider of an AI system who fulfils eligibility and selection criteria, which shall be transparent and fair, and national competent authorities inform applicants of their decision within three months of the application;

(b)   AI regulatory sandboxes allow broad and equal access and keep up with demand for participation; prospective providers may also submit applications in partnerships with users and other relevant third parties;

(c)   the detailed arrangements for and conditions concerning AI regulatory sandboxes to the best extent possible support flexibility for national competent authorities to establish and operate their AI regulatory sandboxes;

(d)   access to the AI regulatory sandboxes is free of charge for SMEs, including start-ups, without prejudice to exceptional costs that national competent authorities may recover in a fair and proportionate manner;

(e)   they facilitate prospective providers, by means of the learning outcomes of the AI regulatory sandboxes, in complying with conformity assessment obligations under this Regulation and the voluntary application of the codes of conduct referred to in Article 95;

(f)   AI regulatory sandboxes facilitate the involvement of other relevant actors within the AI ecosystem, such as notified bodies and standardisation organisations, SMEs, start-ups, enterprises, innovators, testing and experimentation facilities, research and experimentation labs and European Digital Innovation Hubs, centres of excellence, individual researchers, in order to allow and facilitate cooperation with the public and private sectors;

(g)   procedures, processes and administrative requirements for application, selection, participation and exiting the AI regulatory sandbox are simple, easily intelligible, and clearly communicated in order to facilitate the participation of SMEs, including start-ups, with limited legal and administrative capacities and are streamlined across the Union, in order to avoid fragmentation and that participation in an AI regulatory sandbox established by a Member State, or by the European Data Protection Supervisor is mutually and uniformly recognised and carries the same legal effects across the Union;

(h)   participation in the AI regulatory sandbox is limited to a period that is appropriate to the complexity and scale of the project, which may be extended by the national competent authority;

(i)   AI regulatory sandboxes facilitate the development of tools and infrastructure for testing, benchmarking, assessing and explaining dimensions of AI systems relevant for regulatory learning, such as accuracy, robustness and cybersecurity, as well as measures to mitigate risks to fundamental rights and society at large.

3.   Prospective providers in the AI regulatory sandboxes, in particular SMEs and start-ups, shall be directed, where relevant, to pre-deployment services such as guidance on the implementation of this Regulation, to other value-adding services such as help with standardisation documents and certification, testing and experimentation facilities, European Digital Innovation Hubs and centres of excellence.

4.   Where national competent authorities consider authorising testing in real world conditions supervised within the framework of an AI regulatory sandbox to be established under this Article, they shall specifically agree with the participants on the terms and conditions of such testing and in particular on the appropriate safeguards with a view to protecting fundamental rights, health and safety. Where appropriate, they shall cooperate with other national competent authorities with a view to ensuring consistent practices across the Union.

Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox

1.  Personal data lawfully collected for other purposes may be processed in an AI regulatory sandbox solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met:

(a)  AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas:

(i)  public safety and public health, including disease detection, diagnosis, prevention, control and treatment and improvement of health care systems;

(ii)  a high level of protection and improvement of the quality of the environment, protection of biodiversity, protection against pollution, green transition measures, climate change mitigation and adaptation measures;

(iii)   energy sustainability;

(iv)   safety and resilience of transport systems and mobility, critical infrastructure and networks;

(v)   efficiency and quality of public administration and public services;

(b)  the data processed are necessary for complying with one or more of the requirements referred to in Chapter III, Section 2 where those requirements cannot effectively be fulfilled by processing anonymised, synthetic or other non-personal data;

(c)  there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the data subjects, as referred to in Article 35 of Regulation (EU) 2016/679 and in Article 39 of Regulation (EU) 2018/1725, may arise during the sandbox experimentation, as well as response mechanisms to promptly mitigate those risks and, where necessary, stop the processing;

(d)  any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment under the control of the prospective provider and only authorised persons have access to those data;

(e)   providers can further share the originally collected data only in compliance with Union data protection law; any personal data created in the sandbox cannot be shared outside the sandbox;

(f)  any processing of personal data in the context of the sandbox neither leads to measures or decisions affecting the data subjects nor does it affect the application of their rights laid down in Union law on the protection of personal data;

(g)  any personal data processed in the context of the sandbox are protected by means of appropriate technical and organisational measures and deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period;

(h)  the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox, unless provided otherwise by Union or national law;

(i)  a complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation referred to in Annex IV;

(j)  a short summary of the AI project developed in the sandbox, its objectives and expected results is published on the website of the competent authorities; this obligation shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.

2.   For the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security, under the control and responsibility of law enforcement authorities, the processing of personal data in AI regulatory sandboxes shall be based on a specific Union or national law and subject to the same cumulative conditions as referred to in paragraph 1.

3.  Paragraph 1 is without prejudice to Union or national law which excludes processing of personal data for other purposes than those explicitly mentioned in that law, as well as to Union or national law laying down the basis for the processing of personal data which is necessary for the purpose of developing, testing or training of innovative AI systems or any other legal basis, in compliance with Union law on the protection of personal data.

Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes

1.   Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes may be conducted by providers or prospective providers of high-risk AI systems listed in Annex III, in accordance with this Article and the real-world testing plan referred to in this Article, without prejudice to the prohibitions under Article 5.

The detailed elements of the real-world testing plan shall be specified in implementing acts adopted by the Commission in accordance with the examination procedure referred to in Article 98(2).

This provision shall be without prejudice to Union or national law on the testing in real world conditions of high-risk AI systems related to products covered by Union harmonisation legislation listed in Annex I.

2.   Providers or prospective providers may conduct testing of high-risk AI systems referred to in Annex III in real world conditions at any time before the placing on the market or the putting into service of the AI system on their own or in partnership with one or more prospective deployers.

3.   The testing of high-risk AI systems in real world conditions under this Article shall be without prejudice to any ethical review that is required by Union or national law.

4.   Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met:

(a)   the provider or prospective provider has drawn up a real-world testing plan and submitted it to the market surveillance authority in the Member State where the testing in real world conditions is to be conducted;

(b)   the market surveillance authority in the Member State where the testing in real world conditions is to be conducted has approved the testing in real world conditions and the real-world testing plan. Where the market surveillance authority has not provided an answer within 30 days, the testing in real world conditions and the real-world testing plan shall be understood to have been approved. Where national law does not provide for a tacit approval, the testing in real world conditions shall remain subject to an authorisation;

(c)   the provider or prospective provider, with the exception of providers or prospective providers of high-risk AI systems referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement, migration, asylum and border control management, and high-risk AI systems referred to in point 2 of Annex III, has registered the testing in real world conditions in the non-public part of the EU database referred to in Article 71(3) with a Union-wide unique single identification number and with the information specified in Annex IX;

(d)   the provider or prospective provider conducting the testing in real world conditions is established in the Union or has appointed a legal representative who is established in the Union;

(e)   data collected and processed for the purpose of the testing in real world conditions shall be transferred to third countries only provided that appropriate and applicable safeguards under Union law are implemented;

(f)   the testing in real world conditions does not last longer than necessary to achieve its objectives and in any case not longer than six months, which may be extended for an additional period of six months, subject to prior notification by the provider to the market surveillance authority, accompanied by an explanation of the need for such an extension;

(g)   subjects of the testing in real world conditions who are vulnerable persons due to their age or physical or mental disability are appropriately protected;

(h)   where a provider or prospective provider organises the testing in real world conditions in cooperation with one or more deployers or prospective deployers, the latter have been informed of all aspects of the testing that are relevant to their decision to participate, and given the relevant instructions for use of the AI system referred to in Article 13; the provider or prospective provider and the prospective deployer shall conclude an agreement specifying their roles and responsibilities with a view to ensuring compliance with the provisions for testing in real world conditions under this Regulation and under other applicable Union and national law;

(i)   the subjects of the testing in real world conditions have given informed consent in accordance with Article 61, or in the case of law enforcement, where the seeking of informed consent would prevent the AI system from being tested, the testing itself and the outcome of the testing in the real world conditions shall not have any negative effect on the subjects, and their personal data shall be deleted after the test is performed;

(j)   the testing in real world conditions is effectively overseen by the provider or prospective provider, as well as by deployers or prospective deployers through persons who are suitably qualified in the relevant field and have the necessary capacity, training and authority to perform their tasks;

(k)   the predictions, recommendations or decisions of the AI system can be effectively reversed and disregarded.

5.   Any subjects of the testing in real world conditions, or their legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking their informed consent and may request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent shall not affect the lawfulness or validity of activities already carried out.

6.   In accordance with Article 75, Member States shall confer on their market surveillance authorities the powers of requiring providers and prospective providers to provide information, of carrying out unannounced remote or on-site inspections, and of performing checks on the development of the testing in real world conditions and the related products. Market surveillance authorities shall use those powers to ensure the safe development of testing in real world conditions.

7.   Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions.

8.   Providers or prospective providers shall notify the national market surveillance authority in the Member State where the testing in real world conditions is to be conducted of the suspension or termination of the testing in real world conditions and of the final outcomes.

9.   The provider or prospective provider shall be liable under applicable Union and national liability law for any damage caused in the course of their testing in real world conditions.
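As a purely hypothetical illustration of the duration limit in paragraph 4, point (f) above (six months, extendable once by a further six months subject to prior notification to the market surveillance authority), the sketch below computes the latest permissible end date of testing. The helper and its names are invented and the month arithmetic is simplified for the example.

```python
# Hypothetical sketch only — not a compliance tool.
from datetime import date

def add_months(d: date, months: int) -> date:
    # Move a date forward by whole months; the day is clamped to 28 so the
    # result is valid in every month (good enough for an illustration).
    month_index = d.month - 1 + months
    return date(d.year + month_index // 12, month_index % 12 + 1, min(d.day, 28))

def latest_end_date(start: date, extension_notified: bool) -> date:
    # Six months by default; twelve in total if an extension has been notified.
    return add_months(start, 12 if extension_notified else 6)

print(latest_end_date(date(2025, 3, 1), extension_notified=False))  # 2025-09-01
print(latest_end_date(date(2025, 3, 1), extension_notified=True))   # 2026-03-01
```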

Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes

1.   For the purpose of testing in real world conditions under Article 60, freely given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding:

(a)   the nature and objectives of the testing in real world conditions and the possible inconvenience that may be linked to their participation;

(b)   the conditions under which the testing in real world conditions is to be conducted, including the expected duration of the subject or subjects' participation;

(c)   their rights, and the guarantees regarding their participation, in particular their right to refuse to participate in, and the right to withdraw from, testing in real world conditions at any time without any resulting detriment and without having to provide any justification;

(d)   the arrangements for requesting the reversal or the disregard of the predictions, recommendations or decisions of the AI system;

(e)   the Union-wide unique single identification number of the testing in real world conditions in accordance with Article 60(4), point (c), and the contact details of the provider or its legal representative from whom further information can be obtained.

2.   The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or their legal representative.
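As a hypothetical illustration of paragraph 2 above (consent must be dated and documented, with a copy given to the subject) and of the identification number mentioned in paragraph 1, point (e), the sketch below keeps a dated consent record that can be serialised and handed over. All field names are invented.

```python
# Hypothetical sketch only — invented field names, not a prescribed format.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ConsentRecord:
    subject_id: str
    testing_identifier: str    # Union-wide unique single identification number
    signed_on: date
    withdrawn: bool = False

    def copy_for_subject(self) -> str:
        # A serialised copy that can be given to the subject or their representative.
        data = asdict(self)
        data["signed_on"] = self.signed_on.isoformat()
        return json.dumps(data, indent=2)

record = ConsentRecord("subject-042", "EU-TEST-0001", date(2025, 4, 2))
print(record.copy_for_subject())
```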

Measures for providers and deployers, in particular SMEs, including start-ups

1.  Member States shall undertake the following actions:

(a)  provide SMEs, including start-ups , having a registered office or a branch in the Union, with priority access to the AI regulatory sandboxes , to the extent that they fulfil the eligibility conditions and selection criteria. The priority access shall not preclude other SMEs including start-ups other than those referred to in the first subparagraph from access to the AI regulatory sandbox, provided that they also fulfil the eligibility conditions and selection criteria ;

(b)  organise specific awareness raising and training activities on the application of this Regulation tailored to the needs of SMEs including start-ups, users and, as appropriate, local public authorities ;

(c)   utilise existing dedicated channels and where appropriate, establish new ones for communication with SMEs including start-ups, users, other innovators and, as appropriate, local public authorities to provide advice and respond to queries about the implementation of this Regulation , including as regards participation in AI regulatory sandboxes;

(d)   facilitate the participation of SMEs and other relevant stakeholders in the standardisation development process.

2.  The specific interests and needs of the SME providers , including start-ups, shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their size , market size and other relevant indicators .

3.   The AI Office shall undertake the following actions:

(a)   provide standardised templates for areas covered by this Regulation, as specified by the Board in its reasoned request;

(b)   develop and maintain a single information platform providing easy to use information in relation to this Regulation for all operators across the Union;

(c)   organise appropriate communication campaigns to raise awareness about the obligations arising from this Regulation;

(d)   evaluate and promote the convergence of best practices in public procurement procedures in relation to AI systems.

Derogations for specific operators

1.   Microenterprises within the meaning of Recommendation 2003/361/EC may comply with certain elements of the quality management system required by Article 17 of this Regulation in a simplified manner, provided that they do not have partner enterprises or linked enterprises within the meaning of that Recommendation. For that purpose, the Commission shall develop guidelines on the elements of the quality management system which may be complied with in a simplified manner considering the needs of microenterprises, without affecting the level of protection or the need for compliance with the requirements in respect of high-risk AI systems.

2.   Paragraph 1 of this Article shall not be interpreted as exempting those operators from fulfilling any other requirements or obligations laid down in this Regulation, including those established in Articles 9, 10, 11, 12, 13, 14, 15, 72 and 73.

CHAPTER VII

Governance at Union level

1.   The Commission shall develop Union expertise and capabilities in the field of AI through the AI Office.

2.   Member States shall facilitate the tasks entrusted to the AI Office, as reflected in this Regulation.

Establishment and structure of the European Artificial Intelligence Board

1.  A European Artificial Intelligence Board (the ‘Board’) is hereby established.

2.   The Board shall be composed of one representative per Member State. The European Data Protection Supervisor shall participate as observer. The AI Office shall also attend the Board’s meetings, without taking part in the votes. Other national and Union authorities, bodies or experts may be invited to the meetings by the Board on a case by case basis, where the issues discussed are of relevance for them.

3.   Each representative shall be designated by their Member State for a period of three years, renewable once.

4.   Member States shall ensure that their representatives on the Board:

(a)   have the relevant competences and powers in their Member State so as to contribute actively to the achievement of the Board’s tasks referred to in Article 66;

(b)   are designated as a single contact point vis-à-vis the Board and, where appropriate, taking into account Member States’ needs, as a single contact point for stakeholders;

(c)   are empowered to facilitate consistency and coordination between national competent authorities in their Member State as regards the implementation of this Regulation, including through the collection of relevant data and information for the purpose of fulfilling their tasks on the Board.

5.   The designated representatives of the Member States shall adopt the Board’s rules of procedure by a two-thirds majority. The rules of procedure shall, in particular, lay down procedures for the selection process, the duration of the mandate of, and specifications of the tasks of, the Chair, detailed arrangements for voting, and the organisation of the Board’s activities and those of its sub-groups.

6.   The Board shall establish two standing sub-groups to provide a platform for cooperation and exchange among market surveillance authorities and notifying authorities about issues related to market surveillance and notified bodies, respectively.

The standing sub-group for market surveillance should act as the administrative cooperation group (ADCO) for this Regulation within the meaning of Article 30 of Regulation (EU) 2019/1020.

The Board may establish other standing or temporary sub-groups as appropriate for the purpose of examining specific issues. Where appropriate, representatives of the advisory forum referred to in Article 67 may be invited to such sub-groups or to specific meetings of those subgroups as observers.

7.   The Board shall be organised and operated so as to safeguard the objectivity and impartiality of its activities.

8.   The Board shall be chaired by one of the representatives of the Member States. The AI Office shall provide the secretariat for the Board, convene the meetings upon request of the Chair, and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and its rules of procedure.

Tasks of the Board

The Board shall advise and assist the Commission and the Member States in order to facilitate the consistent and effective application of this Regulation. For this purpose, the Board may in particular:

(a)   contribute to the coordination among national competent authorities responsible for the application of this Regulation and, in cooperation with and subject to the agreement of the market surveillance authorities concerned, support joint activities of market surveillance authorities referred to in Article 74(11);

(b)   collect and share technical and regulatory expertise and best practices among Member States ;

(c)   provide advice on the implementation of this Regulation, in particular as regards the enforcement of rules on general-purpose AI models;

(d)   contribute to the harmonisation of administrative practices in the Member States, including in relation to the derogation from the conformity assessment procedures referred to in Article 46, the functioning of regulatory sandboxes, and testing in real world conditions referred to in Articles 57, 59 and 60;

(e)   upon the request of the Commission or on its own initiative, issue recommendations and written opinions on any relevant matters related to the implementation of this Regulation and to its consistent and effective application, including:

(i)   on the development and application of codes of conduct and codes of practice pursuant to this Regulation, as well as of the Commission’s guidelines;

(ii)   the evaluation and review of this Regulation pursuant to Article 112, including as regards the serious incident reports referred to in Article 73, and the functioning of the database referred to in Article 71, the preparation of the delegated or implementing acts, and as regards possible alignments of this Regulation with the legal acts listed in Annex I;

(iii)  on technical specifications or existing standards regarding the requirements set out in Chapter III, Section 2;

(iv)  on the use of harmonised standards or common specifications referred to in Articles 40 and 41;

(v)   trends, such as European global competitiveness in AI, the uptake of AI in the Union, and the development of digital skills;

(vi)   trends on the evolving typology of AI value chains, in particular on the resulting implications in terms of accountability;

(vii)   on the potential need for amendment to Annex III in accordance with Article 7, and on the potential need for possible revision of Article 5 pursuant to Article 112, taking into account relevant available evidence and the latest developments in technology;

(f)   support the Commission in promoting AI literacy, public awareness and understanding of the benefits, risks, safeguards and rights and obligations in relation to the use of AI systems;

(g)   facilitate the development of common criteria and a shared understanding among market operators and competent authorities of the relevant concepts provided for in this Regulation, including by contributing to the development of benchmarks;

(h)   cooperate, as appropriate, with other Union institutions, bodies, offices and agencies, as well as relevant Union expert groups and networks, in particular in the fields of product safety, cybersecurity, competition, digital and media services, financial services, consumer protection, data and fundamental rights protection;

(i)   contribute to effective cooperation with the competent authorities of third countries and with international organisations;

(j)   assist national competent authorities and the Commission in developing the organisational and technical expertise required for the implementation of this Regulation, including by contributing to the assessment of training needs for staff of Member States involved in implementing this Regulation;

(k)   assist the AI Office in supporting national competent authorities in the establishment and development of regulatory sandboxes, and facilitate cooperation and information-sharing among regulatory sandboxes;

(l)   contribute to, and provide relevant advice on, the development of guidance documents;

(m)   advise the Commission in relation to international matters on AI;

(n)   provide opinions to the Commission on the qualified alerts regarding general-purpose AI models;

(o)   receive opinions by the Member States on qualified alerts regarding general-purpose AI models, and on national experiences and practices on the monitoring and enforcement of AI systems, in particular systems integrating the general-purpose AI models.

Advisory forum

1.   An advisory forum shall be established to provide technical expertise and advise the Board and the Commission, and to contribute to their tasks under this Regulation.

2.   The membership of the advisory forum shall represent a balanced selection of stakeholders, including industry, start-ups, SMEs, civil society and academia. The membership of the advisory forum shall be balanced with regard to commercial and non-commercial interests and, within the category of commercial interests, with regard to SMEs and other undertakings.

3.   The Commission shall appoint the members of the advisory forum, in accordance with the criteria set out in paragraph 2, from amongst stakeholders with recognised expertise in the field of AI.

4.   The term of office of the members of the advisory forum shall be two years, which may be extended by up to no more than four years.

5.   The Fundamental Rights Agency, ENISA, the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI) shall be permanent members of the advisory forum.

6.   The advisory forum shall draw up its rules of procedure. It shall elect two co-chairs from among its members, in accordance with criteria set out in paragraph 2. The term of office of the co-chairs shall be two years, renewable once.

7.   The advisory forum shall hold meetings at least twice a year. The advisory forum may invite experts and other stakeholders to its meetings.

8.   The advisory forum may prepare opinions, recommendations and written contributions upon request of the Board or the Commission.

9.   The advisory forum may establish standing or temporary sub-groups as appropriate for the purpose of examining specific questions related to the objectives of this Regulation.

10.   The advisory forum shall prepare an annual report on its activities. That report shall be made publicly available.

Scientific panel of independent experts

1.   The Commission shall, by means of an implementing act, make provisions on the establishment of a scientific panel of independent experts (the ‘scientific panel’) intended to support the enforcement activities under this Regulation. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).

2.   The scientific panel shall consist of experts selected by the Commission on the basis of up-to-date scientific or technical expertise in the field of AI necessary for the tasks set out in paragraph 3, and shall be able to demonstrate meeting all of the following conditions:

(a)   having particular expertise and competence and scientific or technical expertise in the field of AI;

(b)   independence from any provider of AI systems or general-purpose AI models or systems;

(c)   an ability to carry out activities diligently, accurately and objectively.

The Commission, in consultation with the Board, shall determine the number of experts on the panel in accordance with the required needs and shall ensure fair gender and geographical representation.

3.   The scientific panel shall advise and support the AI Office, in particular with regard to the following tasks:

(a)   supporting the implementation and enforcement of this Regulation as regards general-purpose AI models and systems, in particular by:

(i)   alerting the AI Office of possible systemic risks at Union level of general-purpose AI models, in accordance with Article 90;

(ii)   contributing to the development of tools and methodologies for evaluating capabilities of general-purpose AI models and systems, including through benchmarks;

(iii)   providing advice on the classification of general-purpose AI models with systemic risk;

(iv)   providing advice on the classification of various general-purpose AI models and systems;

(v)   contributing to the development of tools and templates;

(b)   supporting the work of market surveillance authorities, at their request;

(c)   supporting cross-border market surveillance activities as referred to in Article 74(11) , without prejudice to the powers of market surveillance authorities;

(d)   supporting the AI Office in carrying out its duties in the context of the safeguard clause pursuant to Article 81.

4.   The experts on the scientific panel shall perform their tasks with impartiality and objectivity, and shall ensure the confidentiality of information and data obtained in carrying out their tasks and activities. They shall neither seek nor take instructions from anyone when exercising their tasks under paragraph 3. Each expert shall draw up a declaration of interests, which shall be made publicly available. The AI Office shall establish systems and procedures to actively manage and prevent potential conflicts of interest.

5.   The implementing act referred to in paragraph 1 shall include provisions on the conditions, procedures and detailed arrangements for the scientific panel and its members to issue alerts, and to request the assistance of the AI Office for the performance of the tasks of the scientific panel.

Access to the pool of experts by the Member States

1.   Member States may call upon experts of the scientific panel to support their enforcement activities under this Regulation.

2.   The Member States may be required to pay fees for the advice and support provided by the experts. The structure and the level of fees as well as the scale and structure of recoverable costs shall be set out in the implementing act referred to in Article 68(1), taking into account the objectives of the adequate implementation of this Regulation, cost-effectiveness and the necessity of ensuring effective access to experts for all Member States.

3.   The Commission shall facilitate timely access to the experts by the Member States, as needed, and ensure that the combination of support activities carried out by Union AI testing support structures pursuant to Article 84 and experts pursuant to this Article is efficiently organised and provides the best possible added value.

National competent authorities

Designation of national competent authorities and single point of contact

1.  Each Member State shall establish or designate as national competent authorities at least one notifying authority and at least one market surveillance authority for the purposes of this Regulation . Those national competent authorities shall exercise their powers independently, impartially and without bias so as to safeguard the objectivity of their activities and tasks, and to ensure the application and implementation of this Regulation. The members of those authorities shall refrain from any action incompatible with their duties. Provided that those principles are observed, such activities and tasks may be performed by one or more designated authorities, in accordance with the organisational needs of the Member State .

2.  Member States shall communicate to the Commission the identity of the notifying authorities and the market surveillance authorities and the tasks of those authorities, as well as any subsequent changes thereto. Member States shall make publicly available information on how competent authorities and single points of contact can be contacted , through electronic communication means by… [12 months from the date of entry into force of this Regulation]. Member States shall designate a market surveillance authority to act as the single point of contact for this Regulation, and shall notify the Commission of the identity of the single point of contact. The Commission shall make a list of the single points of contact publicly available .

3.  Member States shall ensure that their national competent authorities are provided with adequate technical, financial and human resources , and with infrastructure to fulfil their tasks effectively under this Regulation. In particular, the national competent authorities shall have a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of AI technologies, data and data computing, personal data protection, cybersecurity, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements. Member States shall assess and, if necessary, update competence and resource requirements referred to in this paragraph on an annual basis.

4.   National competent authorities shall take an adequate level of cybersecurity measures.

5.   When performing their tasks, the national competent authorities shall act in compliance with the confidentiality obligations set out in Article 78.

6.   By … [one year from the date of entry into force of this Regulation] and once every two years thereafter, Member States shall report to the Commission on the status of the financial and human resources of the national competent authorities, with an assessment of their adequacy. The Commission shall transmit that information to the Board for discussion and possible recommendations.

7.  The Commission shall facilitate the exchange of experience between national competent authorities.

8.  National competent authorities may provide guidance and advice on the implementation of this Regulation, in particular to SMEs including start-ups, taking into account the guidance and advice of the Board and the Commission, as appropriate. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union law, the competent national authorities under that Union law shall be consulted, as appropriate.

9.  Where Union institutions, bodies, offices or agencies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as the competent authority for their supervision.

CHAPTER VIII

EU DATABASE FOR HIGH-RISK AI SYSTEMS

EU database for high-risk AI systems listed in Annex III

1.  The Commission shall, in collaboration with the Member States, set up and maintain an EU database containing information referred to in paragraphs 2 and 3 of this Article concerning high-risk AI systems referred to in Article 6(2) which are registered in accordance with Articles 49 and 60. When setting the functional specifications of such database, the Commission shall consult the relevant experts, and when updating the functional specifications of such database, the Commission shall consult the Board .

2.  The data listed in Section A of Annex VIII shall be entered into the EU database by the provider or, where applicable, by the authorised representative .

3.   The data listed in Section C of Annex VIII shall be entered into the EU database by the deployer who is, or who acts on behalf of, a public authority, agency or body, in accordance with Articles 49(2) and (3).

4.   With the exception of the section referred to in Article 49(4) and Article 60(5), the information contained in the EU database registered in accordance with Article 49 shall be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Article 60 shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public.

5.  The EU database shall contain personal data only in so far as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the deployer, as applicable .

6.  The Commission shall be the controller of the EU database. It shall make available to providers, prospective providers and deployers adequate technical and administrative support. The EU database shall comply with the applicable accessibility requirements.
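As a hypothetical illustration of the machine-readable, publicly accessible part of the database described in paragraph 4, the sketch below serialises one registration entry as JSON. The field names and values are invented; the actual data points are those listed in Annex VIII, which is not reproduced here.

```python
# Hypothetical sketch only — invented fields, not the Annex VIII data points.
import json

entry = {
    "system_name": "ExampleRiskScorer",
    "provider": "Example Provider Ltd",
    "annex_iii_area": "employment",
    "registration_status": "registered",
    "intended_purpose": "CV screening support",
}

print(json.dumps(entry, indent=2, ensure_ascii=False))
```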

POST-MARKET MONITORING, INFORMATION SHARING, MARKET SURVEILLANCE

Post-market monitoring

Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems

1.  Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the AI technologies and the risks of the high-risk AI system.

2.  The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and which allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Chapter III, Section 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems. This obligation shall not cover sensitive operational data of deployers which are law-enforcement authorities.

3.  The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by … [six months before the entry into application of this Regulation] . That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).

4.  For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, where a post-market monitoring system and plan are already established under that legislation, in order to ensure consistency, avoid duplications and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the necessary elements described in paragraphs 1, 2 and 3 using the template referred to in paragraph 3 into systems and plans already existing under that legislation, provided that this achieves an equivalent level of protection.

The first subparagraph of this paragraph shall also apply to high-risk AI systems referred to in point 5 of Annex III placed on the market or put into service by financial institutions that are subject to requirements under Union financial services law regarding their internal governance, arrangements or processes.
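As a hypothetical illustration of the collection and analysis obligation in paragraph 2 above, the sketch below aggregates performance reports from deployers and flags the system for review when accuracy drifts or an incident is reported. It is not the Commission's template referred to in paragraph 3; all names and thresholds are invented.

```python
# Hypothetical sketch only — invented thresholds and names.
from statistics import mean
from typing import Dict, List

class PostMarketMonitor:
    def __init__(self, accuracy_floor: float = 0.90):
        self.accuracy_floor = accuracy_floor
        self.reports: List[Dict] = []

    def record(self, deployer: str, accuracy: float, incidents: int) -> None:
        # Systematically collect and document data provided by deployers.
        self.reports.append({"deployer": deployer, "accuracy": accuracy,
                             "incidents": incidents})

    def needs_review(self) -> bool:
        # Analyse the collected data: flag a drop in average accuracy or any incident.
        if not self.reports:
            return False
        avg = mean(r["accuracy"] for r in self.reports)
        return avg < self.accuracy_floor or any(r["incidents"] for r in self.reports)

monitor = PostMarketMonitor()
monitor.record("deployer-a", accuracy=0.93, incidents=0)
monitor.record("deployer-b", accuracy=0.86, incidents=1)
print(monitor.needs_review())  # True
```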

Sharing of information on serious incidents

Reporting of serious incidents

1.  Providers of high-risk AI systems placed on the Union market shall report any serious incident to the market surveillance authorities of the Member States where that incident occurred.

2.   The report referred to in paragraph 1 shall be made immediately after the provider has established a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider or, where applicable, the deployer, becomes aware of the serious incident.

The period for the reporting referred to in the first subparagraph shall take account of the severity of the serious incident.

3.   Notwithstanding paragraph 2 of this Article, in the event of a widespread infringement or a serious incident as defined in Article 3, point (44)(b), the report referred to in paragraph 1 of this Article shall be provided immediately, and not later than two days after the provider or, where applicable, the deployer becomes aware of that incident.

5.   Notwithstanding paragraph 2, in the event of the death of a person, the report shall be provided immediately after the provider or the deployer has established, or as soon as it suspects, a causal relationship between the high-risk AI system and the serious incident, but not later than 10 days after the date on which the provider or, where applicable, the deployer becomes aware of the serious incident.

6.   Where necessary to ensure timely reporting, the provider or, where applicable, the deployer, may submit an initial report that is incomplete, followed by a complete report .

7.   Following the reporting of a serious incident pursuant to paragraph 1, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident, and corrective action.

The provider shall cooperate with the competent authorities, and where relevant with the notified body concerned, during the investigations referred to in the first subparagraph, and shall not perform any investigation which involves altering the AI system concerned in a way which may affect any subsequent evaluation of the causes of the incident, prior to informing the competent authorities of such action.

8.  Upon receiving a notification related to a serious incident referred to in Article 3, point (44)(c), the relevant market surveillance authority shall inform the national public authorities or bodies referred to in Article 77(1). The Commission shall develop dedicated guidance to facilitate compliance with the obligations set out in paragraph 1 of this Article. That guidance shall be issued by … [12 months after the entry into force of this Regulation] , and shall be assessed regularly .

9.   The market surveillance authority shall take appropriate measures, as provided for in Article 19 of Regulation (EU) 2019/1020, within seven days from the date it received the notification referred to in paragraph 1 of this Article, and shall follow the notification procedures as provided in that Regulation.

10.  For high-risk AI systems referred to in Annex III that are placed on the market or put into service by providers that are subject to Union legislative instruments laying down reporting obligations equivalent to those set out in this Regulation, the notification of serious incidents shall be limited to those referred to in Article 3, point (44)(c).

11.   For high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulations (EU) 2017/745 and (EU) 2017/746, the notification of serious incidents shall be limited to those referred to in Article 3, point (44)(c) of this Regulation, and shall be made to the national competent authority chosen for that purpose by the Member States where the incident occurred.

12.   National competent authorities shall immediately notify the Commission of any serious incident, whether or not they have taken action on it, in accordance with Article 20 of Regulation (EU) 2019/1020.
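As a hypothetical illustration of the reporting windows set out in paragraphs 2, 3 and 5 above (15 days by default, two days for a widespread infringement or an incident under Article 3, point (44)(b), and 10 days in the event of the death of a person), the helper below maps awareness of an incident onto a latest reporting date. The function name and flags are invented.

```python
# Hypothetical sketch only — deadlines run from the day the provider or deployer
# becomes aware of the serious incident.
from datetime import date, timedelta

def reporting_deadline(aware_on: date, widespread: bool = False,
                       death: bool = False) -> date:
    if widespread:
        days = 2      # paragraph 3
    elif death:
        days = 10     # paragraph 5
    else:
        days = 15     # paragraph 2
    return aware_on + timedelta(days=days)

print(reporting_deadline(date(2025, 5, 1)))                   # 2025-05-16
print(reporting_deadline(date(2025, 5, 1), widespread=True))  # 2025-05-03
print(reporting_deadline(date(2025, 5, 1), death=True))       # 2025-05-11
```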

Enforcement

Market surveillance and control of AI systems in the Union market

1.  Regulation (EU) 2019/1020 shall apply to AI systems covered by this Regulation. For the purposes of the effective enforcement of this Regulation:

(a)  any reference to an economic operator under Regulation (EU) 2019/1020 shall be understood as including all operators identified in Article 2(1) of this Regulation;

(b)  any reference to a product under Regulation (EU) 2019/1020 shall be understood as including all AI systems falling within the scope of this Regulation.

2.   As part of their reporting obligations under Article 34(4) of Regulation (EU) 2019/1020, the market surveillance authorities shall report annually to the Commission and relevant national competition authorities any information identified in the course of market surveillance activities that may be of potential interest for the application of Union law on competition rules. They shall also annually report to the Commission about the use of prohibited practices that occurred during that year and about the measures taken.

3.  For high-risk AI systems related to products covered by the Union harmonisation legislation listed in Section A of Annex I, the market surveillance authority for the purposes of this Regulation shall be the authority responsible for market surveillance activities designated under those legal acts. By derogation from the first subparagraph, and in appropriate circumstances, Member States may designate another relevant authority to act as a market surveillance authority, provided they ensure coordination with the relevant sectoral market surveillance authorities responsible for the enforcement of the legal acts listed in Annex I.

4.   The procedures referred to in Articles 79 to 83 of this Regulation shall not apply to AI systems related to products covered by the Union harmonisation legislation listed in section A of Annex I, where such legal acts already provide for procedures ensuring an equivalent level of protection and having the same objective. In such cases, the relevant sectoral procedures shall apply instead.

5.   Without prejudice to the powers of market surveillance authorities under Article 14 of Regulation (EU) 2019/1020, for the purpose of ensuring the effective enforcement of this Regulation, market surveillance authorities may exercise the powers referred to in Article 14(4), points (d) and (j), of that Regulation remotely, as appropriate.

6.  For high-risk AI systems placed on the market, put into service, or used by financial institutions regulated by Union financial services law, the market surveillance authority for the purposes of this Regulation shall be the relevant national authority responsible for the financial supervision of those institutions under that legislation in so far as the placing on the market, putting into service, or the use of the AI system is in direct connection with the provision of those financial services .

7.   By way of derogation from paragraph 6, in appropriate circumstances, and provided that coordination is ensured, another relevant authority may be identified by the Member State as market surveillance authority for the purposes of this Regulation.

National market surveillance authorities supervising credit institutions regulated under Directive 2013/36/EU, which are participating in the Single Supervisory Mechanism established by Regulation (EU) No 1024/2013, should report, without delay, to the European Central Bank any information identified in the course of their market surveillance activities that may be of potential interest for the prudential supervisory tasks of the European Central Bank specified in that Regulation.

8.  For high-risk AI systems listed in point 1 of Annex III, in so far as the systems are used for law enforcement purposes, border management and justice and democracy, and for high-risk AI systems listed in points 6 , 7 and 8 of Annex III to this Regulation, Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Regulation (EU) 2016/679 or Directive (EU) 2016/680, or any other authority designated pursuant to the same conditions laid down in Articles 41 to 44 of Directive (EU) 2016/680. Market surveillance activities shall in no way affect the independence of judicial authorities, or otherwise interfere with their activities when acting in their judicial capacity.

9.  Where Union institutions, bodies, offices or agencies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as their market surveillance authority, except in relation to the Court of Justice of the European Union acting in its judicial capacity .

10.  Member States shall facilitate coordination between market surveillance authorities designated under this Regulation and other relevant national authorities or bodies which supervise the application of Union harmonisation legislation listed in Annex I, or in other Union law, that might be relevant for the high-risk AI systems referred to in Annex III.

11.   Market surveillance authorities and the Commission shall be able to propose joint activities, including joint investigations, to be conducted by either market surveillance authorities or market surveillance authorities jointly with the Commission, that have the aim of promoting compliance, identifying non-compliance, raising awareness or providing guidance in relation to this Regulation with respect to specific categories of high-risk AI systems that are found to present a serious risk across two or more Member States in accordance with Article 9 of Regulation (EU) 2019/1020. The AI Office shall provide coordination support for joint investigations.

12.   Without prejudice to the powers provided for under Regulation (EU) 2019/1020, and where relevant and limited to what is necessary to fulfil their tasks, the market surveillance authorities shall be granted full access by providers to the documentation as well as the training, validation and testing data sets used for the development of high-risk AI systems, including, where appropriate and subject to security safeguards, through application programming interfaces (‘API’) or other relevant technical means and tools enabling remote access.

13.   Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned request and only when both of the following conditions are fulfilled:

(a)   access to source code is necessary to assess the conformity of a high-risk AI system with the requirements set out in Chapter III, Section 2; and,

(b)   testing or auditing procedures and verifications based on the data and documentation provided by the provider have been exhausted or proved insufficient.

14.   Any information or documentation obtained by market surveillance authorities shall be treated in compliance with the confidentiality obligations set out in Article 78.
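As a hypothetical illustration of paragraph 13 above, the sketch below simply encodes that source-code access requires a reasoned request and that both cumulative conditions, (a) and (b), are satisfied. The function and parameter names are invented.

```python
# Hypothetical sketch only — a gate expressing the two cumulative conditions.
def source_code_access_allowed(reasoned_request: bool,
                               necessary_for_conformity: bool,
                               other_checks_exhausted: bool) -> bool:
    # Condition (a): access is necessary to assess conformity with Chapter III, Section 2.
    # Condition (b): testing/auditing based on provider data has been exhausted or proved insufficient.
    return reasoned_request and necessary_for_conformity and other_checks_exhausted

print(source_code_access_allowed(True, True, False))  # False: documentation-based checks not yet exhausted
print(source_code_access_allowed(True, True, True))   # True
```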

Mutual assistance, market surveillance and control of general-purpose AI systems

1.   Where an AI system is based on a general-purpose AI model, and the model and the system are developed by the same provider, the AI Office shall have powers to monitor and supervise compliance of that AI system with obligations under this Regulation. To carry out its monitoring and supervision tasks, the AI Office shall have all the powers of a market surveillance authority within the meaning of Regulation (EU) 2019/1020.

2.   Where the relevant market surveillance authorities have sufficient reason to consider general-purpose AI systems that can be used directly by deployers for at least one purpose that is classified as high-risk pursuant to this Regulation to be non-compliant with the requirements laid down in this Regulation, they shall cooperate with the AI Office to carry out compliance evaluations, and shall inform the Board and other market surveillance authorities accordingly.

3.   Where a national market surveillance authority is unable to conclude its investigation of the high-risk AI system because of its inability to access certain information related to the AI model despite having made all appropriate efforts to obtain that information, it may submit a reasoned request to the AI Office, by which access to that information shall be enforced. In that case, the AI Office shall supply to the applicant authority without delay, and in any event within 30 days, any information that the AI Office considers to be relevant in order to establish whether a high-risk AI system is non-compliant. National market surveillance authorities shall safeguard the confidentiality of the information they obtain in accordance with Article 78 of this Regulation. The procedure provided for in Chapter VI of Regulation (EU) 2019/1020 shall apply mutatis mutandis.

Supervision of testing in real world conditions by market surveillance authorities

1.   Market surveillance authorities shall have competences and powers to ensure that testing in real world conditions is in accordance with this Regulation.

2.   Where testing in real world conditions is conducted for AI systems that are supervised within an AI regulatory sandbox under Article 59, the market surveillance authorities shall verify the compliance with the provisions of Article 60 as part of their supervisory role for the AI regulatory sandbox. Those authorities may, as appropriate, allow the testing in real world conditions to be conducted by the provider or prospective provider, in derogation from the conditions set out in Article 60(4), points (f) and (g).

3.   Where a market surveillance authority has been informed by the prospective provider, the provider or any third party of a serious incident or has other grounds for considering that the conditions set out in Articles 60 and 61 are not met, it may take either of the following decisions on its territory, as appropriate:

(a)   to suspend or terminate the testing in real world conditions;

(b)   to require the provider or prospective provider and users to modify any aspect of the testing in real world conditions.

4.   Where a market surveillance authority has taken a decision referred to in paragraph 3 of this Article, or has issued an objection within the meaning of Article 60(4), point (b), the decision or the objection shall indicate the grounds therefor and how the provider or prospective provider can challenge the decision or objection.

5.   Where applicable, where a market surveillance authority has taken a decision referred to in paragraph 3, it shall communicate the grounds therefor to the market surveillance authorities of other Member States in which the AI system has been tested in accordance with the testing plan.

Powers of authorities protecting fundamental rights

1.  National public authorities or bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights , including the right to non-discrimination, in relation to the use of high-risk AI systems referred to in Annex III shall have the power to request and access any documentation created or maintained under this Regulation in accessible language and format when access to that documentation is necessary for effectively fulfilling their mandates within the limits of their jurisdiction. The relevant public authority or body shall inform the market surveillance authority of the Member State concerned of any such request.

2.  By … [three months after the entry into force of this Regulation], each Member State shall identify the public authorities or bodies referred to in paragraph 1 and make a list of them publicly available. Member States shall notify the list to the Commission and to the other Member States, and shall keep the list up to date.

3.  Where the documentation referred to in paragraph 1 is insufficient to ascertain whether an infringement of obligations under Union law protecting fundamental rights has occurred, the public authority or body referred to in paragraph 1 may make a reasoned request to the market surveillance authority , to organise testing of the high-risk AI system through technical means. The market surveillance authority shall organise the testing with the close involvement of the requesting public authority or body within a reasonable time following the request.

4.  Any information or documentation obtained by the national public authorities or bodies referred to in paragraph 1 of this Article pursuant to this Article shall be treated in compliance with the confidentiality obligations set out in Article 78.

Confidentiality

1.   The Commission, market surveillance authorities and notified bodies and any other natural or legal person involved in the application of this Regulation shall , in accordance with Union and national law, respect the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular :

(a)  the intellectual property rights and confidential business information or trade secrets of a natural or legal person, including source code, except in the cases referred to in Article 5 of Directive (EU) 2016/943 of the European Parliament and of the Council (65) on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure ;

(b)  the effective implementation of this Regulation, in particular for the purposes of inspections, investigations or audits;

(c)   public and national security interests;

(d)  the conduct of criminal or administrative proceedings;

(e)   information classified pursuant to Union or national law.

2.   The authorities involved in the application of this Regulation pursuant to paragraph 1 shall request only data that is strictly necessary for the assessment of the risk posed by AI systems and for the exercise of their powers in compliance with this Regulation and Regulation (EU) 2019/1020. They shall put in place adequate and effective cybersecurity measures to protect the security and confidentiality of the information and data obtained, and shall delete the data collected as soon as it is no longer needed for the purpose for which it was obtained, in accordance with applicable Union or national law.

3.  Without prejudice to paragraphs 1 and 2 , information exchanged on a confidential basis between the national competent authorities or between national competent authorities and the Commission shall not be disclosed without prior consultation of the originating national competent authority and the deployer when high-risk AI systems referred to in point 1, 6 or 7 of Annex III are used by law enforcement, border control, immigration or asylum authorities and when such disclosure would jeopardise public and national security interests. This exchange of information shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.

When the law enforcement, immigration or asylum authorities are providers of high-risk AI systems referred to in point 1, 6 or 7 of Annex III, the technical documentation referred to in Annex IV shall remain within the premises of those authorities. Those authorities shall ensure that the market surveillance authorities referred to in Article 74(8) and (9), as applicable, can, upon request, immediately access the documentation or obtain a copy thereof. Only staff of the market surveillance authority holding the appropriate level of security clearance shall be allowed to access that documentation or any copy thereof.

4.  Paragraphs 1, 2 and 3 shall not affect the rights or obligations of the Commission, Member States and their relevant authorities, as well as those of notified bodies , with regard to the exchange of information and the dissemination of warnings , including in the context of cross-border cooperation , nor shall they affect the obligations of the parties concerned to provide information under criminal law of the Member States.

5.  The Commission and Member States may exchange, where necessary and in accordance with relevant provisions of international and trade agreements , confidential information with regulatory authorities of third countries with which they have concluded bilateral or multilateral confidentiality arrangements guaranteeing an adequate level of confidentiality.

Procedure at national level for dealing with AI systems presenting a risk

1.  AI systems presenting a risk shall be understood as a “product presenting a risk” as defined in Article 3, point 19 of Regulation (EU) 2019/1020, in so far as they present risks to the health or safety, or to fundamental rights, of persons.

2.  Where the market surveillance authority of a Member State has sufficient reason to consider an AI system to present a risk as referred to in paragraph 1 of this Article, it shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. Particular attention shall be given to AI systems presenting a risk to groups of vulnerable persons referred to in Article 5. Where risks to fundamental rights of persons are identified , the market surveillance authority shall also inform and fully cooperate with the relevant national public authorities or bodies referred to in Article 77(1). The relevant operators shall cooperate as necessary with the market surveillance authority and with the other national public authorities or bodies referred to in Article 77(1) .

Where, in the course of that evaluation, the market surveillance authority or, where applicable the market surveillance authority in cooperation with the national public authority referred to in Article 77(1), finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without undue delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a period the market surveillance authority may prescribe, and in any event within the shorter of 15 working days, or as provided for in the relevant Union harmonisation legislation.

The market surveillance authority shall inform the relevant notified body accordingly. Article 18 of Regulation (EU) 2019/1020 shall apply to the measures referred to in the second subparagraph of this paragraph.

3.  Where the market surveillance authority considers that the non-compliance is not restricted to its national territory, it shall inform the Commission and the other Member States without undue delay of the results of the evaluation and of the actions which it has required the operator to take.

4.  The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market.

5.  Where the operator of an AI system does not take adequate corrective action within the period referred to in paragraph 2, the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the AI system's being made available on its national market or put into service, to withdraw the product or the standalone AI system from that market or to recall it. That authority shall without undue delay notify the Commission and the other Member States of those measures.

6.  The notification referred to in paragraph 5 shall include all available details, in particular the information necessary for the identification of the non-compliant AI system, the origin of the AI system and the supply chain, the nature of the non-compliance alleged and the risk involved, the nature and duration of the national measures taken and the arguments put forward by the relevant operator. In particular, the market surveillance authorities shall indicate whether the non-compliance is due to one or more of the following:

(a)   non-compliance with the prohibition of the AI practices referred to in Article 5;

(b)  a failure of a high-risk AI system to meet requirements set out in Chapter III, Section 2;

(c)  shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 conferring a presumption of conformity;

(d)   non-compliance with Article 50.

7.  The market surveillance authorities of the Member States other than the market surveillance authority of the Member State initiating the procedure shall, without undue delay, inform the Commission and the other Member States of any measures adopted and of any additional information at their disposal relating to the non-compliance of the AI system concerned, and, in the event of disagreement with the notified national measure, of their objections.

8.  Where, within three months of receipt of the notification referred to in paragraph 5 of this Article, no objection has been raised by either a market surveillance authority of a Member State or by the Commission in respect of a provisional measure taken by a market surveillance authority of another Member State, that measure shall be deemed justified. This shall be without prejudice to the procedural rights of the concerned operator in accordance with Article 18 of Regulation (EU) 2019/1020. The three-month period referred to in this paragraph shall be reduced to 30 days in the event of non-compliance with the prohibition of the AI practices referred to in Article 5 of this Regulation.
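
The sketch below is purely illustrative and not part of the Regulation: it shows one way to compute the objection window described in this paragraph. The function name, the simplified calendar-month arithmetic and the example dates are assumptions, and the formal Union-law rules on the calculation of periods are not reproduced.

```python
import calendar
from datetime import date, timedelta

def objection_deadline(notification_date: date, article_5_breach: bool) -> date:
    """Illustrative calculator for the objection window described above (not legal advice)."""
    if article_5_breach:
        # 30-day window where the Article 5 prohibitions are at issue
        return notification_date + timedelta(days=30)
    # Three-month window for other provisional measures, using simplified month arithmetic
    month = notification_date.month + 3
    year = notification_date.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    day = min(notification_date.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Hypothetical example: a notification received on 15 January 2026
print(objection_deadline(date(2026, 1, 15), article_5_breach=False))  # 2026-04-15
print(objection_deadline(date(2026, 1, 15), article_5_breach=True))   # 2026-02-14
```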

9.  The market surveillance authorities of the Member States shall ensure that appropriate restrictive measures are taken in respect of the product or the AI system concerned, such as withdrawal of the product or the AI system from their market, without undue delay.

Procedure for dealing with AI systems classified by the provider as non-high-risk in application of Annex III

1.   Where a market surveillance authority has sufficient reason to consider that an AI system classified by the provider as non-high-risk pursuant to Article 6(3) is indeed high-risk, the market surveillance authority shall carry out an evaluation of the AI system concerned in respect of its classification as a high-risk AI system based on the conditions set out in Article 6(3) and the Commission guidelines.

2.   Where, in the course of that evaluation, the market surveillance authority finds that the AI system concerned is high-risk, it shall without undue delay require the relevant provider to take all necessary actions to bring the AI system into compliance with the requirements and obligations laid down in this Regulation, as well as take appropriate corrective action within a period the market surveillance authority may prescribe.

3.   Where the market surveillance authority considers that the use of the AI system concerned is not restricted to its national territory, it shall inform the Commission and the other Member States without undue delay of the results of the evaluation and of the actions which it has required the provider to take.

4.   The provider shall ensure that all necessary action is taken to bring the AI system into compliance with the requirements and obligations laid down in this Regulation. Where the provider of an AI system concerned does not bring the AI system into compliance with those requirements and obligations within the period referred to in paragraph 2 of this Article, the provider shall be subject to fines in accordance with Article 99.

5.   The provider shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market.

6.   Where the provider of the AI system concerned does not take adequate corrective action within the period referred to in paragraph 2 of this Article, Article 79(5) to (9) shall apply.

7.   Where, in the course of the evaluation pursuant to paragraph 1 of this Article, the market surveillance authority establishes that the AI system was misclassified by the provider as not high-risk in order to circumvent the application of requirements in Chapter III, Section 2, the provider shall be subject to fines in accordance with Article 99.

8.   In exercising their power to monitor the application of this Article, and in accordance with Article 11 of Regulation (EU) 2019/1020, market surveillance authorities may perform appropriate checks, taking into account in particular information stored in the EU database referred to in Article 71 of this Regulation.

Union safeguard procedure

1.  Where, within three months of receipt of the notification referred to in Article 79(5), or within 30 days in the case of non-compliance with the prohibition of the AI practices referred to in Article 5, objections are raised by the market surveillance authority of a Member State to a measure taken by another market surveillance authority, or where the Commission considers the measure to be contrary to Union law, the Commission shall without undue delay enter into consultation with the market surveillance authority of the relevant Member State and the operator or operators, and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall, within six months, or within 60 days in the case of non-compliance with the prohibition of the AI practices referred to in Article 5, starting from the notification referred to in Article 79(5), decide whether the national measure is justified and shall notify its decision to the market surveillance authority of the Member State concerned. The Commission shall also inform all other market surveillance authorities of its decision.

2.  Where the Commission considers the measure taken by the relevant Member State to be justified, all Member States shall ensure that they take appropriate restrictive measures in respect of the AI system concerned, such as requiring the withdrawal of the AI system from their market without undue delay, and shall inform the Commission accordingly. Where the Commission considers the national measure to be unjustified, the Member State concerned shall withdraw the measure and shall inform the Commission accordingly.

3.  Where the national measure is considered justified and the non-compliance of the AI system is attributed to shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 of this Regulation, the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No 1025/2012.

Compliant AI systems which present a risk

1.  Where, having performed an evaluation under Article 79, after consulting the relevant national public authority referred to in Article 77(1), the market surveillance authority of a Member State finds that although a high-risk AI system is in compliance with this Regulation, it nevertheless presents a risk to the health or safety of persons, to fundamental rights of persons, or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk without undue delay, within a period it may prescribe.

2.  The provider or other relevant operator shall ensure that corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market within the timeline prescribed by the market surveillance authority of the Member State referred to in paragraph 1.

3.  The Member States shall immediately inform the Commission and the other Member States of a finding under paragraph 1. That information shall include all available details, in particular the data necessary for the identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken.

4.  The Commission shall without undue delay enter into consultation with the Member State or Member States concerned and the relevant operators, and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Commission shall decide whether the measure is justified and, where necessary, propose other appropriate measures.

5.  The Commission shall immediately communicate its decision to the Member States concerned and to the relevant operators. It shall also inform the other Member States.

Formal non-compliance

1.  Where the market surveillance authority of a Member State makes one of the following findings, it shall require the relevant provider to put an end to the non-compliance concerned, within a period it may prescribe:

(a)  a CE marking has been affixed in violation of Article 48;

(b)  a CE marking has not been affixed;

(c)  an EU declaration of conformity has not been drawn up;

(d)  an EU declaration of conformity has not been drawn up correctly;

(e)   registration in the EU database has not been carried out;

(f)   where applicable, an authorised representative has not been appointed;

(g)   technical documentation is not available.

2.  Where the non-compliance referred to in paragraph 1 persists, the market surveillance authority of the Member State concerned shall take appropriate and proportionate measures to restrict or prohibit the high-risk AI system being made available on the market or to ensure that it is recalled or withdrawn from the market without delay .

Union AI testing support structures

1.   The Commission shall designate one or more Union AI testing support structures to perform the tasks listed under Article 21(6) of Regulation (EU) 2019/1020 in the area of AI.

2.   Without prejudice to the tasks referred to in paragraph 1, Union AI testing support structures shall also provide independent technical or scientific advice at the request of the Board, the Commission, or of market surveillance authorities.

Right to lodge a complaint with a market surveillance authority

Without prejudice to other administrative or judicial remedies, any natural or legal person having grounds to consider that there has been an infringement of the provisions of this Regulation may submit reasoned complaints to the relevant market surveillance authority.

In accordance with Regulation (EU) 2019/1020, such complaints shall be taken into account for the purpose of conducting market surveillance activities, and shall be handled in line with the dedicated procedures established therefor by the market surveillance authorities.

Right to explanation of individual decision-making

1.   Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III, with the exception of systems listed under point 2 thereof, and which produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.

2.   Paragraph 1 shall not apply to the use of AI systems for which exceptions from, or restrictions to, the obligation under paragraph 1 follow from Union or national law in compliance with Union law.

3.   This Article shall apply only to the extent that the right referred to in paragraph 1 is not otherwise provided for under Union law.

Reporting of infringements and protection of reporting persons

Directive (EU) 2019/1937 shall apply to the reporting of infringements of this Regulation and the protection of persons reporting such infringements.

Supervision, investigation, enforcement and monitoring in respect of providers of general-purpose AI models

Enforcement of the obligations of providers of general-purpose AI models

1.   The Commission shall have exclusive powers to supervise and enforce Chapter V, taking into account the procedural guarantees under Article 94. The Commission shall entrust the implementation of these tasks to the AI Office, without prejudice to the powers of organisation of the Commission and the division of competences between Member States and the Union based on the Treaties.

2.   Without prejudice to Article 75(3), market surveillance authorities may request the Commission to exercise the powers laid down in this Section, where that is necessary and proportionate to assist with the fulfilment of their tasks under this Regulation.

Monitoring actions

1.   For the purpose of carrying out the tasks assigned to it under this Section, the AI Office may take the necessary actions to monitor the effective implementation and compliance with this Regulation by providers of general-purpose AI models, including their adherence to approved codes of practice.

2.   Downstream providers shall have the right to lodge a complaint alleging an infringement of this Regulation. A complaint shall be duly reasoned and indicate at least:

(a)   the point of contact of the provider of the general-purpose AI model concerned;

(b)   a description of the relevant facts, the provisions of this Regulation concerned, and the reason why the downstream provider considers that the provider of the general-purpose AI model concerned infringed this Regulation;

(c)   any other information that the downstream provider that sent the request considers relevant, including, where appropriate, information gathered on its own initiative.
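
As an informal illustration only, and not part of the Regulation, a downstream provider's internal tooling could capture the minimum content listed in points (a) to (c) above as a simple record; the class name, field names and example values below are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class DownstreamComplaint:
    """Minimal record mirroring the elements a complaint must indicate (points (a) to (c) above)."""
    provider_contact: str            # (a) point of contact of the general-purpose AI model provider concerned
    facts: str                       # (b) description of the relevant facts
    provisions_concerned: list[str]  # (b) provisions of this Regulation at issue
    reasons: str                     # (b) why the downstream provider considers them infringed
    other_information: list[str] = field(default_factory=list)  # (c) any other relevant information

# Hypothetical example of a reasoned complaint record
complaint = DownstreamComplaint(
    provider_contact="contact@example-model-provider.eu",
    facts="Documentation needed to integrate the model downstream was not made available.",
    provisions_concerned=["Article 53(1), point (b)"],
    reasons="The withheld information is required for the downstream provider's own compliance.",
    other_information=["Correspondence gathered on the complainant's own initiative"],
)
```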

Alerts of systemic risks by the scientific panel

1.   The scientific panel may provide a qualified alert to the AI Office where it has reason to suspect that:

(a)   a general-purpose AI model poses a concrete identifiable risk at Union level; or,

(b)   a general-purpose AI model meets the requirements referred to in Article 51.

2.   Upon such qualified alert, the Commission, through the AI Office and after having informed the Board, may exercise the powers laid down in this Chapter for the purpose of assessing the matter. The AI Office shall inform the Board of any measure according to Articles 91 to 94.

3.   A qualified alert shall be duly reasoned and indicate at least:

(a)   the point of contact of the provider of the general-purpose AI model with systemic risk concerned;

(b)   a description of the relevant facts and the reasons for the alert by the scientific panel;

(c)   any other information that the scientific panel considers to be relevant, including, where appropriate, information gathered on its own initiative.

Power to request documentation and information

1.   The Commission may request the provider of the general-purpose AI model concerned to provide the documentation drawn up by the provider according to Articles 53 and 55, or any additional information that is necessary for the purpose of assessing compliance of the provider with this Regulation.

2.   Before sending the request for information, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model.

3.   Upon a duly substantiated request from the scientific panel, the Commission may issue a request for information to a provider of a general-purpose AI model, where the access to information is necessary and proportionate for the fulfilment of the tasks of the scientific panel under Article 68(2).

4.   The request for information shall state the legal basis and the purpose of the request, specify what information is required, and set a period within which the information is to be provided, and indicate the fines provided for in Article 101 for supplying incorrect, incomplete or misleading information.

5.   The provider of the general-purpose AI model concerned, or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall supply the information requested on behalf of the provider of the general-purpose AI model concerned. Lawyers duly authorised to act may supply information on behalf of their clients. The clients shall nevertheless remain fully responsible if the information supplied is incomplete, incorrect or misleading.

Power to conduct evaluations

1.   The AI Office, after consulting the Board, may conduct evaluations of the general-purpose AI model concerned:

(a)   to assess compliance of the provider with obligations under this Regulation, where the information gathered pursuant to Article 91 is insufficient; or,

(b)   to investigate systemic risks at Union level of general-purpose AI models with systemic risk, in particular following a qualified alert from the scientific panel in accordance with Article 89(1), point (a).

2.   The Commission may decide to appoint independent experts to carry out evaluations on its behalf, including from the scientific panel established pursuant to Article 68. Independent experts appointed for this task shall meet the criteria outlined in Article 68(2).

3.   For the purposes of paragraph 1, the Commission may request access to the general-purpose AI model concerned through APIs or further appropriate technical means and tools, including source code.

4.   The request for access shall state the legal basis, the purpose and reasons of the request and set the period within which the access is to be provided, and the fines provided for in Article 101 for failure to provide access.

5.   The providers of the general-purpose AI model concerned and, in the case of legal persons, companies or firms, or where they have no legal personality, the persons authorised to represent them by law or by their constitution, shall provide the access requested on behalf of the provider of the general-purpose AI model concerned.

6.   The Commission shall adopt implementing acts setting out the detailed arrangements and the conditions of the evaluations, including the detailed arrangements for involving independent experts, and the procedure for the selection thereof. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2).

7.   Prior to requesting access to the general-purpose AI model concerned, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model to gather more information on the internal testing of the model, internal safeguards for preventing systemic risks, and other internal procedures and measures the provider has taken to mitigate such risks.

Power to request measures

1.   Where necessary and appropriate, the Commission may request providers to:

(a)   take appropriate measures to comply with the obligations set out in Article 53;

(b)   implement mitigation measures, where the evaluation carried out in accordance with Article 92 has given rise to serious and substantiated concern of a systemic risk at Union level;

(c)   restrict the making available on the market, withdraw or recall the model.

2.   Before a measure is requested, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model.

3.   If, during the structured dialogue referred to in paragraph 2, the provider of the general-purpose AI model with systemic risk offers commitments to implement mitigation measures to address a systemic risk at Union level, the Commission may, by decision, make those commitments binding and declare that there are no further grounds for action.

Procedural rights of economic operators of the general-purpose AI model

Article 18 of Regulation (EU) 2019/1020 shall apply mutatis mutandis to the providers of the general-purpose AI model, without prejudice to more specific procedural rights provided for in this Regulation.

CODES OF CONDUCT AND GUIDELINES

Codes of conduct for voluntary application of specific requirements

1.  The AI Office and the Member States shall encourage and facilitate the drawing up of codes of conduct, including related governance mechanisms, intended to foster the voluntary application to AI systems, other than high-risk AI systems, of some or all of the requirements set out in Chapter III, Section 2, taking into account the available technical solutions and industry best practices allowing for the application of such requirements.

2.  The AI Office and the Member States shall facilitate the drawing up of codes of conduct concerning the voluntary application, including by deployers, of specific requirements to all AI systems, on the basis of clear objectives and key performance indicators to measure the achievement of those objectives, including elements such as, but not limited to:

(a)   applicable elements provided for in Union ethical guidelines for trustworthy AI;

(b)   assessing and minimising the impact of AI systems on environmental sustainability, including as regards energy-efficient programming and techniques for the efficient design, training and use of AI;

(c)   promoting AI literacy, in particular that of persons dealing with the development, operation and use of AI;

(d)   facilitating an inclusive and diverse design of AI systems, including through the establishment of inclusive and diverse development teams and the promotion of stakeholders’ participation in that process;

(e)   assessing and preventing the negative impact of AI systems on vulnerable persons or groups of vulnerable persons, including as regards accessibility for persons with a disability, as well as on gender equality.

3.  Codes of conduct may be drawn up by individual providers or deployers of AI systems or by organisations representing them or by both, including with the involvement of deployers and any interested stakeholders and their representative organisations, including civil society organisations and academia. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems.

4.  The AI Office and the Member States shall take into account the specific interests and needs of SMEs, including start-ups, when encouraging and facilitating the drawing up of codes of conduct.

Guidelines from the Commission on the implementation of this Regulation

1.   The Commission shall develop guidelines on the practical implementation of this Regulation, and in particular on:

(a)   the application of the requirements and obligations referred to in Articles 8 to 15 and in Article 25;

(b)   the prohibited practices referred to in Article 5;

(c)   the practical implementation of the provisions related to substantial modification;

(d)   the practical implementation of transparency obligations laid down in Article 50;

(e)   detailed information on the relationship of this Regulation with the Union harmonisation legislation listed in Annex I, as well as with other relevant Union law, including as regards consistency in their enforcement;

(f)   the application of the definition of an AI system as set out in Article 3(1).

When issuing such guidelines, the Commission shall pay particular attention to the needs of SMEs including start-ups, of local public authorities and of the sectors most likely to be affected by this Regulation.

The guidelines referred to in the first subparagraph shall take due account of the generally acknowledged state of the art on AI, as well as of relevant harmonised standards and common specifications that are referred to in Articles 40 and 41, or of those harmonised standards or technical specifications that are set out pursuant to Union harmonisation law.

2.   Upon request of the Member States or the AI Office, or on its own initiative, the Commission shall update guidelines previously adopted when deemed necessary.

DELEGATION OF POWER AND COMMITTEE PROCEDURE

Exercise of the delegation

1.  The power to adopt delegated acts is conferred on the Commission subject to the conditions laid down in this Article.

2.  The power to adopt delegated acts referred to in Article 6(6), Article 7(1) and (3), Article 11(3), Article 43(5) and (6), Article 47(5) , Article 51(3), Article 52(4) and Article 53(5) and (6) shall be conferred on the Commission for a period of five years from … [ date of entry into force of this Regulation]. The Commission shall draw up a report in respect of the delegation of power not later than nine months before the end of the five-year period. The delegation of power shall be tacitly extended for periods of an identical duration, unless the European Parliament or the Council opposes such extension not later than three months before the end of each period.

3.  The delegation of power referred to in Article 6(6), Article 7(1) and (3), Article 11(3), Article 43(5) and (6), Article 47(5) , Article 51(3), Article 52(4) and Article 53(5) and (6) may be revoked at any time by the European Parliament or by the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force.

4.  Before adopting a delegated act, the Commission shall consult experts designated by each Member State in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making.

5.  As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the European Parliament and to the Council.

6.  Any delegated act adopted pursuant to Article 6(6), Article 7(1) and (3), Article 11(3), Article 43(5) and (6), Article 47(5), Article 51(3), Article 52(4) and Article 53(5) and (6) shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council.

Committee procedure

1.  The Commission shall be assisted by a committee. That committee shall be a committee within the meaning of Regulation (EU) No 182/2011.

2.  Where reference is made to this paragraph, Article 5 of Regulation (EU) No 182/2011 shall apply.

CHAPTER XII

PENALTIES

1.  In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties and other enforcement measures, which may also include warnings and non-monetary measures, applicable to infringements of this Regulation by operators, and shall take all measures necessary to ensure that they are properly and effectively implemented, taking into account the guidelines issued by the Commission pursuant to Article 96. The penalties provided for shall be effective, proportionate and dissuasive. They shall take into account the interests of SMEs, including start-ups, and their economic viability.

2.  The Member States shall, without delay and at the latest by the date of entry into application, notify the Commission of the rules on penalties and of other enforcement measures referred to in paragraph 1, and shall notify it, without delay, of any subsequent amendment to them.

3.   Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative fines of up to 35 000 000 EUR or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.

4.  Non-compliance of an AI system with any of the following provisions related to operators or notified bodies, other than those laid down in Article 5, shall be subject to administrative fines of up to 15 000 000 EUR or, if the offender is an undertaking, up to 3 % of its total worldwide annual turnover for the preceding financial year, whichever is higher:

(a)   obligations of providers pursuant to Article 16;

(b)   obligations of authorised representatives pursuant to Article 22;

(c)   obligations of importers pursuant to Article 23;

(d)   obligations of distributors pursuant to Article 24;

(e)   obligations of deployers pursuant to Article 26 ;

(f)   requirements and obligations of notified bodies pursuant to Articles 31, 33(1), 33(3), 33(4) or 34;

(g)   transparency obligations for providers and deployers pursuant to Article 50.

5.  The supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request shall be subject to administrative fines of up to 7 500 000 EUR or, if the offender is an undertaking, up to 1 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.

6.   In the case of SMEs, including start-ups, each fine referred to in this Article shall be up to the percentages or amount referred to in paragraphs 3, 4 and 5, whichever thereof is lower.
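
Purely as an illustrative reading of the ceilings in paragraphs 3 to 6 above, and with no legal effect, the sketch below computes the maximum administrative fine: the higher of the fixed amount and the turnover-based amount for most operators and, under paragraph 6, the lower of the two for SMEs and start-ups. The function and parameter names are assumptions.

```python
def max_admin_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float, is_sme: bool) -> float:
    """Ceiling on the administrative fine under the paragraphs above (illustrative only).

    Fixed-cap / percentage pairs taken from the text: 35 000 000 / 7 % (Article 5 prohibitions),
    15 000 000 / 3 % (paragraph 4), 7 500 000 / 1 % (paragraph 5).
    """
    turnover_based = turnover_eur * pct_cap
    # Paragraph 6: for SMEs, including start-ups, the lower of the two ceilings applies
    return min(fixed_cap_eur, turnover_based) if is_sme else max(fixed_cap_eur, turnover_based)

# A large undertaking with EUR 2 bn turnover breaching an Article 5 prohibition
print(max_admin_fine(2_000_000_000, 35_000_000, 0.07, is_sme=False))  # 140000000.0
# An SME with EUR 10 m turnover and the same infringement: the lower ceiling applies
print(max_admin_fine(10_000_000, 35_000_000, 0.07, is_sme=True))      # 700000.0
```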

7.  When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and, as appropriate, regard shall be given to the following:

(a)  the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system, as well as, where appropriate, the number of affected persons and the level of damage suffered by them;

(b)  whether administrative fines have already been applied by other market surveillance authorities of one or more Member States to the same operator for the same infringement;

(c)   whether administrative fines have already been applied by other authorities to the same operator for infringements of other Union or national law, when such infringements result from the same activity or omission constituting a relevant infringement of this Regulation;

(d)  the size, the annual turnover and market share of the operator committing the infringement;

(e)   any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement;

(f)   the degree of cooperation with the national competent authorities, in order to remedy the infringement and mitigate the possible adverse effects of the infringement;

(g)   the degree of responsibility of the operator taking into account the technical and organisational measures implemented by it;

(h)   the manner in which the infringement became known to the national competent authorities, in particular whether, and if so to what extent, the operator notified the infringement;

(i)   the intentional or negligent character of the infringement;

(j)   any action taken by the operator to mitigate the harm suffered by the affected persons.

8.  Each Member State shall lay down rules on to what extent administrative fines may be imposed on public authorities and bodies established in that Member State.

9.  Depending on the legal system of the Member States, the rules on administrative fines may be applied in such a manner that the fines are imposed by competent national courts or by other bodies, as applicable in those Member States. The application of such rules in those Member States shall have an equivalent effect.

10.   The exercise by the market surveillance authority of its powers under this Article shall be subject to appropriate procedural safeguards in accordance with Union and national law, including effective judicial remedies and due process.

11.   Member States shall, on an annual basis, report to the Commission about the administrative fines they have issued during that year, in accordance with this Article, and about any related litigation or judicial proceedings.

Article 100

Administrative fines on Union institutions, bodies, offices and agencies

1.  The European Data Protection Supervisor may impose administrative fines on Union institutions, bodies, offices and agencies falling within the scope of this Regulation. When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and due regard shall be given to the following:

(a)  the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system concerned, as well as the number of affected persons and the level of damage suffered by them, and any relevant previous infringement;

(b)   the degree of responsibility of the Union institution, body, office or agency, taking into account technical and organisational measures implemented by them;

(c)   any action taken by the Union institution, body, office or agency to mitigate the damage suffered by affected persons;

(d)  the degree of cooperation with the European Data Protection Supervisor in order to remedy the infringement and mitigate the possible adverse effects of the infringement, including compliance with any of the measures previously ordered by the European Data Protection Supervisor against the Union institution, body, office or agency concerned with regard to the same subject matter;

(e)  any similar previous infringements by the Union institution, body, office or agency;

(f)   the manner in which the infringement became known to the European Data Protection Supervisor, in particular whether, and if so to what extent, the Union institution, body, office or agency notified the infringement;

(g)   the annual budget of the Union institution, body, office or agency.

2.   Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative fines of up to EUR 1 500 000.

3.  The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Article 5, shall be subject to administrative fines of up to EUR 750 000.

4.  Before taking decisions pursuant to this Article, the European Data Protection Supervisor shall give the Union institution, body, office or agency which is the subject of the proceedings conducted by the European Data Protection Supervisor the opportunity of being heard on the matter regarding the possible infringement. The European Data Protection Supervisor shall base his or her decisions only on elements and circumstances on which the parties concerned have been able to comment. Complainants, if any, shall be associated closely with the proceedings.

5.  The rights of defence of the parties concerned shall be fully respected in the proceedings. They shall be entitled to have access to the European Data Protection Supervisor’s file, subject to the legitimate interest of individuals or undertakings in the protection of their personal data or business secrets.

6.  Funds collected by imposition of fines in this Article shall contribute to the general budget of the Union. The fines shall not affect the effective operation of the Union institution, body, office or agency fined .

7.   The European Data Protection Supervisor shall, on an annual basis, notify the Commission of the administrative fines it has imposed pursuant to this Article and of any litigation or judicial proceedings it has initiated.

Article 101

Fines for providers of general-purpose AI models

1.   The Commission may impose on providers of general-purpose AI models fines not exceeding 3 % of their total worldwide turnover in the preceding financial year or 15 million EUR, whichever is higher, when the Commission finds that the provider intentionally or negligently:

(a)   infringed the relevant provisions of this Regulation;

(b)   failed to comply with a request for a document or for information pursuant to Article 91, or supplied incorrect, incomplete or misleading information;

(c)   failed to comply with a measure requested under Article 93;

(d)   failed to make available to the Commission access to the general-purpose AI model or general-purpose AI model with systemic risk with a view to conducting an evaluation pursuant to Article 92.

In fixing the amount of the fine or periodic penalty payment, regard shall be had to the nature, gravity and duration of the infringement, taking due account of the principles of proportionality and appropriateness. The Commission shall also take into account commitments made in accordance with Article 93(3) or made in relevant codes of practice in accordance with Article 56.

2.   Before adopting the decision pursuant to paragraph 1, the Commission shall communicate its preliminary findings to the provider of the general-purpose AI model or general-purpose AI model with systemic risk and give it an opportunity to be heard.

3.   Fines imposed in accordance with this Article shall be effective, proportionate and dissuasive.

4.   Information on fines imposed under this Article shall also be communicated to the Board as appropriate.

5.   The Court of Justice of the European Union shall have unlimited jurisdiction to review decisions of the Commission fixing a fine under this Article. It may cancel, reduce or increase the fine imposed.

6.   The Commission shall adopt implementing acts containing detailed arrangements for proceedings in view of the possible adoption of decisions pursuant to paragraph 1 of this Article. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2).

CHAPTER XIII

FINAL PROVISIONS

Article 102

Amendment to Regulation (EC) No 300/2008

‘When adopting detailed measures related to technical specifications and procedures for approval and use of security equipment concerning Artificial Intelligence systems within the meaning of Regulation (EU) 2024/… of the European Parliament and of the Council * (66) , the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.

________________

* Regulation (EU) 2024/… of the European Parliament and of the Council of … laying down harmonised rules on artificial intelligence (Artificial intelligence act) and amending certain Union legislative acts (OJ L, …, ELI: …).’

Article 103

Amendment to Regulation (EU) No 167/2013

‘When adopting delegated acts pursuant to the first subparagraph concerning artificial intelligence systems which are safety components within the meaning of Regulation (EU) 2024/… of the European Parliament and of the Council * (67) , the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.

Article 104

Amendment to Regulation (EU) No 168/2013

‘When adopting delegated acts pursuant to the first subparagraph concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/… of the European Parliament and of the Council * (68) , the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.

Article 105

Amendment to Directive 2014/90/EU

‘5. For Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/… of the European Parliament and of the Council * (69) , when carrying out its activities pursuant to paragraph 1 and when adopting technical specifications and testing standards in accordance with paragraphs 2 and 3, the Commission shall take into account the requirements set out in Title III, Chapter 2 of that Regulation.

Article 106

Amendment to Directive (EU) 2016/797

‘12. When adopting delegated acts pursuant to paragraph 1 and implementing acts pursuant to paragraph 11 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/… of the European Parliament and of the Council * (70) , the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.

Article 107

Amendment to Regulation (EU) 2018/858

‘4. When adopting delegated acts pursuant to paragraph 3 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/… of the European Parliament and of the Council * (71) , the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.

Article 108

Amendment to Regulation (EU) 2018/1139

Regulation (EU) 2018/1139 is amended as follows:

‘3. Without prejudice to paragraph 2, when adopting implementing acts pursuant to paragraph 1 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/… of the European Parliament and of the Council * (72) , the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.

* Regulation (EU) 2024/… of the European Parliament and of the Council of … laying down harmonised rules on artificial intelligence (Artificial intelligence act) and amending certain Union legislative acts (OJ L, …, ELI: …).’;

‘4. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/… (73) , the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.’;

‘4. When adopting implementing acts pursuant to paragraph 1 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/… (74) , the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.’;

‘3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/… + , the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.’;

‘When adopting those implementing acts concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/… + , the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.’;

‘3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/… (75) , the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.’.

Article 109

Amendment to Regulation (EU) 2019/2144

‘3. When adopting the implementing acts pursuant to paragraph 2, concerning artificial intelligence systems which are safety components within the meaning of Regulation (EU) 2024/… of the European Parliament and of the Council * (76) , the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.

Article 110

Amendment to Directive (EU) 2020/1828

‘(68) Regulation (EU) 2024/… of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (OJ L, ..., ELI: …)’.

Article 111

AI systems already placed on the market or put into service

1.   Without prejudice to the application of Article 5 as referred to in Article 113(3), point (a), AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex X that have been placed on the market or put into service before … [36 months from the date of entry into force of this Regulation] shall be brought into compliance with this Regulation by 31 December 2030.

The requirements laid down in this Regulation shall be taken into account in the evaluation of each large-scale IT system established by the legal acts listed in Annex X to be undertaken as provided for in those legal acts and where those legal acts are replaced or amended.

2.   Without prejudice to the application of Article 5 as referred to in Article 113(3), point (a), this Regulation shall apply to operators of high-risk AI systems, other than the systems referred to in paragraph 1 of this Article, that have been placed on the market or put into service before … [24 months from the date of entry into force of this Regulation], only if, as from that date, those systems are subject to significant changes in their designs. In the case of high-risk AI systems intended to be used by public authorities, the providers and deployers of such systems shall take the necessary steps to comply with the requirements of this Regulation by … [six years from the date of entry into force of this Regulation].

3.   Providers of general-purpose AI models that have been placed on the market before … [12 months from the date of entry into force of this Regulation] shall take the necessary steps in order to comply with the obligations laid down in this Regulation by … [36 months from the date of entry into force of this Regulation].

Article 112

Evaluation and review

1.  The Commission shall assess the need for amendment of the list in Annex III and of the list of prohibited AI practices in Article 5, once a year following the entry into force of this Regulation, and until the end of the period of the delegation of power set out in Article 97. The Commission shall submit the findings of that assessment to the European Parliament and the Council.

2.  By … [four years from the date of entry into force of this Regulation] and every four years thereafter, the Commission shall evaluate and report to the European Parliament and to the Council on the following:

(a)   the need for amendments extending existing area headings or adding new area headings in Annex III;

(b)   amendments to the list of AI systems requiring additional transparency measures in Article 50;

(c)   amendments enhancing the effectiveness of the supervision and governance system.

3.   By … [four years from the date of entry into force of this Regulation] and every four years thereafter, the Commission shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The report shall include an assessment with regard to the structure of enforcement and the possible need for a Union agency to resolve any identified shortcomings. On the basis of the findings, that report shall, where appropriate, be accompanied by a proposal for amendment of this Regulation. The reports shall be made public.

4.  The reports referred to in paragraph 2 shall devote specific attention to the following:

(a)  the status of the financial, technical and human resources of the national competent authorities in order to effectively perform the tasks assigned to them under this Regulation;

(b)  the state of penalties, in particular administrative fines as referred to in Article 99(1), applied by Member States for infringements of this Regulation;

(c)   adopted harmonised standards and common specifications developed to support this Regulation;

(d)   the number of undertakings that enter the market after the entry into application of this Regulation, and how many of them are SMEs.

5.   By ... [four years from the date of entry into force of this Regulation], the Commission shall evaluate the functioning of the AI Office, whether the Office has been given sufficient powers and competences to fulfil its tasks and whether it would be relevant and needed for the proper implementation and enforcement of this Regulation to upgrade the AI Office and its enforcement competences and to increase its resources. The Commission shall submit this evaluation report to the European Parliament and to the Council.

6.   By … [four years from the date of entry into force of this Regulation] and every four years thereafter, the Commission shall submit a report on the review of the progress on the development of standardisation deliverables on the energy-efficient development of general-purpose models, and assess the need for further measures or actions, including binding measures or actions. The report shall be submitted to the European Parliament and to the Council, and it shall be made public.

7.  By … [four years from the date of entry into force of this Regulation] and every three years thereafter, the Commission shall evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of the requirements set out in Chapter III, Section 2 for AI systems other than high-risk AI systems and possibly other additional requirements for AI systems other than high-risk AI systems, including as regards environmental sustainability.

8.  For the purposes of paragraphs 1 to 7, the Board, the Member States and national competent authorities shall provide the Commission with information upon its request and without undue delay.

9.  In carrying out the evaluations and reviews referred to in paragraphs 1 to 7, the Commission shall take into account the positions and findings of the Board, of the European Parliament, of the Council, and of other relevant bodies or sources.

10.  The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account developments in technology , the effect of AI systems on health and safety, and on fundamental rights, and in the light of the state of progress in the information society.

11.   To guide the evaluations and reviews referred to in paragraphs 1 to 7 of this Article, the AI Office shall undertake to develop an objective and participative methodology for the evaluation of risk levels based on the criteria outlined in the relevant Articles and the inclusion of new systems in:

(a)   the list in Annex III, including the extension of existing area headings or the addition of new area headings in that Annex;

(b)   the list of prohibited practices laid down in Article 5; and,

(c)   the list of AI systems requiring additional transparency measures pursuant to Article 50.

12.   Any amendment to this Regulation pursuant to paragraph 10, or relevant delegated or implementing acts, which concerns sectoral Union harmonisation legislation listed in Section B of Annex I shall take into account the regulatory specificities of each sector, and the existing governance, conformity assessment and enforcement mechanisms and authorities established therein.

13.   By … [seven years from the date of entry into force of this Regulation], the Commission shall carry out an assessment of the enforcement of this Regulation and shall report on it to the European Parliament, the Council and the European Economic and Social Committee, taking into account the first years of application of this Regulation. On the basis of the findings, that report shall, where appropriate, be accompanied by a proposal for amendment of this Regulation with regard to the structure of enforcement and the need for a Union agency to resolve any identified shortcomings.

Article 113

Entry into force and application

This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union .

It shall apply from … [24 months from the date of entry into force of this Regulation]. However:

(a)   Chapters I and II shall apply from … [six months from the date of entry into force of this Regulation];

(b)  Chapter III, Section 4, Chapter V, Chapter VII and Chapter XII shall apply from … [12 months from the date of entry into force of this Regulation], with the exception of Article 101;

(c)  Article 6(1) and the corresponding obligations in this Regulation shall apply from … [36 months from the date of entry into force of this Regulation].
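
To make the staggered timeline above easier to trace, the following illustrative helper, which is not part of the Regulation, derives the bracketed application dates from a given entry-into-force date. The function names, the simplified calendar-month arithmetic and the dictionary labels are assumptions.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date, clamping the day where needed (simplified rule)."""
    month = d.month + months
    year = d.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

def application_dates(entry_into_force: date) -> dict[str, date]:
    """Bracketed milestones from the Article above, expressed as offsets in months."""
    return {
        "general application (24 months)": add_months(entry_into_force, 24),
        "(a) Chapters I and II (6 months)": add_months(entry_into_force, 6),
        "(b) Chapter III Section 4, V, VII, XII except Article 101 (12 months)": add_months(entry_into_force, 12),
        "(c) Article 6(1) and corresponding obligations (36 months)": add_months(entry_into_force, 36),
    }

# Hypothetical example with an assumed entry-into-force date
for milestone, when in application_dates(date(2024, 8, 1)).items():
    print(milestone, when)
```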

This Regulation shall be binding in its entirety and directly applicable in all Member States.

For the European Parliament For the Council

The President The President

List of Union harmonisation legislation

Section A. List of Union harmonisation legislation based on the New Legislative Framework

1.  Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24) [as repealed by the Machinery Regulation];

2.  Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys (OJ L 170, 30.6.2009, p. 1);

3.  Directive 2013/53/EU of the European Parliament and of the Council of 20 November 2013 on recreational craft and personal watercraft and repealing Directive 94/25/EC (OJ L 354, 28.12.2013, p. 90);

4.  Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to lifts and safety components for lifts (OJ L 96, 29.3.2014, p. 251);

5.  Directive 2014/34/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to equipment and protective systems intended for use in potentially explosive atmospheres (OJ L 96, 29.3.2014, p. 309);

6.  Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC (OJ L 153, 22.5.2014, p. 62);

7.  Directive 2014/68/EU of the European Parliament and of the Council of 15 May 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of pressure equipment (OJ L 189, 27.6.2014, p. 164);

8.  Regulation (EU) 2016/424 of the European Parliament and of the Council of 9 March 2016 on cableway installations and repealing Directive 2000/9/EC (OJ L 81, 31.3.2016, p. 1);

9.  Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective equipment and repealing Council Directive 89/686/EEC (OJ L 81, 31.3.2016, p. 51);

10.  Regulation (EU) 2016/426 of the European Parliament and of the Council of 9 March 2016 on appliances burning gaseous fuels and repealing Directive 2009/142/EC (OJ L 81, 31.3.2016, p. 99);

11.  Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);

12.  Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).

Section B. List of other Union harmonisation legislation

13.  Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72);

14.  Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52);

15.  Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1);

16.  Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146);

17.  Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44);

18.  Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1);

19.  Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1);

20.  Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1), in so far as the design, production and placing on the market of aircrafts referred to in Article 2(1), points (a) and (b) thereof, where it concerns unmanned aircraft and their engines, propellers, parts and equipment to control them remotely, are concerned.

List of criminal offences referred to in Article 5(1), point (e)(iii)

Criminal offences referred to in Article 5(1), point (e)(iii):

–   terrorism,

–   trafficking in human beings,

–   sexual exploitation of children, and child pornography,

–   illicit trafficking in narcotic drugs or psychotropic substances,

–   illicit trafficking in weapons, munitions or explosives,

–   murder, grievous bodily injury,

–   illicit trade in human organs or tissue,

–   illicit trafficking in nuclear or radioactive materials,

–   kidnapping, illegal restraint or hostage-taking,

–   crimes within the jurisdiction of the International Criminal Court,

–   unlawful seizure of aircraft or ships,

–   environmental crime,

–   organised or armed robbery,

–   sabotage,

–   participation in a criminal organisation involved in one or more of the offences listed above.

High-risk AI systems referred to in Article 6(2)

High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:

1.   Biometrics, in so far as their use is permitted under relevant Union or national law:

(a)  remote biometric identification systems.

This shall not include AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be;

(b)   AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics;

(c)   AI systems intended to be used for emotion recognition.

2.  Critical infrastructure:

(a)  AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.

3.  Education and vocational training:

(a)  AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels ;

(b)  AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels ;

(c)   AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions;

(d)   AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions.

4.  Employment, workers management and access to self-employment:

(a)  AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates ;

(b)  AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.

5.  Access to and enjoyment of essential private services and essential public services and benefits:

(a)  AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services;

(b)  AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud;

(c)   AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance;

(d)   AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid , as well as of emergency healthcare patient triage systems;

6.  Law enforcement, in so far as their use is permitted under relevant Union or national law:

(a)  AI systems intended to be used by or on behalf of law enforcement authorities , or by Union institutions, bodies, offices or agencies in support of law enforcement authorities or on their behalf to assess a natural person’s risk of becoming the victim of criminal offences;

(b)  AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities as polygraphs or similar tools;

(c)  AI systems intended to be used by or on behalf of law enforcement authorities , or by Union institutions, bodies, offices or agencies, in support of law enforcement authorities to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences;

(d)  AI systems intended to be used by law enforcement authorities or on their behalf or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for assessing the likelihood of a natural person of offending or re-offending not solely based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups;

(e)  AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of the detection, investigation or prosecution of criminal offences .

7.  Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law:

(a)  AI systems intended to be used by competent public authorities as polygraphs and similar tools ;

(b)  AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assess a risk, including a security risk, a risk of irregular migration , or a health risk, posed by a natural person who intends to enter or who has entered into the territory of a Member State;

(c)  AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assist competent public authorities for the examination of applications for asylum, visa or residence permits and for associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessments of the reliability of evidence;

(d)   AI systems intended to be used by or on behalf of competent public authorities, including Union institutions, bodies, offices or agencies, in the context of migration, asylum or border control management, for the purpose of detecting, recognising or identifying natural persons, with the exception of the verification of travel documents.

8.  Administration of justice and democratic processes:

(a)  AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution;

(b)   AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems to the output of which natural persons are not directly exposed, such as tools used to organise, optimise or structure political campaigns from an administrative or logistical point of view.

Technical documentation referred to in Article 11(1)

The technical documentation referred to in Article 11(1) shall contain at least the following information, as applicable to the relevant AI system:

1.  A general description of the AI system including:

(a)  its intended purpose, the name of the provider and the version of the system reflecting its relation to previous versions;

(b)  how the AI system interacts with, or can be used to interact with, hardware or software , including with other AI systems, that are not part of the AI system itself, where applicable;

(c)  the versions of relevant software or firmware, and any requirements related to version updates;

(d)  the description of all the forms in which the AI system is placed on the market or put into service, such as software packages embedded into hardware, downloads, or APIs ;

(e)  the description of the hardware on which the AI system is intended to run;

(f)  where the AI system is a component of products, photographs or illustrations showing external features, the marking and internal layout of those products;

(g)   a basic description of the user-interface provided to the deployer;

(h)  instructions for use for the deployer, and a basic description of the user-interface provided to the deployer, where applicable;

2.  A detailed description of the elements of the AI system and of the process for its development, including:

(a)  the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre-trained systems or tools provided by third parties and how those were used, integrated or modified by the provider;

(b)  the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, including with regard to persons or groups of persons in respect of whom the system is intended to be used; the main classification choices; what the system is designed to optimise for, and the relevance of the different parameters; the description of the expected output and output quality of the system; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Chapter III, Section 2;

(c)  the description of the system architecture explaining how software components build on or feed into each other and integrate into the overall processing; the computational resources used to develop, train, test and validate the AI system;

(d)  where relevant, the data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including a general description of these data sets, information about their provenance, scope and main characteristics; how the data was obtained and selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outliers detection);

(e)  assessment of the human oversight measures needed in accordance with Article 14, including an assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems by the deployers , in accordance with Article 13(3), point (d);

(f)  where applicable, a detailed description of pre-determined changes to the AI system and its performance, together with all the relevant information related to the technical solutions adopted to ensure continuous compliance of the AI system with the relevant requirements set out in Chapter III, Section 2;

(g)  the validation and testing procedures used, including information about the validation and testing data used and their main characteristics; metrics used to measure accuracy, robustness and compliance with other relevant requirements set out in Chapter III, Section 2, as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred to under point (f);

(h)   cybersecurity measures put in place;

3.  Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose; the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights and discrimination in view of the intended purpose of the AI system; the human oversight measures needed in accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the deployers; specifications on input data, as appropriate (an illustrative per-group accuracy computation is sketched after this list);

4.   A description of the appropriateness of the performance metrics for the specific AI system;

5.  A detailed description of the risk management system in accordance with Article 9;

6.  A description of relevant changes made by the provider to the system through its lifecycle;

7.  A list of the harmonised standards applied in full or in part, the references of which have been published in the Official Journal of the European Union; where no such harmonised standards have been applied, a detailed description of the solutions adopted to meet the requirements set out in Chapter III, Section 2, including a list of other relevant standards and technical specifications applied;

8.  A copy of the EU declaration of conformity;

9.  A detailed description of the system in place to evaluate the AI system performance in the post-market phase in accordance with Article 72, including the post-market monitoring plan referred to in Article 72(3).
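Point 3 of the list above asks for the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used. As a minimal, purely illustrative sketch of how a provider might compute such disaggregated figures (the group labels, records and metric choice below are hypothetical and are not prescribed by this Regulation):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Overall and per-group accuracy from (group, prediction, label) records.
    Purely illustrative; the Regulation does not prescribe a particular metric."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, prediction, label in records:
        total[group] += 1
        correct[group] += int(prediction == label)
    overall = sum(correct.values()) / sum(total.values())
    per_group = {g: correct[g] / total[g] for g in total}
    return overall, per_group

# Hypothetical evaluation records: (group, predicted class, true class)
records = [("A", 1, 1), ("A", 0, 1), ("B", 1, 1), ("B", 0, 0), ("B", 1, 0)]
overall, per_group = accuracy_by_group(records)
print(f"overall accuracy: {overall:.2f}")               # 0.60
print({g: round(v, 2) for g, v in per_group.items()})   # {'A': 0.5, 'B': 0.67}
```

Any real documentation would, of course, use the metrics actually relevant to the system's intended purpose rather than this toy accuracy figure.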

The EU declaration of conformity referred to in Article 47 shall contain all of the following information:

1.  AI system name and type and any additional unambiguous reference allowing the identification and traceability of the AI system;

2.  The name and address of the provider or, where applicable, of their authorised representative;

3.  A statement that the EU declaration of conformity is issued under the sole responsibility of the provider;

4.  A statement that the AI system is in conformity with this Regulation and, if applicable, with any other relevant Union law that provides for the issuing of an EU declaration of conformity;

5.   Where an AI system involves the processing of personal data, a statement that that AI system complies with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680;

6.  References to any relevant harmonised standards used or any other common specification in relation to which conformity is declared;

7.  Where applicable, the name and identification number of the notified body, a description of the conformity assessment procedure performed, and identification of the certificate issued;

8.  The place and date of issue of the declaration, the name and function of the person who signed it, as well as an indication for, or on behalf of, whom that person signed, a signature.

Conformity assessment procedure based on internal control

1.  The conformity assessment procedure based on internal control is the conformity assessment procedure based on points 2 to 4.

2.  The provider verifies that the established quality management system is in compliance with the requirements of Article 17.

3.  The provider examines the information contained in the technical documentation in order to assess the compliance of the AI system with the relevant essential requirements set out in Chapter III, Section 2.

4.  The provider also verifies that the design and development process of the AI system and its post-market monitoring as referred to in Article 72 is consistent with the technical documentation.

Conformity based on an assessment of the quality management system and an assessment of the technical documentation

1.  Introduction

Conformity based on an assessment of the quality management system and an assessment of the technical documentation is the conformity assessment procedure based on points 2 to 5.

2.  Overview

The approved quality management system for the design, development and testing of AI systems pursuant to Article 17 shall be examined in accordance with point 3 and shall be subject to surveillance as specified in point 5. The technical documentation of the AI system shall be examined in accordance with point 4.

3.  Quality management system

3.1.  The application of the provider shall include:

(a)  the name and address of the provider and, if the application is lodged by an authorised representative, also their name and address;

(b)  the list of AI systems covered under the same quality management system;

(c)  the technical documentation for each AI system covered under the same quality management system;

(d)  the documentation concerning the quality management system which shall cover all the aspects listed under Article 17;

(e)  a description of the procedures in place to ensure that the quality management system remains adequate and effective;

(f)  a written declaration that the same application has not been lodged with any other notified body.

3.2.  The quality management system shall be assessed by the notified body, which shall determine whether it satisfies the requirements referred to in Article 17.

The decision shall be notified to the provider or its authorised representative.

The notification shall contain the conclusions of the assessment of the quality management system and the reasoned assessment decision.

3.3.  The quality management system as approved shall continue to be implemented and maintained by the provider so that it remains adequate and efficient.

3.4.  Any intended change to the approved quality management system or the list of AI systems covered by the latter shall be brought to the attention of the notified body by the provider.

The proposed changes shall be examined by the notified body, which shall decide whether the modified quality management system continues to satisfy the requirements referred to in point 3.2 or whether a reassessment is necessary.

The notified body shall notify the provider of its decision. The notification shall contain the conclusions of the examination of the changes and the reasoned assessment decision.

4.  Control of the technical documentation.

4.1.  In addition to the application referred to in point 3, an application with a notified body of their choice shall be lodged by the provider for the assessment of the technical documentation relating to the AI system which the provider intends to place on the market or put into service and which is covered by the quality management system referred to under point 3.

4.2.  The application shall include:

(a)  the name and address of the provider;

(b)  a written declaration that the same application has not been lodged with any other notified body;

(c)  the technical documentation referred to in Annex IV.

4.3.  The technical documentation shall be examined by the notified body. Where relevant and limited to what is necessary to fulfil its tasks, the notified body shall be granted full access to the training, validation and testing data sets used, including, where appropriate and subject to security safeguards, through API or other relevant technical means and tools enabling remote access.

4.4.  In examining the technical documentation, the notified body may require that the provider supply further evidence or carry out further tests so as to enable a proper assessment of the conformity of the AI system with the requirements set out in Chapter III, Section 2. Where the notified body is not satisfied with the tests carried out by the provider, the notified body shall itself directly carry out adequate tests, as appropriate.

4.5.  Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Chapter III, Section 2, after all other reasonable means to verify conformity have been exhausted and have proven to be insufficient, and upon a reasoned request, the notified body shall also be granted access to the training and trained models of the AI system, including its relevant parameters. Such access shall be subject to existing Union law on the protection of intellectual property and trade secrets.

4.6.  The decision of the notified body shall be notified to the provider or its authorised representative. The notification shall contain the conclusions of the assessment of the technical documentation and the reasoned assessment decision.

Where the AI system is in conformity with the requirements set out in Chapter III, Section 2, the notified body shall issue a Union technical documentation assessment certificate. The certificate shall indicate the name and address of the provider, the conclusions of the examination, the conditions (if any) for its validity and the data necessary for the identification of the AI system.

The certificate and its annexes shall contain all relevant information to allow the conformity of the AI system to be evaluated, and to allow for control of the AI system while in use, where applicable.

Where the AI system is not in conformity with the requirements set out in Chapter III, Section 2, the notified body shall refuse to issue an Union technical documentation assessment certificate and shall inform the applicant accordingly, giving detailed reasons for its refusal.

Where the AI system does not meet the requirement relating to the data used to train it, re-training of the AI system will be needed prior to the application for a new conformity assessment. In this case, the reasoned assessment decision of the notified body refusing to issue the Union technical documentation assessment certificate shall contain specific considerations on the quality data used to train the AI system, in particular on the reasons for non-compliance.

4.7.  Any change to the AI system that could affect the compliance of the AI system with the requirements or its intended purpose shall be assessed by the notified body which issued the Union technical documentation assessment certificate. The provider shall inform such notified body of its intention to introduce any of the abovementioned changes, or if it otherwise becomes aware of the occurrence of such changes. The intended changes shall be assessed by the notified body, which shall decide whether those changes require a new conformity assessment in accordance with Article 43(4) or whether they could be addressed by means of a supplement to the Union technical documentation assessment certificate. In the latter case, the notified body shall assess the changes, notify the provider of its decision and, where the changes are approved, issue to the provider a supplement to the Union technical documentation assessment certificate.

5.  Surveillance of the approved quality management system.

5.1.  The purpose of the surveillance carried out by the notified body referred to in Point 3 is to make sure that the provider duly abides by the terms and conditions of the approved quality management system.

5.2.  For assessment purposes, the provider shall allow the notified body to access the premises where the design, development, testing of the AI systems is taking place. The provider shall further share with the notified body all necessary information.

5.3.  The notified body shall carry out periodic audits to make sure that the provider maintains and applies the quality management system and shall provide the provider with an audit report. In the context of those audits, the notified body may carry out additional tests of the AI systems for which a Union technical documentation assessment certificate was issued.

Information to be submitted upon the registration of high-risk AI systems in accordance with Article 49

Section A - Information to be submitted by providers of high-risk AI systems in accordance with Article 49(1)

The following information shall be provided and thereafter kept up to date with regard to high-risk AI systems to be registered in accordance with Article 49(1) (a minimal machine-readable sketch follows this list):

1.  The name, address and contact details of the provider;

2.  Where submission of information is carried out by another person on behalf of the provider, the name, address and contact details of that person;

3.  The name, address and contact details of the authorised representative, where applicable;

4.  The AI system trade name and any additional unambiguous reference allowing the identification and traceability of the AI system;

5.  A description of the intended purpose of the AI system and of the components and functions supported through this AI system ;

6.   A basic and concise description of the information used by the system (data, inputs) and its operating logic;

7.  The status of the AI system (on the market, or in service; no longer placed on the market/in service, recalled);

8.  The type, number and expiry date of the certificate issued by the notified body and the name or identification number of that notified body, where applicable;

9.  A scanned copy of the certificate referred to in point 8, where applicable;

10.  Any Member States in which the AI system was on the market, was put into service or made available in the Union;

11.  A copy of the EU declaration of conformity referred to in Article 47;

12.  Electronic instructions for use; this information shall not be provided for high-risk AI systems in the areas of law enforcement or migration, asylum and border control management referred to in Annex III, points 1, 6 and 7;

13.  A URL for additional information (optional).
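Taken together, the fields above amount to a small structured record. A minimal sketch, assuming a provider staged the information as a plain Python dictionary before entering it into the EU database (the field names are hypothetical; the database defines its own schema):

```python
# Hypothetical structure mirroring Section A; the EU database defines its own schema.
registration_entry = {
    "provider": {"name": "Example Provider", "address": "…", "contact": "…"},
    "authorised_representative": None,        # where applicable
    "trade_name": "ExampleSystem",
    "system_reference": "v2.1-build-4711",    # unambiguous reference for traceability
    "intended_purpose": "…",
    "inputs_and_operating_logic": "…",        # basic and concise description
    "status": "on the market",                # or "in service", "no longer on the market", "recalled"
    "certificate": {"type": "…", "number": "…", "expiry": "…", "notified_body": "…"},
    "member_states": ["DE", "FR"],
    "eu_declaration_of_conformity": "declaration.pdf",
    "instructions_for_use": "ifu.pdf",        # not provided for Annex III points 1, 6 and 7
    "url_additional_information": None,       # optional
}
```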

Section B - Information to be submitted by providers of high-risk AI systems in accordance with Article 49(2)

The following information shall be provided and thereafter kept up to date with regard to AI systems to be registered in accordance with Article 49(2):

1.   The name, address and contact details of the provider;

2.   Where submission of information is carried out by another person on behalf of the provider, the name, address and contact details of that person;

3.   The name, address and contact details of the authorised representative, where applicable;

4.   The AI system trade name and any additional unambiguous reference allowing the identification and traceability of the AI system;

5.   A description of the intended purpose of the AI system;

6.   The condition or conditions under Article 6(3) based on which the AI system is considered to be not-high-risk;

7.   A short summary of the grounds on which the AI system is considered to be not-high-risk in application of the procedure under Article 6(3);

8.   The status of the AI system (on the market, or in service; no longer placed on the market/in service, recalled);

9.   Any Member States in which the AI system has been placed on the market, put into service or made available in the Union.

Section C - Information to be submitted by deployers of high-risk AI systems in accordance with Article 49(3)

The following information shall be provided and thereafter kept up to date with regard to high-risk AI systems to be registered in accordance with Article 49:

1.   The name, address and contact details of the deployer;

2.   The name, address and contact details of the person submitting information on behalf of the deployer;

3.   A summary of the findings of the fundamental rights impact assessment conducted in accordance with Article 27;

4.   The URL of the entry of the AI system in the EU database by its provider;

5.   A summary of the data protection impact assessment carried out in accordance with Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680 as specified in Article 26(8) of this Regulation, where applicable.

Information to be submitted upon the registration of high-risk AI systems listed in Annex III in relation to testing in real world conditions in accordance with Article 60

The following information shall be provided and thereafter kept up to date with regard to testing in real world conditions to be registered in accordance with Article 60:

1.   A Union-wide unique single identification number of the testing in real world conditions;

2.   The name and contact details of the provider or prospective provider and of the deployers involved in the testing in real world conditions;

3.   A brief description of the AI system, its intended purpose, and other information necessary for the identification of the system;

4.   A summary of the main characteristics of the plan for testing in real world conditions;

5.   Information on the suspension or termination of the testing in real world conditions.

Union legislative acts on large-scale IT systems in the area of Freedom, Security and Justice

1.  Schengen Information System

(a)  Regulation (EU) 2018/1860 of the European Parliament and of the Council of 28 November 2018 on the use of the Schengen Information System for the return of illegally staying third-country nationals (OJ L 312, 7.12.2018, p. 1).

(b)  Regulation (EU) 2018/1861 of the European Parliament and of the Council of 28 November 2018 on the establishment, operation and use of the Schengen Information System (SIS) in the field of border checks, and amending the Convention implementing the Schengen Agreement, and amending and repealing Regulation (EC) No 1987/2006 (OJ L 312, 7.12.2018, p. 14)

(c)  Regulation (EU) 2018/1862 of the European Parliament and of the Council of 28 November 2018 on the establishment, operation and use of the Schengen Information System (SIS) in the field of police cooperation and judicial cooperation in criminal matters, amending and repealing Council Decision 2007/533/JHA, and repealing Regulation (EC) No 1986/2006 of the European Parliament and of the Council and Commission Decision 2010/261/EU (OJ L 312, 7.12.2018, p. 56).

2.  Visa Information System

(a)  Regulation (EU) 2021/1133 of the European Parliament and of the Council of 7 July 2021 amending Regulations (EU) No 603/2013, (EU) 2016/794, (EU) 2018/1862, (EU) 2019/816 and (EU) 2019/818 as regards the establishment of the conditions for accessing other EU information systems for the purposes of the Visa Information System (OJ L 248, 13.7.2021, p. 1).

(b)  Regulation (EU) 2021/1134 of the European Parliament and of the Council of 7 July 2021 amending Regulations (EC) No 767/2008, (EC) No 810/2009, (EU) 2016/399, (EU) 2017/2226, (EU) 2018/1240, (EU) 2018/1860, (EU) 2018/1861, (EU) 2019/817 and (EU) 2019/1896 of the European Parliament and of the Council and repealing Council Decisions 2004/512/EC and 2008/633/JHA, for the purpose of reforming the Visa Information System (OJ L 248, 13.7.2021, p. 11).

3.  Eurodac

(a)  Regulation (EU) 2024/… of the European Parliament and of the Council on the establishment of 'Eurodac' for the comparison of biometric data for the effective application of Regulation (EU) …/… [Regulation on Asylum and Migration Management], of Regulation (EU) …/… [Resettlement Regulation] and Directive 2001/55/EC [Temporary Protection Directive] for identifying an illegally staying third-country national or stateless person and on requests for the comparison with Eurodac data by Member States' law enforcement authorities and Europol for law enforcement purposes and amending Regulations (EU) 2018/1240 and (EU) 2019/818.

4.  Entry/Exit System

(a)  Regulation (EU) 2017/2226 of the European Parliament and of the Council of 30 November 2017 establishing an Entry/Exit System (EES) to register entry and exit data and refusal of entry data of third-country nationals crossing the external borders of the Member States and determining the conditions for access to the EES for law enforcement purposes, and amending the Convention implementing the Schengen Agreement and Regulations (EC) No 767/2008 and (EU) No 1077/2011 (OJ L 327, 9.12.2017, p. 20).

5.  European Travel Information and Authorisation System

(a)  Regulation (EU) 2018/1240 of the European Parliament and of the Council of 12 September 2018 establishing a European Travel Information and Authorisation System (ETIAS) and amending Regulations (EU) No 1077/2011, (EU) No 515/2014, (EU) 2016/399, (EU) 2016/1624 and (EU) 2017/2226 (OJ L 236, 19.9.2018, p. 1).

(b)  Regulation (EU) 2018/1241 of the European Parliament and of the Council of 12 September 2018 amending Regulation (EU) 2016/794 for the purpose of establishing a European Travel Information and Authorisation System (ETIAS) (OJ L 236, 19.9.2018, p. 72).

6.  European Criminal Records Information System on third-country nationals and stateless persons

(a)  Regulation (EU) 2019/816 of the European Parliament and of the Council of 17 April 2019 establishing a centralised system for the identification of Member States holding conviction information on third-country nationals and stateless persons (ECRIS-TCN) to supplement the European Criminal Records Information System and amending Regulation (EU) 2018/1726 (OJ L 135, 22.5.2019, p. 1).

7.  Interoperability

(a)  Regulation (EU) 2019/817 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of borders and visa (OJ L 135, 22.5.2019, p. 27).

(b)  Regulation (EU) 2019/818 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of police and judicial cooperation, asylum and migration (OJ L 135, 22.5.2019, p. 85).

Technical documentation referred to in Article 53(1), point (a) - technical documentation for providers of general-purpose AI models

Information to be provided by all providers of general-purpose AI models

The technical documentation referred to in Article 53(1), point (a) shall contain at least the following information as appropriate to the size and risk profile of the model:

1.   A general description of the general-purpose AI model including:

(a)   the tasks that the model is intended to perform and the type and nature of AI systems in which it can be integrated;

(b)   the acceptable use policies applicable;

(c)   the date of release and methods of distribution;

(d)   the architecture and number of parameters;

(e)   the modality (e.g. text, image) and format of inputs and outputs;

(f)   the licence.

2.   A detailed description of the elements of the model referred to in point 1, and relevant information of the process for the development, including the following elements:

(a)   the technical means (e.g. instructions of use, infrastructure, tools) required for the general-purpose AI model to be integrated in AI systems;

(b)   the design specifications of the model and training process, including training methodologies and techniques, the key design choices including the rationale and assumptions made; what the model is designed to optimise for and the relevance of the different parameters, as applicable;

(c)   information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies (e.g. cleaning, filtering etc), the number of data points, their scope and main characteristics; how the data was obtained and selected as well as all other measures to detect the unsuitability of data sources and methods to detect identifiable biases, where applicable;

(d)   the computational resources used to train the model (e.g. number of floating point operations – FLOPs), training time, and other relevant details related to the training;

(e)   known or estimated energy consumption of the model.

With regard to point (e), where the energy consumption of the model is unknown, the energy consumption may be based on information about computational resources used.
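Where measured figures are unavailable, the note above allows the estimate to be derived from the computational resources used. A rough, purely illustrative sketch follows; the throughput, utilisation and power-draw figures are assumptions for the example, not values taken from this Regulation:

```python
def estimated_training_energy_kwh(total_flop, flops_per_second, utilisation, power_watts):
    """Order-of-magnitude energy estimate from total training compute.
    total_flop       - total floating point operations used for training
    flops_per_second - peak throughput of one accelerator
    utilisation      - fraction of peak throughput actually achieved (0-1)
    power_watts      - average power draw per accelerator, incl. overhead"""
    seconds = total_flop / (flops_per_second * utilisation)  # accelerator-seconds of work
    return seconds * power_watts / 3.6e6                     # joules -> kWh

# Hypothetical example: 1e24 FLOP on accelerators with 3e14 FLOP/s peak,
# 40 % utilisation and 700 W average draw  ->  roughly 1.6 million kWh.
print(round(estimated_training_energy_kwh(1e24, 3e14, 0.4, 700)))
```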

Additional information to be provided by providers of general-purpose AI models with systemic risk

1.   A detailed description of the evaluation strategies, including evaluation results, on the basis of available public evaluation protocols and tools or otherwise of other evaluation methodologies. Evaluation strategies shall include evaluation criteria, metrics and the methodology on the identification of limitations.

2.   Where applicable, a detailed description of the measures put in place for the purpose of conducting internal and/or external adversarial testing (e.g., red teaming), model adaptations, including alignment and fine-tuning.

3.   Where applicable, a detailed description of the system architecture explaining how software components build or feed into each other and integrate into the overall processing.

Transparency information referred to in Article 53(1), point (b) - technical documentation for providers of general-purpose AI models to downstream providers that integrate the model into their AI system

The information referred to in Article 53(1), point (b) shall contain at least the following:

1.   A general description of the general-purpose AI model, including:

(a)   the tasks that the model is intended to perform and the type and nature of AI systems into which it can be integrated;

(d)   how the model interacts, or can be used to interact, with hardware or software that is not part of the model itself, where applicable;

(e)   the versions of relevant software related to the use of the general-purpose AI model, where applicable;

(f)   the architecture and number of parameters;

(g)   the modality (e.g., text, image) and format of inputs and outputs;

(h)   the licence for the model.

2.   A description of the elements of the model and of the process for its development, including:

(a)   the technical means (e.g., instructions for use, infrastructure, tools) required for the general-purpose AI model to be integrated into AI systems;

(b)   the modality (e.g., text, image, etc.) and format of the inputs and outputs and their maximum size (e.g., context window length, etc.);

(c)   information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies.

Criteria for the designation of general-purpose AI models with systemic risk referred to in Article 51

For the purpose of determining that a general-purpose AI model has capabilities or an impact equivalent to those set out in Article 51(1), points (a) and (b), the Commission shall take into account the following criteria:

(a)   the number of parameters of the model;

(b)   the quality or size of the data set, for example measured through tokens;

(c)   the amount of computation used for training the model, measured in FLOPs or indicated by a combination of other variables such as estimated cost of training, estimated time required for the training, or estimated energy consumption for the training (an illustrative estimate is sketched after this list);

(d)   the input and output modalities of the model, such as text to text (large language models), text to image, multi-modality, and the state-of-the-art thresholds for determining high-impact capabilities for each modality, and the specific type of inputs and outputs (e.g. biological sequences);

(e)   the benchmarks and evaluations of capabilities of the model, including considering the number of tasks without additional training, adaptability to learn new, distinct tasks, its degree of autonomy and scalability, the tools it has access to;

(f)   whether it has a high impact on the internal market due to its reach, which shall be presumed when it has been made available to at least 10 000 registered business users established in the Union;

(g)   the number of registered end-users.
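Criterion (c) above concerns training compute measured in FLOPs. A common back-of-the-envelope approximation for dense transformer models, assumed here purely for illustration and not laid down anywhere in this Regulation, puts training compute at roughly six floating point operations per parameter per training token:

```python
def approx_training_flop(parameters, training_tokens):
    """Rule-of-thumb for dense transformers: ~6 FLOP per parameter per token.
    Purely indicative; actual compute depends on architecture and training setup."""
    return 6 * parameters * training_tokens

# Hypothetical model: 70e9 parameters trained on 2e12 tokens -> ~8.4e23 FLOP
print(f"{approx_training_flop(70e9, 2e12):.2e}")
```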


