Is your AI compliant with the emerging European regulatory framework?

V29 Legal developed an online questionnaire (TAI Questionnaire) on the basis of the "Ethics Guidelines for Trustworthy AI" (Guidelines) and "The Assessment List for Trustworthy AI" (Assessment List). The TAI Questionnaire allows stakeholders to assess whether their AI can be deemed trustworthy.


Everything you need to know about the TAI Questionnaire

What is the TAI Questionnaire?

The TAI Questionnaire is an online tool which allows stakeholders to assess whether their AI can be deemed trustworthy. It has been developed on the basis of the Guidelines and the so-called assessment list (Assessment List). Both were prepared by the Independent High-Level Expert Group on Artificial Intelligence (AI HLEG) set up by the European Commission.

On 25 April 2018, the European Commission (EC) presented its AI strategy and set up the AI HLEG. The AI HLEG prepared the Guidelines together with the Assessment List, aiming to provide guidance for AI applications in general and to build a horizontal foundation for achieving Trustworthy AI.


The Assessment List underwent a piloting process and received feedback from over 350 stakeholders.

On 19 February 2020, the EC further presented its "White Paper on Artificial Intelligence - A European approach to excellence and trust" (White Paper) – a major milestone on the way to EU AI-specific regulation.

On 17 July 2020, the AI HLEG presented the final version of the Assessment List.

EU's Initiative and Ethics Guidelines for Trustworthy AI


What is AI?

The AI HLEG defines AI as a "combination of machine learning techniques used for searching and analysing large volumes of data; robotics dealing with the conception, design, manufacture and operation of programmable machines; and algorithms and automated decisionmaking systems (ADMS) able to predict human and machine behaviour and to make autonomous decisions."

(See Guidelines, p. 36)

Approach to AI

The EU has adopted a human-centric, inclusive approach to AI aiming at placing the power of AI at the service of human progress.

The human-centric approach "strives to ensure that human values are central to the way in which AI systems are developed, deployed, used and monitored, by including respect for fundamental rights."

(See Guidelines, p. 37)

What is Trustworthy AI?

Trustworthy AI embodies the principle of a human-centric approach to create trust on all levels: development, deployment and use of AI systems. In this way, unwanted consequences can be prevented, and vast social and economic benefits can be secured.


Three Components of Trustworthy AI

According to the Guidelines, Trustworthy AI has three components, which should be met throughout the AI system's entire life cycle:

  • lawful – complying with all applicable laws and regulations;
  • ethical – ensuring adherence to ethical principles and values;
  • robust – both from a technical and a social perspective.

(See Guidelines, p. 4)


Why is the Trustworthy AI approach important?

Trustworthy AI creates a bond of trust for everyone involved so that potential doubts about the benefits and use of AI can be overcome.


An AI system that is lawful, ethical and robust will enable a progressive "responsible competitiveness".

(See Guidelines, p. 5)


The Seven Key Requirements of Trustworthy AI

The Guidelines set out seven Key Requirements that AI systems should meet in order to be deemed trustworthy:

  • human agency and oversight;
  • technical robustness and safety;
  • privacy and data governance;
  • transparency;
  • diversity, non-discrimination and fairness;
  • societal and environmental well-being;
  • accountability.

To operationalise the seven Key Requirements and offer practical guidance on implementing them, the AI HLEG provided the Assessment List.

Is the Assessment List exhaustive?

The AI HLEG emphasizes that "ensuring Trustworthy AI is not about ticking boxes, but about continuously identifying and implementing requirements, evaluating solutions, ensuring improved outcomes throughout the AI system's lifecycle, and involving stakeholders in this."

Accordingly, the AI HLEG does not regard the Assessment List as a final or exhaustive document.

(See Guidelines, p. 3)

Who are the addressees of the Assessment List?

Like the Guidelines, the Assessment List is addressed to all stakeholders taking part in an AI system's life cycle.

"Stakeholders committed towards achieving Trustworthy AI can voluntarily opt to use the Guidelines as a method to operationalise their commitment, in particular by using the practical Assessment List when developing, deploying or using AI systems. The Assessment List can also complement – and hence be incorporated in – existing assessment processes."

(See Guidelines, p. 5)




The Road to a European Regulatory Framework for AI 


White Paper

On 19 February 2020, the European Commission published its White Paper and invited comments on the proposals it sets out through an open public consultation.

The public consultation gave stakeholders the opportunity to express their views on the questions raised and the policy options proposed in the White Paper.

The consultation ended on 14 June 2020 and received over 1200 responses, including a contribution by V29 Legal.

Stay informed!

The TAI Questionnaire will be updated as soon as a new regulatory framework at the European level is introduced.




V29 Legal - Duve Hamama Rechtsanwälte PartG mbB
