
Is your AI compliant with the emerging European regulatory framework?
V29 Legal developed an online questionnaire (TAI Questionnaire) on the basis of the "Ethics Guidelines for Trustworthy AI" (Guidelines) and "The Assessment List for Trustworthy AI" (Assessment List). The TAI Questionnaire allows stakeholders to assess whether their AI can be deemed trustworthy.
Everything you need to know about the TAI Questionnaire
What is the TAI Questionnaire?
What is the purpose of the TAI Questionnaire?
Who should use the TAI Questionnaire?
Why should I use the TAI Questionnaire?
The TAI Questionnaire is for all stakeholders taking part in an AI system's life cycle:
developers, deployers and end-users, as well as the broader society.
Whether you are currently building an AI system, looking for a suitable system or using AI, you should assess whether it can be deemed trustworthy and be prepared for the emerging EU regulatory framework.
The TAI Questionnaire is an online tool which allows stakeholders to assess whether their AI can be deemed trustworthy. It has been developed on the basis of the Guidelines and the so-called assessment list (Assessment List). Both have been prepared by the Independent High-Level Expert Group on AI set up by the European Commission (AI HLEG).
The pace of AI development far outstrips the current capacity to monitor, control and govern its applications. Efforts are being made at various levels, national and international, public and private, to address this challenge and to find an appropriate approach to AI governance.
The TAI Questionnaire aims to provide all stakeholders with an opportunity to assess, based on the Guidelines, whether their AI can be deemed trustworthy.

On 25 April 2018, the European Commission (EC) presented its AI strategy and subsequently set up the AI HLEG. The AI HLEG prepared the Guidelines together with the Assessment List, aiming to provide guidance for AI applications in general and to build a horizontal foundation for achieving Trustworthy AI.
The Assessment List underwent a piloting process and received feedback from over 350 stakeholders.
On 19 February 2020, the EC presented its "White Paper on Artificial Intelligence - A European approach to excellence and trust" (White Paper) – a major milestone on the way to EU AI-specific regulations.
On 17 July 2020, the AI HLEG presented the final version of the Assessment List.
What is AI?
The AI HLEG defines AI as a "combination of machine learning techniques used for searching and analysing large volumes of data; robotics dealing with the conception, design, manufacture and operation of programmable machines; and algorithms and automated decision-making systems (ADMS) able to predict human and machine behaviour and to make autonomous decisions."
(See Guidelines, p. 36)
Approach to AI
The EU has adopted a human-centric, inclusive approach to AI that aims to place the power of AI at the service of human progress.
The human-centric approach "strives to ensure that human values are central to the way in which AI systems are developed, deployed, used and monitored, by including respect for fundamental rights."
(See Guidelines, p. 37)
What is Trustworthy AI?
Trustworthy AI embodies the human-centric approach in order to create trust at all levels: the development, deployment and use of AI systems. In this way, unwanted consequences can be prevented and vast social and economic benefits can be secured.
Three Components of Trustworthy AI
1. Lawful: "Complying with all applicable laws and regulations"
2. Ethical: "Ensuring adherence to ethical principles and values"
3. Robust: "From a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm"
(See Guidelines, p. 4)
Why is the Trustworthy AI approach important?
Trustworthy AI creates a bond of trust for everyone involved so that potential doubts about the benefits and use of AI can be overcome.
An AI system that is lawful, ethical and robust will enable a progressive "responsible competitiveness".
(See Guidelines, p. 5)
The Seven Key Requirements of Trustworthy AI
"Including fundamental rights, human agency and human oversight"
Human agency and oversight
Technical robustness and safety
Privacy and data governance
Diversity, non-discrimination and fairness
Societal and environmental well-being
Transparency
Accountability
"Including resilience to attack and security, fall back plan and general safety, accuracy, reliability and reproducibility"
"Including respect for privacy, quality and integrity of data, and access to data"
"Including traceability, explainability and communication"
"Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation"
"Including sustainability and environmental friendliness, social impact, society and democracy"
"Including auditability, minimisation and reporting of negative impact, trade-offs and redress"
1
Human agency and oversight
Technical robustness and safety
Privacy and data governance
Transparency
Diversity,
non-discrimination
and fairness
Societal and environmental well-being
Accountability
To operationalise the 7 Key Requirements and offer guidance to implement them in practice, the AI HLEG provided the Assessment List.
Is the Assessment List exhaustive?
The AI HLEG emphasizes that "ensuring Trustworthy AI is not about ticking boxes, but about continuously identifying and implementing requirements, evaluating solutions, ensuring improved outcomes throughout the AI system's lifecycle, and involving stakeholders in this."
Accordingly, the AI HLEG does not regard the Assessment List as a final or exhaustive document.
(See Guidelines, p. 3)
Who are the addressees of the Assessment List?
Like the Guidelines, the Assessment List is addressed to all stakeholders taking part in an AI system's life cycle.
"Stakeholders committed towards achieving Trustworthy AI can voluntarily opt to use the Guidelines as a method to operationalise their commitment, in particular by using the practical Assessment List when developing, deploying or using AI systems. The Assessment List can also complement – and hence be incorporated in – existing assessment processes."
(See Guidelines, p. 5)
(See Guidelines, p. 14)
10 April 2018
Digital Day Declaration:
Member States sign up to cooperate on AI
25 April 2018
EC announces European AI strategy:
- increase public and private investments to EUR 20 bln per year over the next decade
- prepare for socio-economic changes
- ensure an appropriate ethical and legal framework
01 June 2018
Appointment of AI HLEG and launch of AI Alliance
The European AI Alliance is a forum that engages more than 3000 European citizens and stakeholders in a dialogue on the future of AI in Europe.
18 December 2018
AI HLEG presents first draft of "Ethics Guidelines for Trustworthy AI" and launches Consultation Process
01 February 2019
End of Consultation on draft ethics guidelines with over 500 comments received
08 April 2019
AI HLEG presents:
- "Ethics Guidelines for Trustworthy AI"
- "A Definition of AI: Main Capabilities and Disciplines"
26 June 2019
AI HLEG presents “Policy and Investment Recommendations for Trustworthy AI”
Launch of the Piloting Phase of the Assessment List of the Ethics Guidelines for Trustworthy AI
01 December 2019
End of Piloting Phase of the Assessment List of Ethics Guidelines for Trustworthy AI
19 February 2020
EC publishes White Paper on AI and calls for comments
14 June 2020
Consultation on White Paper on AI ends, receiving over 1200 individual responses to an online questionnaire as well as written input
17 July 2020
AI HLEG presents the final version of the Assessment List
The Road to a European Regulatory Framework for AI
White Paper
On 19 February 2020, the European Commission published its White Paper and invited comments on the proposals set out in it through an open public consultation.
The public consultation aimed at providing stakeholders with the opportunity to express their views on the questions raised and policy options proposed in the White Paper on Artificial Intelligence.
The consultation ended on 14 June 2020 and received over 1200 responses, including a contribution by V29 Legal.
Stay informed!
The TAI Questionnaire will be updated as soon as a new regulatory framework at the European level is introduced.