Frequently Asked Questions

Getting Started

How much time will I need to complete the TAI Questionnaire?


Completing the TAI Questionnaire takes approximately 5 to 10 minutes.

Note that a feature to save your progress and resume the questionnaire at a later time is not yet available.




What if I can’t answer a question?


While we try our best to provide you with a questionnaire that is comprehensible, we acknowledge that some questions may be difficult to answer with limited background knowledge. We have included ‘info blocks’ throughout the tool to provide further information.

If you feel that an answer option does not apply to or reflect your business model, we would ask you to select the answer that comes closest.




Where can I get more information regarding a specific question?


The TAI Questionnaire is based on the assessment list found in the 'Ethics Guidelines for Trustworthy Artificial Intelligence’ presented by the High-Level Expert Group on AI set up by the European Commission.

For further background information, you can consult the 'Ethics Guidelines for Trustworthy AI' or 'The Assessment List for Trustworthy AI'.




Is my personal information and data protected?


We take data protection seriously. Your details and other data will not be passed on or sold to third parties; they will be used only as the basis for the evaluative overview you receive after completing the TAI Questionnaire.

For further information on data protection, please take a look at our Privacy Notice.





Technical Questions

How do I move through the TAI Questionnaire?


Use the arrows on either side of the module progress bar to move backwards or forward in the tool. Alternatively, you can use the arrow keys on your device’s keyboard.




What if I can’t move to the next page?


Check that you have answered all relevant questions on the questionnaire page. A 'Continue' button will appear at the bottom of the page once you have made your selections. While some questions are marked as 'optional', most require a selection before you can continue to the next page.




What if I don’t want to make a selection?


A few questions do not require a selection; these are marked as 'optional'. All other questions must be answered to provide you with the best possible result.

If you feel that an answer option does not apply to or reflect your business model, we would ask you to select the answer closest to your situation.




How can I track my progress?


Notice the progress bar at the bottom of the page. It provides an estimate of how long it will take you to complete the questionnaire.

Tip: You can use the arrows on either side of the progress bar to move backwards or forward in the tool.




What if I have received an error message?


If you receive an error message while filling out the questionnaire, please take a screenshot and/or copy the error message and email it to TAI.Questionnaire@v29-legal.com.





Contact

How can I provide feedback?


We appreciate your feedback and are always interested in ways to improve. At the end of the tool, you have the option to give feedback in a designated comment area. You can also contact us by sending an email to TAI.Questionnaire@v29-legal.com.




How can I contact you?


If you have any other questions or comments, do not hesitate to contact us at TAI.Questionnaire@v29-legal.com.





Definitions

To date, there is no uniform definition of AI or of related terms. The TAI Questionnaire reflects the content as well as the definitions used in the "Ethics Guidelines for Trustworthy AI" developed by the High-Level Expert Group on AI (AI HLEG).

If any terms are unclear during the assessment process, you can consult the Guidelines, the Glossary of the Assessment List, or search for the relevant definition here.

Further insight into the AI HLEG's understanding and the definitions it uses is given in the document "A Definition of AI: Main Capabilities and Disciplines".

Ethics Guidelines for Trustworthy AI

Artificial Intelligence or AI systems


"Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions. As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)." (See Guidelines, p. 36)




AI Practitioners


By AI practitioners the High-Level Expert Group on AI denotes "all individuals or organisations that develop (including research, design or provide data for), deploy (including implement) or use AI systems, excluding those that use AI systems in the capacity of end-user or consumer." (See Guidelines, p. 36)




AI system’s life cycle


"An AI system’s life cycle encompasses its development (including research, design, data provision, and limited trials), deployment (including implementation) and use phase." (See Guidelines, p. 36)




Auditability


"Auditability refers to the ability of an AI system to undergo the assessment of the system’s algorithms, data and design processes. This does not necessarily imply that information about business models and Intellectual Property related to the AI system must always be openly available. Ensuring traceability and logging mechanisms from the early design phase of the AI system can help enabling the system's auditability." (See Guidelines, p. 36; Assessment List, p. 25)




Bias


"Bias is an inclination of prejudice towards or against a person, object, or position. Bias can arise in many ways in AI systems. For example, in data-driven AI systems, such as those produced through machine learning, bias in data collection and training can result in an AI system demonstrating bias. In logic-based AI, such as rule-based systems, bias can arise due to how a knowledge engineer might view the rules that apply in a particular setting. Bias can also arise due to online learning and adaptation through interaction. It can also arise through personalisation whereby users are presented with recommendations or information feeds that are tailored to the user’s tastes. It does not necessarily relate to human bias or human-driven data collection. It can arise, for example, through the limited contexts in which a system is used, in which case there is no opportunity to generalise it to other contexts. Bias can be good or bad, intentional or unintentional. In certain cases, bias can result in discriminatory and/or unfair outcomes, indicated in this document as unfair bias." (See Guidelines, p. 36)




Ethics


"Ethics is an academic discipline which is a subfield of philosophy. In general terms, it deals with questions like “What is a good action?”, “What is the value of a human life?”, “What is justice?”, or “What is the good life?”. In academic ethics, there are four major fields of research: (i) Meta-ethics, mostly concerning the meaning and reference of normative sentences, and the question how their truth values can be determined (if they have any); (ii) normative ethics, the practical means of determining a moral course of action by examining the standards for right and wrong action and assigning a value to specific actions; (iii) descriptive ethics, which aims at an empirical investigation of people's moral behaviour and beliefs; and (iv) applied ethics, concerning what we are obligated (or permitted) to do in a specific (often historically new) situation or a particular domain of (often historically unprecedented) possibilities for action. Applied ethics deals with real-life situations, where decisions have to be made under time pressure, and often limited rationality. AI Ethics is generally viewed as an example of applied ethics and focuses on the normative issues raised by the design, development, implementation and use of AI. Within ethical discussions, the terms “moral” and “ethical” are often used. The term “moral” refers to the concrete, factual patterns of behaviour, the customs, and conventions that can be found in specific cultures, groups, or individuals at a certain time. The term “ethical” refers to an evaluative assessment of such concrete actions and behaviours from a systematic, academic perspective." (See Guidelines, p. 37)




Ethical AI


In the Ethics Guidelines for Trustworthy AI, "ethical AI is used to indicate the development, deployment and use of AI that ensures compliance with ethical norms, including fundamental rights as special moral entitlements, ethical principles and related core values. It is the second of the three core elements necessary for achieving Trustworthy AI." (See Guidelines, p. 37)




Human-Centric AI


"The human-centric approach to AI strives to ensure that human values are central to the way in which AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights, including those set out in the Treaties of the European Union and Charter of Fundamental Rights of the European Union, all of which are united by reference to a common foundation rooted in respect for human dignity, in which the human being enjoys a unique and inalienable moral status. This also entails consideration of the natural environment and of other living beings that are part of the human ecosystem, as well as a sustainable approach enabling the flourishing of future generations to come." (See Guidelines, p. 37)




Red Teaming


"Red teaming is the practice whereby a “red team” or independent group challenges an organisation to improve its effectiveness by assuming an adversarial role or point of view. It is particularly used to help identifying and addressing potential security vulnerabilities." (See Guidelines, p. 37; Assessment List, p. 28)




Reproducibility


"Reproducibility refers to the closeness between the results of two actions, such as two scientific experiments, that are given the same input and use the same methodology, as described in a corresponding scientific evidence (such as a scientific publication). A related concept is replication, which is the ability to independently achieve non-identical conclusions that are at least similar, when differences in sampling, research procedures and data analysis methods may exist. Reproducibility and replicability together are among the main tools of the scientific method." (See Assessment List, p. 28)
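In machine-learning practice, reproducibility is often supported by fixing random seeds so that two runs with the same input and the same methodology yield the same result. A minimal sketch of this idea (the function name and seed value are illustrative, not taken from the Guidelines):

```python
import random

def run_experiment(seed: int, data: list) -> float:
    # Fix the random seed so the "experiment" is reproducible:
    # the same input and method always produce the same result.
    rng = random.Random(seed)
    sample = rng.sample(data, k=3)  # random subsample of the data
    return sum(sample) / len(sample)

data = [1.0, 2.0, 3.0, 4.0, 5.0]
first = run_experiment(seed=42, data=data)
second = run_experiment(seed=42, data=data)
assert first == second  # reproducible: identical results on repeated runs
```

A run with a different seed may give a different value, which is exactly what documenting the seed alongside the methodology guards against.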




Robust AI


"Robustness of an AI system encompasses both its technical robustness (appropriate in a given context, such as the application domain or life cycle phase) and its robustness from a social perspective (ensuring that the AI system duly takes into account the context and environment in which the system operates). This is crucial to ensure that, even with good intentions, no unintentional harm can occur. Robustness is the third of the three components necessary for achieving Trustworthy AI." (See Guidelines, p. 37; Assessment List, p. 29)




Stakeholders


By stakeholders the High-Level Expert Group on AI denotes "all those that research, develop, design, deploy or use AI, as well as those that are (directly or indirectly) affected by AI – including but not limited to companies, organisations, researchers, public services, institutions, civil society organisations, governments, regulators, social partners, individuals, citizens, workers and customers." (See Guidelines, p. 37)




Traceability


"Traceability of an AI system refers to the capability to keep track of the system’s data, development and deployment processes, typically by means of documented recorded identification." (See Guidelines, p. 38) "Ability to track the journey of a data input through all stages of sampling, labelling, processing and decision making." (See Assessment List, p. 29)




Trust


The High-Level Expert Group on AI takes "the following definition from the literature: “Trust is viewed as: (1) a set of specific beliefs dealing with benevolence, competence, integrity, and predictability (trusting beliefs); (2) the willingness of one party to depend on another in a risky situation (trusting intention); or (3) the combination of these elements.” While “Trust” is usually not a property ascribed to machines, this document aims to stress the importance of being able to trust not only in the fact that AI systems are legally compliant, ethically adherent and robust, but also that such trust can be ascribed to all people and processes involved in the AI system’s life cycle." (See Guidelines, p. 38)




Trustworthy AI


"Trustworthy AI has three components: (1) it should be lawful, ensuring compliance with all applicable laws and regulations; (2) it should be ethical, demonstrating respect for, and ensuring adherence to, ethical principles and values; and (3) it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm. Trustworthy AI concerns not only the trustworthiness of the AI system itself but also comprises the trustworthiness of all processes and actors that are part of the system’s life cycle." (See Guidelines, p. 38; Assessment List, p. 29)




Vulnerable Persons and Groups


"No commonly accepted or widely agreed legal definition of vulnerable persons exists, due to their heterogeneity. What constitutes a vulnerable person or group is often context-specific. Temporary life events (such as childhood or illness), market factors (such as information asymmetry or market power), economic factors (such as poverty), factors linked to one’s identity (such as gender, religion or culture) or other factors can play a role. The Charter of Fundamental Rights of the EU encompasses under Article 21 on non-discrimination the following grounds, which can be a reference point amongst others: namely sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age and sexual orientation. Other articles of law address the rights of specific groups, in addition to those listed above. Any such list is not exhaustive, and may change over time. A vulnerable group is a group of persons who share one or several characteristics of vulnerability." (See Guidelines, p. 38)




Accessibility


"Extent to which products, systems, services, environments and facilities can be used by people from a population with the widest range of user needs, characteristics and capabilities to achieve identified goals in identified contexts of use (which includes direct use or use supported by assistive technologies)." (See Assessment List, p. 23)




Accountability


"This term refers to the idea that one is responsible for their action – and as a corollary their consequences – and must be able to explain their aims, motivations, and reasons. Accountability has several dimensions. Accountability is sometimes required by law. For example, the General Data Protection Regulation (GDPR) requires organisations that process personal data to ensure security measures are in place to prevent data breaches and report if these fail. But accountability might also express an ethical standard, and fall short of legal consequences. Some tech firms that do not invest in facial recognition technology in spite of the absence of a ban or technological moratorium might do so out of ethical accountability considerations." (See Assessment List, p. 23)




Accuracy


"The goal of an AI model is to learn patterns that generalize well for unseen data. It is important to check if a trained AI model is performing well on unseen examples that have not been used for training the model. To do this, the model is used to predict the answer on the test dataset and then the predicted target is compared to the actual answer. The concept of accuracy is used to evaluate the predictive capability of the AI model. Informally, accuracy is the fraction of predictions the model got right. A number of metrics are used in machine learning (ML) to measure the predictive accuracy of a model. The choice of the accuracy metric to be used depends on the ML task." (See Assessment List, p. 23)
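The "fraction of predictions the model got right" can be computed directly. A minimal sketch, using hypothetical labels rather than any specific ML library:

```python
def accuracy(predicted: list, actual: list) -> float:
    """Fraction of test-set predictions that match the true labels."""
    assert len(predicted) == len(actual)
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Model predictions on a held-out test set vs. the true answers:
print(accuracy(["cat", "dog", "cat", "dog"],
               ["cat", "dog", "dog", "dog"]))  # 0.75
```

As the definition notes, this simple fraction is only one of several accuracy metrics; which metric is appropriate depends on the ML task (for example, class-imbalanced problems often call for other measures).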




AI bias


"AI (or algorithmic) bias describes systematic and repeatable errors in a computer system that create unfair outcomes, such as favouring one arbitrary group of users over others. Bias can emerge due to many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm. Bias can enter into algorithmic systems as a result of pre-existing cultural, social, or institutional expectations; because of technical limitations of their design; or by being used in unanticipated contexts or by audiences who are not considered in the software's initial design. AI bias is found across platforms, including but not limited to search engine results and social media platforms, and can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity." (See Assessment List, p. 23)




AI designer


"AI designers bridge the gap between AI capabilities and user needs. For example, they can create prototypes showing some novel AI capabilities and how they might be used if the product is deployed, prior to the possible development of the AI product. AI designers also work with development teams to better understand user needs and how to build technology that addresses those needs. Additionally, they can support AI developers by designing platforms to support data collection and annotation, ensuring that data collection respects some properties (such as safety and fairness)." (See Assessment List, pp. 23, 24)




AI developer


"An AI developer is someone who performs some of the tasks included in the AI development. AI development is the process of conceiving, specifying, designing, training, programming, documenting, testing, and bug fixing involved in creating and maintaining AI applications, frameworks, or other AI components. It includes writing and maintaining the AI source code, as well as all that is involved between the conception of the software through to the final manifestation and use of the software." (See Assessment List, p. 24)




AI Ethics Review Board


"An AI Ethics Review Board or AI Ethics Committee should be composed of a diverse group of stakeholders and expertises, including gender, background, age and other factors. The purpose for which the AI Ethics Board is created should be clear to the organisation establishing it and the members who are invited to join it. The members should have an independent role that is not influenced by any economic or other considerations. Bias and conflicts of interest should be avoided. The overall size can vary depending on the scope of the task. Both the authority the AI Ethics Review Board has and the access to information should be proportionate to their ability to fulfill the task to their best possible ability." (See Assessment List, p. 24)




AI reliability


"An AI system is said to be reliable if it behaves as expected, even for novel inputs on which it has not been trained or tested earlier." (See Assessment List, p. 24)




AI system


"Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions. As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems). A separate document prepared by the AI HLEG and elaborating on the definition of AI used for the purpose of this document is titled 'A definition of AI: Main capabilities and scientific disciplines'." (See Assessment List, p. 24)




AI system environment


"This denotes everything in the world which surrounds the AI system, but which is not a part of the system itself. More technically, an environment can be described as a situation in which the system operates. AI systems get information from their environment via sensors that collect data and modify the environment via suitable actuators. Depending on whether the environment is in the physical or virtual world, actuators can be hardware, such as robotic arms, or software, such as programs that make changes in some digital structure." (See Assessment List, pp. 24, 25)




Assistive Technology


"Software or hardware that is added to or incorporated within an ICT system to increase accessibility. Often it is specifically designed to assist people with disabilities in carrying out daily activities. Assistive technology includes wheelchairs, reading machines, devices for grasping, etc. In the area of Web Accessibility, common software-based assistive technologies include screen readers, screen magnifiers, speech synthesizers, and voice input software that operate in conjunction with graphical desktop browsers (among other user agents). Hardware assistive technologies include alternative keyboards and pointing devices." (See Assessment List, p. 25)




Audit


"An audit is an independent examination of some required properties of an entity, be it a company, a product, or a piece of software. Audits provide third-party assurance to various stakeholders that the subject matter is free from material misstatement. The term is most frequently applied to audits of the financial information relating to a legal person, but can be applied to anything else." (See Assessment List, p. 25)




Autonomous AI systems


"An autonomous AI system is an AI system that performs behaviors or tasks with a high degree of autonomy, that is, without external influence." (See Assessment List, p. 25)




Confidence score


"Much of AI involves estimating some quantity, such as the probability that the output is a correct answer to the given input. Confidence scores, or confidence intervals, are a way of quantifying the uncertainty of such an estimate. A low confidence score associated with the output of an AI system means that the system is not too sure that the specific output is correct." (See Assessment List, p. 25)
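Confidence scores are commonly obtained by normalising a model's raw output scores into probabilities, for instance with a softmax; a low maximum probability then signals an uncertain prediction. A minimal sketch with illustrative scores:

```python
import math

def softmax(scores: list) -> list:
    # Convert raw model scores into probabilities that sum to 1.
    exps = [math.exp(s - max(scores)) for s in scores]  # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Raw output scores for three candidate answers:
probs = softmax([2.0, 1.0, 0.1])
confidence = max(probs)  # confidence score of the top prediction
```

If all candidate scores are close together, `confidence` approaches 1/n and the system is "not too sure" which output is correct, in the sense of the definition above.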




Data governance


"Data governance is a term used on both a macro and a micro level. On the macro level, data governance refers to the governing of cross-border data flows by countries, and hence is more precisely called international data governance. On the micro level, data governance is a data management concept concerning the capability that enables an organization to ensure that high data quality exists throughout the complete lifecycle of the data, and data controls are implemented that support business objectives. The key focus areas of data governance include data availability, usability, consistency, integrity, and sharing. It also regards establishing processes to ensure effective data management throughout the enterprise such as accountability for the adverse effects of poor data quality and ensuring that the data which an enterprise has can be used by the entire organization." (See Assessment List, p. 25)




Data poisoning


"Data poisoning occurs when an adversarial actor attacks an AI system, and is able to inject bad data into the AI model’s training set, thus making the AI system learn something that it should not learn. Examples show that in some cases these data poisoning attacks on neural nets can be very effective, causing a significant drop in accuracy even with very little data poisoning. Other kinds of poisoning attacks do not aim to change the behavior of the AI system, but rather they insert a backdoor, which is a data that the model’s designer is not aware of, but that the attacker can leverage to get the AI system to do what they want." (See Assessment List, p. 26)




Data Protection Impact Assessment (DPIA)


"Evaluation of the effects that the processing of personal data might have on individuals to whom the data relates. A DPIA is necessary in all cases in which the technology creates a high risk of violation of the rights and freedoms of individuals. The law requires a DPIA in case of automated processing, including profiling (i), processing of personal data revealing sensitive information like racial or ethnic origin, political opinions, religious or philosophical beliefs (ii), processing of personal data relating to criminal convictions and offences (iii) and systematic monitoring of a publicly accessible area on a large scale (iv)." (See Assessment List, p. 26)




Data Protection Officer (DPO)


"This denotes an expert on data protection law. The function of a DPO is to internally monitor a public or private organisation’s compliance with GDPR. Public or private organisations must appoint DPOs in the following circumstances: (i) data processing activities are carried out by a public authority or body, except for courts acting in their judicial capacity; (ii) the processing of personal data requires regular and systematic monitoring of individuals on a large scale; (iii) the processing of personal data reveals sensitive information like racial or ethnic origin, political opinions, religious or philosophical beliefs, or refers to criminal convictions and offences. A DPO must be independent of the appointing organisation." (See Assessment List, p. 26)




Encryption, Pseudonymisation, Aggregation, and Anonymisation


"Pseudonymisation refers to the idea that it is not possible to attribute personal data to a specific data subject without additional information. By contrast to pseudonymisation, anonymisation consists in preventing any identification of individuals from personal data. The link between an individual and personal data is definitively erased. Encryption is the procedure whereby clear text information is disguised by using especially a hash key. Encrypted results are unintelligible data for persons who do not have the encryption key. Aggregation is a process whereby data is gathered and expressed in a summary form, especially for statistical analysis." (See Assessment List, p. 26)
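These techniques can be sketched on a small record. The hashing-based pseudonymisation below is purely illustrative (a real deployment would use keyed hashing and keep the mapping table under strict access control):

```python
import hashlib

record = {"name": "Alice", "age": 34}

# Pseudonymisation: replace the identifier with a token; re-identification
# requires additional information (the separately held mapping table).
token = hashlib.sha256(b"Alice").hexdigest()[:12]
mapping = {token: "Alice"}          # kept separately, under access control
pseudonymised = {"name": token, "age": 34}

# Anonymisation: the link between the individual and the data is
# definitively erased; no identification is possible from what remains.
anonymised = {"age": 34}

# Aggregation: data gathered and expressed in summary form,
# e.g. for statistical analysis.
ages = [34, 29, 41, 37]
aggregate = {"count": len(ages), "mean_age": sum(ages) / len(ages)}
```

The key contrast in the definition is visible here: the pseudonymised record can still be attributed to Alice via the mapping table, whereas the anonymised record cannot.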




End-user


"An end-user is the person that ultimately uses or is intended to ultimately use the AI system. This could either be a consumer or a professional within a public or private organisation. The end-user stands in contrast to users who support or maintain the product, such as system administrators, database administrators, information technology experts, software professionals and computer technicians." (See Assessment List, p. 26)




Explainability


"Feature of an AI system that is intelligible to non-experts. An AI system is intelligible if its functionality and operations can be explained non technically to a person not skilled in the art." (See Assessment List, p. 26)




Fairness


"Fairness refers to a variety of ideas known as equity, impartiality, egalitarianism, non-discrimination and justice. Fairness embodies an ideal of equal treatment between individuals or between groups of individuals. This is what is generally referred to as ‘substantive’ fairness. But fairness also encompasses a procedural perspective, that is the ability to seek and obtain relief when individual rights and freedoms are violated." (See Assessment List, p. 27)




Fault tolerance


"Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of (or one or more faults within) some of its components. If its operating quality decreases at all, the decrease is proportional to the severity of the failure, as compared to a naively designed system, in which even a small failure can cause total breakdown. Fault tolerance is particularly sought after in high-availability or safety-critical systems. Redundancy or duplication is the provision of additional functional capabilities that would be unnecessary in a fault-free environment. This can consist of backup components that automatically ‘kick in’ if one component fails." (See Assessment List, p. 27)
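The "backup components that automatically 'kick in'" pattern can be illustrated in a few lines; the component functions here are hypothetical stand-ins for redundant subsystems:

```python
def primary(x):
    # Simulated faulty component.
    raise RuntimeError("component failure")

def backup(x):
    # Redundant component providing the same capability.
    return x * 2

def fault_tolerant_call(x, components):
    # Try each redundant component in turn; a failure in one
    # does not break the system as long as a backup succeeds.
    for component in components:
        try:
            return component(x)
        except Exception:
            continue
    raise RuntimeError("all components failed")

print(fault_tolerant_call(3, [primary, backup]))  # 6
```

Here the redundancy (the `backup` function) would be unnecessary in a fault-free environment, but it keeps the call operating properly when `primary` fails.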




Human oversight, human-in-the-loop, human-on-the-loop, human-in-command


"Human oversight helps ensure that an AI system does not undermine human autonomy or cause other adverse effects. Oversight may be achieved through governance mechanisms such as a human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-in-command (HIC) approach. Human-in-the-loop refers to the capability for human intervention in every decision cycle of the system, which in many cases is neither possible nor desirable. Human-on-the-loop refers to the capability for human intervention during the design cycle of the system and monitoring the system’s operation. Human-in-command refers to the capability to oversee the overall activity of the AI system (including its broader economic, societal, legal and ethical impact) and the ability to decide when and how to use the system in any particular situation. This can include the decision not to use an AI system in a particular situation, to establish levels of human discretion during the use of the system, or to ensure the ability to override a decision made by a system. Moreover, it must be ensured that public enforcers have the ability to exercise oversight in line with their mandate. Oversight mechanisms can be required in varying degrees to support other safety and control measures, depending on the AI system’s application area and potential risk. All other things being equal, the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required." (See Assessment List, p. 27)




Interpretability


"Interpretability refers to the concept of comprehensibility, explainability, or understandability. When an element of an AI system is interpretable, this means that it is possible at least for an external observer to understand it and find its meaning." (See Assessment List, p. 27)




Lifecycle


"The lifecycle of an AI system includes several interdependent phases ranging from its design and development (including sub-phases such as requirement analysis, data collection, training, testing, integration), installation, deployment, operation, maintenance, and disposal. Given the complexity of AI (and in general information) systems, several models and methodologies have been defined to manage this complexity, especially during the design and development phases, such as waterfall, spiral, agile software development, rapid prototyping, and incremental." (See Assessment List, p. 27)




Model Evasion


"Evasion is one of the most common attacks on machine learning models (ML) performed during production. It refers to designing an input, which seems normal for a human but is wrongly classified by ML models. A typical example is to change some pixels in a picture before uploading, so that the image recognition system fails to classify the result." (See Assessment List, p. 27)




Model Inversion


"Model inversion refers to a kind of attack to AI models, in which the access to a model is abused to infer information about the training data. So, model inversion turns the usual path from training data into a machine-learned model from a one-way one to a two-way one, permitting the training data to be estimated from the model with varying degrees of accuracy. Such attacks raise serious concerns given that training data usually contain privacy-sensitive information." (See Assessment List, p. 28)




Online continual learning


"The ability to continually learn over time by accommodating new knowledge while retaining previously learned experiences is referred to as continual or lifelong learning. Learning continually is crucial for agents and robots operating in changing environments and required to acquire, fine-tune, adapt, and transfer increasingly complex representations of knowledge. Such a continuous learning task has represented a long-standing challenge for machine learning and neural networks and,36 consequently, for the development of artificial intelligence (AI) systems. The main issue of computational models regarding lifelong learning is that they are prone to catastrophic forgetting or catastrophic interference, i.e., training a model with new information interferes with previously learned knowledge." (See Assessment List, p. 28)




Pen test


"A penetration test, colloquially known as a pen test, pentest or ethical hacking, is an authorized simulated cyberattack on a computer system, performed to evaluate the security of the system. The test is performed to identify both weaknesses (also referred to as vulnerabilities), including the potential for unauthorised parties to gain access to the system's features and data, as well as strengths, enabling a full risk assessment to be completed." (See Assessment List, p. 28)




Redress by design


"Redress by design relates to the idea of establishing, from the design phase, mechanisms to ensure redundancy, alternative systems, alternative procedures, etc. in order to be able to effectively detect, audit, rectify the wrong decisions taken by a perfectly functioning system and, if possible, improve the system." (See Assessment List, p. 28))




Self-learning AI system


"Self-learning (or self-supervised learning) AI systems recognize patterns in the training data in an autonomous way, without the need for supervision." (See Assessment List, p. 29)




Standards


"Standards are norms designed by industry and/or Governments that set product or services’ specifications. They are a key part of our society as they ensure quality and safety in both products and services in international trade. Businesses can be seen to benefit from standards as they can help cut costs by improved systems and procedures put in place. Standards are internationally agreed by experts and they usually represent what the experts think is the best way of doing something. It could be about making a product, managing a process, delivering a service or supplying materials – standards cover a huge range of activities. Standards are released by international organizations, such as ISO (International Organisation for Standardisation), IEEE (The Institute of Electrical and Electronics Engineers) Standard Association, and NIST (National Institute of Standards and Technology)." (See Assessment List, p. 29)




Subjects


"A subject is a person or a group of persons affected by the AI system (such as the recipient of benefits where the decision to grant or reject benefits is underpinned by an AI- system, or the general public for facial recognition)." (See Assessment List, p. 29)




Universal Design


"Terms such as “Design for All”, “Universal Design”, “accessible design”, “barrier‐free design”, “inclusive design” and “transgenerational design” are often used interchangeably with the same meaning. These concepts have been developed by different stakeholders working to deliver high levels of accessibility. A parallel development of human- centred design emerged within ergonomics focusing on usability. These related concepts are expressed in the human rights perspective of the Design for All approach. The Design for All approach focuses on user involvement and experiences during the design and development process to achieve accessibility and usability. It should be applied from the earliest possible time, and throughout all stages in the life of products and services which are intended for mainstream use. A Design for All approach also focuses on user requirements and interoperability between products and services across the end-to-end chain of use to reach inclusive and non-stigmatizing solutions." (See Assessment List, pp. 29, 30)




Use case


"A use case is a specific situation in which a product or service could potentially be used. For example, self-driving cars or care robots are use cases for AI." (See Assessment List, p. 30)




User


"A user is a person that uses, supports or maintains the product, such as system administrators, database administrators, information technology experts, software professionals and computer technicians." (See Assessment List, p. 30)




Workflow of the model


"The workflow of an AI model shows the phases needed to build the model and their interdependencies. Typical phases are: Data collection and preparation, Model development, Model training, Model accuracy evaluation, Hyperparameters’ tuning, Model usage, Model maintenance, Model versioning. These stages are usually iterative: one may need to reevaluate and go back to a previous step at any point in the process." (See Assessment List, p. 30)





