Quality Assurance in AI

Expert-defined terms from the Professional Certificate in AI in Medical Imaging course at London School of International Business.

Artificial Intelligence (AI) #

The simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions), and self-correction.

Machine Learning (ML) #

A subset of AI in which algorithms parse data, learn from it, and then make a determination or prediction about something.

Deep Learning (DL) #

A subset of ML based on multi-layer (deep) neural networks. DL can learn from large, complex datasets and is widely used in image and speech recognition.

Quality Assurance (QA) in AI #

The process of ensuring that AI systems meet specified requirements and are fit for their intended use. QA in AI involves verifying that the system performs its intended functions accurately and consistently, and that it does not pose unacceptable risks to users or society.

Data Quality #

The degree to which data is accurate, complete, consistent, and timely. In the context of AI, data quality is critical for ensuring that the system is able to learn and make predictions based on reliable information.
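
As a minimal sketch of one dimension of data quality (completeness), the check below computes the fraction of required field values that are present across a set of records. The field names (`patient_id`, `age`, `modality`) and records are illustrative, not from any real dataset.

```python
def completeness(records, required_fields):
    """Return the fraction of required field values that are present (not None)."""
    total = len(records) * len(required_fields)
    present = sum(
        1
        for record in records
        for field in required_fields
        if record.get(field) is not None
    )
    return present / total if total else 1.0

# Two hypothetical imaging records; one is missing the patient's age.
records = [
    {"patient_id": "P1", "age": 63, "modality": "CT"},
    {"patient_id": "P2", "age": None, "modality": "MRI"},
]
score = completeness(records, ["patient_id", "age", "modality"])
print(score)  # 5 of 6 required values present -> 0.8333...
```

A production data-quality pipeline would also check accuracy, consistency, and timeliness; this shows only the completeness dimension.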

Ground Truth #

The true or accepted value or result for a given dataset or measurement. In AI, ground truth is used as a reference for evaluating the performance of a system.
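
In medical image segmentation, a common way to compare a model's output against ground truth is the Dice coefficient. The sketch below uses flat 0/1 lists as stand-in masks; the data is illustrative.

```python
def dice(pred, truth):
    """Dice coefficient: 2 * |intersection| / (|pred| + |truth|)."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2 * intersection / size if size else 1.0

# Hypothetical ground-truth and predicted segmentation masks (1 = lesion pixel).
ground_truth = [0, 1, 1, 1, 0, 0]
prediction   = [0, 1, 1, 0, 0, 1]
print(dice(prediction, ground_truth))  # 2*2 / (3+3) = 0.6666...
```

A Dice score of 1.0 means the prediction matches the ground truth exactly; 0.0 means no overlap.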

Performance Metrics #

Quantitative measures used to evaluate the performance of an AI system. Examples include accuracy, precision, recall, and F1 score.
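
The four metrics named above can all be derived from the counts of true/false positives and negatives. A minimal pure-Python sketch, with illustrative binary labels (1 = disease present):

```python
def metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
m = metrics(y_true, y_pred)
print(m)  # accuracy 0.75, precision 0.75, recall 0.75, f1 0.75
```

Recall (sensitivity) is often weighted heavily in screening applications, where missing a true case is costlier than a false alarm.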

Overfitting #

A situation in which an AI model learns the training data too well, resulting in poor performance on new, unseen data. Overfitting can be addressed through techniques such as regularization and cross-validation.
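
One of the techniques mentioned above, k-fold cross-validation, repeatedly trains on k−1 folds and evaluates on the held-out fold, exposing overfitting as a gap between training and held-out performance. A sketch of the index-splitting step:

```python
def k_fold_indices(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k folds."""
    indices = list(range(n_samples))
    fold_size, remainder = divmod(n_samples, k)
    start = 0
    for fold in range(k):
        # The first `remainder` folds absorb one extra sample each.
        stop = start + fold_size + (1 if fold < remainder else 0)
        test = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, test
        start = stop

folds = list(k_fold_indices(10, 3))
# Test-fold sizes are 4, 3, 3, and every sample is held out exactly once.
```

In practice, medical imaging data should be split at the patient level so that images from the same patient never appear in both a training and a test fold.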

Underfitting #

A situation in which an AI model does not learn the training data well enough, resulting in poor performance on both the training data and new, unseen data. Underfitting can be addressed through techniques such as increasing the complexity of the model or obtaining more training data.

Bias #

A systematic deviation from the truth in the output of an AI system. Bias can be caused by a variety of factors, including the data used to train the system, the algorithms used to process the data, and the way in which the system is used.

Fairness #

The principle that an AI system should not discriminate against certain groups or individuals based on characteristics such as race, gender, or age.
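
One simple fairness audit is to compare a model's accuracy across subgroups and report the largest gap. The group labels and predictions below are illustrative; real audits use richer metrics (e.g. equalized odds), but the subgroup comparison is the core idea.

```python
def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} for each distinct group label."""
    scores = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, groups) if gg == g]
        scores[g] = sum(t == p for t, p in pairs) / len(pairs)
    return scores

# Hypothetical labels, predictions, and subgroup membership.
y_true = [1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B"]
scores = accuracy_by_group(y_true, y_pred, groups)
gap = max(scores.values()) - min(scores.values())
print(scores, gap)  # group A: 0.666..., group B: 1.0, gap: 0.333...
```

A large gap signals that the system may perform systematically worse for one subgroup and warrants investigation of the training data and model.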

Explainability #

The degree to which the workings of an AI system can be understood and interpreted by humans. Explainability is important for building trust in AI systems and for ensuring that they can be used responsibly.

Transparency #

The degree to which the data, algorithms, and other components of an AI system are open and accessible for inspection and analysis. Transparency is important for building trust in AI systems and for ensuring that they can be used responsibly.

Robustness #

The ability of an AI system to perform consistently and reliably in the face of unexpected inputs or conditions. Robustness is important for ensuring that AI systems can be used safely and effectively in real-world applications.

Security #

The measures taken to protect an AI system from unauthorized access, use, disclosure, disruption, modification, or destruction. Security is important for ensuring the confidentiality, integrity, and availability of AI systems.

Privacy #

The protection of personal information and other sensitive data in AI systems. Privacy is important for ensuring that AI systems are used in a way that respects the rights and expectations of individuals.

Ethics in AI #

The principles and values that guide the development, deployment, and use of AI systems. Ethical considerations in AI include fairness, transparency, accountability, privacy, and non-maleficence (doing no harm).

Human-AI Collaboration #

The interaction between humans and AI systems in which they work together to achieve a common goal. Human-AI collaboration can take many forms, including humans providing guidance or oversight to AI systems, and AI systems augmenting or enhancing human capabilities.

Computer-Aided Detection (CAD) #

The use of AI and other digital tools to assist radiologists and other medical professionals in the detection and diagnosis of diseases and conditions. CAD is widely used in medical imaging, including mammography, CT, and MRI.

Computer-Aided Diagnosis (CADx) #

The use of AI and other digital tools to assist radiologists and other medical professionals in the interpretation and diagnosis of diseases and conditions. CADx is a more advanced form of CAD that goes beyond simple detection to provide more detailed information and insights.

Medical Imaging Informatics #

The application of informatics principles and technologies to medical imaging. Medical imaging informatics includes the acquisition, management, analysis, and interpretation of medical images, as well as the integration of these images with other clinical data.

Radiomics #

The extraction and analysis of large numbers of quantitative features from medical images. Radiomics can provide insights into the underlying biology and behavior of diseases and conditions, and can be used to support personalized medicine and treatment planning.
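
As a toy illustration of the "quantitative features" radiomics extracts, the sketch below computes first-order intensity statistics (mean, variance, min, max) over a region of interest represented as a nested list of pixel values. Real radiomics pipelines extract hundreds of standardized features (shape, texture, wavelet); the values here are illustrative.

```python
def first_order_features(roi):
    """Compute simple first-order intensity features for a 2D region of interest."""
    pixels = [v for row in roi for v in row]
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((v - mean) ** 2 for v in pixels) / n
    return {"mean": mean, "variance": variance,
            "min": min(pixels), "max": max(pixels)}

# Hypothetical 2x3 patch of pixel intensities from a segmented lesion.
roi = [
    [10, 12, 11],
    [13, 12, 14],
]
features = first_order_features(roi)
print(features)  # mean 12, variance 1.666..., min 10, max 14
```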

Natural Language Processing (NLP) #

A field of AI that focuses on the interaction between computers and human language. NLP includes the ability to understand, generate, and translate human language, as well as the ability to extract meaning and insights from text data.

Clinical Decision Support (CDS) #

The use of AI and other digital tools to assist healthcare providers in making clinical decisions. CDS can provide evidence-based recommendations and alerts, and can help to improve the quality, safety, and efficiency of care.

Evidence-Based Medicine (EBM) #

The use of the best available evidence to inform clinical decision making. EBM is based on the principles of systematic review, critical appraisal, and clinical judgment.

Personalized Medicine #

The tailoring of medical treatments and interventions to the individual characteristics and needs of each patient. Personalized medicine is enabled by advances in genomics, proteomics, and other -omics technologies, as well as by the use of AI and other digital tools.

Precision Medicine #

A form of personalized medicine that focuses on the prevention, diagnosis, and treatment of diseases and conditions based on the individual genetic makeup and other molecular or cellular characteristics of each patient.

Health Information Technology (HIT) #

The use of computers and other digital tools to manage and communicate health information. HIT includes electronic health records (EHRs), computerized physician order entry (CPOE), and other systems for storing, sharing, and analyzing health data.

Clinical Workflow #

The sequence of tasks and activities that are performed by healthcare providers to deliver care to patients. Clinical workflow includes the processes of ordering tests and treatments, documenting encounters, and communicating with patients and other healthcare providers.

Usability #

The degree to which a product or system is easy to use, understand, and navigate. Usability is important for ensuring that AI systems are user-friendly and accessible to a wide range of users, including those with limited technical expertise.

User Experience (UX) #

The overall experience of using a product or system, including the user's perceptions, emotions, and behaviors. UX encompasses usability, but also includes other factors such as aesthetics, functionality, and value.

User Interface (UI) #

The part of a product or system that users interact with directly. The UI includes the layout, design, and functionality of the product or system, and is critical for ensuring a positive user experience.

Accessibility #

The degree to which a product or system is usable by people with a wide range of abilities and disabilities. Accessibility is important for ensuring that AI systems are inclusive and equitable, and for avoiding discrimination and exclusion.

Inclusivity #

The principle that AI systems should be designed and developed to serve the needs and interests of a diverse range of users, including those from different cultures, languages, and backgrounds. Inclusivity is important for ensuring that AI systems are relevant and useful to a wide range of users, and for avoiding bias and discrimination.

Cultural Competence #

The ability of healthcare providers to understand and respond to the social, cultural, and linguistic needs of patients.
