EXIN AI Foundation - Quick Reference Card

Exam Traps Cheat Sheet
Page 1 / 4 -- Key Definitions
1. AI & Intelligence
Human Intelligence
"The mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one's environment."
Artificial Intelligence
"Intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals."
Scientific Method
"An empirical method for acquiring knowledge that has characterized the development of science."
Singularity (Kurzweil)
"A future period characterized by rapid technological growth that will irreversibly transform human life."
2. Machine Learning & Neural Networks
Machine Learning (Tom Mitchell)
"The study of computer algorithms that allow computer programs to automatically improve through experience."
Neural Network
"A machine learning program, or model, that makes decisions in a manner similar to the human brain, by using processes that mimic the way biological neurons work together to identify phenomena, weigh options and arrive at conclusions."
Deep Learning
"Deep learning is a multi-layered neural network."
3. Generative AI & Data
Large Language Models (LLMs) (IBM)
"Deep learning algorithms that can recognize, summarize, translate, predict, and generate content using very large datasets."
Generative AI (IBM)
"Refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on."
Big Data (Dialogic.com)
"Extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations."
Data Visualization (IBM)
"The representation of data through use of common graphics, such as charts, plots, infographics and even animations."
4. Robotics & Ethics
Robot (Robotics)
"A machine that can carry out a complex series of tasks automatically, either with or without intelligence."
Risk
"A person or thing regarded as a threat or likely source of danger."
Ethics
"Moral principles that govern a person's behavior or the conducting of an activity."
AI Governance
A set of practices (policies, standards, AI steering committees) to keep AI systems safe, ethical, and under control.
5. Data Types
Structured Data
Organized sequentially or in tabular format (e.g., spreadsheets, SQL databases).
Semi-structured Data
Has some organizational properties but not fully tabular (e.g., JSON, XML, emails).
Unstructured Data
No pre-defined order or structure (e.g., images, video, social media posts).
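The three data types above can be made concrete with a minimal Python sketch (illustrative only, not exam content; the records are invented):

```python
import csv
import io
import json

# Structured: tabular rows with a fixed schema (spreadsheet / SQL table).
structured = list(csv.DictReader(io.StringIO("name,age\nAda,36\nAlan,41\n")))

# Semi-structured: self-describing but not tabular (JSON, XML); fields may vary per record.
semi_structured = json.loads('{"name": "Ada", "tags": ["math", "computing"]}')

# Unstructured: no pre-defined schema; raw text, images, audio.
unstructured = "Ada Lovelace wrote the first published algorithm."

print(structured[0]["name"])       # fields addressable by column name
print(semi_structured["tags"][0])  # nested, flexible structure
print(len(unstructured.split()))   # only ad-hoc processing is possible
```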
For exam preparation only. Based on Preparation Guide Edition 202508.

Page 2 / 4 -- Frameworks & Standards
Floridi & Cowls' 5 Ethical Principles
Principle -- Meaning
Beneficence -- Promote well-being, do good
Non-maleficence -- Do no harm, prevent damage
Autonomy -- Preserve human decision-making power
Justice -- Be fair, equitable, non-discriminatory
Explicability -- Be transparent, explainable
UK AI Principles (5)
1. Safety, security and robustness
2. Transparency and explainability
3. Fairness
4. Accountability and governance
5. Contestability and redress
Asilomar AI Principles (2017)
Category -- Details
Total -- 23 principles for responsible AI development
Area 1 -- Research issues
Area 2 -- Ethics and values
Area 3 -- Longer-term issues
Purpose -- Ensure AI is developed safely and beneficially
ISO Standards EXAM TRAP
Standard -- Scope
ISO 42001 -- AI Management System (AIMS): THE one for AI
ISO 22989 -- AI concepts and terminology
ISO 31000 -- Risk management
ISO 9001 -- Quality management (NOT AI-specific!)
Key Dates / Timeline
Year -- Event
1956 -- Dartmouth Conference: AI field begins
1974-1980 -- AI Winter 1: reduced funding & interest
1987-1993 -- AI Winter 2: expert systems decline
2017 -- Asilomar AI Principles published
2022 -- LLMs widespread (ChatGPT etc.)
2024 -- EU AI Act enacted
Regulation & Standards Bodies
Regulation -- Scope
GDPR -- EU data protection regulation
DPA 2018 -- UK Data Protection Act
WCAG -- Web Content Accessibility Guidelines
NIST -- US National Institute of Standards and Technology
EU AI Act -- EU regulation for AI systems (2024)
Ethical Challenges vs. Strategies
Challenges -- Self-interest; Self-review; Conflict of interest; Intimidation; Advocacy
Strategies -- Dealing with bias; Openness; Transparency; Trustworthiness; Explainability
Risk Management Techniques TRAP
Valid techniques -- Risk analysis; SWOT analysis; PESTLE; Cynefin
NOT a technique -- Crisis

Page 3 / 4 -- Lists & Categories
5 Data Quality Characteristics MEMORIZE
Characteristic -- Question It Answers
Accuracy -- Is it correct?
Completeness -- Is it all there?
Uniqueness -- Is it free from duplication?
Consistency -- Is it free from conflict?
Timeliness -- Is it current and available?
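Most of these characteristics can be checked mechanically. A minimal Python sketch on an invented, deliberately flawed dataset (accuracy and timeliness need external reference data and timestamps, so only three checks are shown):

```python
# Toy records: each should have a unique id and a numeric (or absent) age.
records = [
    {"id": 1, "age": 36},
    {"id": 2, "age": None},   # incomplete: missing value
    {"id": 2, "age": 41},     # duplicate id
    {"id": 3, "age": "n/a"},  # inconsistent: text in a number field
]

# Completeness: is it all there?
complete = all(r["age"] is not None for r in records)
# Uniqueness: is it free from duplication?
unique = len({r["id"] for r in records}) == len(records)
# Consistency: is it free from conflict (e.g. text in a number field)?
consistent = all(isinstance(r["age"], (int, type(None))) for r in records)

print(complete, unique, consistent)  # all False for this flawed dataset
```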
Business Case Structure
  • Introduction
  • Management / Executive Summary
  • Description of Current State
  • Options Considered (option described, cost/benefit analysis, impact assessment, risk assessment)
  • Recommendations
  • Appendices / Supporting Information
Power / Interest Grid EXAM Q29
High Power / High Interest -- Constant Active Management
High Power / Low Interest -- Keep Satisfied
Low Power / High Interest -- Keep Informed
Low Power / Low Interest -- Monitor
4 Risk Management Strategies
Strategy -- Approach
Accept -- Acknowledge and tolerate the risk
Mitigate -- Reduce impact (incl. sharing, contingency)
Avoid -- Eliminate the risk entirely
Transfer -- Pass risk to a third party
3 Governance Areas EXAM Q33-34
Area -- Purpose
Compliance -- Satisfy regulations
Risk Management -- Proactively detect & mitigate risk
Lifecycle Governance -- Manage, monitor & govern AI models
Waterfall Stages (in order) TRAP: Q30
Requirements→ Design→ Implementation→ Testing→ Deployment→ Maintenance
ML Process Stages
Analyze problem→ Data selection→ Pre-processing→ Visualization→ Select model→ Train→ Test→ Repeat→ Review
Learning Types
Type -- Data Used
Supervised -- Labeled data (we know the expected output)
Unsupervised -- Unlabeled data (grouping/clustering)
Semi-supervised -- Small labeled + large unlabeled datasets
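The supervised/unsupervised distinction can be sketched in a few lines of Python (an illustrative toy, not exam content; the data values are invented):

```python
# Supervised: labeled data means the expected output is known in advance.
labeled = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]

def predict(x):
    # 1-nearest-neighbour: return the label of the closest training example.
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

print(predict(1.5))  # low
print(predict(8.5))  # high

# An unsupervised learner would receive only [1.0, 2.0, 8.0, 9.0] with no
# labels, and would have to discover the two groups itself (clustering).
```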
ML Concepts
  • Prediction
  • Object Recognition
  • Classification (incl. random decision forests)
  • Clustering
  • Recommendations (e.g. Netflix, Spotify)
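Clustering, from the list above, can be illustrated with a minimal 1-D k-means sketch (illustrative only; the point values and naive initialisation are invented for this example):

```python
# Minimal 1-D k-means with k=2: an unsupervised clustering sketch.
points = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]
centroids = [points[0], points[-1]]  # naive initialisation at the extremes

for _ in range(10):  # a few refinement iterations (converges quickly here)
    clusters = [[], []]
    for p in points:
        # Assign each point to its nearest centroid.
        nearest = min(range(2), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    # Move each centroid to the mean of its assigned points.
    centroids = [sum(c) / len(c) for c in clusters]

print(clusters[0], clusters[1])  # two groups found without any labels
```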
Types of Robots
  • Industrial -- factory automation
  • Personal -- household, companions
  • Autonomous -- self-navigating
  • Nanobots -- microscopic scale
  • Humanoids -- human-like form
Data Visualization Types
  • Written
  • Verbal
  • Pictorial
  • Sounds
  • Dashboards / Infographics
  • Virtual & Augmented Reality (VR/AR)
Common AI Examples
  • Human compatible
  • Wearable
  • Edge computing
  • Internet of Things (IoT)
  • Personal care
  • Self-driving vehicles
  • Generative AI tools
AI Sectors
  • Marketing & Sales
  • Healthcare
  • Finance
  • Transportation
  • Education
  • Manufacturing
  • Entertainment
  • IT / Research & Development
8 AI Career Roles
  • Machine Learning Engineer
  • Data Scientist
  • AI Research Scientist
  • Computer Vision Engineer
  • NLP Engineer
  • Robotics Engineer
  • AI Ethics Specialist
  • AI Anthropologist
Sustainability Measures
  • Green IT initiatives
  • Data center energy and efficiency
  • Sustainable supply chain
  • Choice of algorithm
  • Low-code / no-code programming
  • Monitoring & reporting environmental impact

Page 4 / 4 -- Exam Traps
Common Wrong Answers -- Learn These! TRAPS
!
"Domain AI" or "Technical AI" -- not real terms
Correct: Narrow AI (also called Weak AI)
Q4: AI focused on specific tasks within a defined scope = Narrow AI
!
"Crisis" is a risk management technique
Correct: Crisis is NOT a technique. SWOT, PESTLE, Cynefin ARE
Q12: "Which is NOT a risk management technique?" -- Answer: Crisis
!
ISO 9001 is for AI management
Correct: ISO 42001 is for AIMS. ISO 9001 = quality management only
Q8: AIMS standard = ISO 42001 (not 9001, 22989, or 31000)
!
"New skills demand" is an environmental impact
Correct: Water usage demand is the environmental impact
Q5: New skills demand is a social impact. Water/energy = environmental
!
SMEs minimize bias
Correct: SMEs minimize misinformation. Bias needs systemic approaches
Q21: Subject matter experts check reliability of sources = misinformation
!
"Completeness" checks for data conflicts
Correct: Consistency checks for conflicts (e.g. text in a number field)
Q20: Completeness = "is it all there?" / Consistency = "free from conflict?"
!
Risk owner = project sponsor or impacted team
Correct: The individual ultimately accountable for managing the risk
Q32: Not the manager, not the team, not the sponsor -- the accountable individual
!
Testing or Deployment comes after Design in Waterfall
Correct: Implementation comes immediately after Design
Q30: Requirements > Design > Implementation > Testing > Deployment > Maintenance
!
"Human plus machine" = competition or replacement
Correct: The combination of humans and machines to augment capabilities
Q38: Not competition, not replacement, not study -- augmentation
!
Recommendations or Impact Assessment contain the business case brief
Correct: The Management Summary contains the overarching brief
Q28: Management/Executive Summary = the overview brief of the business case
!
Competitor analysis or cost-benefit = governance activities
Correct: Governance = Compliance, Risk Management, Lifecycle Governance
Q34: Risk assessment IS a governance activity. Competitor analysis is NOT
!
Triple Bottom Line = a type of income matrix
Correct: Triple Bottom Line = Profit, People, Planet (3Ps)
Q31: Framework for assessing 3Ps impact of AI change, not a financial tool
More Traps from the Sample Exam
Q# -- Correct Answer -- Why Others Are Wrong
Q1 -- B) Intelligence -- Not creativity, technology, or thinking
Q2 -- D) Empirical method for acquiring knowledge -- Not procedural, not teaching, not communicating
Q6 -- B) Green, energy-efficient AI models -- Not "high-code" (trap: low-code helps!), not low-cost
Q7 -- B) Guidelines governing creation & use -- Ethics in AI is not about teaching AI moral decisions
Q14 -- C) Search algorithms -- Washing machines and construction workers are not AI
Q16 -- C) ML from structured & unstructured data -- Neural nets are not "biological computing" or "scripting"
Q17 -- C) Clustering -- Not "bunching," "clumping," or "combining" (made-up terms)
Q22 -- C) User purchasing patterns & trends -- Big data use case is insight into customer behaviour
Q23 -- A) Dashboard -- Dashboards show multiple KPI streams, not infographics
Q25 -- C) Prompt engineering -- Not "data selection," "NLP," or "training"
Q37 -- A) Chatbots -- Generative AI tool in sales/marketing; not Snapchat
Q39 -- D) Reducing energy consumption -- Positive AI environmental impact: optimization
Quick-Fire Exam Reminders
  • 40 questions, 60 minutes, 65% pass mark (26/40)
  • Bloom levels 1 (remember) and 2 (understand) only
  • Narrow AI = Weak AI = task-specific; General AI = Strong AI = human-level
  • AI is a subset of computer science; ML is a subset of AI; DL is a subset of ML
  • RPA = Robotic Process Automation (automates repetitive tasks)
  • Data types: Structured (tabular) > Semi-structured (some structure) > Unstructured (none)
  • Prompt engineering = altering instructions to get specific outputs
  • Process automation opportunity = repetitive, rule-based tasks (e.g. rotas)
  • Regulation purpose: to manage associated risks (not limit innovation)
  • Professional standards: ethical, accountable, competent, inclusive