EXIN Artificial Intelligence Foundation
Comprehensive Study Guide -- Edition 202508
Study Strategy for IT Professionals
K1 (Remember) questions test recall: memorize verbatim definitions, names, dates, and lists. K2 (Understand) questions test understanding: you must interpret scenarios, compare concepts, and choose the best description.
Learn the exact wording of definitions from the preparation guide -- exam answers often use that precise language.
Topic 1: An Introduction to AI and Historical Development
15% of Exam
1.1 Identify the key definitions of key AI terms
K1 / K2
- Human Intelligence -- "The mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one's environment."
- Artificial Intelligence (AI) -- "Intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals."
- Machine Learning (ML) -- "The study of computer algorithms that allow computer programs to automatically improve through experience."
- Scientific Method -- "An empirical method for acquiring knowledge that has characterized the development of science."
Key elements of the scientific method: Observation, Hypothesis, Experimentation, Analysis, Replication, Peer Review.
All four definitions above appear verbatim in the preparation guide. The exam will test whether you can select the correct definition when given multiple options. Know which definition belongs to which term.
Do not confuse the ML definition (which says "computer algorithms" and "improve through experience") with the AI definition (which says "intelligence demonstrated by machines"). The ML definition is attributed to Tom Mitchell.
1.2 Describe key milestones in the development of AI
K2
The exam tests five key milestones only:
- Asilomar Principles (2017) -- Coordinated by the Future of Life Institute (FLI) at the Beneficial AI 2017 conference. A set of 23 principles covering three categories: Research issues, Ethics and values, Longer-term issues.
- Dartmouth Conference (1956) -- Birthplace of AI. The term "Artificial Intelligence" was coined here. Organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon.
- AI Winters -- Periods of reduced funding and interest:
- First AI Winter (1974-1980): Overpromises, underperformance, funding cuts.
- Second AI Winter (1987-1993): Commercial failures of expert systems, overhype, computational limitations.
- Big Data and IoT -- Enormous datasets from social media, sensors, e-commerce, and IoT devices fueled modern AI advances.
- Large Language Models (LLMs) -- Widespread public use from 2022 onward.
Dartmouth = 1956, four organizers (McCarthy, Minsky, Rochester, Shannon), coined "AI". Asilomar = 2017, FLI, 23 principles. First winter = 1974-1980. Second winter = 1987-1993.
McCarthy coined "AI" (not Turing, who proposed the Turing Test in 1950). Asilomar is about responsible AI governance, not about the birth of AI as a field.
1.3 Describe different types of AI
K2
- Narrow AI (ANI) -- Also known as weak AI. Task-specific, operates within well-defined domains. All current AI systems are narrow AI. Examples: image recognition, speech recognition, language translation, virtual assistants (Siri, Alexa), spam filtering, medical diagnostics, generative AI.
- General AI (AGI) -- Also known as strong AI. Aims to replicate human intelligence. The hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task a human can. AGI does not currently exist.
Narrow/Weak/ANI = task-specific, exists today. General/Strong/AGI = human-level, hypothetical. Know at least four examples of narrow AI.
If a scenario describes a system that does multiple unrelated tasks at human level, it is AGI. If it does one well-defined task, it is narrow AI. All current AI, including ChatGPT, is classified as narrow AI.
1.4 Explain the impact of AI on society
K2
Floridi & Cowls' Principles (5 principles for ethical AI):
- Beneficence -- AI should promote well-being and do good
- Non-maleficence -- AI should avoid causing harm ("do no harm")
- Autonomy -- AI should respect human autonomy and decision-making
- Justice -- AI should promote fairness and avoid reinforcing inequality
- Explicability -- AI systems should be transparent and understandable
UK AI Principles (5 principles):
- Safety, security and robustness
- Transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
- Social impact: Job displacement vs. creation, reskilling needs, privacy concerns
- Economic impact: Productivity gains, innovation, economic disparities
- Environmental impact: Energy consumption, water usage, carbon emissions, e-waste
- UN 17 Sustainable Development Goals (SDGs) -- AI can help or hinder these
- EU AI Act (2024) -- Regulatory framework for AI in the EU
Floridi & Cowls = Beneficence, Non-maleficence, Autonomy, Justice, Explicability.
UK AI = Safety/security/robustness, Transparency/explainability, Fairness, Accountability/governance, Contestability/redress.
Do not confuse the two sets of principles. Floridi & Cowls uses medical ethics language (beneficence, non-maleficence). UK principles use governance language (accountability, contestability). Know both by heart.
1.5 Describe sustainability measures to help reduce the environmental impact of AI
K2
- Green IT initiatives -- Energy-efficient hardware, renewable energy, responsible disposal
- Data center energy and efficiency -- Renewable energy, improved cooling, optimized utilization
- Sustainable supply chain -- Responsible sourcing, reduced manufacturing emissions, minimized e-waste
- Choice of algorithm -- More efficient algorithms reduce energy use; not every problem needs deep learning
- Low-code/no-code programming -- Reduces development overhead and resource consumption
- Monitoring and reporting environmental impact -- Tracking energy, carbon, and water use throughout the AI lifecycle
Six sustainability measures: Green IT, data center efficiency, sustainable supply chain, algorithm choice, low-code/no-code, monitoring and reporting.
AI's environmental impact is both direct (training models) and indirect (increased demand for digital services). The exam may ask you to identify specific measures from a list.
Topic 2: Ethical and Legal Considerations
15% of Exam
2.1 Describe ethical concerns, including bias and privacy, in AI
K2
- Ethics -- "Moral principles that govern a person's behaviour or the conducting of an activity." (Oxford English Dictionary)
- Ethics vs. Law: Ethics = moral guidelines (can vary); Law = formal rules enforced by government. Overlap exists, but they are distinct.
Ethical concerns in AI:
- Bias, unfairness, and discrimination -- AI can perpetuate or amplify biases from training data
- Data privacy and protection -- AI processes vast personal data, raising privacy concerns
- Impact on employment and the economy -- Automation replacing jobs, widening inequality
- Autonomous weapons -- Ethical issues with lethal autonomous systems
- Autonomous vehicles and liability -- Who is responsible when self-driving cars cause harm?
Ethics = moral principles governing behavior. Five ethical concerns: bias/unfairness/discrimination, data privacy, employment impact, autonomous weapons, autonomous vehicles liability.
The exam distinguishes ethics (moral principles) from law (legal rules). Choose the answer about moral principles, not legal enforcement.
2.2 Describe the importance of guiding principles in ethical AI development
K2
The UK AI Principles guide ethical AI:
- Safety, security and robustness
- Transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
AI Governance -- A set of practices to keep AI systems under control so they remain safe and ethical. Includes organizational policies, standards, and AI steering committees.
AI governance = set of practices to keep AI safe and ethical, including policies, standards, and steering committees.
2.3 Explain strategies for addressing ethical challenges in AI projects
K2
Ethical challenges (threats to ethical behavior):
- Self-interest -- Placing personal gain above ethical obligations
- Self-review -- Reviewing your own work without independent scrutiny
- Conflict of interest -- Competing loyalties that compromise objectivity
- Intimidation -- Being pressured to act unethically
- Advocacy -- Promoting a position to the point of compromising objectivity
Strategies for addressing challenges:
- Dealing with bias -- Diverse data, diverse teams, fairness metrics
- Openness -- Transparency about AI use, data sources, and limitations
- Transparency -- Making AI processes visible and understandable
- Trustworthiness -- Building reliable, dependable systems
- Explainability -- AI decisions must be explainable in human terms
An ethical risk framework integrates ethical considerations into every stage of AI development.
Five challenges: self-interest, self-review, conflict of interest, intimidation, advocacy. Five strategies: dealing with bias, openness, transparency, trustworthiness, explainability.
The exam may present a scenario and ask which ethical challenge it represents. "A developer tests their own AI model without external review" = self-review.
2.4 Explain the role of regulation in AI
K2
- Need for regulation -- Ensures legal accountability and effective management of AI
- AI regulation landscape -- Standards like WCAG (Web Content Accessibility Guidelines)
- Data Protection Act 2018 (DPA 2018) -- UK data protection legislation
- UK GDPR -- Governs personal data collection, storage, and processing in the UK
- ISO -- International standards for AI systems
- NIST -- US frameworks for AI risk management
- Consequences of unregulated AI -- Widespread harm, bias, loss of trust, privacy violations
- Professional standards -- Must be ethical, accountable, competent, inclusive
Key regulations: DPA 2018, UK GDPR, ISO, NIST, WCAG. Professional standards = ethical, accountable, competent, inclusive.
WCAG is about accessibility (web content), not data protection. Do not confuse it with GDPR.
2.5 Explain the process of risk management in AI
K2
- Risk -- "A person or thing regarded as a threat or likely source of danger."
- Risk management -- "A process or series of processes which allow risk to be understood and minimized proactively."
Risk management techniques:
- Risk analysis -- Identifying and assessing potential risks
- SWOT analysis -- Strengths, Weaknesses, Opportunities, Threats
- PESTLE analysis -- Political, Economic, Social, Technological, Legal, Environmental
- Cynefin framework -- Categorizes problems: Simple/Clear, Complicated, Complex, Chaotic, Disorder
Risk mitigation strategies:
- Ownership and accountability -- Assigning clear risk owners
- Stakeholder involvement -- Engaging all affected parties
- Subject matter experts -- Consulting domain experts
Techniques: Risk analysis, SWOT, PESTLE, Cynefin. Mitigation strategies: Ownership/accountability, stakeholder involvement, SMEs.
PESTLE and SWOT are analysis frameworks (identifying risks), not mitigation strategies (responding to risks). The exam tests this distinction.
Topic 3: Enablers of AI
15% of Exam
3.1 List common examples of AI
K1
- Human compatible -- AI working alongside humans (cobots, collaborative tools)
- Wearable -- Fitness trackers, smartwatches with health monitoring
- Edge -- AI processing at the device level (faster, more private)
- Internet of Things (IoT) -- Smart home devices, sensors, connected equipment
- Personal care -- AI health apps, personalized medication, mental health chatbots
- Self-driving vehicles -- Autonomous cars using sensors, cameras, and AI
- Generative AI tools -- ChatGPT, image generators, code assistants
Seven categories: human compatible, wearable, edge, IoT, personal care, self-driving vehicles, generative AI tools.
3.2 Describe the role of robotics in AI
K2
- Robotics -- "A machine that can carry out a complex series of tasks automatically, either with or without intelligence."
- Intelligent vs. non-intelligent: Intelligent robots use AI (sensors, learning); non-intelligent follow fixed instructions.
Types of robots:
- Industrial -- Manufacturing, assembly, welding
- Personal -- Home assistants, vacuum cleaners
- Autonomous -- Self-driving vehicles, drones
- Nanobots -- Microscopic robots for medical applications
- Humanoids -- Robots resembling humans (ASIMO, Sophia)
Robotic Process Automation (RPA) -- Software that automates repetitive digital tasks (data entry, form processing). Not a physical robot.
Definition of robotics (verbatim). Five types: industrial, personal, autonomous, nanobots, humanoids. RPA = software automation, not physical.
RPA is not a physical robot. The exam may try to confuse RPA with physical robotics. The key distinction for intelligent vs. non-intelligent is whether the robot uses AI/learning.
3.3 Describe machine learning
K2
- Machine Learning -- "The field of machine learning is concerned with the question of how to construct computer programs that automatically improve with experience." (Tom Mitchell)
- Neural Networks -- "A machine learning program, or model, that makes decisions in a manner similar to the human brain, by using processes that mimic the way biological neurons work together to identify phenomena, weigh options and arrive at conclusions."
- Deep Learning -- "Deep learning is a multi-layered neural network."
- Large Language Models (LLMs) -- "LLMs are deep learning algorithms that can recognize, summarize, translate, predict, and generate content using very large datasets." (IBM)
Hierarchy: AI > Machine Learning > Deep Learning > LLMs
All four definitions verbatim. ML is a subset of AI. Deep learning is a subset of ML. LLMs are a subset of deep learning. Tom Mitchell = ML definition. IBM = LLM definition.
The hierarchy matters: AI is broadest, LLMs are most specific. "ML is a subset of AI" is the correct relationship.
3.4 Identify common machine learning concepts
K1 / K2
- Prediction -- Using historical data to forecast future outcomes
- Object recognition -- Identifying objects within images or video (CNNs)
- Classification -- Assigning data to predefined categories; includes random decision forests (ensemble of decision trees)
- Clustering -- Grouping data by similarities without predefined categories (unsupervised)
- Recommendations -- Suggesting content based on behavior (e.g., Netflix, Spotify)
Five ML concepts: prediction, object recognition, classification (inc. random decision forests), clustering, recommendations.
Classification (supervised, known categories) vs. clustering (unsupervised, categories emerge). Random decision forests = classification technique.
3.5 Describe supervised and unsupervised learning
K2
- Supervised learning -- Uses labeled data (input-output pairs). We know what the output will be. Example: spam classification.
- Unsupervised learning -- Uses unlabeled data. Discovers hidden patterns. Example: customer segmentation/clustering.
- Semi-supervised learning -- Small amount of labeled data + larger amount of unlabeled data. Useful when labeling is expensive.
Supervised = labeled, known outputs (classification). Unsupervised = unlabeled, discovers patterns (clustering). Semi-supervised = small labeled + large unlabeled.
If you "know the answer" during training, it is supervised. If the algorithm discovers structure on its own, it is unsupervised.
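The distinction can be made concrete with a minimal, library-free Python sketch. The data, thresholds, and function names below are invented for illustration: `classify` "knows the answers" from labeled examples (supervised), while `cluster` discovers groups in unlabeled values on its own (unsupervised).

```python
# Supervised: labeled examples pair each input with a known output.
labeled = [(0.9, "spam"), (0.8, "spam"), (0.1, "ham"), (0.2, "ham")]

def classify(score):
    # 1-nearest-neighbor: predict the label of the closest labeled example.
    return min(labeled, key=lambda pair: abs(pair[0] - score))[1]

def cluster(data, gap=0.5):
    # Unsupervised: no labels; a new group starts wherever values jump apart.
    ordered = sorted(data)
    groups = [[ordered[0]]]
    for prev, cur in zip(ordered, ordered[1:]):
        if cur - prev > gap:
            groups.append([])
        groups[-1].append(cur)
    return groups

print(classify(0.85))                     # lands near the "spam" examples
print(cluster([0.11, 0.14, 0.87, 0.91]))  # two groups emerge from the data
```

Note how `classify` needed the answers up front, while `cluster` was never told how many groups exist or what they mean.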
Topic 4: Finding and Using Data in AI
20% of Exam
4.1 Describe key data terms
K1
- Big Data -- "Extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations." (Dialogic.com)
- Data Visualization -- "The representation of data through use of common graphics, such as charts, plots, infographics and even animations." (IBM)
- Structured data -- Data organized sequentially or serially in a tabular format. Examples: spreadsheets, SQL databases.
- Semi-structured data -- Has some organizational properties but not tabular. Examples: JSON, XML, emails.
- Unstructured data -- No pre-defined order or structure. Examples: images, videos, social media posts.
Big data (Dialogic.com), data visualization (IBM). Structured = tabular. Semi-structured = some organization, not tabular. Unstructured = no structure.
JSON and XML = semi-structured. Database table = structured. Video file = unstructured. The exam frequently tests these classifications.
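A short Python sketch (with invented sample data) shows the three classifications side by side, using only the standard library:

```python
import csv
import io
import json

# Structured: tabular rows with a fixed schema (like a spreadsheet or SQL table).
table = list(csv.DictReader(io.StringIO("id,name\n1,Ada\n2,Alan\n")))

# Semi-structured: organizational markers (keys, nesting) but no fixed table.
record = json.loads('{"name": "Ada", "tags": ["math", "computing"]}')

# Unstructured: raw content with no pre-defined fields at all.
post = "Just watched a great documentary about Ada Lovelace!"

print(table[0]["name"], record["tags"][0], len(post.split()))
```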
4.2 Describe the characteristics of data quality and why it is important in AI
K2
Five data quality characteristics:
- Accuracy -- Is the data correct?
- Completeness -- Is all the required data present?
- Uniqueness -- Is the data free from duplication?
- Consistency -- Is the data free from conflict across sources?
- Timeliness -- Is the data current and available when needed?
Implications of poor-quality data:
- Errors and inaccuracies in AI outputs
- Bias amplified through flawed data
- Loss of trust in AI systems
- Financial penalties from non-compliance
Five characteristics: Accuracy, Completeness, Uniqueness, Consistency, Timeliness. Remember the five questions: Correct? All there? No duplicates? No conflicts? Current?
"Duplicate records" = uniqueness. "Outdated data" = timeliness. "Two systems disagree" = consistency. Match the scenario to the characteristic.
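Three of the five characteristics translate directly into simple checks. A minimal sketch, assuming hypothetical record fields (`id`, `email`, `updated`) chosen for illustration:

```python
from datetime import date

records = [
    {"id": 1, "email": "a@x.com", "updated": date(2025, 1, 5)},
    {"id": 2, "email": None,      "updated": date(2025, 1, 6)},  # incomplete
    {"id": 1, "email": "a@x.com", "updated": date(2025, 1, 5)},  # duplicate
]

def completeness(rows):
    # Completeness: is all the required data present?
    return [r["id"] for r in rows if r["email"] is None]

def uniqueness(rows):
    # Uniqueness: is the data free from duplication?
    seen, dupes = set(), []
    for r in rows:
        if r["id"] in seen:
            dupes.append(r["id"])
        seen.add(r["id"])
    return dupes

def timeliness(rows, cutoff):
    # Timeliness: is the data current and available when needed?
    return [r["id"] for r in rows if r["updated"] < cutoff]
```

Accuracy and consistency are harder to automate in isolation, since both require an external source of truth to compare against.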
4.3 Explain the risks associated with handling data in AI
K2
- Bias: Multiple sources, diversity in teams handling data, fairness metrics
- Misinformation: Check reliability of sources, use subject matter expert (SME) validation
- Processing restrictions: Organizational requirements, frameworks and regulations
- Legal restrictions: UK GDPR, DPA 2018, staying abreast of new requirements
- The scientific method: Hypothesis-driven, evidence-based approach for AI development
Four risk categories: bias, misinformation, processing restrictions, legal restrictions. Plus the scientific method as a tool for managing risks.
UK GDPR and DPA 2018 are separate but related. DPA 2018 is the UK act; UK GDPR is the regulation. Both govern personal data in the UK.
4.4 Describe the purpose and use of big data
K2
- Storage and use -- Cost-effective storage and processing of massive datasets
- Understanding the user -- Analyzing behavior and preferences for targeted marketing
- Improving process -- Identifying inefficiencies, data-driven business decisions
- Improving experience -- Personalization and predictive analytics for better user experiences
Four purposes: storage and use, understanding the user, improving process, improving experience.
4.5 Explain data visualization techniques and tools
K2
- Written -- Reports, summaries, narrative descriptions
- Verbal -- Presentations, spoken explanations
- Pictorial -- Charts, graphs, plots, diagrams, maps
- Sounds -- Audio alerts, sonification
- Dashboards and infographics -- Interactive multi-visualization displays
- Virtual and augmented reality (VR/AR) -- Immersive 3D environments
Six types: written, verbal, pictorial, sounds, dashboards/infographics, VR/AR.
4.6 Describe key generative AI terms
K1
- Generative AI -- "Refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on." (IBM)
- LLMs -- "Deep learning algorithms that can recognize, summarize, translate, predict, and generate content using very large datasets." (IBM)
Both definitions from IBM. LLM five verbs: recognize, summarize, translate, predict, generate.
4.7 Describe the purpose and use of generative AI including LLMs
K2
- Trained on huge volumes of data
- Uses training to predict the next word in a chain of words
- Generates coherent, human-sounding language
- Prompt engineering -- Crafting specific, detailed requests for better AI outputs
- Natural Language Processing (NLP) -- Machines understanding and processing human language
- Image generation -- Creating images from text descriptions
LLMs predict the next likely word based on statistical patterns -- they do not "understand" language. Prompt engineering is about the user's input, not the model's architecture.
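Next-word prediction can be illustrated with a toy bigram model in plain Python. The corpus and names here are invented, and real LLMs use deep neural networks over vastly larger datasets, but the core idea is the same: predict the statistically most likely next word.

```python
from collections import Counter, defaultdict

# Toy "training corpus": a handful of words instead of trillions of tokens.
corpus = "the cat sat on the mat and the cat ran".split()

# Count which word follows which (a bigram model, the simplest
# statistical next-word predictor).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Return the word that most often followed this one in training.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The model has no idea what "the" or "cat" mean; it only reproduces statistical patterns from its training data, which is the exam's key point.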
4.8 Describe how data is used to train AI in the ML process
K2
Stages of the ML training process (in order):
- Analyze the problem -- Define what you are solving
- Data selection -- Choose relevant data sources
- Data pre-processing -- Clean, normalize, transform data
- Data visualization -- Explore and understand patterns
- Select a model/algorithm -- Choose the appropriate ML approach
- Train the model
- Test the model
- Repeat (learn from experience)
- Review -- Assess the overall solution
Sequence: Analyze > Select data > Pre-process > Visualize > Select model > Train > Test > Repeat > Review. The exam may ask you to put these in order.
The process is iterative. "There is no de facto method within machine learning; learning through experience is vitally important."
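The iterative nature of the sequence can be sketched as a loop that goes back around at "repeat" and ends only at the final review. This is a hypothetical illustration of the ordering, not EXIN material:

```python
# The nine stages, in the order the exam expects.
STAGES = [
    "analyze the problem",
    "data selection",
    "data pre-processing",
    "data visualization",
    "select a model/algorithm",
    "train the model",
    "test the model",
    "repeat",
    "review",
]

def run_pipeline(stages, max_iterations=3):
    # Walk the stages in order; "repeat" loops back until the last pass,
    # and only the last pass reaches the final "review".
    log = []
    for i in range(1, max_iterations + 1):
        for stage in stages:
            log.append(f"iteration {i}: {stage}")
            if stage == "repeat" and i < max_iterations:
                break  # learn from experience and go around again
    return log
```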
Topic 5: Using AI in Your Organization
20% of Exam
5.1 Identify opportunities for AI in your organization
K2
- Automation -- Automating processes to minimize human input
- Repetitive tasks -- AI handles routine work humans find tedious
- Content creation via generative AI -- Drafting text, generating images, producing reports
Think everyday tasks: automated data entry (repetitive), chatbots for support (automation), drafting marketing copy (generative AI).
5.2 List the contents and structure of a business case
K1
- Introduction -- Context and purpose
- Management or executive summary -- High-level overview
- Description of current state -- How things work today
- Options considered -- Each includes:
- Option described
- Analysis of costs and benefits
- Impact assessment
- Risk assessment
- Recommendations -- Proposed course of action
- Appendices/supporting information -- Detailed data, references
Six sections: Introduction, Executive summary, Current state, Options considered (4 sub-elements), Recommendations, Appendices.
The exam may ask what goes inside "Options considered." Answer: option described, cost/benefit analysis, impact assessment, risk assessment.
5.3 Identify and categorize stakeholders relevant to an AI project
K2
Stakeholder -- Any individual or group with an interest in or influence on a project.
Power/Interest Grid (4 quadrants):
- High power, High interest -- Constant active management (key players)
- High power, Low interest -- Keep satisfied
- Low power, High interest -- Keep informed
- Low power, Low interest -- Monitor
Stakeholder Wheel -- Visual representation of all stakeholder groups around a project.
Four quadrants: manage closely, keep satisfied, keep informed, monitor. Map each to its power/interest combination.
"A senior executive who rarely uses the system but can kill the project" = high power, low interest = keep satisfied.
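The quadrant mapping is mechanical enough to express as a tiny self-test function (a hypothetical helper, not part of the syllabus):

```python
def quadrant(power, interest):
    # Map a stakeholder's power/interest combination to the grid action.
    if power == "high" and interest == "high":
        return "manage closely"
    if power == "high":
        return "keep satisfied"
    if interest == "high":
        return "keep informed"
    return "monitor"

# The senior executive who rarely uses the system but can kill the project:
print(quadrant("high", "low"))  # keep satisfied
```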
5.4 Describe project management approaches
K2
- Agile -- Iterative, flexible, sprints, embraces changing requirements, continuous feedback
- Waterfall -- Sequential, linear. Phases: Requirements > Design > Implementation > Testing > Deployment > Maintenance
- Hybrid -- Combines Agile and Waterfall elements
Agile = iterative, flexible. Waterfall = sequential, 6 phases. Hybrid = combination of both.
Uncertain/changing requirements = Agile. Fixed/well-understood requirements = Waterfall.
5.5 Identify the risks, costs, and benefits associated with a proposed solution
K2
- Risk analysis: Risk assessment + risk owners
- Risk appetite -- Level of risk the organization will accept
- Risk management strategies:
- Accept -- Acknowledge, take no action
- Mitigate -- Reduce probability/impact (inc. sharing and contingency planning)
- Avoid -- Change plans to eliminate the risk
- Transfer -- Shift risk to another party (e.g., insurance)
- Financial costs/benefits: Forecasting, margin for error
- Socio-economic benefits -- Wider societal gains
- Triple bottom line:
- Profit -- Financial performance
- People -- Social impact
- Planet -- Environmental sustainability
Four risk strategies: Accept, Mitigate (share/contingency), Avoid, Transfer. Triple bottom line: Profit, People, Planet.
"Transfer" = shifting risk (e.g., insurance). "Mitigate" includes sharing and contingency. The exam may describe a scenario and ask which strategy applies.
5.6 Describe the ongoing governance activities required when implementing AI
K2
Three governance areas:
- Compliance -- Satisfying all applicable regulations
- Risk management -- Proactively detecting and mitigating risks
- Lifecycle governance -- Ongoing management:
- Manage -- Day-to-day operations
- Monitor -- Continuous tracking of performance, bias, drift
- Govern -- Strategic oversight and policy enforcement
Three areas: compliance, risk management, lifecycle governance (manage, monitor, govern).
Governance is ongoing throughout the AI lifecycle, not just at deployment. The answer includes continuous monitoring and management.
Topic 6: Future Planning and Impact -- Human Plus Machine
15% of Exam
6.1 Describe the roles and career opportunities presented by AI
K2
AI-specific roles:
- Machine Learning Engineer -- Designs and builds ML models
- Data Scientist -- Analyzes complex data, builds predictive models
- AI Research Scientist -- Conducts fundamental AI research
- Computer Vision Engineer -- Interprets visual data
- NLP Engineer -- Builds language understanding systems
- Robotics Engineer -- Designs robots and autonomous systems
- AI Ethics Specialist -- Ensures responsible AI development
- AI Anthropologist -- Studies cultural/social implications of AI
Opportunities for existing roles: additional training and knowledge, improved efficiency, automation of routine tasks.
You will not be assessed on the names or duties of specific job roles. Focus instead on the general opportunities AI creates, both as new roles and as enhancements to existing ones.
6.2 Identify AI uses in the real world
K1
- Marketing -- Trend prediction, targeted advertising, customer segmentation
- Healthcare -- Diagnostics (X-rays, MRIs), treatment planning, drug discovery
- Finance -- Fraud detection, algorithmic trading, credit scoring, audit automation
- Transportation -- Self-driving cars, route optimization, traffic management
- Education -- Personalized learning, adaptive assessments
- Manufacturing -- Predictive maintenance, quality control, supply chain optimization
- Entertainment -- Recommendation algorithms (Netflix, Spotify), content generation
- IT -- Cybersecurity, chatbots, automated testing, infrastructure management
Eight sectors: marketing, healthcare, finance, transportation, education, manufacturing, entertainment, IT.
"A bank flagging suspicious transactions" = finance (fraud detection). "Algorithm suggesting your next show" = entertainment (recommendation).
6.3 Explain AI's impact on society, and the future of AI
K2
Benefits:
- Reducing human error through automation
- Processing vast data for informed decisions
- AI-powered medical diagnosis assistance
Challenges:
- Algorithm bias and privacy concerns
- Job loss and displacement
- Security risks from hacking
- Socio-economic inequality
- Lack of creativity and empathy
Environmental impact: Energy consumption, climate change, e-waste.
Economic impact: Job losses in some sectors, need for retraining, market volatility.
Future advancements: Increased computing power, more data, better algorithms.
The exam expects a balanced view: AI has both significant benefits and serious challenges. Be prepared to identify both in scenario-based questions.
6.4 Describe consciousness and its impact on ethical AI
K2
- Human consciousness (sentience) -- Subjective experience of awareness; capacity for feelings, perceptions, self-awareness
- AI consciousness -- Hypothetical: could AI develop genuine subjective experience?
- Kurzweil Singularity -- "A future period characterized by rapid technological growth that will irreversibly transform human life."
- Seth's theory (Anil Seth):
- Predictive processing and perception -- The brain constructs reality through predictions
- The nature of self and consciousness -- Consciousness is about prediction, not just information processing
- Functional capabilities vs. genuine consciousness -- AI may mimic conscious behavior without actually being conscious
- Ethical implications -- Should AI appear human? If AI were conscious, would it have rights?
Kurzweil Singularity = rapid technological growth transforming human life irreversibly. Seth = predictive processing/perception, nature of self/consciousness. Functional capabilities (mimicking) vs. genuine consciousness (sentience).
The 202505 change document updated Seth's theory to "predictive processing and perception" and "the nature of self and consciousness." Use this updated language, not older references to "self-reporting capabilities" or "presence of senses."
Final Exam Checklist
The 10 most important things to review before exam day
- Verbatim definitions -- Human Intelligence, AI, ML, Scientific Method, Big Data, Data Visualization, Generative AI, LLMs, Ethics, Risk, Robotics. These exact definitions appear in exam questions.
- Floridi & Cowls' 5 principles vs. UK AI 5 principles -- Know both lists and do not mix them up. Floridi & Cowls = Beneficence, Non-maleficence, Autonomy, Justice, Explicability. UK = Safety/security/robustness, Transparency/explainability, Fairness, Accountability/governance, Contestability/redress.
- Five data quality characteristics -- Accuracy, Completeness, Uniqueness, Consistency, Timeliness. Match scenarios to the violated characteristic.
- Key dates and people -- Dartmouth 1956 (McCarthy, Minsky, Rochester, Shannon), Asilomar 2017 (FLI, 23 principles), AI winters (1974-1980 and 1987-1993), LLMs widespread from 2022.
- AI hierarchy -- AI > ML > Deep Learning > LLMs. Narrow/Weak AI (exists) vs. General/Strong AI (hypothetical).
- Three learning types -- Supervised (labeled), Unsupervised (unlabeled), Semi-supervised (small labeled + large unlabeled). Link supervised to classification, unsupervised to clustering.
- Business case structure -- Introduction, Executive summary, Current state, Options considered (option described, cost/benefit, impact, risk), Recommendations, Appendices.
- Risk management -- Techniques (Risk analysis, SWOT, PESTLE, Cynefin) vs. Strategies (Accept, Mitigate, Avoid, Transfer). Triple bottom line: Profit, People, Planet. Power/Interest grid quadrants.
- ML training process order -- Analyze > Data selection > Pre-processing > Visualization > Select model > Train > Test > Repeat > Review.
- Governance, regulation, and sustainability -- Three governance areas (compliance, risk management, lifecycle governance). Key regulations (DPA 2018, UK GDPR, ISO, NIST, WCAG). Six sustainability measures.