Generally Accepted Definitions and Types of AI
Artificial Intelligence (AI) refers to the simulation of human intelligence processes by computer systems, encompassing learning, reasoning, problem-solving, perception, and language understanding. Several generally accepted definitions and types of AI form the foundation of AI governance.

**Definitions of AI:** The most widely accepted definitions describe AI as machines or software that can perform tasks typically requiring human intelligence. The OECD defines an AI system as a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. The EU AI Act adopts a closely aligned definition, emphasizing varying levels of autonomy and possible adaptiveness after deployment.

**Types of AI by Capability:** 1. **Narrow AI (Weak AI):** Designed to perform specific tasks within a limited domain. Examples include virtual assistants, recommendation engines, and image recognition systems. This is the only type of AI that currently exists. 2. **General AI (Strong AI):** A theoretical AI that possesses human-level cognitive abilities across any intellectual task; it could reason, learn, and apply knowledge across domains autonomously. 3. **Superintelligent AI:** A hypothetical AI that surpasses human intelligence in virtually all areas, including creativity, problem-solving, and social intelligence.

**Types of AI by Functionality:** 1. **Reactive Machines:** Basic AI that responds to specific inputs without memory (e.g., IBM's Deep Blue). 2. **Limited Memory:** AI that uses historical data for decisions (e.g., self-driving cars). 3. **Theory of Mind:** AI that could understand emotions and beliefs (still theoretical). 4. **Self-Aware AI:** AI possessing consciousness and self-awareness (purely hypothetical).

Understanding these definitions and classifications is essential for AI governance professionals, as regulatory frameworks, risk assessments, and ethical guidelines are often tailored to specific AI types and their associated capabilities and risks.
Generally Accepted Definitions and Types of AI: A Comprehensive Guide
Why This Topic Is Important
Understanding the generally accepted definitions and types of AI is foundational to AI governance. Without a clear grasp of what AI is, how it is categorized, and how different types of AI systems function, it is impossible to develop effective governance frameworks, assess risks, or implement appropriate safeguards. This topic forms the bedrock of the AIGP (AI Governance Professional) exam because every subsequent governance concept — from risk management to ethical deployment — depends on a shared understanding of AI terminology and classification.
For governance professionals, being able to precisely define and categorize AI systems is essential for:
- Communicating clearly with stakeholders across technical and non-technical domains
- Applying the correct regulatory and ethical frameworks to specific AI systems
- Conducting meaningful risk assessments based on the type and capability of an AI system
- Ensuring organizational policies are appropriately scoped and targeted
What Is AI? Generally Accepted Definitions
There is no single universally agreed-upon definition of AI, but several widely recognized definitions exist across academic, industry, and regulatory contexts:
1. OECD Definition (2019, updated 2023)
The Organisation for Economic Co-operation and Development defines an AI system as "a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." This definition is particularly important for governance professionals because it has been adopted or referenced by many national and international regulatory frameworks, including the EU AI Act.
2. EU AI Act Definition
The EU AI Act defines an AI system in alignment with the OECD definition, emphasizing that it is a system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers how to generate outputs from inputs received. This legal definition is critical for compliance and regulatory purposes.
3. NIST Definition
The U.S. National Institute of Standards and Technology (NIST) defines AI as "an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments." NIST's AI Risk Management Framework (AI RMF) builds upon this definition.
4. Academic and General Definitions
Broadly, AI refers to the simulation of human intelligence processes by computer systems. These processes include learning (acquiring information and rules), reasoning (using rules to reach conclusions), and self-correction. Early definitions from researchers like John McCarthy (who coined the term in 1956) focused on "making a machine behave in ways that would be called intelligent if a human were so behaving."
Key Characteristics Common Across Definitions:
- Machine-based or engineered systems
- Capable of processing inputs and generating outputs
- Outputs include predictions, recommendations, decisions, or content
- Varying levels of autonomy
- Ability to influence physical or virtual environments
- May exhibit adaptiveness or learning capabilities
Types of AI: Classification Frameworks
AI systems can be categorized in several ways. Understanding these classification schemes is essential for governance.
A. Classification by Capability
1. Narrow AI (Weak AI / ANI - Artificial Narrow Intelligence)
- Designed to perform a specific task or a narrow range of tasks
- All current AI systems fall into this category
- Examples: image recognition, natural language processing chatbots, recommendation engines, spam filters, autonomous driving systems
- Cannot generalize knowledge to tasks outside their training domain
- This is the category most relevant to current AI governance
2. General AI (Strong AI / AGI - Artificial General Intelligence)
- A hypothetical AI system that possesses the ability to understand, learn, and apply intelligence across a wide range of tasks at a level comparable to human cognitive abilities
- Does not currently exist
- Would be capable of reasoning, problem-solving, and learning in any domain without specific programming
- Its potential arrival carries significant governance implications, discussed mainly in anticipatory and long-term policy frameworks
3. Superintelligent AI (ASI - Artificial Superintelligence)
- A theoretical AI that surpasses human intelligence in virtually all domains
- Entirely speculative at this point
- Raises profound existential and ethical governance questions
- Often discussed in long-term AI safety and alignment research
B. Classification by Functionality
1. Reactive Machines
- The most basic type of AI
- Respond to specific inputs with specific outputs
- No memory or ability to learn from past experiences
- Example: IBM's Deep Blue chess computer
2. Limited Memory AI
- Can use past data or experiences to inform current decisions
- Most modern AI systems fall into this category
- Examples: self-driving cars that observe other vehicles' behavior, large language models trained on historical data
- Stores information temporarily to support immediate decision-making
3. Theory of Mind AI
- A theoretical type that would understand emotions, beliefs, and thought processes of other entities
- Does not fully exist yet, though some research moves in this direction
- Would enable more natural human-AI interaction
4. Self-Aware AI
- A theoretical and hypothetical form of AI with consciousness and self-awareness
- Does not exist
- Raises the most significant ethical and governance concerns
C. Classification by AI Techniques and Approaches
1. Machine Learning (ML)
- A subset of AI where systems learn from data without being explicitly programmed
- Supervised Learning: Learning from labeled training data (e.g., classification, regression)
- Unsupervised Learning: Finding patterns in unlabeled data (e.g., clustering, dimensionality reduction)
- Reinforcement Learning: Learning through trial and error with a reward/penalty system
- Semi-supervised Learning: Combines labeled and unlabeled data
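The distinction between supervised and unsupervised learning hinges on whether the training data carries labels. A minimal sketch in pure Python can make this concrete: the same kind of 1-D data is used two ways, once with labels (a toy 1-nearest-neighbor classifier) and once without (a toy two-cluster k-means). The function names, data, and labels here are invented for illustration only.

```python
def nearest_label(point, labeled_data):
    """Supervised: predict the label of the closest labeled example (1-NN)."""
    # labeled_data is a list of (value, label) pairs -- labeled data = supervised.
    return min(labeled_data, key=lambda ex: abs(ex[0] - point))[1]

def two_means(points, iters=10):
    """Unsupervised: split unlabeled points into two clusters (1-D k-means)."""
    a, b = min(points), max(points)  # initialize centroids at the extremes
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute centroids.
        cluster_a = [p for p in points if abs(p - a) <= abs(p - b)]
        cluster_b = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(cluster_a) / len(cluster_a)
        b = sum(cluster_b) / len(cluster_b)
    return sorted([a, b])

labeled = [(1.0, "spam"), (1.2, "spam"), (8.0, "ham"), (8.5, "ham")]
print(nearest_label(1.1, labeled))        # labeled examples -> supervised
print(two_means([1.0, 1.2, 8.0, 8.5]))   # no labels -> unsupervised clustering
```

Reinforcement learning differs from both: instead of a fixed dataset, the system interacts with an environment and updates its behavior from reward signals.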
2. Deep Learning
- A subset of machine learning using artificial neural networks with multiple layers
- Particularly effective for image recognition, NLP, and complex pattern recognition
- Examples: convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformers
3. Generative AI
- AI systems that can generate new content such as text, images, audio, video, or code
- Based on large foundation models trained on massive datasets
- Examples: Large Language Models (LLMs) like GPT, image generators like DALL-E and Stable Diffusion
- Raises specific governance concerns around misinformation, intellectual property, and deepfakes
4. Expert Systems / Rule-Based AI
- Traditional AI approach using predefined rules and logic
- Transparent and explainable but limited in adaptability
- Example: medical diagnosis systems with coded decision trees
5. Natural Language Processing (NLP)
- AI techniques focused on understanding and generating human language
- Includes sentiment analysis, translation, summarization, and conversational AI
6. Computer Vision
- AI techniques that enable machines to interpret and make decisions based on visual data
- Examples: facial recognition, object detection, medical image analysis
7. Robotics and Embodied AI
- AI integrated with physical systems
- Examples: autonomous vehicles, industrial robots, drones
D. Classification by Risk Level (EU AI Act Framework)
This classification is particularly important for governance professionals:
1. Unacceptable Risk
- AI systems that are banned outright
- Examples: social scoring by governments, real-time remote biometric identification in public spaces (with limited exceptions), manipulation of vulnerable groups
2. High Risk
- AI systems used in critical areas that must comply with strict requirements
- Examples: AI in critical infrastructure, education, employment, law enforcement, migration, administration of justice
- Subject to conformity assessments, documentation requirements, human oversight mandates
3. Limited Risk
- AI systems with specific transparency obligations
- Examples: chatbots (must disclose they are AI), deepfake generators (must label content)
4. Minimal Risk
- AI systems with no specific regulatory requirements beyond existing law
- Examples: AI-enabled video games, spam filters
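The tiered structure above lends itself to a simple lookup from use case to risk tier and associated obligations. The sketch below is a toy illustration of that mapping using the document's own examples; the use-case strings and obligation summaries are invented for illustration, and real classification under the EU AI Act requires case-by-case legal analysis.

```python
# Toy mapping of example use cases to EU AI Act risk tiers (illustrative only).
RISK_TIERS = {
    "social scoring by governments": "unacceptable",
    "cv screening for hiring": "high",
    "customer service chatbot": "limited",
    "spam filter": "minimal",
}

# Simplified summaries of the obligations attached to each tier.
OBLIGATIONS = {
    "unacceptable": "prohibited outright",
    "high": "conformity assessment, documentation, human oversight",
    "limited": "transparency obligations (disclose AI use / label content)",
    "minimal": "no AI-specific obligations beyond existing law",
}

def classify(use_case):
    """Return (risk tier, obligation summary) for a known use case."""
    tier = RISK_TIERS.get(use_case.lower(), "unclassified")
    return tier, OBLIGATIONS.get(tier, "needs case-by-case assessment")

print(classify("Spam filter"))
```

The practical point for governance: the obligation set is a function of the risk tier, so misclassifying the tier propagates into the wrong controls.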
How This Works in Practice for Governance
Understanding definitions and types of AI enables governance professionals to:
1. Scope Governance Frameworks: Different types of AI require different governance approaches. A rule-based expert system requires different oversight than a generative AI model.
2. Conduct AI Inventories: Organizations must identify and classify all AI systems they develop or deploy, which requires a clear taxonomy of AI types.
3. Assess Risk: The type of AI directly correlates with risk profiles. Deep learning models may present explainability challenges, while generative AI raises content authenticity concerns.
4. Ensure Regulatory Compliance: Regulatory frameworks like the EU AI Act classify obligations based on AI type and risk category. Misclassifying an AI system can lead to non-compliance.
5. Communicate with Stakeholders: A shared vocabulary ensures that legal teams, engineers, executives, and regulators are discussing the same concepts.
6. Design Appropriate Controls: The governance controls for a narrow AI image classifier differ significantly from those needed for a general-purpose generative AI chatbot deployed to consumers.
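An AI inventory (point 2 above) typically records a system against all of the classification axes discussed in this guide: capability, technique, and regulatory risk tier. A hypothetical sketch of such an inventory record, with field names and example values invented for illustration, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Hypothetical AI inventory entry combining the classification axes."""
    name: str
    capability: str = "narrow"   # "narrow" covers all currently existing systems
    techniques: list = field(default_factory=list)  # e.g. ["deep learning", "NLP"]
    eu_risk_tier: str = "unclassified"  # unacceptable / high / limited / minimal
    human_oversight: bool = False

    def needs_conformity_assessment(self):
        # Under the EU AI Act, high-risk systems are subject to
        # conformity assessments before deployment.
        return self.eu_risk_tier == "high"

hiring_tool = AISystemRecord(
    name="CV screening model",
    techniques=["machine learning", "NLP"],
    eu_risk_tier="high",
    human_oversight=True,
)
print(hiring_tool.needs_conformity_assessment())
```

Keeping these axes in one record is what lets an organization scope controls per system rather than applying one blanket policy to every AI deployment.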
Exam Tips: Answering Questions on Generally Accepted Definitions and Types of AI
1. Know the OECD Definition Thoroughly
The OECD definition is the most frequently referenced in governance contexts. Be able to identify its key elements: machine-based system, explicit or implicit objectives, inference from inputs, generation of outputs (predictions, content, recommendations, decisions), and influence on physical or virtual environments. Expect questions that test whether you can distinguish the OECD definition from other definitions or identify its components.
2. Distinguish Between Narrow, General, and Superintelligent AI
A common exam question format asks you to identify which type of AI currently exists (answer: narrow AI only). Be clear that AGI and ASI remain theoretical. If a question describes a system that performs one specific task well, it is narrow AI regardless of how impressive it seems.
3. Understand the EU AI Act Risk Categories
Be prepared to classify AI use cases into unacceptable, high, limited, or minimal risk categories. Practice by sorting examples: social scoring = unacceptable; AI in hiring = high risk; chatbot = limited risk (transparency obligations); spam filter = minimal risk.
4. Differentiate ML Subtypes
Know the difference between supervised, unsupervised, reinforcement, and semi-supervised learning. A common exam question might describe a scenario and ask which learning approach is being used. Remember: labeled data = supervised; unlabeled data = unsupervised; reward signals = reinforcement.
5. Connect Definitions to Governance Implications
Exam questions may not just test recall — they may ask why a definition matters or how a classification affects governance decisions. For example, understanding that generative AI creates new content helps explain why it raises unique intellectual property and misinformation governance challenges.
6. Watch for Tricky Terminology
Be careful with terms like "strong AI" (which refers to AGI, not a powerful narrow AI system) and "weak AI" (which refers to narrow AI, not a poorly performing system). These terms can be misleading if taken at face value.
7. Remember That Definitions Vary Across Jurisdictions
Different regulatory bodies may define AI slightly differently. The exam may test your awareness that no single universal definition exists and that governance professionals must be adaptable to jurisdictional variations. The OECD definition serves as a common reference point, but local regulations may add nuances.
8. Use the Process of Elimination
When faced with multiple-choice questions about AI types, eliminate answers that describe capabilities that do not currently exist (e.g., self-aware AI, AI with consciousness) unless the question specifically asks about theoretical types. Most governance-focused questions will pertain to narrow AI and current ML techniques.
9. Pay Attention to Context Clues in Scenarios
Exam scenarios often embed clues about the type of AI being described. Look for keywords: "trained on labeled data" (supervised learning), "generates images from text prompts" (generative AI), "uses predefined rules" (expert system/rule-based AI), "learns from interaction with the environment" (reinforcement learning).
10. Link Types to Specific Governance Controls
High-scoring answers demonstrate awareness that different AI types demand different controls. For instance: opaque deep learning models require explainability measures; generative AI requires content provenance and watermarking; high-risk AI under the EU AI Act requires conformity assessments and human oversight. Making these connections shows a mature understanding of the material.
11. Practice with Real-World Examples
Familiarize yourself with well-known AI systems and be able to classify them: ChatGPT (generative AI, LLM, narrow AI, limited memory), Tesla Autopilot (computer vision, narrow AI, limited memory, high-risk), Netflix recommendations (narrow AI, supervised/unsupervised ML, minimal risk). Being able to quickly categorize real-world examples will help with scenario-based exam questions.
12. Review Key Institutional Sources
Make sure you are familiar with definitions and frameworks from: OECD, EU (AI Act), NIST (AI RMF), ISO/IEC standards, and UNESCO. These are the most likely sources referenced in exam questions about accepted definitions.