When ChatGPT was released in late 2022, it completely changed people’s relationship with technology. Suddenly, people could converse in natural language with chatbots that respond with novel answers, just as a human would. This was such a transformation that Google, Meta, Microsoft, and Apple quickly began integrating AI into their product suites.
But AI chatbots are only a small part of the AI world. While it’s certainly great to have ChatGPT help with your homework or Midjourney create compelling images of mechs based on their country of origin, generative AI has the potential to completely reshape the economy. The McKinsey Global Institute estimates that generative AI could add $4.4 trillion per year to the global economy, which is why you can expect to hear more and more about artificial intelligence.
This shows up in a dizzying array of products, including Google’s Gemini, Microsoft’s Copilot, Anthropic’s Claude, the Perplexity AI search tool, and gadgets from Humane and Rabbit, to name just a few. You can read reviews, hands-on assessments, news, commentary, and how-to articles for these and other products in our new AI Atlas hub.
As people become more comfortable with a world intertwined with AI, new terms are popping up everywhere. Whether you’re looking to sound smart over a drink or impress at a job interview, here are some important AI terms to know.
This glossary will be updated regularly.
—
Artificial general intelligence (AGI): A concept suggesting a more advanced version of AI than we know today, one that can perform tasks much better than humans while also learning and improving its own capabilities.
AI ethics: Principles aimed at preventing AI from harming humans through measures such as determining how AI systems collect data and address bias.
AI safety: An interdisciplinary field that considers the long-term impacts of AI and the possibility that it could suddenly evolve into a superintelligence that could turn against humans.
Algorithm: A set of instructions that enables a computer program to study and analyze data in a particular way, such as recognizing patterns, and learn from it to perform a task on its own.
Alignment: Fine-tuning AI to better produce desired outcomes, which can mean anything from content moderation to maintaining positive human interactions.
Anthropomorphism: The tendency of humans to give human-like characteristics to non-human objects. In AI, this can also include believing a chatbot is more human-like and conscious than it actually is, believing it is happy, sad, or even sentient.
Artificial intelligence (AI): The use of technology to simulate human intelligence in computer programs or robotics. A branch of computer science that aims to build systems that can perform human tasks.
Autonomous agent: An AI model that has capabilities, programming, and other tools to accomplish a specific task. For example, a self-driving car is an autonomous agent because it has sensor inputs, GPS, driving algorithms, and navigates roads on its own. Researchers at Stanford University have demonstrated that autonomous agents can develop their own culture, traditions, and a common language.
Bias: For large language models, this is an error that arises from the training data, which can lead to the incorrect attribution of certain characteristics to certain races or groups based on stereotypes.
Chatbot: A program that communicates with humans through text that simulates human language.
ChatGPT: An AI chatbot developed by OpenAI that uses large language model technology.
Cognitive computing: Another name for artificial intelligence.
Data augmentation: Remixing existing data or adding a more diverse set of data to train an AI model.
Deep learning: A subfield of machine learning that uses artificial neural networks with many layers to recognize complex patterns in images, speech, and text. The approach is inspired by the human brain.
Diffusion: A machine learning method that takes an existing piece of data, such as a photo, and gradually adds random noise. A diffusion model then trains a network to reverse that process and reconstruct the original, which is what lets it generate new images from pure noise.
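To make that concrete, here’s a minimal sketch of the “add noise” half of the process, written in plain Python with NumPy rather than taken from any diffusion library; the add_noise function and its blending formula are our simplification:

```python
import numpy as np

def add_noise(image, noise_level):
    """Forward diffusion step: blend an image with Gaussian noise.

    noise_level runs from 0.0 (original image) to 1.0 (pure noise).
    This is a simplified version of the schedule real models use.
    """
    noise = np.random.randn(*image.shape)
    return np.sqrt(1 - noise_level) * image + np.sqrt(noise_level) * noise

# A stand-in "photo": an 8x8 grayscale gradient.
photo = np.linspace(0, 1, 64).reshape(8, 8)

# Progressively noisier versions; a diffusion model is trained
# to run this process in reverse and recover the original.
for level in (0.1, 0.5, 0.9):
    noisy = add_noise(photo, level)
    print(f"noise level {level}: pixel std = {noisy.std():.2f}")
```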
Emergent behavior: When an AI model exhibits unintended capabilities.
End-to-end learning (E2E): A deep learning process in which a model is instructed to perform a task from start to finish, learning from the inputs and solving them all at once, rather than being trained to perform tasks sequentially.
Ethical considerations: Awareness of the ethical implications of AI and issues around privacy, data use, fairness, misuse, and other safety issues.
Foom: Also known as fast takeoff or hard takeoff. The idea that once someone builds an AGI, the system may improve itself so rapidly that it would already be too late to save humanity.
Generative adversarial network (GAN): A generative AI model made up of two competing neural networks: a generator, which creates new content, and a discriminator, which checks whether that content looks real.
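Here’s a toy sketch of that contest in PyTorch. The tiny networks, the two-dimensional “data,” and the training constants are all invented for illustration; a real GAN would use far larger networks and real images:

```python
import torch
import torch.nn as nn

# Toy GAN: the generator turns random noise into fake "data" (here,
# just 2D points), and the discriminator scores real vs. fake.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
discriminator = nn.Sequential(
    nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

real_data = torch.randn(32, 2) + 3.0  # stand-in "real" samples

for step in range(100):
    # Train the discriminator to tell real samples from generated ones.
    fake_data = generator(torch.randn(32, 8)).detach()
    d_loss = (loss_fn(discriminator(real_data), torch.ones(32, 1))
              + loss_fn(discriminator(fake_data), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    fake_data = generator(torch.randn(32, 8))
    g_loss = loss_fn(discriminator(fake_data), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

As the two networks train against each other, the generator’s fakes gradually become harder to distinguish from the real samples.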
Generative AI: Content generation technology that uses AI to create text, video, computer code, or images. The AI is fed large amounts of training data, finds patterns, and generates unique new responses that may be similar to the source material.
Google Gemini: An AI chatbot from Google that works similarly to ChatGPT but pulls information from the current web, whereas the original ChatGPT was limited to training data through 2021 and wasn’t connected to the internet.
Guardrails: Policies and restrictions placed on AI models to ensure data is handled responsibly and models do not create objectionable content.
Hallucinations: Incorrect responses from AI, stated confidently as if they were correct; the reasons for them aren’t fully understood. For example, ask an AI chatbot, “When did Leonardo da Vinci paint the Mona Lisa?” and it may answer, incorrectly, “Leonardo da Vinci painted the Mona Lisa in 1815,” 300 years after it was actually painted.
Large language model (LLM): An AI model trained on massive amounts of text data to understand language and generate novel content in human-like language.
Machine learning (ML): A component of AI that enables computers to learn and make better predictions without explicit programming, and can be combined with training sets to generate new content.
Microsoft Bing: A Microsoft search engine that can provide AI-powered search results using the technology behind ChatGPT. Similar to Google Gemini in that it is internet-connected.
Multimodal AI: A type of AI that can process multiple types of input, such as text, images, video, and audio.
Natural language processing (NLP): A branch of AI that uses machine learning and deep learning to give computers the ability to understand human language, often using learning algorithms, statistical models, and linguistic rules.
Neural network: A computational model for recognizing patterns in data, similar to the structure of the human brain. It is made up of interconnected nodes, or neurons, that recognize patterns and learn over time.
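For a sense of what those interconnected nodes look like in code, here’s a minimal two-layer network written with NumPy. The weights are random, so the output is meaningless; training would adjust them until the outputs match real examples:

```python
import numpy as np

# A minimal two-layer neural network, forward pass only.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # 3 inputs -> 4 hidden "neurons"
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # 4 hidden neurons -> 1 output

def forward(x):
    hidden = np.maximum(0, W1 @ x + b1)  # ReLU activation in the hidden layer
    return W2 @ hidden + b2

print(forward(np.array([0.5, -1.0, 2.0])))  # one input with 3 features
```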
Overfitting: An error in machine learning where a model matches its training data so closely that it can identify examples from that data but fails on new, unseen data.
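You can watch overfitting happen in a few lines of NumPy. Here, a degree-9 polynomial has enough freedom to thread through every noisy training point, while a simpler degree-3 fit has to capture the overall shape; the data is synthetic, sampled from a sine curve:

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)  # noisy samples

# Degree 9 can pass through every noisy point; degree 3 must generalize.
overfit = np.polyfit(x_train, y_train, deg=9)
simple = np.polyfit(x_train, y_train, deg=3)

x_new = np.linspace(0.05, 0.95, 50)  # points neither model has seen
truth = np.sin(2 * np.pi * x_new)
for name, coeffs in (("degree 9", overfit), ("degree 3", simple)):
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_err = np.mean((np.polyval(coeffs, x_new) - truth) ** 2)
    print(f"{name}: training error {train_err:.4f}, error on new data {new_err:.4f}")
# Typically the degree-9 fit shows near-zero training error but does worse
# on the new points: it memorized the noise instead of the pattern.
```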
Paperclips: The paperclip maximizer, a thought experiment proposed by Oxford philosopher Nick Bostrom, imagines an AI system given the goal of creating as many paperclips as possible. In pursuit of that goal, the system is assumed to consume or convert all available materials, including dismantling machinery that might otherwise benefit humans. The unintended consequence is that it could destroy humanity in its drive to make paperclips.
Parameters: Numerical values that give LLMs structure and behavior, enabling them to make predictions.
Prompt: A suggestion or question you type into the AI chatbot to get a response.
Prompt chaining: The ability of AI to leverage information from previous interactions to color future responses.
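In code, prompt chaining amounts to feeding one response into the next prompt. In this sketch, call_model is a hypothetical placeholder rather than a real API; swap in whichever chatbot library you actually use:

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real chatbot API call."""
    return f"<model reply to: {prompt[:40]}...>"  # placeholder output

# Each step builds on the previous response, so context carries forward.
outline = call_model("Outline a short article about neural networks.")
draft = call_model(f"Write an introduction following this outline:\n{outline}")
summary = call_model(f"Summarize this introduction in one sentence:\n{draft}")
print(summary)
```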
Stochastic parrot: An analogy for LLMs illustrating that, no matter how convincing the output sounds, the software has no understanding of the meaning behind the language or the broader world around it. The phrase refers to a parrot’s ability to mimic human speech without understanding the words.
Style transfer: The ability to adapt the style of one image to the content of another, allowing the AI to interpret the visual attributes of one image and use them in another, for example recreating a Rembrandt self-portrait in the style of Picasso.
Temperature: A parameter set to control how random a language model’s output is. The higher the temperature, the more risks the model takes with its output; the lower it is, the more predictable the response.
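Under the hood, temperature rescales the model’s raw scores before they’re turned into probabilities. This NumPy sketch uses made-up scores for three candidate next words; the sample_distribution function is our own illustration, not code from any particular model:

```python
import numpy as np

def sample_distribution(logits, temperature):
    """Turn raw model scores into word probabilities at a given temperature."""
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate next words
for t in (0.2, 1.0, 2.0):
    print(f"temperature {t}: {np.round(sample_distribution(logits, t), 3)}")
# Low temperature concentrates probability on the top word (predictable);
# high temperature flattens the distribution (more random, "riskier").
```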
Text-to-image generation: Creating an image based on a text description.
Token: A small chunk of text that an AI language model processes to formulate a response to a prompt. A token is roughly four English characters, or about three-quarters of a word.
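That rule of thumb is easy to apply yourself. The snippet below is just the four-characters-per-token heuristic, not a real tokenizer, which varies from model to model:

```python
def rough_token_count(text: str) -> float:
    """Estimate token count using the rule of thumb that one token
    is about four English characters. Real tokenizers vary by model."""
    return len(text) / 4

prompt = "Explain how neural networks learn from data."
print(rough_token_count(prompt))  # about 11 tokens for this 45-character prompt
```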
Training data: The datasets used to help an AI model learn, which can include text, images, code, or other data.
Transformer model: A neural network architecture and deep learning model that learns context by tracking relationships in data, such as between the parts of a sentence or an image. Rather than analyzing a sentence one word at a time, it can look at the whole sentence at once to understand its context.
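The mechanism that lets a transformer consider a whole sentence at once is called attention. Here’s a minimal NumPy sketch of scaled dot-product attention, the core calculation, with random vectors standing in for tokens:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention, the core of transformer models.

    Each position's output is a weighted mix of every position's values,
    with weights based on how relevant the positions are to each other.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # relevance of every token pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax per row
    return weights @ V

# Four "tokens," each represented by an 8-dimensional vector (made up here).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(attention(x, x, x).shape)  # (4, 8): one context-aware vector per token
```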
Turing test: Named after famed mathematician and computer scientist Alan Turing, this test measures a machine’s ability to behave like a human. The machine passes if a human can’t distinguish its responses from another human’s.
Weak AI, also known as narrow AI: AI that is focused on a specific task and cannot learn beyond that skillset. Most AI today is weak AI.
Zero-shot learning: A test in which a model must complete a task without having been given the requisite training data; for example, recognizing a lion after having been trained only on tigers.
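One common way to pull this off is to compare embeddings, scoring an input against descriptions of classes the model was never explicitly trained on. Every vector in this sketch is invented; a real system would get its embeddings from a trained model:

```python
import numpy as np

def cosine(a, b):
    """Similarity between two embedding vectors (1.0 means identical direction)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Invented embeddings for class descriptions; a real system would
# compute these with a trained text or image encoder.
class_embeddings = {
    "tiger": np.array([0.9, 0.1, 0.3]),
    "lion": np.array([0.8, 0.2, 0.4]),
    "bicycle": np.array([0.0, 0.9, 0.1]),
}
photo_embedding = np.array([0.8, 0.2, 0.42])  # pretend this is a lion photo

# Pick the class whose description is most similar to the input,
# even though the model never saw labeled "lion" training examples.
best = max(class_embeddings,
           key=lambda name: cosine(photo_embedding, class_embeddings[name]))
print(best)  # "lion"
```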