The launch of ChatGPT in late 2022 completely changed people’s relationship with finding information online. Suddenly, you could have a meaningful conversation with a machine: ask an AI chatbot a question in natural language, and it responds with coherent answers much like you’d get from a person. The shift was so transformative that Google, Meta, Microsoft and Apple quickly began integrating AI into their products.
But chatbots are only one part of the AI landscape. Sure, it’s great to have ChatGPT help with your homework or have Midjourney create images of mechs based on their country of origin, but the potential of generative AI could completely reshape the economy. According to the McKinsey Global Institute, it could be worth $4.4 trillion a year to the global economy, which is why you should expect to hear more and more about artificial intelligence.
AI has appeared in a dizzying array of products; a short list includes Google’s Gemini, Microsoft’s Copilot, Anthropic’s Claude, the Perplexity AI search tool, and gadgets from Humane and Rabbit. You can read reviews and ratings, news, explainers and how-to posts about these and other products on the AI Atlas hub.
As people become accustomed to a world intertwined with AI, new terms are popping up everywhere. Whether you want to look smart over drinks or impress at a job interview, here are some important AI terms you should know.
This glossary will be updated regularly.
Artificial General Intelligence (AGI): A concept that suggests a more advanced version of AI than we know today, one that could perform tasks much better than humans while also teaching and advancing its own capabilities.
Agentive: A system or model that exhibits agency, meaning the ability to autonomously pursue actions to achieve a goal. In the context of AI, an agentive model, such as an advanced self-driving car, can operate without constant supervision. Unlike an agentic framework, which works in the background, agentive frameworks are in the foreground and focus on the user experience.
AI Ethics: Principles aimed at preventing AI from harming humans. This is achieved through measures such as determining how AI systems collect data and how to address bias.
AI safety: An interdisciplinary field concerned with the long-term effects of AI and how it could suddenly evolve into a superintelligence that could be hostile to humans.
Algorithm: A series of instructions that allows a computer program to analyze data in a particular way, such as recognizing patterns, and then learn from those patterns to perform tasks on its own.
Alignment: Tweaking an AI to better produce the desired outcome. This can refer to anything from moderating content to maintaining positive interactions with humans.
Anthropomorphism: The tendency for humans to attribute humanlike characteristics to nonhuman objects. In AI, this can include believing a chatbot is more humanlike and aware than it actually is, such as believing it’s happy, sad or fully sentient.
Artificial Intelligence, or AI: The use of technology in computer programs or robotics to simulate human intelligence. A field of computer science aimed at building systems that can perform human tasks.
Autonomous agent: An AI model with functionality, programming, and other tools to perform specific tasks. For example, self-driving cars are autonomous agents because they have sensory input, GPS, and driving algorithms to navigate the roads themselves. Researchers at Stanford University have shown that autonomous agents can develop their own cultures, traditions, and shared languages.
Bias: In regard to large language models, errors arising from the training data. This can result in falsely attributing certain characteristics to certain races or groups based on stereotypes.
Chatbot: A program that communicates with humans through text that mimics human language.
ChatGPT: An AI chatbot developed by OpenAI that uses large language model technology.
Cognitive computing: Another term for artificial intelligence.
Data augmentation: Remixing existing data or adding a more diverse set of data to train AI.
Deep learning: An AI technique and subfield of machine learning that uses multiple parameters to recognize complex patterns in images, audio, and text. The process is inspired by the human brain and uses artificial neural networks to create patterns.
Diffusion: A machine learning method that takes existing data, such as a photo, and adds random noise. The diffusion model trains a network to redesign or restore that photo.
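The "add random noise" step can be sketched in a few lines of Python. This is only an illustration of the forward (noising) process; the variable names and the single fixed noise level are invented for this example, not taken from any particular diffusion model:

```python
import math
import random

def add_noise(pixels, noise_level, rnd):
    """Forward diffusion step: blend pixel values with Gaussian noise.

    A noise_level of 0.0 leaves the image unchanged; 1.0 gives pure noise.
    A diffusion model is trained to reverse this corruption.
    """
    keep = math.sqrt(1 - noise_level)
    mix = math.sqrt(noise_level)
    return [keep * p + mix * rnd.gauss(0, 1) for p in pixels]

rnd = random.Random(0)
photo = [rnd.random() for _ in range(64)]  # stand-in for an 8x8 grayscale image
noisy = add_noise(photo, 0.5, rnd)         # what the model learns to undo
```

Running this step repeatedly destroys the photo a little at a time; the trained network then learns to run the process in reverse, recovering an image from noise.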
Emergent behavior: When an AI model exhibits unintended abilities.
End-to-end learning, or E2E: A deep learning process in which a model is instructed to perform a task from start to finish. It isn’t trained to accomplish the task sequentially; instead, it learns from the inputs and solves it all at once.
Ethical considerations: Awareness of the ethical implications of AI and issues related to privacy, data use, fairness, abuse, and other safety issues.
foom: Also known as fast takeoff or hard takeoff. The concept that if someone builds an AGI, it might already be too late to save humanity.
Generative Adversarial Network (GAN): A generative AI model made up of two neural networks, a generator and a discriminator, used to generate new data. The generator creates new content, and the discriminator checks whether it looks genuine.
Generative AI: Content generation technology that uses AI to create text, video, computer code, or images. AI is fed large amounts of training data to find patterns and generate unique new responses. This response may also be similar to the source material.
Google Gemini: Google’s AI chatbot, which works similarly to ChatGPT but pulls information from the current web. ChatGPT, by contrast, is limited to data through 2021 and isn’t connected to the internet.
Guardrails: Policies and restrictions placed on AI models to ensure that data is treated responsibly and that the models do not create objectionable content.
Hallucination: An incorrect response from AI. This can include generative AI producing answers that are wrong but stated with confidence as if they were correct. The reasons for this aren’t entirely understood. For example, if you ask an AI chatbot, “When did Leonardo da Vinci paint the Mona Lisa?”, it may respond with the incorrect statement, “Leonardo da Vinci painted the Mona Lisa in 1815,” which is 300 years after it was actually painted.
Large language model (LLM): An AI model trained on massive amounts of text data to understand language and generate novel content in humanlike language.
Machine learning (ML): A component of AI that allows computers to learn and make better predictions without explicit programming. Can be combined with training sets to generate new content.
Microsoft Bing: Microsoft’s search engine, which can now use the technology powering ChatGPT to deliver AI-powered search results. It’s similar to Google Gemini in that it’s connected to the internet.
Multimodal AI: A type of AI that can process multiple types of input, such as text, images, video, and audio.
Natural language processing: A field of AI that uses machine learning and deep learning to enable computers to understand human language, often using learning algorithms, statistical models, and language rules.
Neural network: A computational model that resembles the structure of the human brain and is intended to recognize patterns in data. It consists of interconnected nodes or neurons that can recognize patterns and learn over time.
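The "interconnected nodes" idea is easier to see in code. Here is a minimal sketch of a forward pass through a tiny network in plain Python; the weights are invented for illustration, since in a real network training would learn them:

```python
import math

def neuron(inputs, weights, bias):
    """One node: a weighted sum of its inputs passed through an activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashes the output to (0, 1)

def forward(inputs, layers):
    """Feed the inputs through each layer of (weights, bias) neurons in turn."""
    for layer in layers:
        inputs = [neuron(inputs, weights, bias) for weights, bias in layer]
    return inputs

# A two-input network: one hidden layer of two neurons, then one output neuron.
# These weights are made up for this example; training would adjust them.
hidden_layer = [([0.5, -0.6], 0.1), ([-0.3, 0.8], 0.0)]
output_layer = [([1.2, -1.1], 0.2)]
result = forward([1.0, 0.0], [hidden_layer, output_layer])
```

Learning, in this picture, is just the process of nudging those weight numbers until the network's outputs match the patterns in the training data.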
Overfitting: An error in machine learning in which a model hews so closely to its training data that it can only identify specific examples from that data, but not new data.
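A toy illustration of the extreme case: a "model" that simply memorizes its training data scores perfectly on examples it has seen and fails on anything new. This lookup table isn't a real machine learning model, just a caricature of what overfitting does:

```python
# A lookup-table "model" that memorizes its training data: the extreme
# case of overfitting. The points and labels here are invented.
train = {(1.0, 1.0): "cat", (2.0, 2.0): "dog", (3.0, 3.0): "cat"}

def memorizer(point):
    """Returns the memorized label, or gives up on anything unseen."""
    return train.get(point, "unknown")

# Flawless on the training data...
train_accuracy = sum(memorizer(p) == label for p, label in train.items()) / len(train)
# ...but useless on a new point right next to one it memorized.
new_prediction = memorizer((1.1, 0.9))
```

A well-fit model would instead capture the general pattern, so that nearby new points get sensible predictions too.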
Paperclips: The paperclip maximizer theory, conceived by Oxford University philosopher Nick Bostrom, is a hypothetical scenario in which an AI system is told to create as many literal paperclips as possible. In pursuit of that goal, the AI system would consume or convert all available materials, which could include dismantling other machinery, even machinery that benefits humans, to produce more paperclips. The unintended consequence is that such a system could destroy humanity in its drive to make paperclips.
Parameters: Numerical values that give LLMs their structure and behavior, enabling them to make predictions.
Perplexity: The name of an AI-powered chatbot and search engine owned by Perplexity AI. It uses a large language model, like those found in other AI chatbots, to provide novel answers to your questions. Its connection to the open internet also lets it serve up-to-date information and results from around the web. A paid tier of the service, Perplexity Pro, is also available and uses other models, including GPT-4o, Claude 3 Opus, Mistral Large, the open-source Llama 3 and its own Sonar 32k. Pro users can additionally upload documents for analysis, generate images and interpret code.
Prompt: A suggestion or question you enter into an AI chatbot to get a response.
Prompt chaining: An AI’s ability to use information from previous interactions to color future responses.
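In practice this works by sending the whole conversation back with each new prompt. Here's a minimal sketch; the `ask` function and the fake model below are hypothetical stand-ins, not any real chatbot API:

```python
def ask(model, history, prompt):
    """Hypothetical stand-in for a chatbot API call. The key idea of prompt
    chaining: the full history, not just the new prompt, goes into every request."""
    history = history + [("user", prompt)]
    reply = model(history)
    return history + [("assistant", reply)], reply

# A fake "model" that just reports how much context it was given.
fake_model = lambda history: f"(a reply informed by {len(history)} prior turns)"

history = []
history, _ = ask(fake_model, history, "Name a famous painting.")
# "it" below only makes sense because the earlier exchange is sent along too.
history, answer = ask(fake_model, history, "Who painted it?")
```

Without the accumulated history, the second question would be unanswerable; chaining is what makes a conversation feel continuous.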
Stochastic parrot: An analogy for LLMs illustrating that the software doesn’t have a larger understanding of the meaning behind language, or of the world around it, regardless of how convincing its output sounds. The phrase refers to how a parrot mimics human words without understanding the meaning behind them.
Style Transfer: The ability to adapt the style of one image to the content of another image. This allows AI to interpret the visual attributes of one image and use them in another. For example, recreating Rembrandt’s self-portrait in the style of Picasso.
Temperature: A parameter set to control how random a language model’s output is. A higher temperature means the model takes more risks.
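Under the hood, temperature typically divides the model's raw scores before they're turned into probabilities. A minimal sketch, with made-up scores for three candidate next words:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into probabilities. Dividing by the
    temperature first sharpens (low T) or flattens (high T) the result."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # invented scores for three candidate next words
cautious = softmax_with_temperature(logits, 0.5)  # sharper: favors the top word
risky = softmax_with_temperature(logits, 2.0)     # flatter: more random picks
```

With the low temperature, the top-scoring word dominates; with the high temperature, the probabilities even out, so sampling is more likely to pick an unlikely (riskier) word.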
Text-to-image generation: Create images based on text descriptions.
Token: A small chunk of written text that an AI language model processes to formulate a response to a prompt. A token is equivalent to about four characters in English, or roughly three-quarters of a word.
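That rule of thumb is easy to turn into a quick estimator. Real tokenizers split text using learned subword rules, so actual counts vary from model to model; this is only a ballpark:

```python
def estimate_tokens(text):
    """Ballpark token count using the rough rule of one token per four
    characters of English. Actual tokenizers use learned subword rules,
    so real counts will differ by model."""
    return max(1, round(len(text) / 4))

rough_count = estimate_tokens("AI language models read text as tokens, not words.")
```

Estimates like this are handy for guessing whether a prompt will fit within a model's context limit, which is measured in tokens.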
Training data: Datasets (text, images, code, data, etc.) used to help train an AI model.
Transformer models: Neural network architectures and deep learning models that learn context by tracking relationships in data, such as between parts of a sentence or an image. So instead of analyzing a sentence one word at a time, a transformer can look at the whole sentence and understand the context.
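The relationship-tracking mechanism is called attention. Here is a minimal sketch of scaled dot-product attention in plain Python, using tiny two-dimensional "word vectors" invented for this example (real models use hundreds of dimensions per word):

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention: each position scores its relationship
    to every other position, then takes a weighted mix of their values.
    This is what lets a transformer consider a whole sentence at once."""
    dim = len(keys[0])
    output = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dim)
                  for k in keys]
        peak = max(scores)
        weights = [math.exp(s - peak) for s in scores]
        total = sum(weights)
        weights = [w / total for w in weights]  # softmax: weights sum to 1
        output.append([sum(w * v[j] for w, v in zip(weights, values))
                       for j in range(len(values[0]))])
    return output

# Three toy word vectors standing in for a three-word sentence.
vectors = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = attention(vectors, vectors, vectors)
```

Each output vector is a blend of all the input words, weighted by how strongly they relate, which is how context from the whole sentence flows into every position.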
Turing Test: Named after the famous mathematician and computer scientist Alan Turing, it tests a machine’s ability to behave like a human. If a human cannot distinguish a machine’s response from another human, the machine passes.
Weak AI, also known as narrow AI: AI that is focused on a specific task and is unable to learn beyond its skill set. Most current AI is weak AI.
Zero-shot learning: A test in which a model must complete a task without being given the requisite training data. An example would be recognizing a lion while only being trained on tigers.
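One common way to pull this off is to describe classes by their attributes, so a model can recognize a class it has never seen an example of. A toy sketch, with the attribute values invented for illustration:

```python
# Toy sketch of attribute-based zero-shot classification. The classifier
# only knows written attribute descriptions; the "lion" class is recognized
# from its description alone, with no lion examples in any training data.
class_descriptions = {
    "tiger": {"stripes": 1, "mane": 0, "feline": 1},
    "lion":  {"stripes": 0, "mane": 1, "feline": 1},  # description only
}

def classify(observed_attributes):
    """Pick the class whose description best matches the observed attributes."""
    def match(description):
        return sum(description[a] == v for a, v in observed_attributes.items())
    return max(class_descriptions, key=lambda c: match(class_descriptions[c]))

# An animal with a mane and no stripes matches the lion description best.
prediction = classify({"stripes": 0, "mane": 1, "feline": 1})
```

The same idea, with learned attribute detectors instead of hand-written ones, is how real zero-shot systems extend to classes absent from their training sets.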