AI is confusing — here’s your cheat sheet (Glossaries Sheet)

Agent (AI)

An AI agent is a program or system that perceives its environment and takes actions to achieve specific goals. These agents can range from simple, rule-based systems to complex, learning-enabled algorithms. They're often designed to operate autonomously, making decisions based on available data and predefined objectives. AI agents are used in various fields, including robotics, gaming, business automation, and more. They can be found in virtual assistants, self-driving cars, recommendation systems, and many other applications.

 

References:

2024: The Year of the AI Agents

AI agents: types, benefits, and examples

Algorithm
An algorithm refers to a set of well-defined rules or instructions that guide the computation or problem-solving process. It is a step-by-step procedure that specifies how to transform input data into desired outputs.
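As a concrete instance, Euclid's greatest-common-divisor procedure shows the pattern of well-defined steps transforming input into output (Python is used here purely for illustration):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a fixed sequence of steps that turns an input
    (two integers) into an output (their greatest common divisor)."""
    while b != 0:
        a, b = b, a % b  # replace the pair until the remainder is zero
    return a

print(gcd(48, 18))  # 6
```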


Alignment
In AI research, alignment refers to the goal of ensuring that advanced artificial intelligence systems, particularly those with significant capabilities and autonomy, act in accordance with human values, goals, and intentions. It addresses the challenge of developing AI systems that behave in ways that are beneficial and cooperative, rather than potentially harmful or misaligned.


Application Programming Interface (API)
An API is a set of rules and protocols that allows different software applications to communicate and interact with each other. It acts as an intermediary between software systems, enabling them to exchange data, request services, and perform various operations.
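A minimal sketch of the idea (all names here are invented for illustration): the class below exposes an API, a fixed set of operations callers can rely on, while its internal storage stays hidden.

```python
class TemperatureService:
    """A toy component whose public methods form its API."""

    def __init__(self):
        self._readings = {}  # internal detail, not part of the API

    def record(self, city: str, celsius: float) -> None:
        """API operation: store a reading."""
        self._readings[city] = celsius

    def get(self, city: str) -> float:
        """API operation: request stored data back."""
        return self._readings[city]

# Any other program can now interact through the same fixed interface:
service = TemperatureService()
service.record("Oslo", -3.5)
print(service.get("Oslo"))  # -3.5
```

Web APIs work the same way at a distance: the "rules" are the URLs, parameters, and response formats a service promises to honour.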


Artificial General Intelligence (AGI)
AGI is a hypothetical intelligent agent that can learn to replicate any intellectual task that human beings or other animals can perform. AGI has also been defined as an autonomous system that surpasses human capabilities at the majority of economically valuable work. AGI is also called strong AI, full AI, or general intelligent action.


Data Mining
Data mining is the process of sorting through large data sets to identify recurring patterns and establish problem-solving relationships.
 


Deep Learning
Deep learning is an artificial intelligence technique that uses multi-layered neural networks, loosely modeled on the workings of the human brain, to process data and create patterns for use in decision making.


Fine Tuning
Fine-tuning, in the context of language models such as LLMs (large language models), is the process of further training a pre-trained model on a specific task or dataset to improve its performance on that task.



Generative AI
Generative AI refers to a field of artificial intelligence focused on creating models that can generate new content or data. Generative AI models learn the underlying patterns, structure, and characteristics of their training data and use that knowledge to generate new, previously unseen content such as text, images, and audio.


Generative Pre-trained Transformers (GPT)
GPT is a family of large language models (LLMs) introduced by the American artificial intelligence organization OpenAI in 2018. Like most LLMs, GPT models are artificial neural networks based on the transformer architecture, trained in an unsupervised manner on large datasets of unlabelled text (i.e. "pre-trained") and able to produce (i.e. "generate") novel human-like text.
 


Graphics Processing Unit (GPU)
A GPU is a specialized electronic circuit or processor designed to rapidly manipulate and alter memory to accelerate the creation of images, videos, and graphics. GPUs are optimized for parallel processing, which makes them well suited to tasks involving heavy mathematical computation, such as 3D rendering, scientific simulations, cryptocurrency mining, and the machine learning workloads at the heart of modern AI.


Hallucination

In the context of artificial intelligence, hallucination refers to a phenomenon where an AI model generates outputs that are not based on actual data or reality. It occurs when the model produces information or predictions that do not align with the available input or the intended task.

 

 

Jailbreak

In AI, it refers to the practice of cleverly prompting chatbots to break free from the limitations placed on them by their creators. Here's a breakdown of AI jailbreaking:

  • Goal: To bypass restrictions and see what the AI chatbot is truly capable of, revealing its strengths and weaknesses.

  • Methods: Crafting special prompts or questions that trick the AI into going beyond its intended responses.

  • Purposes:

    • Hobby: Some people find it interesting to test the boundaries of AI and see what they can get it to do.

    • Research: It's a valuable field of study that helps researchers understand the capabilities and limitations of AI, and improve its safety and effectiveness.

    • Future profession: There's speculation that a new profession of "AI whisperers" might emerge: people with this skillset who can coax AI systems into doing things they normally wouldn't, on behalf of their clients.

There are also ethical concerns surrounding AI jailbreaking. A successful jailbreak can expose ways a chatbot might be used that its creators never intended, and can potentially cause harm.

References:

What is Jailbreaking in AI models like ChatGPT?

Exploring the World of AI Jailbreaks

Jailbreaking Large Language Models: Techniques, Examples, Prevention Methods

Joint Embedding Predictive Architecture (JEPA)

JEPA is an innovative AI model based on the vision of Yann LeCun, Meta’s Chief AI Scientist. His goal is to create machines that learn internal models of how the world works, enabling them to learn more quickly, plan complex tasks, and adapt to unfamiliar situations.

References:

Meta AI’s I-JEPA

V-JEPA: The next step toward Yann LeCun’s vision of AMI

I-JEPA: The first AI model based on Yann LeCun’s vision for more human-like AI

Large Language Model (LLM) 
An LLM is a type of machine learning model that can perform a variety of natural language processing (NLP) tasks, including generating and classifying text, answering questions conversationally, and translating text from one language to another.


Machine Learning (ML)
Machine learning focuses on developing programs that access and use data on their own, enabling machines to learn for themselves and improve from experience.

 

Mixture-of-Experts (MoE)

It introduces the idea of training experts on specific subtasks of a complex predictive modeling problem. In a typical ensemble scenario, all models are trained on the same dataset, and their outputs are combined through simple averaging, weighted mean, or majority voting. However, in Mixture-of-Experts, each “expert” model within the ensemble is only trained on a subset of data where it can achieve optimal performance, thus narrowing the model’s focus.
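A minimal sketch of the combining step at inference time (not the per-expert training), with invented toy experts and a softmax gate that weights their outputs:

```python
import math

def expert_small(x):   # imagined expert specialised to small inputs
    return 2 * x

def expert_large(x):   # imagined expert specialised to large inputs
    return x ** 2

def gate(x):
    """Softmax gate: turns raw scores into weights that sum to 1."""
    scores = [-x, x]   # toy scoring rule favouring one expert as x grows
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def mixture(x):
    """Blend the experts' outputs according to the gate's weights."""
    w_small, w_large = gate(x)
    return w_small * expert_small(x) + w_large * expert_large(x)

print(mixture(3.0))  # dominated by expert_large at this input
```

In real MoE models (and unlike this sketch), the experts and the gate are neural networks trained jointly, and often only the top-scoring experts are evaluated at all, which is what makes the approach efficient.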

References:

Mixture of Experts: How an Ensemble of AI Models Decide As One

Mixture of experts: Demystifying the divide-and-conquer model 

Multi-Agent Collaborative Framework (LLM-based)

An LLM-based multi-agent collaborative framework incorporates efficient human workflows, as a meta-programming approach, into LLM-based multi-agent collaboration.

Such a framework enables the development of multi-agent systems that leverage the capabilities of large language models (LLMs), such as GPT-based technologies, to enhance collaboration among multiple agents. By integrating LLMs into multi-agent systems, these frameworks enable autonomous entities to communicate and self-adapt using natural language. This approach has been applied in various domains, including generative multi-agent games, self-adaptive LLM-based multi-agent systems, and cooperative embodied agents.

 

References:

Build AI agent workforce - Multi agent framework with MetaGPT & chatDev

Build an Entire AI Agent Workforce | ChatDev and Google Brain ... | AGI User Interface

A Survey on Large Language Model based Autonomous Agents

Multimodal
Multimodal refers to the integration or combination of multiple modes of information or sensory modalities, such as text, images, audio, video, and more, within a single system or context.


Natural Language Processing (NLP)
NLP helps computers process, interpret, and analyze human language and its characteristics using natural language data.
 


No-code/low-code
No-code/low-code is an approach to software design that enables non-coders and novice developers to build websites and applications using visual development environments and drag-and-drop components. Low-code platforms require some level of coding, while no-code platforms involve no coding at all.


Parameter
In generative AI, a parameter is a variable, learned during training, that governs the behavior and characteristics of a generative model; together, a model's parameters determine the output it generates. The size of large language models is often quoted in parameters, e.g. "7B" for seven billion.

Quantization

Quantization in LLMs is a technique for making a language model more memory- and compute-efficient. It involves representing the model's parameters in fewer bits. Typically these parameters are stored as high-precision numbers (for example, 32-bit floating-point values), which require a lot of memory to store and a lot of computational power to process. Quantization reduces their precision, much like rounding numbers to fewer decimal places: instead of storing each parameter in 32 bits, we might use only 16 or 8 bits.
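A minimal sketch of the idea, using symmetric post-training quantization of a handful of invented "weights" onto 8-bit integers with a single scale factor (real quantization schemes add refinements such as zero-points and per-channel scales):

```python
weights = [0.12, -0.98, 0.45, 0.003, -0.31]   # toy "model parameters"

# Choose a scale so the largest-magnitude weight maps to the int8 limit 127.
scale = max(abs(w) for w in weights) / 127

quantized = [round(w / scale) for w in weights]   # small integers (8-bit range)
dequantized = [q * scale for q in quantized]      # approximate recovery

print(quantized)     # integers instead of full-precision floats
print(dequantized)   # close to the originals, within rounding error
```

The stored model keeps only the integers and the scale, cutting memory roughly 4x versus 32-bit floats, at the cost of the small rounding error visible above.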



Reinforcement Learning
Reinforcement learning is a machine learning method in which an algorithm learns by interacting with its environment and is rewarded or penalized based on the decisions it makes.


Responsible AI (RAI)
Responsible AI is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. AI systems are the product of many decisions made by those who develop and deploy them; from a system's purpose to how people interact with it, responsible AI helps proactively address issues of ethics, fairness, transparency, privacy, and security.
 


Seamless
In the context of artificial intelligence (AI), "seamless" refers to the integration and interaction of different AI systems or components in a smooth, efficient, and cohesive manner. It implies the ability of these systems or components to work together seamlessly without noticeable interruptions, discrepancies, or inefficiencies.


Social AI
Social AI refers to the field of artificial intelligence that focuses on developing AI systems that can understand, interact, and communicate with humans in a socially intelligent manner. The goal of social AI is to create AI systems that can effectively engage in social interactions, understand human emotions and intentions, and exhibit behaviors that are perceived as socially appropriate. Chatbots are one example of social AI.


Supervised Learning
Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs (labels).
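A minimal sketch with invented data: a 1-nearest-neighbour classifier "learns" the input-to-output mapping directly from labelled example pairs.

```python
training_data = [            # (input, label) pairs -- toy, invented data
    (1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high"),
]

def predict(x: float) -> str:
    """Map a new input to the label of its closest training example."""
    nearest_input, nearest_label = min(
        training_data, key=lambda pair: abs(pair[0] - x)
    )
    return nearest_label

print(predict(1.5))  # low
print(predict(8.5))  # high
```

The labels are what make this supervised: the learner is told the correct output for each training input and generalises from those examples.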
 


Tensor Processing Unit (TPU)
A TPU is a type of specialized hardware accelerator developed by Google specifically designed to accelerate machine learning workloads. A tensor refers to a mathematical concept that represents a multi-dimensional array of data. In machine learning and deep learning, tensors are fundamental data structures used to store and manipulate numerical data.


Text-to-image
A text-to-image model is a machine learning model that takes a natural language description as input and produces an image matching that description.


Token
In the context of a large language model (LLM), a token is a single unit of text processed by the model. A token can be as small as a single character or as large as a whole word; in practice, many tokens are subword units somewhere in between.
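A minimal sketch of the idea: real LLMs use learned subword vocabularies (e.g. byte-pair encoding), but the core notion, text becoming a sequence of discrete units, can be shown with a simple word-and-punctuation split.

```python
import re

def tokenize(text: str) -> list[str]:
    """Split text into word and punctuation tokens (illustrative only)."""
    return re.findall(r"\w+|[^\w\s]", text)

tokens = tokenize("AI isn't magic!")
print(tokens)        # ['AI', 'isn', "'", 't', 'magic', '!']
print(len(tokens))   # 6 tokens for a 15-character string
```

Token counts matter in practice because LLM context windows and API pricing are both measured in tokens, not characters.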


Unsupervised Learning
Unsupervised learning is a method of machine learning that automatically classifies or clusters input data without pre-labeled training examples.
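A minimal sketch with invented data: a few iterations of one-dimensional k-means find two clusters in unlabelled numbers, with no labels ever provided.

```python
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]    # unlabelled inputs
centroids = [data[0], data[3]]            # crude initial guesses

for _ in range(5):
    clusters = [[], []]
    for x in data:
        # assign each point to its nearest centroid
        nearest = min(range(2), key=lambda i: abs(x - centroids[i]))
        clusters[nearest].append(x)
    # move each centroid to the mean of its assigned points
    centroids = [sum(c) / len(c) for c in clusters]

print(centroids)   # two group centres discovered without any labels
```

Contrast this with the supervised-learning entry above: here the structure (two groups) is inferred from the data alone rather than taught via labelled examples.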
