
Research

2024

March 14

Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking (paper)

February 22

AI Jailbreak Breakthrough!

 

2023

November 27

DOBB-E: 6D General AI Robot Breakthrough

October 9

Learning Interactive Real-World Simulators

  • Generative models trained on internet data have revolutionized how text, image, and video content can be created. Perhaps the next milestone for generative models is to simulate realistic experience in response to actions taken by humans, robots, and other interactive agents. Applications of a real-world simulator range from controllable content creation in games and movies, to training embodied agents purely in simulation that can be directly deployed in the real world. 

  • Paper: Learning Interactive Real-World Simulators

October 3

Google DeepMind created the Open X-Embodiment dataset and RT-X model

September 29

The Dawn of LMMs: Preliminary Explorations with GPT-4V(ision)

  • Large multimodal models (LMMs) extend large language models (LLMs) with multi-sensory skills, such as visual understanding, to achieve stronger generic intelligence. In this paper, we analyze the latest model, GPT-4V(ision), to deepen the understanding of LMMs. The analysis focuses on the intriguing tasks that GPT-4V can perform, containing test samples to probe the quality and genericity of GPT-4V's capabilities, its supported inputs and working modes, and the effective ways to prompt the model.

  • The Dawn of Large Multimodal Models: Breakthrough and Report Highlights

August 6-10

Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold (SIGGRAPH 2023 Conference Proceedings)

  • Synthesizing visual content that meets users' needs often requires flexible and precise controllability of the pose, shape, expression, and layout of the generated objects.

  • Existing approaches gain controllability of generative adversarial networks (GANs) via manually annotated training data or a prior 3D model, which often lack flexibility, precision, and generality.

August 6

Generative Agents: Interactive Simulacra of Human Behavior

June 5

Orca: Progressive Learning from Complex Explanation Traces of GPT-4

June 1

Thought Cloning: Learning to Think while Acting by Imitating Human Thinking

  • Language gives humans exceptional abilities to generalize, explore, plan, replan, and adapt to new situations, yet Reinforcement Learning (RL) agents are far from human-level performance in any of these abilities. We hypothesize one reason for such cognitive deficiencies is that they lack the benefits of thinking in language, and that we can improve AI agents by training them to think like humans do.

  • We introduce a novel Imitation Learning framework, Thought Cloning, where the idea is to not just clone the behaviors of human demonstrators, but also the thoughts humans have as they perform these behaviors (a minimal sketch of this dual objective follows this list).

  • Thought Cloning also provides important benefits for AI Safety and Interpretability, and makes it easier to debug and improve AI.

  • The code and dataset are available in ShengranHu/Thought-Cloning
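
A minimal sketch of the dual imitation objective described above, assuming a discrete action space and a token-level thought decoder (the ThoughtCloningLoss name and the tensor shapes are illustrative, not the repository's actual code):

```python
import torch
import torch.nn as nn

class ThoughtCloningLoss(nn.Module):
    """Imitate both the demonstrator's actions and their verbalized thoughts."""
    def __init__(self, thought_weight: float = 1.0):
        super().__init__()
        self.action_loss = nn.CrossEntropyLoss()   # behavior cloning term
        self.thought_loss = nn.CrossEntropyLoss()  # thought cloning term (over thought tokens)
        self.w = thought_weight

    def forward(self, action_logits, action_targets, thought_logits, thought_targets):
        # action_logits: (batch, n_actions); thought_logits: (batch, seq_len, vocab)
        bc = self.action_loss(action_logits, action_targets)
        tc = self.thought_loss(thought_logits.flatten(0, 1), thought_targets.flatten())
        return bc + self.w * tc
```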

June 1

Scientists design artificial synapses for neuromorphic computing

  • Researchers made a new interface-type memristive device, which their results suggest can be used to build artificial synapses for next-generation neuromorphic computing.

  • The device demonstrates a human brain-like ability that opens up new possibilities for computing and devices.

  • We could see neuromorphic computing enable a lot of applications that require intelligence, from self-driving cars, to drones, to security cameras. Basically, many things that people are capable of doing, these types of devices will be able to do.

May 23

QLoRA: An Open-Source Breakthrough Method for Fine-Tuning LLMs

"These developments have democratized the fine-tuning of AI models, significantly reducing the computational resources needed and making it accessible for a wider audience."​

"Guanaco, the first model fine-tuned using QLoRA, showcases its efficacy, achieving 99.3 percent of ChatGPT's performance level in just 24 hours with a single 48 gigabyte GPU."

 

May 18

Meta’s Breakthrough Language Model: LIMA (Less Is More for Alignment)

- "LIMA can apply its learned knowledge to new and unfamiliar tasks, demonstrating a level of flexibility and adaptability, shared by the researchers."

- "with only limited instruction tuning data, models like LIMA can generate high-quality output."

May 17

Tree of Thoughts: Deliberate Problem Solving with Large Language Models

  • Research on human problem-solving suggests that people search through a combinatorial problem space – a tree where the nodes represent partial solutions, and the branches correspond to operators that modify them (a minimal search sketch appears after this list).

  • Humans preferred ToT (Tree of Thoughts) over CoT (Chain of Thought) in 41 out of 100 passage pairs, while preferring CoT over ToT in only 21.

  • Detailed explanation: New Prompt Achieves 🚀 900% Logic & Reasoning Improvement (GPT-4)
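
A minimal, model-agnostic sketch of that search procedure (a breadth-first variant; `propose` and `score` stand in for LLM calls and are assumptions, not the paper's code):

```python
from typing import Callable, List

def tree_of_thoughts(
    root: str,
    propose: Callable[[str], List[str]],   # expand a partial solution into candidate next thoughts
    score: Callable[[str], float],         # evaluate how promising a partial solution looks
    beam_width: int = 3,
    depth: int = 4,
) -> str:
    """Breadth-first search over partial solutions, keeping only the best few per level."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for state in frontier for c in propose(state)]
        if not candidates:
            break
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam_width]   # deliberate look-ahead: prune weak branches
    return max(frontier, key=score)
```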

April 11

Amazon creates a new user-centric simulation platform to develop embodied AI agents

  • Amazon Alexa AI recently created a new simulation platform specifically for embodied AI research, the field specialized in the development of autonomous robots

  • Our primary objective was to develop an interactive Embodied AI framework to catalyze the creation of next-generation embodied AI agents

  • Several embodied-AI simulation platforms have been proposed in recent years (e.g., AI2Thor, Habitat, iGibson). These platforms support simulated scenes, where embodied agents can navigate and interact with objects, yet most of them are not designed for humans to interact with agents due to the lack of user-centricity

  • The platform, Alexa Arena, offers a framework with user-centric capabilities, such as smooth visuals during robot navigation, continuous background animations and sounds, viewpoints in rooms to simplify room-to-room navigation, and visual hints embedded in the scene that help human users generate suitable instructions for task completion.

  • In a game-like setting, users can interact with virtual robots through natural-language dialogue, providing invaluable feedback and helping the robots learn and complete their tasks.

​​

April 10

Powerful new Meta AI tool can identify individual items within images

  • allowing computers to detect and comprehend the elements of a previously unseen image and isolate them for user interaction

  • Meta's Segment Anything Model (SAM) hunts for related pixels in an image and identifies the common components that make up all the pieces of the picture

  • SAM can be activated by user clicks or text prompts (a point-prompt sketch appears after this list). Meta researchers envision SAM's further utilization in the AR/VR realm. When users focus on an object, it can be delineated, defined and "lifted" into a 3D image

  • A free working model is available online. Users can select from an image gallery or upload their own photos. They can then tap anywhere on the screen or draw a rectangle around an item of interest and watch SAM define, for instance, the outline of a nose, face or entire body.
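
A minimal point-prompt sketch using Meta's open-source segment-anything package (the image path and click coordinates are placeholders to supply yourself; the checkpoint is downloaded separately from Meta's repository):

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# load a SAM checkpoint
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# "photo.jpg" and the click location are placeholders
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# a single foreground click; label 1 = foreground, 0 = background
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,   # return several candidate masks at different granularities
)
best_mask = masks[np.argmax(scores)]
```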

2022

 

Nov 15

Solving brain dynamics gives rise to flexible machine-learning models

  • built "liquid" neural networks, inspired by the brains of small species: a class of flexible, robust machine learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks, like driving and flying.

  • These models have the same characteristics as liquid neural nets—flexible, causal, robust, and explainable—but are orders of magnitude faster and more scalable.

  • On a medical prediction task, for example, the new models were 220 times faster on a sampling of 8,000 patients.

  • "closed-form continuous-time" (CfC)

  • "When we have a closed-form description of neurons and synapses' communication, we can build computational models of brains with billions of cells, a capability that is not possible today due to the high computational complexity of neuroscience models. The closed-form equation could facilitate such grand-level simulations and therefore opens new avenues of research for us to understand intelligence," says MIT CSAIL Research Affiliate Ramin Hasani

  • "Recent neural network architectures, such as neural ODEs and liquid neural networks, have hidden layers composed of specific dynamical systems representing infinite latent states instead of explicit stacks of layers," says Sildomar Monteiro

Nov 7

Speaking the same language: How artificial neurons mimic biological neurons

  • most artificial neurons only emulate their biological counterparts electrically, without taking into account the wet biological environment that consists of ions, biomolecules and neurotransmitters.

  • Scientists led by Paschalis Gkoupidenis, group leader in Paul Blom's department at the Max Planck Institute for Polymer Research, have now tackled this problem and developed the first bio-realistic artificial neuron. This neuron can work in a biological environment and is able to produce diverse spiking dynamics that can be found in biology, and can therefore communicate with its "real" biological counterparts.

  • a non-linear element made of organic soft matter, something that also exists in biological neurons. "Such [an] artificial element could be the key for bio-realistic neuroprosthetics that will speak the same language as biology."

Aug 23

New book co-written by philosopher claims AI will 'never' rule the world

  • The book offers a critical examination of unjustifiable projections about AI, such as machines detaching themselves from humanity, self-replicating, and becoming "full ethical agents." There cannot be a machine will, the authors say.

  • AI that would match the general intelligence of humans is impossible because of the mathematical limits on what can be modeled and is "computable."

  • "In certain completely rule-determined confined settings, machine learning can be used to create algorithms that outperform humans," says Smith. "But this does not mean that they can 'discover' the rules governing just any activity taking place in an open environment, which is what the human brain achieves every day."

 

Aug 10

A new explainable AI paradigm that could enhance human-robot collaboration

  • developed a new AI system that can explain its decision-making processes to human users ... could be a new step toward the creation of more reliable and understandable AI.

  • aims to build collaborative trust between robots and humans

  • This essentially means that their system can actively learn and adapt its decision-making based on the feedback it receives from users on the fly. This ability to contextually adapt is characteristic of what is often referred to as the third/next wave of AI.

  • our work enables robots to estimate users' intentions and values during the collaboration in real-time, saving the need to code complicated and specific objectives to the robots beforehand, thus providing a better human-machine teaming paradigm.

  • This essentially means that a human user can understand why a robot or machine is acting in a specific way or coming to specific conclusions, and the machine or robot can infer why the human user is acting in specific ways. This can significantly enhance human-robot communication.

  • The framework is designed for generic human-robot collaboration.

May 20

STaR: Self-Taught Reasoner Bootstrapping Reasoning With Reasoning

Jan 5                    

New method to make AI-generated voices more expressive

2021

Nov 29            

Machine-learning model could enable robots to understand interactions in the way humans do

  • When humans look at a scene, they see objects and the relationships between them. On top of your desk, there might be a laptop that is sitting to the left of a phone, which is in front of a computer monitor. Many deep learning models struggle to see the world this way because they don't understand the entangled relationships between individual objects.

  • The researchers used a machine-learning technique called energy-based models to represent the individual object relationships in a scene description. This technique enables them to use one energy-based model to encode each relational description, and then compose them together in a way that infers all objects and relationships (a toy sketch appears after this list).

  • They are also interested in eventually incorporating their model into robotics systems, enabling a robot to infer object relationships from videos and then apply this knowledge to manipulate objects in the world.
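
An illustrative toy of that composition idea (not MIT's actual model): each relation contributes an energy term over object coordinates, composing relations means summing the terms, and inference searches for a low-energy layout.

```python
import numpy as np

def relation_energy(layout):
    """Total energy of a toy layout: (laptop x, phone x, monitor depth, phone depth)."""
    laptop_x, phone_x, monitor_d, phone_d = layout
    e_left  = max(0.0, laptop_x - phone_x + 0.1)   # "laptop is left of phone" (with a margin)
    e_front = max(0.0, phone_d - monitor_d + 0.1)  # "phone is in front of monitor"
    return e_left + e_front                        # composing relations = summing energies

# crude random search for a layout that satisfies both relations
rng = np.random.default_rng(0)
best = min((rng.uniform(0.0, 1.0, 4) for _ in range(2000)), key=relation_energy)
print(np.round(best, 2), relation_energy(best))
```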

Oct 4              

Artificial intelligence is smart, but does it play well with others?

  • In single-blind experiments, participants played two series of the game: One with the AI agent as their teammate, and the other with a rule-based agent, a bot manually programmed to play in a predefined way.

  • humans consistently hated playing with their AI teammate. They found it to be unpredictable, unreliable, and untrustworthy, and felt negatively even when the team scored well.

  • Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play the game of chess, that agent won't necessarily go drive a car. But you can use the same algorithms to train a different agent to drive a car, given the right data

  • The researchers think that the ability for the AI to explain its actions will engender trust. This will be the focus of their work for the next year.

Oct 6              

Brain cell differences could be key to learning in humans and AI

  • "the brain needs to be energy efficient while still being able to excel at solving complex tasks. Our work suggests that having a diversity of neurons in both brains and AI fulfills both these requirements and could boost learning."

  • Neurons are like snowflakes: they look the same from a distance but on further inspection it's clear that no two are exactly alike.

  • By contrast, each cell in an artificial neural network—the technology on which AI is based—is identical, with only their connectivity varying. Despite the speed at which AI technology is advancing, their neural networks do not learn as accurately or quickly as the human brain—and the researchers wondered if their lack of cell variability might be a culprit.

  • The researchers focused on tweaking the "time constant"—that is, how quickly each cell decides what it wants to do based on what the cells connected to it are doing. Some cells will decide very quickly, looking only at what the connected cells have just done. Other cells will be slower to react, basing their decision on what other cells have been doing for a while (see the sketch after this list).

  • The results show that by allowing the network to combine slow and fast information, it was better able to solve tasks in more complicated, real-world settings.
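
An illustrative sketch (not the study's model) of what a per-neuron time constant does: each unit relaxes toward its input at its own speed, so a mixed population retains both fast and slow information.

```python
import numpy as np

def leaky_step(state, inp, tau, dt=1.0):
    """Euler step of dx/dt = (inp - x) / tau: small tau reacts fast, large tau reacts slowly."""
    return state + (dt / tau) * (inp - state)

rng = np.random.default_rng(0)
taus = rng.uniform(1.0, 20.0, size=5)   # heterogeneous time constants, like diverse neurons
state = np.zeros(5)

for t in range(50):
    inp = np.ones(5) if t < 25 else np.zeros(5)   # a step input that switches off halfway
    state = leaky_step(state, inp, taus)

# fast units (small tau) have already forgotten the input; slow units still carry a trace
print(np.round(state, 3))
```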

Sept 21           

Researchers psychoanalyse AI

  • Just because we can develop an algorithm that lets artificial intelligence find patterns in data to best solve a task, it does not mean that we understand what patterns it finds. So even though we have created it, it does not mean that we know it.

  • Søren Hauberg and his colleagues have developed a method based on classical geometry, which makes it possible to see how an artificial intelligence has formed its "personality."

  • These classic geometric models have found new applications in machine learning, where they can be used to make a map of how compression has moved data around and thus go backwards through the AI's neural network and understand the learning process.

 

Simon's Comment:

  • Such development blurs the line between human and robot. If one day robotic psychology becomes a discipline, either humans will be understood as robots or robots will be understood as humans.

Aug 9              

Artificial neural networks modeled on real brains can perform cognitive tasks

  • artificial intelligence networks based on human brain connectivity can perform cognitive tasks efficiently.

  • By examining MRI data from a large Open Science repository, researchers reconstructed a brain connectivity pattern and applied it to an artificial neural network (ANN); a toy illustration follows this list.

  • They found that ANNs with human brain connectivity, known as neuromorphic neural networks, performed cognitive memory tasks more flexibly and efficiently than other benchmark architectures.
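
A toy illustration (not the study's code) of one way brain connectivity can shape an artificial network: an empirical connectivity matrix is used as a fixed sparsity mask on a recurrent layer, so only anatomically connected units exchange signals. The matrix below is random and stands in for MRI-derived data.

```python
import numpy as np

n_regions = 64                                   # placeholder network size
rng = np.random.default_rng(0)
# stand-in for an MRI-derived connectivity (connectome) matrix
connectome = (rng.random((n_regions, n_regions)) < 0.1).astype(float)

# masked recurrent weights: the wiring is fixed by the connectome, the values are trainable
W = rng.normal(0.0, 0.1, (n_regions, n_regions)) * connectome

def step(h, x):
    """One recurrent update; zeroed entries of W never carry signal."""
    return np.tanh(W @ h + x)

h = np.zeros(n_regions)
for _ in range(10):
    h = step(h, rng.normal(0.0, 1.0, n_regions))
print(np.round(h[:5], 3))
```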

July 14            

Enabling the 'imagination' of artificial intelligence

  • We were inspired by human visual generalization capabilities to try to simulate human imagination in machines

  • Humans can separate their learned knowledge by attributes—for instance, shape, pose, position, color—and then recombine them to imagine a new object. Our paper attempts to simulate this process using neural networks.

  • This is one of the long-sought goals of AI: creating models that can extrapolate. This means that, given a few examples, the model should be able to extract the underlying rules and apply them to a vast range of novel examples it hasn't seen before.

  • "controllable novel image synthesis," or what you might call imagination.

  • In the field of medicine, it could help doctors and biologists discover more useful drugs by disentangling the medicine function from other properties, and then recombining them to synthesize new medicine.

  • "This new disentanglement approach, for the first time, truly unleashes a new sense of imagination in A.I. systems, bringing them closer to humans' understanding of the world."

June 14           

What Google’s AI-designed chip tells us about the nature of intelligence

  • artificial intelligence software that can design computer chips faster than humans can

  • the researchers could reframe the chip floorplanning problem as a board game and could tackle it in the same way that other scientists had solved the game of Go.

June 7             

DeepMind scientists: Reinforcement learning is enough for general AI

  • reward maximization and trial-and-error experience are enough to develop behavior that exhibits the kind of abilities associated with intelligence. And from this, they conclude that reinforcement learning, a branch of AI that is based on reward maximization, can lead to the development of artificial general intelligence.

May 28           

Artificial intelligence system could help counter the spread of disinformation

  • Their goal was to create a system that would automatically detect disinformation narratives as well as those individuals who are spreading the narratives within social media networks.

May 5             

Toward a brain-like AI with hyperdimensional computing

  • "hyperdimensional computing," which can possibly take AI systems a step closer toward human-like cognition.    

  • Hyperdimensional computing is a relatively new paradigm that computes with very large vectors (e.g., 10,000 bits each) and is inspired by patterns of neural activity in the human brain (a small sketch follows this list).

  • we can represent data holistically, meaning that the value of an object is distributed among many data points
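
A small illustrative sketch of the binary flavor of hyperdimensional computing (the toy record and function names are illustrative, not from the article): values are bound and bundled across 10,000-bit vectors, so no single bit carries an object's value on its own.

```python
import numpy as np

D = 10_000                      # hypervector dimensionality, as in the article
rng = np.random.default_rng(0)

def random_hv():
    """A random dense binary hypervector."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Binding (XOR): associates two vectors; the result resembles neither input."""
    return np.bitwise_xor(a, b)

def bundle(vs):
    """Bundling (bitwise majority): superposes vectors; the result resembles each input."""
    vs = list(vs)
    if len(vs) % 2 == 0:
        vs.append(random_hv())  # random tie-breaker for even counts
    return (np.sum(vs, axis=0, dtype=np.int64) > len(vs) // 2).astype(np.uint8)

def similarity(a, b):
    """Normalized Hamming similarity in [0, 1]; unrelated vectors score about 0.5."""
    return 1.0 - float(np.mean(a != b))

# encode a tiny record {color: red, shape: circle} as a single hypervector
color, shape = random_hv(), random_hv()
red, circle = random_hv(), random_hv()
record = bundle([bind(color, red), bind(shape, circle)])

# query "what is the color?": unbinding with `color` recovers a noisy copy of `red`
noisy = bind(record, color)
print(similarity(noisy, red), similarity(noisy, circle))   # roughly 0.75 vs 0.5
```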

April 16          

Toward deep-learning models that can reason about code more like humans

  • A machine capable of programming itself

  • advances in natural language processing

  • predicting what software developers will do next, and offering an assist.

  • the model is able to learn from its mistakes

2020

Feb 11

Emergent Tool Use From Multi-Agent Autocurricula

  • Video Explanation: Multi-Agent Hide and Seek

  • "multi-agent competition may scale better with increasing environment complexity and leads to behavior that centers around far more human-relevant skills than other self-supervised reinforcement learning methods" (p.1)
