AI & Machine Learning

2022

 

Nov 15

Solving brain dynamics gives rise to flexible machine-learning models

  • MIT researchers previously built "liquid" neural networks, inspired by the brains of small species: a class of flexible, robust machine-learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks such as driving and flying.

  • The new models have the same characteristics as liquid neural nets—flexible, causal, robust, and explainable—but are orders of magnitude faster and more scalable.

  • On a medical prediction task, for example, the new models were 220 times faster on a sample of 8,000 patients.

  • The new models are called "closed-form continuous-time" (CfC) neural networks.

  • "When we have a closed-form description of neurons and synapses' communication, we can build computational models of brains with billions of cells, a capability that is not possible today due to the high computational complexity of neuroscience models. The closed-form equation could facilitate such grand-level simulations and therefore opens new avenues of research for us to understand intelligence," says MIT CSAIL Research Affiliate Ramin Hasani

  • "Recent neural network architectures, such as neural ODEs and liquid neural networks, have hidden layers composed of specific dynamical systems representing infinite latent states instead of explicit stacks of layers," says Sildomar Monteiro
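
The "closed-form" in CfC refers to replacing the ODE solver that liquid and neural-ODE networks step through with a direct formula for the hidden state at time t. Below is a minimal single-unit sketch in plain Python, loosely following the published CfC form x(t) = σ(−f·t)·g + (1 − σ(−f·t))·h; the scalar weights and the single-neuron setup are illustrative assumptions, not the actual trained models:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical scalar parameters for one CfC unit. f sets the effective
# time constant; g and h are the two behaviors the closed-form solution
# interpolates between as time t grows.
def cfc_neuron(x, inp, t, wf=1.0, wg=0.7, wh=-0.4):
    f = math.tanh(wf * (x + inp))   # time-constant head
    g = math.tanh(wg * (x + inp))   # head dominant at small t
    h = math.tanh(wh * (x + inp))   # head dominant at large t
    gate = sigmoid(-f * t)          # closed-form time gate: no ODE solver
    return gate * g + (1.0 - gate) * h

# The state at any time t comes from a single formula evaluation,
# rather than from many small ODE-solver steps.
state_early = cfc_neuron(x=0.1, inp=1.0, t=0.01)
state_late = cfc_neuron(x=0.1, inp=1.0, t=100.0)
```

Because the state at any t is one formula evaluation, inference cost no longer depends on how many solver steps a stiff ODE would need, which is where speedups like the one reported above come from.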


Nov 7

Speaking the same language: How artificial neurons mimic biological neurons

  • most artificial neurons only emulate their biological counterparts electrically, without taking into account the wet biological environment that consists of ions, biomolecules and neurotransmitters.

  • Scientists led by Paschalis Gkoupidenis, group leader in Paul Blom's department at the Max Planck Institute for Polymer Research, have now tackled this problem and developed the first bio-realistic artificial neuron. This neuron can work in a biological environment, can produce the diverse spiking dynamics found in biology, and can therefore communicate with its "real" biological counterparts.

  • a non-linear element made of organic soft matter, something that also exists in biological neurons. "Such [an] artificial element could be the key for bio-realistic neuroprosthetics that will speak the same language as biology


Aug 23

New book co-written by philosopher claims AI will 'never' rule the world

  • a critical examination of AI's unjustifiable projections, such as machines detaching themselves from humanity, self-replicating, and becoming "full ethical agents." There cannot be a machine will, they say. 

  • AI that would match the general intelligence of humans is impossible because of the mathematical limits on what can be modeled and is "computable."

  • "In certain completely rule-determined confined settings, machine learning can be used to create algorithms that outperform humans," says Smith. "But this does not mean that they can 'discover' the rules governing just any activity taking place in an open environment, which is what the human brain achieves every day."


Aug 10

A new explainable AI paradigm that could enhance human-robot collaboration

  • developed a new AI system that can explain its decision-making processes to human users ... could be a new step toward the creation of more reliable and understandable AI.

  • aims to build collaborative trust between robots and humans

  • This essentially means that their system can actively learn and adapt its decision-making based on the feedback it receives from users on the fly. This ability to contextually adapt is characteristic of what is often referred to as the third/next wave of AI.

  • our work enables robots to estimate users' intentions and values during collaboration in real time, removing the need to code complicated, task-specific objectives into the robots beforehand, thus providing a better human-machine teaming paradigm.

  • This essentially means that a human user can understand why a robot or machine is acting in a specific way or coming to specific conclusions, and the machine or robot can infer why the human user is acting in specific ways. This can significantly enhance human-robot communication.

  •  generic human-robot collaboration


Jan 5                    

New method to make AI-generated voices more expressive


2021


Nov 29            

Machine-learning model could enable robots to understand interactions in the way humans do

  • When humans look at a scene, they see objects and the relationships between them. On top of your desk, there might be a laptop that is sitting to the left of a phone, which is in front of a computer monitor. Many deep learning models struggle to see the world this way because they don't understand the entangled relationships between individual objects.

  • The researchers used a machine-learning technique called energy-based models to represent the individual object relationships in a scene description. This technique enables them to use one energy-based model to encode each relational description, and then compose them together in a way that infers all objects and relationships.

  • They are also interested in eventually incorporating their model into robotics systems, enabling a robot to infer object relationships from videos and then apply this knowledge to manipulate objects in the world.
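
The composition idea can be illustrated with a toy version: give each relation its own energy function over object positions, sum the energies named in a scene description, and run gradient descent to find a layout that satisfies every relation at once. The 1-D positions, the quadratic penalty, and the desk scene below are illustrative assumptions, not the paper's learned energy-based models:

```python
# Toy relation energy over 1-D positions: low energy = satisfied.
# "a left of b" is violated when b - a < gap; quadratic penalty.
def left_of(a, b, gap=1.0):
    d = (b - a) - gap
    return d * d if d < 0 else 0.0

def grad_left_of(a, b, gap=1.0):
    # Returns (dE/da, dE/db) for the energy above.
    d = (b - a) - gap
    if d < 0:
        return (-2.0 * d, 2.0 * d)
    return (0.0, 0.0)

# Scene description "laptop left of phone, phone left of monitor":
# the composed scene energy is just the sum of the relation energies.
def scene_energy(laptop, phone, monitor):
    return left_of(laptop, phone) + left_of(phone, monitor)

def infer(laptop, phone, monitor, lr=0.1, steps=500):
    # Gradient descent on the summed energy infers all positions jointly.
    for _ in range(steps):
        g = [0.0, 0.0, 0.0]
        da, db = grad_left_of(laptop, phone)
        g[0] += da; g[1] += db
        da, db = grad_left_of(phone, monitor)
        g[1] += da; g[2] += db
        laptop -= lr * g[0]; phone -= lr * g[1]; monitor -= lr * g[2]
    return laptop, phone, monitor

laptop, phone, monitor = infer(0.0, 0.0, 0.0)  # objects start stacked at 0
```

Because each relation is a separate energy term, new combinations of relations can be composed at inference time without retraining, which is the property that lets one model handle descriptions it has never seen.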


Oct 4              

Artificial intelligence is smart, but does it play well with others?

  • In single-blind experiments, participants played two series of the game: One with the AI agent as their teammate, and the other with a rule-based agent, a bot manually programmed to play in a predefined way.

  • humans consistently hated playing with their AI teammate. They found it to be unpredictable, unreliable, and untrustworthy, and felt negatively even when the team scored well.

  • Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play the game of chess, that agent won't necessarily go drive a car. But you can use the same algorithms to train a different agent to drive a car, given the right data

  • The researchers note that the AI used in this study wasn't developed for human preference. But, that's part of the problem—not many are. Like most collaborative AI models, this model was designed to score as high as possible, and its success has been benchmarked by its objective performance.

  • The researchers think that the ability for the AI to explain its actions will engender trust. This will be the focus of their work for the next year.


Oct 6              

Brain cell differences could be key to learning in humans and AI

  • "the brain needs to be energy efficient while still being able to excel at solving complex tasks. Our work suggests that having a diversity of neurons in both brains and AI fulfills both these requirements and could boost learning."

  • Neurons are like snowflakes: they look the same from a distance, but on closer inspection it's clear that no two are exactly alike.

  • By contrast, each cell in an artificial neural network—the technology on which AI is based—is identical, with only their connectivity varying. Despite the speed at which AI technology is advancing, their neural networks do not learn as accurately or quickly as the human brain—and the researchers wondered if their lack of cell variability might be a culprit.

  • the researchers focused on tweaking the "time constant"—that is, how quickly each cell decides what it wants to do based on what the cells connected to it are doing. Some cells will decide very quickly, looking only at what the connected cells have just done. Other cells will be slower to react, basing their decision on what other cells have been doing for a while.

  • The results show that by allowing the network to combine slow and fast information, it was better able to solve tasks in more complicated, real-world settings.
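
A minimal sketch of the "time constant" idea, under the common leaky-integrator reading of it (the study's actual networks and tasks are more involved): each unit relaxes toward its input at its own speed, so a network mixing fast and slow units sees both the latest input and a running history.

```python
# Leaky unit: tau controls how quickly the state follows its input.
# A small tau reacts to what just happened; a large tau integrates
# over what other cells have been doing for a while.
def run_unit(inputs, tau, dt=1.0):
    h = 0.0
    for x in inputs:
        h += (dt / tau) * (-h + x)   # discretized leaky integration
    return h

signal = [1.0] * 5 + [0.0] * 5       # a pulse that switches off halfway

fast = run_unit(signal, tau=1.0)      # decides from the latest input only
slow = run_unit(signal, tau=20.0)     # still remembers the earlier pulse
```

After the pulse ends, the fast unit has fully reset while the slow unit still carries a trace of it; letting training tune tau per cell is what gives the network both timescales at once.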


Sept 21           

Researchers psychoanalyse AI

  • Just because we can develop an algorithm that lets artificial intelligence find patterns in data to best solve a task, it does not mean that we understand what patterns it finds. So even though we have created it, it does not mean that we know it.

  • Søren Hauberg and his colleagues have developed a method based on classical geometry, which makes it possible to see how an artificial intelligence has formed its "personality."

  • These classic geometric models have found new applications in machine learning, where they can be used to make a map of how compression has moved data around and thus go backwards through the AI's neural network and understand the learning process.

 

Simon's Comment:

  • Such development blurs the line between human and robot. If robotic psychology one day becomes a discipline, either humans will be understood as robots or robots will be understood as humans.


Aug 9              

Artificial neural networks modeled on real brains can perform cognitive tasks

  • artificial intelligence networks based on human brain connectivity can perform cognitive tasks efficiently.

  • By examining MRI data from a large Open Science repository, researchers reconstructed a brain connectivity pattern, and applied it to an artificial neural network (ANN).

  • They found that ANNs with human brain connectivity, known as neuromorphic neural networks, performed cognitive memory tasks more flexibly and efficiently than other benchmark architectures.
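
One way to read "applying a connectivity pattern to an ANN": use the empirical connectome as a binary mask that decides which recurrent weights may be nonzero. The 4-node matrix below is a toy stand-in for MRI-derived connectivity, and the ReLU recurrence is a generic update rule, not the paper's exact architecture:

```python
import random

random.seed(0)

N = 4
# Hypothetical binary "connectome": mask[i][j] = 1 means region j may
# project to region i. In the study this pattern came from human MRI
# data in an Open Science repository; here it is a toy stand-in.
mask = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]

# Weights constrained so that absent anatomical connections stay zero.
weights = [[random.uniform(-1, 1) * mask[i][j] for j in range(N)]
           for i in range(N)]

def step(state, inp):
    # One recurrent update through the masked weight matrix.
    new = []
    for i in range(N):
        z = inp[i] + sum(weights[i][j] * state[j] for j in range(N))
        new.append(max(0.0, z))      # ReLU
    return new

state = step([0.5, 0.2, 0.1, 0.4], [1.0, 0.0, 0.0, 0.0])
```

Training then adjusts only the permitted weights, so the network's wiring diagram remains the brain's while its strengths are learned for the task.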


July 14            

Enabling the 'imagination' of artificial intelligence

  • We were inspired by human visual generalization capabilities to try to simulate human imagination in machines

  • Humans can separate their learned knowledge by attributes—for instance, shape, pose, position, color—and then recombine them to imagine a new object. Our paper attempts to simulate this process using neural networks.

  • This is one of the long-sought goals of AI: creating models that can extrapolate. This means that, given a few examples, the model should be able to extract the underlying rules and apply them to a vast range of novel examples it hasn't seen before.

  • "controllable novel image synthesis," or what you might call imagination.

  • In the field of medicine, it could help doctors and biologists discover more useful drugs by disentangling the medicine function from other properties, and then recombining them to synthesize new medicine.

  • "This new disentanglement approach, for the first time, truly unleashes a new sense of imagination in A.I. systems, bringing them closer to humans' understanding of the world."


June 14           

What Google’s AI-designed chip tells us about the nature of intelligence

  • artificial intelligence software that can design computer chips faster than humans can

  • the researchers could reframe the chip floorplanning problem as a board game and could tackle it in the same way that other scientists had solved the game of Go.


June 7             

DeepMind scientists: Reinforcement learning is enough for general AI

  • reward maximization and trial-and-error experience are enough to develop behavior that exhibits the kind of abilities associated with intelligence. And from this, they conclude that reinforcement learning, a branch of AI that is based on reward maximization, can lead to the development of artificial general intelligence.


May 28           

Artificial intelligence system could help counter the spread of disinformation

  • Their goal was to create a system that would automatically detect disinformation narratives as well as those individuals who are spreading the narratives within social media networks.


May 5             

Toward a brain-like AI with hyperdimensional computing

  • "hyperdimensional computing," which can possibly take AI systems a step closer toward human-like cognition.    

  • a relatively new paradigm for computing that uses large vectors (around 10,000 bits each) and is inspired by patterns of neural activity in the human brain.

  • we can represent data holistically, meaning that the value of an object is distributed among many data points
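
A small self-contained sketch of the two core hyperdimensional operations on binary vectors: XOR "binding" to attach a value to a role, and majority-vote "bundling" to superimpose several bound pairs into one vector, which is exactly the holistic, distributed representation the bullet describes. The role and value names are made up for illustration:

```python
import random

random.seed(1)
D = 10000  # hypervector dimensionality, ~10,000 bits as in the bullet

def rand_hv():
    return [random.randint(0, 1) for _ in range(D)]

def bind(a, b):
    # XOR binding attaches a value to a role; it is self-inverse, so
    # bind(role, bind(role, value)) recovers the value exactly.
    return [x ^ y for x, y in zip(a, b)]

def bundle(vectors):
    # Bitwise majority vote: the result stays similar to every input,
    # spreading each item's identity across all D bits.
    half = len(vectors) / 2.0
    return [1 if sum(bits) > half else 0 for bits in zip(*vectors)]

def similarity(a, b):
    # Fraction of matching bits: ~0.5 for unrelated vectors, 1.0 if equal.
    return sum(x == y for x, y in zip(a, b)) / D

# Encode the record {color: red, shape: round, size: big} as one vector.
color, red = rand_hv(), rand_hv()
shape, round_ = rand_hv(), rand_hv()
size, big = rand_hv(), rand_hv()
record = bundle([bind(color, red), bind(shape, round_), bind(size, big)])

# Query the color: unbinding yields a noisy copy of `red` that is far
# more similar to `red` than to any unrelated hypervector.
recovered = bind(color, record)
```

With three bundled pairs, `recovered` matches `red` on roughly 75% of bits but matches unrelated vectors on about 50%, a gap that is statistically unmissable at 10,000 dimensions; this tolerance of noise and partial information is the brain-like property the paradigm is after.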


April 16          

Toward deep-learning models that can reason about code more like humans

  • A machine capable of programming itself

  • advances in natural language processing

  • predicting what software developers will do next, and offering an assist.

  • the model is able to learn from its mistakes
