RSS Aggregator
Tip: How to limit background mobile data downloads? A guide for both iOS and Android
InstallFest 2021 – CFP
How Mirroring the Architecture of the Human Brain Is Speeding Up AI Learning
While AI can carry out some impressive feats when trained on millions of data points, the human brain can often learn from a tiny number of examples. New research shows that borrowing architectural principles from the brain can help AI get closer to our visual prowess.
The prevailing wisdom in deep learning research is that the more data you throw at an algorithm, the better it will learn. And in the era of Big Data, that’s easier than ever, particularly for the large data-centric tech companies carrying out a lot of the cutting-edge AI research.
Today’s largest deep learning models, like OpenAI’s GPT-3 and Google’s BERT, are trained on billions of data points, and even more modest models require large amounts of data. Collecting these datasets and investing the computational resources to crunch through them is a major bottleneck, particularly for less well-resourced academic labs.
It also means today’s AI is far less flexible than natural intelligence. While a human only needs to see a handful of examples of an animal, a tool, or some other category of object to be able to pick it out again, most AI systems need to be trained on many examples of an object before they can recognize it.
There is an active sub-discipline of AI research aimed at what is known as “one-shot” or “few-shot” learning, where algorithms are designed to be able to learn from very few examples. But these approaches are still largely experimental, and they can’t come close to matching the fastest learner we know—the human brain.
This prompted a pair of neuroscientists to see if they could design an AI that could learn from few data points by borrowing principles from how we think the brain solves this problem. In a paper in Frontiers in Computational Neuroscience, they explained that the approach significantly boosts AI’s ability to learn new visual concepts from few examples.
“Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples,” Maximilian Riesenhuber, from Georgetown University Medical Center, said in a press release. “We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing.”
Several decades of neuroscience research suggest that the brain’s ability to learn so quickly depends on its ability to use prior knowledge to understand new concepts based on little data. When it comes to visual understanding, this can rely on similarities of shape, structure, or color, but the brain can also leverage abstract visual concepts thought to be encoded in a brain region called the anterior temporal lobe (ATL).
“It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter,” said paper co-author Joshua Rule, from the University of California Berkeley.
The researchers decided to try and recreate this capability by using similar high-level concepts learned by an AI to help it quickly learn previously unseen categories of images.
Deep learning algorithms work by getting layers of artificial neurons to learn increasingly complex features of an image or other data type, which are then used to categorize new data. For instance, early layers will look for simple features like edges, while later ones might look for more complex ones like noses, faces, or even more high-level characteristics.
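To make that layered hierarchy concrete, here is a minimal sketch in PyTorch. It is illustrative only, not the authors’ code; the network name and layer sizes are hypothetical, chosen just to show simple features feeding into more abstract ones.

```python
# Minimal sketch (PyTorch) of a layered feature hierarchy.
# Hypothetical architecture, not the paper's model.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.early = nn.Sequential(          # low-level features (edges, simple textures)
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.late = nn.Sequential(           # higher-level combinations of those features
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)  # categorizes from the top-level features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.early(x)       # responds to simple patterns like edges
        x = self.late(x)        # builds more abstract, "conceptual" features
        return self.head(x.flatten(1))

logits = TinyConvNet()(torch.randn(1, 3, 64, 64))  # one dummy RGB image
```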
First, the researchers trained the AI on 2.5 million images across 2,000 different categories from the popular ImageNet dataset. They then extracted features from various layers of the network, including the very last layer before the output layer. They refer to these as “conceptual features” because they are the highest-level features learned, and most similar to the abstract concepts that might be encoded in the ATL.
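A hedged sketch of this kind of feature extraction follows. The paper’s own network and training set are not reproduced here; a torchvision ResNet-18 pretrained on ImageNet stands in, and the “conceptual features” are taken as the activations of the last layer before the classification head.

```python
# Sketch: extract penultimate-layer ("conceptual") features from a
# pretrained network. ResNet-18 is a stand-in, not the paper's model.
import torch
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the output head, keep the features
backbone.eval()

with torch.no_grad():
    images = torch.randn(8, 3, 224, 224)   # stand-in for a real image batch
    conceptual = backbone(images)          # shape: (8, 512)
```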
They then used these different sets of features to train the AI to learn new concepts from just 2, 4, 8, 16, 32, 64, or 128 examples. They found that the AI using the conceptual features performed much better than versions trained on lower-level features when examples were few, though the gap shrank as more training examples were provided.
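The few-shot protocol can be sketched as below, assuming a simple linear classifier and synthetic stand-in features (the paper’s actual classifier and feature data are not specified here); the point is only the shape of the experiment: fit on n examples per new class, then measure test accuracy as n grows.

```python
# Sketch of the few-shot evaluation loop with synthetic features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_split(n_per_class=200, dim=64, shift=0.2):
    # Two "new" classes whose feature means differ slightly.
    labels = np.repeat([0, 1], n_per_class)
    feats = rng.normal(size=(2 * n_per_class, dim)) + labels[:, None] * shift
    return feats, labels

train_x, train_y = make_split()
test_x, test_y = make_split()

for n in (2, 4, 8, 16, 32, 64, 128):
    # Sample n training examples per class, fit, then evaluate.
    idx = np.concatenate([
        rng.choice(np.flatnonzero(train_y == c), n, replace=False)
        for c in (0, 1)
    ])
    clf = LogisticRegression(max_iter=1000).fit(train_x[idx], train_y[idx])
    print(f"n={n:3d}  test accuracy={clf.score(test_x, test_y):.3f}")
```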
While the researchers admit the challenge they set their AI was relatively simple and only covers one aspect of the complex process of visual reasoning, they said that using a biologically plausible approach to solving the few-shot problem opens up promising new avenues in both neuroscience and AI.
“Our findings not only suggest techniques that could help computers learn more quickly and efficiently, they can also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is not yet well understood,” Riesenhuber said.
As the researchers note, the human visual system is still the gold standard when it comes to understanding the world around us. Borrowing from its design principles might turn out to be a profitable direction for future research.
Image Credit: Gerd Altmann from Pixabay
GNU Radio 3.9.0.0
Google will limit some features in Chromium and derived browsers; they will be available only to Chrome users
Microsoft is toying with the idea of bringing parts of Windows 10X into Windows, such as a floating Start menu
What’s new in Firefox 85, due out next week
Intel is done with Optane-series SSDs on the desktop; in limited form they will remain only for mobile devices
Unity – První seznámení s tvorbou počítačových her (A First Introduction to Making Computer Games), a new book from CZ.NIC
CloudLinux CentOS Replacement Available this Quarter, Named AlmaLinux
Linux Mint fixes screensaver bypass discovered by two kids
The weather site yr.no is changing its design. It has dropped some features but will add new ones
Naked Security Live – Staying safe online at home (especially if you’re homeschooling!)
Qualcomm is acquiring Nuvia, a start-up founded by former Apple engineers
Watch the launch of a Falcon 9 rocket with another batch of Starlink satellites
GeForce GT 1010: a new graphics card model from the old school
Živě.cz without ads and with extra articles. Order Živě Premium
New spacecraft for the ISS: the CST-100, aka Starliner
DuckDuckGo keeps growing. The non-tracking search engine is up 57% year-on-year
SSDs will soon offer over 7 GB/s in both directions, and flash drives will speed up significantly too
