RSS Aggregator
Microsoft Fixes 90 New Flaws, Including Actively Exploited NTLM and Task Scheduler Bugs
Iranian Hackers Use "Dream Job" Lures to Deploy SnailResin Malware in Aerospace Attacks
Samsung Is Struggling to Finish Its Second-Generation 3nm Process; Yields Are Minimal
Nikon Is Confusing People: The New Z 50 mm F1.4 Lens Has a Faster Aperture and a Lower Price Than Its Stablemate
VMware Fusion and VMware Workstation Free Even for Commercial Use
WordPress 6.7 Rollins
Admins can give thanks this November for dollops of Microsoft patches
Patch Tuesday Patch Tuesday has swung around again, and Microsoft has released fixes for 89 CVE-listed security flaws in its products – including two under active attack – and reissued three more.…
China's Volt Typhoon crew and its botnet surge back with a vengeance
China's Volt Typhoon crew and its botnet are back, compromising old Cisco routers once again to break into critical infrastructure networks and kick off cyberattacks, according to security researchers.…
Air National Guardsman gets 15 years after splashing classified docs on Discord
A former Air National Guard member who stole classified American military secrets, and showed them to his gaming buddies on Discord, has been sentenced to 15 years in prison.…
Microsoft fixes bugs causing Windows Server 2025 blue screens, install issues
Kernel News – October 2024 Roundup
An overview of October's Kernel News issues: kernel release status, quotes of the week, and a list of kernel-related articles.
Changes at ČSOB: Artificial Intelligence Will Replace the Loyalty Program; New Terms for Discounts and Interest Rates
University eduroam on IPv6 Only: CLAT, DHCPv6, and Logging
Software Harvest (13 Nov 2024): Edit Your Website Live in Production
How Do You Transport Antimatter? CERN Has Built the Mobile Trap BASE-STEP
The 65/35 W Arrow Lake Is Taking Concrete Shape; It Will Be Better Than Expected
Could We Ever Decipher an Alien Language? Uncovering How AI Communicates May Be Key
In the 2016 science fiction movie Arrival, a linguist is faced with the daunting task of deciphering an alien language consisting of palindromic phrases, which read the same backwards as they do forwards, written with circular symbols. As she discovers various clues, different nations around the world interpret the messages differently—with some assuming they convey a threat.
If humanity ended up in such a situation today, our best bet may be to turn to research uncovering how artificial intelligence develops languages.
But what exactly defines a language? Most of us use at least one to communicate with people around us, but how did it come about? Linguists have been pondering this very question for decades, yet there is no easy way to find out how language evolved.
Language is ephemeral; it leaves no examinable trace in the fossil record. Unlike bones, we can't dig up ancient languages to study how they developed over time.
While we may be unable to study the true evolution of human language, perhaps a simulation could provide some insights. That’s where AI comes in—a fascinating field of research called emergent communication, which I have spent the last three years studying.
To simulate how language may evolve, we give AI agents simple tasks that require communication, like a game where one robot must guide another to a specific location on a grid without showing it a map. We provide (almost) no restrictions on what they can say or how—we simply give them the task and let them solve it however they want.
Because solving these tasks requires the agents to communicate with each other, we can study how their communication evolves over time to get an idea of how language might evolve.
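A stripped-down version of such a setup is the classic Lewis signaling game: a "sender" sees an object and emits a symbol, a "receiver" sees only the symbol and must guess the object, and both are rewarded when the guess is right. The sketch below is illustrative only and is not the authors' actual setup (research systems use neural agents trained with reinforcement learning); the object and symbol names are invented.

```python
import random

# Minimal Lewis signaling game: sender and receiver start with no shared
# convention and must converge on one through reward alone.
OBJECTS = ["green_cube", "red_ball", "blue_cone"]
SYMBOLS = ["ga", "bu", "zo"]

# Roth-Erev style propensity tables, initialised uniformly.
sender = {o: {s: 1.0 for s in SYMBOLS} for o in OBJECTS}
receiver = {s: {o: 1.0 for o in OBJECTS} for s in SYMBOLS}

def sample(weights):
    """Draw a key with probability proportional to its weight."""
    r = random.uniform(0, sum(weights.values()))
    for key, w in weights.items():
        r -= w
        if r <= 0:
            return key
    return key  # numerical edge case fallback

random.seed(0)
for _ in range(5000):
    obj = random.choice(OBJECTS)          # the world shows an object
    sym = sample(sender[obj])             # sender names it
    guess = sample(receiver[sym])         # receiver interprets the name
    if guess == obj:                      # shared success reinforces both
        sender[obj][sym] += 1.0
        receiver[sym][obj] += 1.0

# After training, each object tends to acquire a dedicated symbol.
for obj in OBJECTS:
    print(obj, "->", max(sender[obj], key=sender[obj].get))
```

Nothing in the code tells the agents which symbol means what; the mapping emerges from repeated interaction, which is exactly what makes it interesting (and opaque) to an outside observer.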
Similar experiments have been done with humans. Imagine you, an English speaker, are paired with a non-English speaker. Your task is to instruct your partner to pick up a green cube from an assortment of objects on a table.
You might try to gesture a cube shape with your hands and point at grass outside the window to indicate the color green. Over time, you’d develop a sort of proto-language together. Maybe you’d create specific gestures or symbols for “cube” and “green.” Through repeated interactions, these improvised signals would become more refined and consistent, forming a basic communication system.
This works similarly for AI. Through trial and error, algorithms learn to communicate about objects they see, and their conversation partners learn to understand them.
But how do we know what they’re talking about? If they only develop this language with their artificial conversation partner and not with us, how do we know what each word means? After all, a specific word could mean “green,” “cube,” or worse—both. This challenge of interpretation is a key part of my research.
Cracking the Code

The task of understanding AI language may seem almost impossible at first. If I tried speaking Polish (my mother tongue) to a collaborator who only speaks English, we couldn't understand each other or even know where each word begins and ends.
The challenge with AI languages is even greater, as they might organize information in ways completely foreign to human linguistic patterns.
Fortunately, linguists have developed sophisticated tools using information theory to interpret unknown languages.
Just as archaeologists piece together ancient languages from fragments, we use patterns in AI conversations to understand their linguistic structure. Sometimes we find surprising similarities to human languages, and other times we discover entirely novel ways of communication.
These tools help us peek into the “black box” of AI communication, revealing how AI agents develop their own unique ways of sharing information.
My recent work focuses on using what the agents see and say to interpret their language. Imagine having a transcript of a conversation in a language unknown to you, along with what each speaker was looking at. We can match patterns in the transcript to objects in the participant’s field of vision, building statistical connections between words and objects.
For example, perhaps the phrase “yayo” coincides with a bird flying past—we could guess that “yayo” is the speaker’s word for “bird.” Through careful analysis of these patterns, we can begin to decode the meaning behind the communication.
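One simple way to build such statistical connections is to score word-object pairs by how much more often they co-occur than chance would predict, e.g. with pointwise mutual information (PMI). The toy corpus below is invented for illustration ("yayo", "ruki", "mapo" are made-up words) and is not the method from the paper, just a minimal sketch of the idea.

```python
import math
from collections import Counter

# Each observation pairs an utterance with the set of objects in view.
observations = [
    ("yayo ruki", {"bird", "tree"}),
    ("yayo",      {"bird"}),
    ("ruki mapo", {"tree", "river"}),
    ("mapo",      {"river"}),
    ("yayo mapo", {"bird", "river"}),
]

N = len(observations)
word_docs, obj_docs, pair_docs = Counter(), Counter(), Counter()
for utterance, scene in observations:
    words = set(utterance.split())
    word_docs.update(words)               # observations containing each word
    obj_docs.update(scene)                # observations containing each object
    for w in words:
        for o in scene:
            pair_docs[(w, o)] += 1        # joint occurrences

def pmi(w, o):
    """log P(w, o) / (P(w) * P(o)); high values suggest w refers to o."""
    p_pair = pair_docs[(w, o)] / N
    if p_pair == 0:
        return float("-inf")
    return math.log(p_pair / ((word_docs[w] / N) * (obj_docs[o] / N)))

def best_referent(w):
    return max(obj_docs, key=lambda o: pmi(w, o))

for w in sorted(word_docs):
    print(w, "->", best_referent(w))
```

Here "yayo" scores highest with "bird" because they appear together in every observation containing either, while "yayo" and "river" co-occur only at chance level; the same logic untangles the other words.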
In a recent paper, my colleagues and I show that such methods can be used to reverse-engineer at least parts of the AIs' language and syntax, giving us insights into how they might structure communication. The work is set to appear in the proceedings of the Neural Information Processing Systems (NeurIPS) conference.
Aliens and Autonomous Systems

How does this connect to aliens? The methods we're developing for understanding AI languages could help us decipher any future alien communications.
If we are able to obtain some written alien text together with some context (such as visual information relating to the text), we could apply the same statistical tools to analyze them. The approaches we’re developing today could be useful tools in the future study of alien languages, known as xenolinguistics.
But we don’t need to find extraterrestrials to benefit from this research. There are numerous applications, from improving language models like ChatGPT or Claude to improving communication between autonomous vehicles or drones.
By decoding emergent languages, we can make future technology easier to understand. Whether it’s knowing how self-driving cars coordinate their movements or how AI systems make decisions, we’re not just creating intelligent systems—we’re learning to understand them.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Tomas Martinez on Unsplash
Google Chrome 131
Microsoft Exchange adds warning to emails abusing spoofing flaw