RSS Aggregator
Admins can give thanks this November for dollops of Microsoft patches
Patch Tuesday has swung around again, and Microsoft has released fixes for 89 CVE-listed security flaws in its products – including two under active attack – and reissued three more.…
China's Volt Typhoon crew and its botnet surge back with a vengeance
China's Volt Typhoon crew and its botnet are back, compromising old Cisco routers once again to break into critical infrastructure networks and kick off cyberattacks, according to security researchers.…
Air National Guardsman gets 15 years after splashing classified docs on Discord
A former Air National Guard member who stole classified American military secrets, and showed them to his gaming buddies on Discord, has been sentenced to 15 years in prison.…
Microsoft fixes bugs causing Windows Server 2025 blue screens, install issues
Jaderné noviny (Kernel News) – October 2024 roundup
An overview of the October issues of Kernel News: the state of kernel releases, quotes of the week, and a list of kernel-related articles.
Changes at ČSOB: artificial intelligence will replace the loyalty program, with new terms for discounts and interest rates
University eduroam on IPv6 only: CLAT, DHCPv6, and logging
Softwarová sklizeň (Software Harvest, 13 Nov 2024): edit your website directly in production
How do you transport antimatter? CERN has built the BASE-STEP mobile trap
The 65/35 W Arrow Lake takes concrete shape, and it will be better than expected
Could We Ever Decipher an Alien Language? Uncovering How AI Communicates May Be Key
In the 2016 science fiction movie Arrival, a linguist is faced with the daunting task of deciphering an alien language consisting of palindromic phrases, which read the same backwards as they do forwards, written with circular symbols. As she discovers various clues, different nations around the world interpret the messages differently—with some assuming they convey a threat.
If humanity ended up in such a situation today, our best bet may be to turn to research uncovering how artificial intelligence develops languages.
But what exactly defines a language? Most of us use at least one to communicate with people around us, but how did it come about? Linguists have been pondering this very question for decades, yet there is no easy way to find out how language evolved.
Language is ephemeral; it leaves no examinable trace in the fossil record. Unlike bones, we can’t dig up ancient languages to study how they developed over time.
While we may be unable to study the true evolution of human language, perhaps a simulation could provide some insights. That’s where AI comes in—a fascinating field of research called emergent communication, which I have spent the last three years studying.
To simulate how language may evolve, we give AI agents simple tasks that require communication, like a game where one robot must guide another to a specific location on a grid without showing it a map. We provide (almost) no restrictions on what they can say or how—we simply give them the task and let them solve it however they want.
Because solving these tasks requires the agents to communicate with each other, we can study how their communication evolves over time to get an idea of how language might evolve.
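As a rough illustration of what such a setup can look like, here is a minimal, self-contained Python sketch of a "guide me to a grid cell" game. Everything in it (the grid and vocabulary sizes, the lookup-table "agents," the toy learning rule) is an assumption made for illustration, not the authors' actual experimental code, which would typically use neural-network agents trained with reinforcement learning.

```python
# Toy emergent-communication game: a speaker sees a target cell and emits
# arbitrary discrete symbols; a listener only hears the symbols and must
# guess the cell. Names and the learning rule are illustrative assumptions.
import itertools
import random

GRID = 4          # 4x4 grid of cells
VOCAB = 8         # number of arbitrary symbols the agents may use
MSG_LEN = 2       # symbols per message

def untrained_listener():
    """Map every possible message to a random cell (a listener that knows nothing yet)."""
    cells = [(r, c) for r in range(GRID) for c in range(GRID)]
    messages = itertools.product(range(VOCAB), repeat=MSG_LEN)
    return {msg: random.choice(cells) for msg in messages}

def play_episode(speaker_lexicon, listener_policy):
    """One round: the speaker names a target cell, the listener guesses, reward if they match."""
    target = (random.randrange(GRID), random.randrange(GRID))
    # The speaker has no built-in vocabulary: the first time it sees a target,
    # it invents an arbitrary message for it and sticks with it.
    message = speaker_lexicon.setdefault(
        target, tuple(random.randrange(VOCAB) for _ in range(MSG_LEN)))
    guess = listener_policy[message]
    reward = 1.0 if guess == target else 0.0
    if reward == 0.0 and random.random() < 0.5:
        # Toy learning rule standing in for reinforcement learning: after a
        # failed round, the listener sometimes re-binds the message it heard
        # to the cell that turned out to be the real target.
        listener_policy[message] = target
    return reward

speaker, listener = {}, untrained_listener()
successes = sum(play_episode(speaker, listener) for _ in range(5000))
print(f"success rate over 5000 rounds: {successes / 5000:.2f}")

# Inspect the emergent "lexicon": which symbols ended up meaning which cell?
for target, message in sorted(speaker.items()):
    print(target, "->", message)
```

Even in this stripped-down version, the key property of the real experiments is visible: the meaning of each message is not designed in advance but emerges from repeated interaction between the two agents.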
Similar experiments have been done with humans. Imagine you, an English speaker, are paired with a non-English speaker. Your task is to instruct your partner to pick up a green cube from an assortment of objects on a table.
You might try to gesture a cube shape with your hands and point at grass outside the window to indicate the color green. Over time, you’d develop a sort of proto-language together. Maybe you’d create specific gestures or symbols for “cube” and “green.” Through repeated interactions, these improvised signals would become more refined and consistent, forming a basic communication system.
This works similarly for AI. Through trial and error, algorithms learn to communicate about objects they see, and their conversation partners learn to understand them.
But how do we know what they’re talking about? If they only develop this language with their artificial conversation partner and not with us, how do we know what each word means? After all, a specific word could mean “green,” “cube,” or worse—both. This challenge of interpretation is a key part of my research.
Cracking the Code
The task of understanding AI language may seem almost impossible at first. If I tried speaking Polish (my mother tongue) to a collaborator who only speaks English, we couldn’t understand each other or even know where each word begins and ends.
The challenge with AI languages is even greater, as they might organize information in ways completely foreign to human linguistic patterns.
Fortunately, linguists have developed sophisticated tools using information theory to interpret unknown languages.
Just as archaeologists piece together ancient languages from fragments, we use patterns in AI conversations to understand their linguistic structure. Sometimes we find surprising similarities to human languages, and other times we discover entirely novel ways of communication.
These tools help us peek into the “black box” of AI communication, revealing how AI agents develop their own unique ways of sharing information.
My recent work focuses on using what the agents see and say to interpret their language. Imagine having a transcript of a conversation in a language unknown to you, along with what each speaker was looking at. We can match patterns in the transcript to objects in the participant’s field of vision, building statistical connections between words and objects.
For example, perhaps the phrase “yayo” coincides with a bird flying past—we could guess that “yayo” is the speaker’s word for “bird.” Through careful analysis of these patterns, we can begin to decode the meaning behind the communication.
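As a toy illustration of this kind of analysis, the Python sketch below counts how often each token co-occurs with each visible object and scores the pairs with pointwise mutual information. The tiny transcript (including the word "yayo") is invented purely for this example and is not data from the actual experiments.

```python
# Sketch of the word-object matching idea: build co-occurrence counts between
# tokens and observed objects, then rank pairs by pointwise mutual information
# (PMI). The transcript below is made up for illustration only.
import math
from collections import Counter

transcript = [
    (["yayo", "ki"],   ["bird", "tree"]),
    (["yayo"],         ["bird"]),
    (["mumu", "ki"],   ["rock", "tree"]),
    (["yayo", "mumu"], ["bird", "rock"]),
]

word_counts, obj_counts, pair_counts = Counter(), Counter(), Counter()
for words, objects in transcript:
    word_counts.update(set(words))
    obj_counts.update(set(objects))
    pair_counts.update((w, o) for w in set(words) for o in set(objects))

n = len(transcript)
for (w, o), c in sorted(pair_counts.items()):
    pmi = math.log2((c / n) / ((word_counts[w] / n) * (obj_counts[o] / n)))
    print(f"PMI({w!r}, {o!r}) = {pmi:+.2f}")

# High-PMI pairs (here, "yayo" with "bird") are good candidates for word meanings.
```

Real analyses are more involved, but the principle is the same: words that reliably co-occur with the same objects across many interactions are the best candidates for naming them.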
In our latest paper, set to appear in the conference proceedings of Neural Information Processing Systems (NeurIPS), my colleagues and I show that such methods can be used to reverse-engineer at least parts of the AIs’ language and syntax, giving us insights into how they might structure communication.
Aliens and Autonomous Systems
How does this connect to aliens? The methods we’re developing for understanding AI languages could help us decipher any future alien communications.
If we are able to obtain some written alien text together with some context (such as visual information relating to the text), we could apply the same statistical tools to analyze them. The approaches we’re developing today could be useful tools in the future study of alien languages, known as xenolinguistics.
But we don’t need to find extraterrestrials to benefit from this research. There are numerous applications, from improving language models like ChatGPT or Claude to improving communication between autonomous vehicles or drones.
By decoding emergent languages, we can make future technology easier to understand. Whether it’s knowing how self-driving cars coordinate their movements or how AI systems make decisions, we’re not just creating intelligent systems—we’re learning to understand them.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Tomas Martinez on Unsplash
Google Chrome 131
Microsoft Exchange adds warning to emails abusing spoofing flaw
Here's what we know about the suspected Snowflake data extortionists
Two men allegedly compromised what's believed to be multiple organizations' Snowflake-hosted cloud environments, stole sensitive data within, and extorted at least $2.5 million from at least three victims.…
Baidu releases new AI offerings on the way to broader commercialization of the technology
Baidu has introduced a text-to-image generator dubbed I-RAG and a no-code developer platform called Miaoda, adding to a growing portfolio of artificial intelligence (AI) products that, like its US-based counterparts, the company eventually aims to commercialize across a wide array of offerings for its user base.
CEO Robin Li introduced the new technology in a presentation at the company’s Baidu World Conference on Tuesday. I-RAG uses Baidu’s search capabilities to generate images from speech and has been designed to address the “hallucinations” issue, according to a Reuters report. The hallucinations in question are images generated by large language model (LLM)-based AI that deviate from what was specified in the input prompt or contain non-existent elements.
Baidu also launched Miaoda, a developer platform that uses the capabilities of LLMs to generate code, and is aimed at allowing users without extensive coding expertise to develop applications. AI companies in the US also are providing similar tools to develop applications through a visual interface, with reusable components and advanced developer assistance, noted Manukrishnan SR, practice director for Everest Group.
Indeed, like those of leading US companies such as OpenAI, Google, and Microsoft, Baidu’s AI moves demonstrate its march toward the commercialization phase of the technology. The company, like others before it, has been adding AI to existing products or creating new ones that enterprise and other business users can integrate into their applications.
Follow the leader
Google, OpenAI, and Microsoft already have products similar to the ones Baidu revealed Tuesday, and the Chinese company has some catching up to do, analysts noted. The release of an AI-enhanced no-code platform in particular demonstrates Baidu’s aim to keep up with a software development trend that may one day leverage AI to replace traditional coding with software configuration.
“The pace of innovation and research in generative AI technologies and software is moving at a breakneck pace in the US,” Dave Schubmehl, research VP, AI & automation at IDC, observed. “To compete effectively on the world stage, other countries will need to adopt this same pace of innovation and research.”
He added, “many vendors are offering low code/no code/code generation capabilities in their products. Baidu’s product Miaoda is doing what other vendors like Microsoft and OpenAI have already done, which is using LLM capabilities to generate code.”
So far, however, Baidu’s AI tools do not seem to be as advanced as the ones released by OpenAI, Microsoft, and Google, Everest Group’s SR told CIO, “since these players have large existing datasets on which they can train their AI models.”
However, with “all major cloud platform players now offering some form of genAI-based programming augmentation facility,” AI-based software development may be the way forward for the enterprise, noted Bradley Shimmin, chief analyst, AI and data analytics, at Omdia.
“This is a very important area of research in that it points to an eventual state where both domain experts inside an organization and professional ISV practitioners can both use the same tooling to create full-stack apps and/or workflow automations in a declarative, no-code, conversational manner,” Shimmin said.
Still, this evolution is not without its challenges, and may not be something CIOs need to worry about quite yet, Everest Group’s SR noted.
“These tools are facing a host of challenges, including maintaining code quality, adherence to regulatory standards, and questions on ROI,” he told CIO. “Thus, while AI is set to revolutionize software development in the medium to long term, there are a lot of challenges that need to be ironed out before its potential can be fully realized.”
Don’t underestimate China
Though Baidu is still playing catch-up to US-based companies, China should not be underestimated as a major global AI player, Shimmin noted. In fact, “China and the US are really not that far apart from one another in terms of expertise and investment [in AI],” he observed.
“Already, China has produced some very strong models, particularly open source models such as Qwen2.5-Coder, which rivals some of the larger frontier models from Anthropic and OpenAI (at least in terms of published benchmarks),” he said.
The US has been doing everything it can to stymie China’s technological development, and AI is no exception. Just two weeks ago, the US government announced new rules restricting investments in China’s AI and other tech sectors deemed threats to national security, expanding existing technology restrictions that had so far been limited to exports. China, for its part, has banned the use of OpenAI’s services in the country.
However, despite the current friction between the US and China in terms of their technological arms race, the two countries have similar goals when it comes to AI, and may end up collaborating in some areas, Shimmin noted.
“In terms of academic research, the two nations are starting to work more closely with one another in seeking out a common ground concerning the existential threat posed by AI itself,” he said.
D-Link won’t fix critical bug in 60,000 exposed EoL modems
Windows 10 KB5046613 update released with fixes for printer bugs
'Cybersecurity issue' at Food Lion parent blamed for US grocery mayhem
Retail giant Ahold Delhaize, which owns Food Lion and Stop & Shop, among others, is confirming outages at several of its US grocery stores are being caused by an ongoing "cybersecurity issue."…
Microsoft November 2024 Patch Tuesday fixes 4 zero-days, 91 flaws