RSS Aggregator

GhostWrite, a security flaw in the RISC-V T-Head XuanTie C910 processor

AbcLinuxu [zprávičky] - 7 August 2024 - 23:17
GhostWrite is a security flaw in the RISC-V T-Head XuanTie C910 processor. It comes with its own domain and logo.
Category: GNU/Linux & BSD

DARPA backs a plan to build a giant lamp on the Moon. It could light its surroundings and even generate electricity

Živě.cz - 7 August 2024 - 19:45
Honeybee Robotics has come up with the idea of building a giant lamp on the Moon that could provide light and energy to future colonists. It has already received funding from the Defense Advanced Research Projects Agency (DARPA) to work on this ambitious project. The so-called LUNARSABER (Lunar Utility ...
Category: IT News

Apple’s instructions to its new Siri GenAI offering illustrate the GenAI challenge

Computerworld.com [Hacking News] - 7 August 2024 - 19:44

Deep within Apple’s systems is a variety of instructions it has given to its GenAI Apple Intelligence mechanism. The screen captures of those instructions provide a peek into Apple’s efforts to influence its GenAI deployment, and also illustrate the steep challenges in controlling an algorithm that is simply trying to guess answers. 

The more explicit and contained an instruction, the easier it is for GenAI to understand and obey it. Therefore, some of the Apple instructions, such as “You prefer to use clauses instead of complete sentences”, and “Please keep your summary of the input within a 10-word limit”, should work well, AI specialists said.

But other commands from the Apple screen captures that leave more room for interpretation, such as “Do not hallucinate. Do not make up factual information,” may not be nearly as effective.

“I have not had good luck telling it not to hallucinate. It’s not clear to me that it knows when it is hallucinating and when it is not. This thing isn’t sentient,” said Michael Finley, CTO at AnswerRocket. “What does work is to ask it to reflect on its work, or to use a second prompt in a chain to check the results of the first one. Asking it to double check results is common. This has a verifiably good impact on results.”

Finley was also baffled at a comment that told the system to “only output valid JSON and nothing else.” 

“I am surprised that they told it to only use valid JSON. The model is either going to use it or not,” Finley said, adding it has no practical or meaningful way to assess validity. “The whole thing is really unsophisticated. I was surprised that this is what is at the heart.” He concluded that “it was kind of cobbled together. That is not necessarily a bad thing.” By that he meant that Apple developers were under pressure to move the software out quickly.
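Finley's point, that the model has no way to assess validity itself, is why such checks usually live on the caller's side. A minimal sketch of that pattern, in Python: parse the reply locally and use a second prompt in the chain to repair it on failure. The `call_model` function here is a hypothetical stand-in for a real LLM API call, not anything from Apple's stack.

```python
import json

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return '{"summary": "ok"}'

def get_valid_json(prompt: str, retries: int = 3) -> dict:
    """Ask the model for JSON, but validate it locally.

    The model cannot check its own output, so the caller does:
    json.loads() either parses the reply or raises ValueError,
    and a follow-up prompt asks the model to repair bad output.
    """
    last_reply = ""
    for _ in range(retries):
        last_reply = call_model(prompt)
        try:
            return json.loads(last_reply)  # local, deterministic validation
        except ValueError:
            # Second prompt in the chain: ask the model to fix its reply.
            prompt = ("Return ONLY valid JSON. Your previous reply "
                      f"was not valid JSON:\n{last_reply}")
    raise ValueError(f"no valid JSON after {retries} attempts: {last_reply!r}")
```

This is the "check the results of the first prompt" chain Finley describes, with the validity check done in ordinary code rather than entrusted to the model.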

The instructions under scrutiny were for new GenAI capabilities being built into Apple’s Siri. The dataset Apple will be using is far larger than earlier efforts, which is why it will only be available on the latest devices with the strongest CPU horsepower as well as the most RAM.

“Apple’s models for Siri have been small until now. Using GPT — arguably some of the largest models — means new capabilities,” Finley said. “As parameter counts get bigger, models learn to do things that are more indirect. Small models can’t role-play, larger models can. Small models don’t know about deception, larger models do.”

Clyde Williamson, product security architect at Protegrity, was amused by how the existence in a public forum of the comments, which were presumably not intended to be seen by Apple customers, nicely illustrates the overall privacy/data security challenges within GenAI.

“This does highlight, though, the idea of how security in AI becomes a bit fuzzy. Anything we tell an AI, it might tell someone else,” Williamson said. “I don’t see any evidence that Apple tried to secure this prompt template, but it’s reasonable to expect that they didn’t intend for end-users to see the prompts. Unfortunately, LLMs are not good at keeping secrets.”

Another AI specialist, Rasa CTO Alan Nichol, applauded many of the comments. “It was very pragmatic and simple,” Nichol said, but added that “a model can’t know when it’s wrong.”

“These models produce plausible texts that sometimes overlap with the truth. And sometimes, by sheer accident and coincidence, it is correct,” Nichol said. “If you think about how these models are trained, they are trying to please the end-user, they are trying to think of what the user wants.”

Nichol liked many of the comments, though, noting, “The instructions to keep everything short, I always use comments like that,” because otherwise, LLMs tend to be “incredibly verbose and fluffy.”

Category: Hacking & Security

A New Study Says AI Models Encode Language Like the Human Brain Does

Singularity HUB - 7 August 2024 - 19:34

Language enables people to transmit thoughts to each other because each person’s brain responds similarly to the meaning of words. In newly published research, my colleagues and I developed a framework to model the brain activity of speakers as they engaged in face-to-face conversations.

We recorded the electrical activity of two people’s brains as they engaged in unscripted conversations. Previous research has shown that when two people converse, their brain activity becomes coupled, or aligned, and that the degree of neural coupling is associated with better understanding of the speaker’s message.

A neural code refers to particular patterns of brain activity associated with distinct words in their contexts. We found that the speakers’ brains are aligned on a shared neural code. Importantly, the brain’s neural code resembled the artificial neural code of large language models.

The Neural Patterns of Words

A large language model is a machine learning program that can generate text by predicting what words most likely follow others. Large language models excel at learning the structure of language, generating humanlike text, and holding conversations. They can even pass the Turing test, making it difficult for someone to discern whether they are interacting with a machine or a human. Like humans, large language models learn how to speak by reading or listening to text produced by other humans.

By giving the large language model a transcript of the conversation, we were able to extract its “neural activations,” or how it translates words into numbers, as it “reads” the script. Then, we correlated the speaker’s brain activity with both the large language model’s activations and with the listener’s brain activity. We found that the large language model’s activations could predict the speaker and listener’s shared brain activity.
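The correlation step described above can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the study's actual pipeline: `model_activations` and `brain_activity` stand in for per-word model embeddings and per-word neural recordings aligned to the same transcript.

```python
import numpy as np

def word_by_word_correlation(model_activations: np.ndarray,
                             brain_activity: np.ndarray) -> np.ndarray:
    """Pearson correlation between model and brain signals, one value per word.

    Both arrays have shape (n_words, n_features): each row is one word's
    model activation vector, or the brain response aligned to that word.
    """
    # Center each row, then normalize: the dot product of centered,
    # unit-norm rows equals the Pearson correlation coefficient.
    m = model_activations - model_activations.mean(axis=1, keepdims=True)
    b = brain_activity - brain_activity.mean(axis=1, keepdims=True)
    m /= np.linalg.norm(m, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(m * b, axis=1)  # one Pearson r per word
```

In the study's framing, high values of this correlation, word by word, are what reveal the shared neural code between speaker, listener, and model.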

To understand each other, people have a shared agreement on the grammatical rules and the meaning of words in context. For instance, we know to use the past tense form of a verb to talk about past actions, as in the sentence: “He visited the museum yesterday.” Additionally, we intuitively understand that the same word can have different meanings in different situations. For instance, the word cold in the sentence “you are cold as ice” can refer either to one’s body temperature or personality trait, depending on the context. Due to the complexity and richness of natural language, until the recent success of large language models, we lacked a precise mathematical model to describe it.

Our study found that large language models can predict how linguistic information is encoded in the human brain, providing a new tool to interpret human brain activity. The similarity between the human brain’s and the large language model’s linguistic code has enabled us, for the first time, to track how information in the speaker’s brain is encoded into words and transferred, word by word, to the listener’s brain during face-to-face conversations. For example, we found that brain activity associated with the meaning of a word emerges in the speaker’s brain before articulating a word, and the same activity rapidly reemerges in the listener’s brain after hearing the word.

Powerful New Tool

Our study has provided insights into the neural code for language processing in the human brain and how both humans and machines can use this code to communicate. We found that large language models were better able to predict shared brain activity compared with different features of language, such as syntax, or the order in which words connect to form phrases and sentences. This is partly due to the large language models’ ability to incorporate the contextual meaning of words, as well as integrate multiple levels of the linguistic hierarchy into one model: from words to sentences to conceptual meaning. This suggests important similarities between the brain and artificial neural networks.

An important aspect of our research is using everyday recordings of natural conversations to ensure that our findings capture the brain’s processing in real life. This is called ecological validity. In contrast to experiments in which participants are told what to say, we relinquish control of the study and let the participants converse as naturally as possible. This loss of control makes it difficult to analyze the data because each conversation is unique and involves two interacting individuals who are spontaneously speaking. Our ability to model neural activity as people engage in everyday conversations attests to the power of large language models.

Other Dimensions

Now that we’ve developed a framework to assess the shared neural code between brains during everyday conversations, we’re interested in what factors drive or inhibit this coupling. For example, does linguistic coupling increase if a listener better understands the speaker’s intent? Or perhaps complex language, like jargon, may reduce neural coupling.

Another factor that can influence linguistic coupling may be the relationship between the speakers. For example, you may be able to convey a lot of information with a few words to a good friend but not to a stranger. Or you may be better neurally coupled to political allies rather than rivals. This is because differences in the way we use words across groups may make it easier to align and be coupled with people within rather than outside our social groups.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Mohamed Hassan / Pixabay

Category: Transhumanism

Faulty instructions in Alibaba's T-Head C910 RISC-V CPUs blow away all security

The Register - Anti-Virus - 7 August 2024 - 19:00
Let's get physical, physical ... I don't wanna hear your MMU talk

Black Hat  Computer security researchers at the CISPA Helmholtz Center for Information Security in Germany have found serious security flaws in some of Alibaba subsidiary T-Head Semiconductor's RISC-V processors.…

Category: Viruses & Worms

You can still get Windows 11 for free. Here is how to upgrade

Živě.cz - 7 August 2024 - 18:45
Microsoft provides a free upgrade to Windows 11. • You can move from older licensed versions of Windows. • Windows 11 is good enough, and Windows 10 is slowly on its way out.
Category: IT News

Macs are becoming more locked down

Computerworld.com [Hacking News] - 7 August 2024 - 18:07

Enterprises are becoming increasingly impressed by the robust security of Macs, and Apple is locking its platform down even more firmly with macOS Sequoia and a couple of changes to improve defenses against malware and “camfecting.” This reflects the company’s continued mission to ensure platform security by design.

Gatekeeper empowerment

The first change is the biggest. Apple’s Gatekeeper protection is designed to stop people from running unsafe applications on their Macs. When you try to install software downloaded from the Internet, you are presented with a security warning before the application will work (though it has long been possible for Mac users to bypass the protection by Control-Clicking on the application icon).

Apple has abandoned this in the latest Sequoia beta. Now, users must actively open Settings > Privacy & Security to permit their system to run such apps on a per-app basis. 

While the impact of this change is slight — you can still install and use apps obtained elsewhere — it should help prevent users from accidentally installing malware because it makes the whole process more intentional. Less-experienced users become less likely to be tricked into giving such approval by the app installation screen.

Apple recommends notarization

The real aim of the change is to prevent users who might be less tech-savvy from being tricked into bypassing Gatekeeper. In an ideal world, Apple would like all apps installed on Macs to at least be notarized, the company confirms.

“If you distribute software outside of the Mac App Store, we recommend that you submit your software to be notarized,” Apple says. “The Apple notary service automatically scans your Developer ID-signed software and performs security checks. When your software is ready for distribution, it’s assigned a ticket to let Gatekeeper know it’s been notarized so customers can run it with confidence.”

This is a similar process to what Apple is trying to achieve on iOS devices in Europe. The goal is to secure the user and the platform, while also narrowing the size of the attack surface on its systems.

Camfecting and how to stop it

The second change will seem annoying to some, but does at least put Mac users in control. If you have ever installed screen recording or video conferencing software, you were probably asked to provide permission for those applications to capture your Mac screen. You likely went ahead and gave that permission and forgot about it — but that means applications you (or someone with access to your Mac) gave such permission to might be able to secretly continue recording your actions.

This improves in macOS Sequoia, which will require that you review and confirm this permission once a week. A dialog box will appear explaining that the app wants to access the computer screen and audio, and giving you two choices: disable that permission, or “Continue to Allow” access.

While some might see this process as overly intrusive, it should help protect Macs against some in-person and malware-based camfecting attacks, as any application that has permission to access the camera/screen recording will be surfaced once a week. That means if an app you didn’t expect to see there appears on the list, you should take immediate steps to secure your device.

User controlled security

Seen in context, these latest security improvements mean the Mac is becoming better locked down as Apple works to make security protections you already have in place more understandable.

Take the Privacy & Security section of Settings, for example: over time, this has become an extensive, perhaps daunting, collection of options, which Apple has now made easier to understand. In Sequoia, you can more easily see how many apps enjoy full or partial access to the various settings, and you have a guide to help you manage those settings.

Again and again with its security improvements, Apple continues working to make security an intentional choice, explains what it is users are securing, and is creating device management APIs IT can use to ensure that their entire fleet remains as secure as it can possibly be — no kernel access required.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Category: Hacking & Security

Skype is dropping ads. You won't run into them in conversations or in channels

Živě.cz - 7 August 2024 - 17:45
Skype lives on, but we don't hear much about it. Almost a year ago it celebrated its 20th birthday and integrated AI features. It is not letting up on that front, but besides further AI news it is also removing ads entirely. The changes first landed in the Insider channel, brought by Skype 8.125.76.201. Microsoft promises that ads will disappear ...
Category: IT News

Fighting AI fire with AI fire

The Register - Anti-Virus - 7 August 2024 - 17:00
Palo Alto Networks reveals how AI can be harnessed to strengthen cyber security defenses
By David Gordon

Sponsored Post  Hackers and cyber criminals are busy finding new ways of using AI to launch attacks on businesses and organizations often unprepared to deal with the speed, scale and sophistication of the assaults directed against them.…

Category: Viruses & Worms

A North Korean Bulsae-4 armored vehicle with anti-tank missiles is operating in Ukraine

Živě.cz - 7 August 2024 - 16:45
Ukrainian aerial reconnaissance has detected a North Korean Bulsae-4 armored vehicle in Russian positions north of Kharkiv • It is a relatively modern armored vehicle that can be armed with a total of 8 effective anti-tank missiles • The vehicle reportedly recently destroyed a British ... used by the Ukrainians
Category: IT News

Google ‘BlueBuddy’ AI assistant to guide Chromebook users through Bluetooth troubleshooting

Computerworld.com [Hacking News] - 7 August 2024 - 16:15

Google reportedly plans to add new artificial intelligence (AI) technology to Chromebooks that can help people troubleshoot issues with connecting Bluetooth devices to their laptops, as part of its ongoing and fast-moving strategy to integrate AI into its products.

According to a published report, the virtual assistant, “BlueBuddy”, will provide quick and easy answers to user questions if they can’t get a Bluetooth device paired with their Chromebook, something that often is troublesome when using devices that leverage the wireless protocol.

The website Windows Report gleaned details of the virtual assistant from developer documentation for Google’s Chromium web browser project. The documentation made mention of something called “BlueBuddy” that would allow users to “enter an issue and I will recommend a fix.”

The addition of BlueBuddy is in line with various other AI features that Google already has unveiled for Chromebooks, which provide alternatives to Windows laptops and MacBooks and which run the lightweight ChromeOS. Google already has added its much improved Gemini AI chatbot to Chromebook Plus laptops, as well as adding other AI features such as “Help me write,” an AI editing assistant; generative AI wallpaper and video call backgrounds; and Magic editor using AI to enhance photos.

The company did not respond immediately to a request for comment on Tuesday.

A trust issue?

As Google competes with Microsoft and other tech giants to achieve AI dominance, integrating the technology seamlessly into various products, including consumer devices, is one key strategic play. Not to be outdone, Microsoft also has integrated its AI assistant, Copilot, into its Microsoft 365 apps, Word, Outlook, and OneNote, to make it easy to accomplish tasks such as generating first drafts and revising text, as well as making other AI improvements.

But one blind spot tech providers may have when it comes to the seamless integration of AI into products that people already use is that, while it’s certainly helpful, maybe customers aren’t quite ready for it because it’s still unproven, observed one expert.

“It’s natural to want to incorporate the newest technological innovations into your products, and when done well, it can have amazing gains in efficiency and productivity,” noted Gal Ringel, co-founder and CEO of data-privacy firm Mine.

However, “in many cases consumers are not yet asking for AI to be added to products,” he said. That’s because “there is still the major issue of trust when it comes to AI, and trying to push the tech through without first asking why consumers are cautious and addressing those issues is not doing AI well,” he noted.

Google’s secure focus

Still, Google has a good chance of integrating AI more safely into Chromebook than a competitor like, say, Microsoft, does with Windows machines, because it has full control over the technology, observed Bradley Shimmin, chief analyst of AI platforms, analytics, and data management for research firm Omdia.

“Google really owns the laptop as it sits on the user’s desktop, in terms of how software runs on that machine,” he said. “This allows the company to provide a much better security profile than other systems.”

Google also requires that all ChromeOS devices use secure boot, which means that every time the machine boots up, it’s guaranteed to run without any malware that could possibly have been picked up beforehand, he noted. Moreover, the OS uses strong sandboxing for each app/web to prevent any in-app exposure to risk, Shimmin said.

“Taken together, these efforts mean that Google can roll out OS- and app-level functionality to all current Chromebooks in short order,” he said. “And given Google’s strong adherence to security practices, I would imagine that this implementation will focus on user privacy and security.”

Category: Hacking & Security

The 400-meter run is athletic hell at the edge of physiological limits. A sprint that lasts too long

Živě.cz - 7 August 2024 - 16:15
Athletics offers many different disciplines, but the 400-meter run is a killer among them. This seemingly short distance hides such a combination of pitfalls that many consider it the most demanding athletic discipline of all. Why is the 400 so extremely demanding? The basic problem lies in ...
Category: IT News

New Linux Kernel Exploit Technique 'SLUBStick' Discovered by Researchers

The Hacker News - 7 August 2024 - 16:10
Cybersecurity researchers have shed light on a novel Linux kernel exploitation technique dubbed SLUBStick that can be used to elevate a limited heap vulnerability to an arbitrary memory read-and-write primitive. "Initially, it exploits a timing side-channel of the allocator to perform a cross-cache attack reliably," a group of academics from the Graz University of Technology said [PDF].
Category: Hacking & Security

To simplify a bit: Elon Musk has just sued X's advertisers for not advertising

Živě.cz - 7 August 2024 - 15:45
"Don't advertise. If somebody is going to try to blackmail me with advertising, blackmail me with money... Go fuck yourself." And once more, slowly, just to be sure: "Go... fuck... yourself... Is that clear? Hey Bob, if you're in the audience." That is what Musk told his advertisers in November 2023, singling out Robert Iger, CEO of ...
Category: IT News

Roundcube Webmail Flaws Allow Hackers to Steal Emails and Passwords

The Hacker News - 7 August 2024 - 15:29
Cybersecurity researchers have disclosed details of security flaws in the Roundcube webmail software that could be exploited to execute malicious JavaScript in a victim's web browser and steal sensitive information from their account under specific circumstances. "When a victim views a malicious email in Roundcube sent by an attacker, the attacker can execute arbitrary JavaScript in the victim's
Category: Hacking & Security

Small CSS tweaks can help nasty emails slip through Outlook's anti-phishing net

The Register - Anti-Virus - 7 August 2024 - 15:23
A simple HTML change and the warning is gone!

Researchers say cybercriminals can have fun bypassing one of Microsoft's anti-phishing measures in Outlook with some simple CSS tweaks.…

Category: Viruses & Worms

Nvidia reportedly trained AI models on YouTube data

Computerworld.com [Hacking News] - 7 August 2024 - 15:03

Nvidia scraped huge amounts of data from YouTube to train its AI models, even though neither YouTube nor individual YouTube channels approved the move, according to leaked documents obtained by 404 Media via Futurism.

Among other things, Nvidia reportedly used the YouTube data to train its deep learning model Cosmos, an algorithm for automated driving, a human-like AI avatar, and Omniverse, a tool for building 3D worlds.

Nvidia’s data collection lies in an ethical and legal gray area. Under YouTube’s terms of service, companies are not allowed to harvest YouTube data without permission. According to 404 Media, several Nvidia employees questioned the data collection and were told by managers that the decision had been approved at the top of the company.

Nvidia already faces legal action, filed in May, alleging it has violated fair use copyright laws.

Category: Hacking & Security

Adobe’s AI-powered customer journey tool helps ID enterprise buyers

Computerworld.com [Hacking News] - 7 August 2024 - 15:00

Adobe wants to make it easier for B2B marketers to identify and target groups of enterprise buyers with the integration of AI assistance into a new customer journey planning tool. 

Adobe Journey Optimizer (AJO) B2B is now available, the company announced Wednesday, offering an enterprise-focused alternative to the existing AJO tool, which caters to B2C marketing.

One of the key features in AJO B2B is the ability to create buyer groups to target in sales and marketing efforts — a different approach to traditional lead-based and account-based marketing, said Sundeep Parsa, vice president of product for Adobe’s customer journey management portfolio. 

Large-scale procurement decisions — such as the purchase of enterprise software or hardware, for example — now often involve lengthy sales processes with input from “committees” of as many as 15 business and technology leaders at customer organizations. This puts pressure on marketing and sales teams to establish relationships with the right people within client organizations and move towards eventual sales, he said. 

AJO B2B helps simplify that process by making it easier to access related information, said Parsa, thanks in part to the integration with Adobe’s recently unveiled Experience Platform AI assistant. For example, a sales rep can ask the AI assistant in natural language for details on buying groups at a client organization, and whether any of these are likely to be interested in a particular product.

The AI tool can also provide a “completeness score,” which can indicate that a certain job role is missing from the list of buying group contacts, said Parsa. An example might be a security software vendor that wants to include a CISO or another employee with regulatory knowledge in their sales and marketing efforts.
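The "completeness score" idea can be illustrated as a simple coverage ratio. This is a hypothetical sketch of the concept, not Adobe's actual model: required job roles for a deal are compared against the roles actually present in the buying group, and any gap (such as a missing CISO) is flagged.

```python
def completeness_score(group_roles: set[str], required_roles: set[str]) -> float:
    """Fraction of required job roles present in the buying group (0.0 to 1.0)."""
    if not required_roles:
        return 1.0  # nothing required, trivially complete
    return len(group_roles & required_roles) / len(required_roles)

# Hypothetical buying group at a client organization:
required = {"CTO", "CISO", "Procurement lead"}
group = {"CTO", "Procurement lead"}
score = completeness_score(group, required)  # 2 of 3 required roles covered
missing = required - group                   # the absent CISO gets flagged
```

A score below 1.0, together with the `missing` set, is the kind of signal that could drive the recommendation Parsa describes.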

Adobe Journey Optimizer now has a generative AI-driven email creator. (Image: Adobe)

The AI recommendations are based on Adobe’s core model, which understands concepts such as what a buying group is, or what a lead is, alongside custom models developed based on an AJO B2B customer’s own sales and marketing data. 

Eventually, Adobe hopes the AI assistant can provide more guidance on how to target individuals within a buying group, such as a suggestion to send white papers and case studies to a technical diligence team, for instance, and an invitation to a decision-maker to an upcoming executive forum event. 

The ability to create buying groups is one of several key features in AJO B2B. Once they are identified, users can create tailored “customer journeys” for specific job roles at client organizations across platforms such as email, web, chat, and webinars. Here, the embedded AEP AI assistant can be accessed for how-to advice and troubleshooting as users build customer journeys in the application. 

AJO B2B users can then access asset libraries — including images from Adobe’s Firefly generative AI model — to create personalized emails suited to different buyer groups.  

Sales and marketing teams can also view each other’s buying group engagements — this will streamline workflows and provide more effective customer connections, said Parsa. “There are less ‘broken telephone’ scenarios; they’re able to collaborate on a common understanding of buying groups,” he said. 

Finally, dashboards that provide insights into buying group journey performance are now available in AJO B2B, with the ability to query data via the conversational AI assistant “coming soon,” Adobe said. 

“You can say, ‘Give me a trend of the buying groups over the last six months and give me a linearity model by month.’ You can ask that question and the [AI assistant will] generate that for you,” he said.

Category: Hacking & Security