RSS Aggregator

OpenAI is working on new reasoning AI technology

Computerworld.com [Hacking News] - 15 July 2024 - 13:48

ChatGPT developer OpenAI is developing a new kind of reasoning AI model, under the project name “Strawberry,” that can be used for research, according to a report by Reuters. Strawberry was apparently earlier known by the name “Q” and is reportedly considered a breakthrough within OpenAI.

The plan is for the new Strawberry models to not only be able to generate answers based on instructions, but also to be able to plan ahead by navigating the internet independently and reliably perform what OpenAI calls “deep research.”

How Strawberry works under the hood remains unclear; it is also unknown how long the technology may be from completion. In a comment to Reuters, an OpenAI spokesperson said continued research into new AI opportunities is ongoing within the industry. However, the spokesperson did not say anything specific about Strawberry in particular.

Category: Hacking & Security

OpenAI whistleblowers seek SEC probe into ‘restrictive’ NDAs with staffers

Computerworld.com [Hacking News] - 15 July 2024 - 13:47

Some employees of ChatGPT-maker OpenAI have reportedly written to the US Securities and Exchange Commission (SEC) seeking a probe into some employee agreements, which they term restrictive non-disclosure agreements (NDAs).

These staffers-turned-whistleblowers have written to the SEC alleging that the company forced its employees to sign agreements that were not in compliance with the SEC’s regulations.

“Given the well-documented potential risks posed by the irresponsible deployment of AI, we urge the commissioners to immediately approve an investigation into OpenAI’s prior NDAs, and to review current efforts apparently being undertaken by the company to ensure full compliance with SEC rules,” read the letter shared with Reuters by the office of Senator Chuck Grassley.

The same letter alleges that OpenAI made employees sign agreements that curb their federal rights to whistleblower compensation and urges the financial watchdog to impose individual penalties for each such agreement signed.

Further, the whistleblowers have alleged that OpenAI’s agreements with employees restricted them from making any disclosure to authorities without checking with management first, and that any failure to comply with these agreements would attract penalties for the staffers.

The company, according to the letter, also did not create any separate or specific exemptions in the employee non-disparagement clauses for disclosing securities violations to the SEC.

An email sent to OpenAI about the letter went unanswered.

The Senator’s office also cast doubt on the practices at OpenAI. “OpenAI’s policies and practices appear to cast a chilling effect on whistleblowers’ right to speak up and receive due compensation for their protected disclosures,” the Senator was quoted as saying.

Experts in the field of AI have been warning against the use of the technology without proper guidelines and regulations.

In May, more than 150 leading artificial intelligence (AI) researchers, ethicists, and others signed an open letter calling on generative AI (genAI) companies to submit to independent evaluations of their systems to maintain basic protection against the risks of using large-scale AI.

Last April, the who’s who of the technology industry called for AI labs to stop training the most powerful systems for at least six months, citing “profound risks to society and humanity.”

That open letter, which now has more than 3,100 signatories including Apple co-founder Steve Wozniak, called out San Francisco-based OpenAI Lab’s recently announced GPT-4 algorithm in particular, saying the company should halt further development until oversight standards were in place. OpenAI, for its part, in May formed a safety and security committee led by board members as it began training its next large language model.

Category: Hacking & Security

Not just Apple Pay: iPhones will open up to other NFC payments. The maker has reached an agreement with the European Commission

Živě.cz - 15 July 2024 - 13:45
The European Commission had been investigating Apple for abuse of a dominant position • Apple Pay will no longer be the only payment service on iPhones • The Commission has accepted Apple's commitment to let other NFC wallets into iOS
Category: IT News

Analysts expect weak demand for Apple Vision Pro

Computerworld.com [Hacking News] - 15 July 2024 - 13:42

On Friday, Apple Vision Pro launched in Europe, but analysts do not expect a major sales success.

According to research firm IDC, fewer than 500,000 units of the mixed reality headset will be sold in 2024, partly due to the high price. Apple’s headset costs $3,499; that corresponds, for example, to almost 50,000 Swedish kronor including VAT and other fees.

By comparison, Facebook’s parent company Meta sells its Meta Quest 3 headset for $499, and its predecessor, the Meta Quest 2, retails for $299.

According to rumors, a cheaper variant of the Apple Vision Pro will be launched in 2025, but release dates and details remain unclear.

Category: Hacking & Security

10,000 Victims a Day: Infostealer Garden of Low-Hanging Fruit

The Hacker News - 15 July 2024 - 12:52
Imagine you could gain access to any Fortune 100 company for $10 or less, or even for free. Terrifying thought, isn’t it? Or exciting, depending on which side of the cybersecurity barricade you are on. Well, that’s basically the state of things today. Welcome to the infostealer garden of low-hanging fruit. Over the last few years, the problem has grown bigger and bigger, and only now are we
Category: Hacking & Security

Review: Microsoft Surface Pro 11 with a Snapdragon. Looks the same, but finally flawless

Živě.cz - 15 July 2024 - 12:45
The Snapdragon X Elite suits Microsoft's well-known tablet remarkably well • The accurate but aging IPS panel has been replaced by an excellent OLED • The base version with the Snapdragon X Plus, however, sticks with IPS
Category: IT News

CRYSTALRAY Hackers Infect Over 1,500 Victims Using Network Mapping Tool

The Hacker News - 15 July 2024 - 12:24
A threat actor that was previously observed using an open-source network mapping tool has greatly expanded their operations to infect over 1,500 victims. Sysdig, which is tracking the cluster under the name CRYSTALRAY, said the activities have witnessed a tenfold surge, adding it includes "mass scanning, exploiting multiple vulnerabilities, and placing backdoors using multiple [open-source
Category: Hacking & Security

The promise and peril of ‘agentic AI’

Computerworld.com [Hacking News] - 15 July 2024 - 12:00

Amazon last week made an unusual deal with a company called Adept in which Amazon will license the company’s technology and also poach members of its team, including the company’s co-founders.

The e-commerce, cloud computing, online advertising, digital streaming and artificial intelligence (AI) giant is no doubt hoping the deal will propel Amazon, which is lagging behind companies like Microsoft, Google and Meta in the all-important area of AI. (In fact, Adept had previously been in acquisition talks with both Microsoft and Meta.) 

Adept specializes in agentic AI, the hottest area of AI that hardly anyone is talking about, but which some credibly claim is the next leap forward for AI technology.

But wait, what exactly is agentic AI? The easiest way to understand it is by comparison to LLM-based chatbots. 

How agentic AI differs from LLM chatbots

We know all about LLM-based chatbots like ChatGPT. Agentic AI systems are based on the same kind of large language models, but with important additions. While LLM-based chatbots respond to specific prompts, trying to deliver what’s asked for literally, agentic systems take that further by incorporating autonomous goal-setting, reasoning, and dynamic planning. They’re also designed to integrate with applications, systems, and platforms. 

While LLMs, such as ChatGPT, reference huge quantities of data and hybrid systems, like Perplexity AI, combine that with real-time web searches, agentic systems further incorporate changing circumstances and contexts to pursue goals, causing them to reprioritize tasks and change methods to achieve those goals. 

While LLM chatbots have no ability to make actual decisions, agentic systems are characterized by advanced contextual reasoning and decision-making. Agentic systems can plan, “understand” intent, and more fully integrate with a much wider range of third-party systems and platforms. 
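The contrast above can be sketched in code. The loop below is a hypothetical illustration, not any vendor's API: names like `Agent`, `plan`, and the toy `tools` are invented for this example. A chatbot is a single prompt-to-response call, while an agent repeatedly plans, acts through tools, and observes results until its goal is satisfied.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy agentic loop: plan -> act -> observe -> replan.

    `tools` maps action names to callables. A real system would back
    `plan` with an LLM call; here planning is a stand-in that picks
    the next tool not yet used, so the loop stays deterministic.
    """
    goal: str
    tools: dict
    memory: list = field(default_factory=list)

    def plan(self):
        # Stand-in for LLM-driven planning: choose the next unfinished step.
        done = {step["action"] for step in self.memory}
        return next((name for name in self.tools if name not in done), None)

    def run(self, max_steps=10):
        for _ in range(max_steps):
            action = self.plan()
            if action is None:                      # goal satisfied, stop
                break
            observation = self.tools[action]()      # act via a tool
            self.memory.append({"action": action, "observation": observation})
        return self.memory

# Chatbot analogue: one shot, no tools, no memory of outcomes.
def chatbot(prompt: str) -> str:
    return f"answer({prompt})"

agent = Agent(
    goal="schedule a meeting",
    tools={
        "check_calendar": lambda: "free Tuesday 10:00",
        "send_invite": lambda: "invite sent",
    },
)
steps = agent.run()
```

The point of the sketch is the control flow, not the stubbed tools: the agent's next action depends on what it has already observed, which is exactly what a single prompt-response chatbot cannot do.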

What’s it good for?

One obvious use for agentic AI is as a personal assistant. Such a tool could — based on natural-language requests — schedule meetings and manage a calendar, change times based on others’ and your availability, and remind you of the meetings. And it could be useful in the meetings themselves, gathering data in advance, creating an agenda, taking notes and assigning action items, then sending follow-up reminders. All this could theoretically begin with a single plain-language, but vague, request.

It could read, categorize and answer emails on your behalf, deciding which to answer and which to leave for you to respond to. 

You could tell your agentic AI assistant to fill out forms for you or subscribe to services, entering the requested information and even processing any payment. It could even theoretically surf the web for you, gathering information and creating a report.

Like today’s LLM chatbots, agentic AI assistants could use multimodal input and could receive verbal instructions along with audio, text, and video inputs harvested by cameras and microphones in your glasses. 

Another obvious application for agentic AI is customer service. Today’s interactive voice response (IVR) systems seem like a good idea in theory (a company doesn’t need to pay humans to interface with customers) but fail in practice, placing the burden on customers to navigate complex decision trees while struggling with inadequate speech recognition.

Agentic AI promises to transform automated customer service. Such technology should be able to function as if it not only understands the words but also the problems and goals of a customer on the phone, then perform multi-step actions to arrive at a solution.

Agentic AI systems can do all kinds of things a lower-level employee might do: qualify sales leads, do initial outreach for sales calls, automate fraud detection and loan application processing at a bank, autonomously screen job candidates, and even conduct initial interviews, among other tasks.

Agentic AI should be able to achieve very large-scale goals as well — optimize supply chains and distribution networks, manage inventory, optimize delivery routes, reduce operating costs, and more.

The risk of agentic AI

Let’s start with the basics. The idea of AI that can operate “creatively” and autonomously — capable of doing things across sites, platforms and networks, directed by a human-created prompt with limited human oversight — is obviously problematic.

Let’s say a salesperson directs agentic AI to set up a meeting with a hard-to-reach potential client. The AI understands the goal and has vast information about how actual humans do things, but no moral compass and no explicit direction to conduct itself ethically.  

One way to reach that client (based on the behavior of real humans in the real world) could be to send an email that tricks the person into clicking on self-executing malware, opening a trojan on the target’s system that exfiltrates personal data, including where that person will be at a certain time. The AI could then place a call to that location claiming there’s an emergency; when the target takes the call, the AI would try to set up a meeting.

This is just one small example of how an agentic AI without coded or prompted ethics could do the wrong thing. The possibilities for problems are endless. 

Agentic AI could be so powerful and capable that there’s no way this ends well without a huge effort on the development and maintenance of AI governance frameworks that include guidelines, safety measures, and constant oversight by well-trained people.

Note: The rise of LLMs, starting with ChatGPT, engendered fears that AI could take jobs away from people; agentic AI is the technology that could really do that at scale.

The worst-case scenario would be for millions of people to be let go and replaced by agentic AI. The best case is that the technology remains inferior to a human partnering with it; with such a tool, human work could be made far more efficient and less error-prone.

I’m pessimistic that agentic AI can benefit humanity if the ethical considerations remain completely in the hands of Silicon Valley tech bros, investors, and AI technologists. We’ll need to combine expertise from AI, ethics, law, academia, and specific industry domains and move cautiously into the era of agentic AI.

It’s reasonable to feel both thrilled by the promise of agentic AI and terrified about the potential negative effects. One thing is certain: It’s time to pay attention to this emerging technology. 

With a giant, ambitious, capable, and aggressive company like Amazon making moves to lead in agentic AI, there’s no ignoring it any longer. 

Category: Hacking & Security

Renegade business units trying out genAI will destroy the enterprise before they help

Computerworld.com [Hacking News] - 15 July 2024 - 12:00

One of the more tired cliches in IT circles refers to “Cowboy IT” or “Wild West IT,” but it’s the most appropriate way to describe enterprise generative AI (genAI) efforts these days. As much as IT is struggling to keep on top of internal genAI efforts, the biggest danger today involves various business units globally creating or purchasing their very own experimental AI efforts.

We’ve talked extensively about Shadow AI (employees/contractors purchasing AI tools outside of proper channels) and Sneaky AI (longtime vendors silently adding AI features into systems without telling anyone). But Cowboy AI is perhaps the worst of the bunch because no one can get into trouble for it. Most boards and CEOs are openly encouraging all business units to experiment with genAI and see what enterprise advantages they can unearth.

The nightmare is that almost none of those line of business (LOB) teams understand how much they are putting the enterprise at risk. Uncontrolled and unmanaged, genAI apps are absolutely dangerous.

Longtime Gartner analyst Avivah Litan (whose official title these days is Distinguished VP Analyst) wrote on LinkedIn recently about the cybersecurity dangers from these kinds of genAI efforts. Although her points were intended for security talent, the problems she describes are absolutely a bigger problem for IT.

“Enterprise AI is under the radar of most Security Operations, where staff don’t have the tools required to protect use of AI,” she wrote. “Traditional Appsec tools are inadequate when it comes to vulnerability scans for AI entities. Importantly, Security staff are often not involved in enterprise AI development and have little contact with data scientists and AI engineers. Meanwhile, attackers are busy uploading malicious models into Hugging Face, creating a new attack vector that most enterprises don’t bother to look at. 

“Noma Security reported they just detected a model a customer had downloaded that mimicked a well-known open-source LLM model. The attacker had added a few lines of code to the model’s forward function. Still, the model worked perfectly well, so the data scientists didn’t suspect anything. But every input to the model and every output from the model were also sent to the attacker, who was able to extract it all. Noma also discovered thousands of infected data science notebooks. They recently found a keylogging dependency that logged all activities on their customer’s Jupyter notebooks. The keylogger sent the captured activity to an unknown location, evading Security, which didn’t have the Jupyter notebooks in its sights.”

IT leaders: How many of the phrases above sound a little too familiar? 

Your team “often not involved in enterprise AI development and have little contact with data scientists and AI engineers?” Bad guys “creating a new attack vector that most enterprises don’t bother to look at?” Or maybe “the model worked perfectly well so the data scientists didn’t suspect anything. But every input to the model and every output from the model were also sent to the attacker, who was able to extract it all” or a manipulated external app which your IT team “didn’t have in its sights?”
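A basic defense against the tampered-model scenario described in the quote is to pin and verify a checksum for every model artifact before it is loaded. The sketch below is illustrative only: the file name and pinned digest are placeholders (the digest shown happens to be the SHA-256 of an empty file, used for demonstration), and this is not Noma's or Hugging Face's tooling. Any byte-level change to a downloaded artifact changes its digest and fails the check.

```python
import hashlib
from pathlib import Path

# Expected SHA-256 digests, recorded at review time. Placeholder entry:
# this is the digest of a zero-byte file, for demonstration purposes.
PINNED_DIGESTS = {
    "model.bin": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model weights need not fit in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path) -> None:
    """Refuse to proceed unless the artifact matches its pinned digest."""
    expected = PINNED_DIGESTS.get(path.name)
    if expected is None:
        raise RuntimeError(f"no pinned digest for {path.name}; refusing to load")
    if sha256_of(path) != expected:
        raise RuntimeError(f"{path.name}: digest mismatch (possible tampering)")
```

This does not replace scanning model files for malicious code, but it does catch the substitution attack Litan describes: a re-uploaded model that mimics a well-known one cannot match the digest of the original.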

Some enterprises have debated creating a new AI executive, but that’s unlikely to help. It will more than likely be an executive with lots of responsibilities, far too little budget, and no actual authority to get any business unit to comply with the AI chief’s edicts. It’s sort of like many CISOs today: a toothless manager, but with even more headaches.

The better answer is to use the best power in the world to force LOB executives to take AI efforts seriously: make it an HR-approved criterion for their annual bonus. Put massive financial penalties on any problems that result from AI efforts their unit undertakes. (Paycheck hits get their attention because they are literally money out of their pockets.) Then add a caveat: if IT approves the effort in writing, the business unit is fully blameless for anything bad that later happens.

Magically, getting IT signoff becomes important to those LOB leaders. Then and only then, the CIO will have the clout to protect the company from errant AI.

Another possible outcome of this carrot-and-stick approach is that business execs will still want to maintain control and will instead hire AI experts for their units directly. That works, too.

The cost of trying out many of these genAI efforts — especially for a relatively short time — is often negligible. That can be bad because it makes it easy for LOB workers to underestimate the risks to the business that they are accepting. 

The potential of genAI is unlimited and exciting, but if strict rules aren’t put in place right away, it could well destroy a business before it has a chance to help. 

Yippee-ki-yay, CIO.

Category: Hacking & Security

There's a black hole in there somewhere. Otherwise, the stars in the exceptional Omega Centauri cluster would fly off into space

Živě.cz - 15 July 2024 - 11:45
Omega Centauri is formally a globular cluster, but it is clearly a very exceptional one • Since 2008, scientists have debated whether it contains an intermediate-mass black hole • If it does, it would be an exciting discovery
Category: IT News

In the Back to School sale, Microsoft Windows costs just €20 and Office €24.50

AbcLinuxu [články] - 15 July 2024 - 11:00

Whether you're feverishly searching for a cheap way to build a new computer or just want to upgrade an aging copy of Office, this collection of offers from Goodoffer24.com will help. Their prices are truly unbelievable, so what are you waiting for?

Category: GNU/Linux & BSD

When is a soundbar worth it, and when a home theater? How to choose the right sound system for your TV

Živě.cz - 15 July 2024 - 10:45
Sometimes a soundbar is the better choice; other times a home theater pays off. We'll help you decide by answering the questions you should ask when choosing a system to improve your TV's sound.
Category: IT News

Photograph documents with your phone, then easily edit them and their text. We'll show you how

Živě.cz - 15 July 2024 - 10:15
When you photograph a document with your phone, you get an image, which isn't practical • What to do when you need to edit the document? • We'll show you how to photograph documents for further processing
Category: IT News

Intel 310: a new chip matching the performance of the slowest Core i3 from over four years ago

CD-R server - 15 July 2024 - 10:00
Intel is preparing to expand its desktop processor lineup with a new model called the Intel 310. It is a dual-core chip with boost disabled and 6 MB of L3 cache…
Category: IT News

The cheapest smart fan on the market. A tower model from Lidl costs just 1,199 Kč

Živě.cz - 15 July 2024 - 09:45
This week Lidl cut the price of its own-brand Silvercrest tower fan by 40%. The Smart Home STVS 50 A1 model can thus be had for 1,199 Kč. It is available in black and white, with a three-year warranty. It is the cheapest smart fan on the market: smart in the sense that it can be controlled from a phone and connected to ...
Category: IT News

Singapore Banks to Phase Out OTPs for Online Logins Within 3 Months

The Hacker News - 15 July 2024 - 09:19
Retail banking institutions in Singapore have three months to phase out the use of one-time passwords (OTPs) for authentication purposes when signing into online accounts to mitigate the risk of phishing attacks. The decision was announced by the Monetary Authority of Singapore (MAS) and The Association of Banks in Singapore (ABS) on July 9, 2024. "Customers who have activated their digital
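Part of the reason OTPs are being phased out is that they are phishable by design: an OTP is just a short-lived code derived from a shared secret, and nothing binds it to the legitimate site, so a phishing proxy can relay it within its validity window. A minimal RFC 6238 TOTP sketch (illustrative only, not any bank's implementation) makes this concrete:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                   # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# ASCII secret "12345678901234567890" from the RFC 6238 test vectors.
RFC_SECRET = base64.b32encode(b"12345678901234567890").decode()
```

The function reproduces the published RFC 6238 test vectors (for example, 94287082 at T=59 with 8 digits), but notice that the resulting code is valid for anyone who presents it during the 30-second window, which is exactly what phishing kits exploit. Phishing-resistant factors such as passkeys instead bind the authentication to the site's origin.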
Category: Hacking & Security
