RSS Aggregator

AppSec Webinar: How to Turn Developers into Security Champions

The Hacker News - 18 July 2024 - 13:45
Let's face it: AppSec and developers often feel like they're on opposing teams. You're battling endless vulnerabilities while they just want to ship code. Sound familiar? It's a common challenge, but there is a solution. Ever wish they proactively cared about security? The answer lies in a proven, but often overlooked, strategy: Security Champion Programs — a way to turn developers from ...
Category: Hacking & Security

Less spam and fewer promotional messages: Seznam Email introduces one-click newsletter unsubscribing

Živě.cz - 18 July 2024 - 13:45
Following the example of major international email services, Seznam is introducing a one-click unsubscribe feature. It lets users opt out of newsletters and other promotional messages with a single button: a grey bar with an "Unsubscribe" button now sits between a message's subject and body. For now, the feature is only available in ...
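
For context, the mechanism behind this feature is standardized in RFC 8058: the sender adds List-Unsubscribe and List-Unsubscribe-Post headers, and a compliant mail client unsubscribes by sending a single POST request to the listed URL. A minimal Python sketch of the sender side, with illustrative addresses and URL:

    # Sketch of RFC 8058 one-click unsubscribe headers (all values illustrative).
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "newsletter@example.com"
    msg["To"] = "user@example.com"
    msg["Subject"] = "Weekly newsletter"
    # Clients that support one-click unsubscribe look for these two headers:
    msg["List-Unsubscribe"] = "<https://example.com/unsubscribe?u=123>"
    msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"
    msg.set_content("Newsletter body ...")

    # A compliant client then unsubscribes without opening a web page:
    #   POST /unsubscribe?u=123
    #   Content-Type: application/x-www-form-urlencoded
    #
    #   List-Unsubscribe=One-Click
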
Category: IT News

Automated Threats Pose Increasing Risk to the Travel Industry

The Hacker News - 18 July 2024 - 13:00
As the travel industry rebounds post-pandemic, it is increasingly targeted by automated threats, with the sector experiencing nearly 21% of all bot attack requests last year. That’s according to research from Imperva, a Thales company. In its 2024 Bad Bot Report, Imperva finds that bad bots accounted for 44.5% of the industry’s web traffic in 2023—a significant jump from 37.4% in 2022.
Category: Hacking & Security

Severe Linux Kernel Privilege Escalation Bugs Could Compromise Entire Systems

LinuxSecurity.com - 18 July 2024 - 13:00
The Cybersecurity and Infrastructure Security Agency (CISA) recently added a Linux kernel privilege escalation bug (CVE-2024-1086) to its Known Exploited Vulnerabilities (KEV) catalog. The bug is being actively exploited in the wild, and federal agencies have been given a deadline of June 20th to patch it; private organizations are advised to follow suit.
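
As a first triage step, administrators can compare the running kernel against the stable releases commonly reported as carrying the fix. A rough Python sketch; the version thresholds below are assumptions to verify against your distribution's advisory, since vendors backport patches without changing the upstream version:

    # Rough triage for CVE-2024-1086: compare the running kernel with the
    # stable releases commonly cited as fixed. Distros backport fixes to older
    # version strings, so treat a "may be vulnerable" result as a hint only.
    import platform
    import re

    ASSUMED_FIXED = {  # assumption: verify against your vendor's advisory
        (5, 15): (5, 15, 149),
        (6, 1): (6, 1, 76),
        (6, 6): (6, 6, 15),
        (6, 7): (6, 7, 3),
    }

    release = platform.release()  # e.g. "6.1.55-generic"
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", release)
    if m:
        ver = tuple(int(x) for x in m.groups())
        fixed = ASSUMED_FIXED.get(ver[:2])
        if fixed and ver < fixed:
            print(f"{release}: series fix is {'.'.join(map(str, fixed))}; "
                  "may be vulnerable unless the patch was backported.")
        else:
            print(f"{release}: not flagged by this rough check.")
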
Category: Hacking & Security

SSDs at a reasonable price again, as before the shortage: the 2TB Kingston KC3000, suitable for both PCs and the PS5, costs CZK 2,815

Živě.cz - 18 July 2024 - 12:45
This week CZC, via Allegro, is selling the 2TB Kingston KC3000 for CZK 2,815. It is not an all-time low; last autumn this model could be bought about three hundred crowns cheaper. But then the crisis hit and memory-chip makers halted production because their warehouses were full, ...
Category: IT News

Maximum-severity Cisco vulnerability allows attackers to change admin passwords

The Register - Anti-Virus - 18 July 2024 - 12:37
You’re going to want to patch this one

Cisco just dropped a patch for a maximum-severity vulnerability that allows attackers to change the password of any user, including admins.…

Category: Viruses & Worms

Want ROI from genAI? Rethink what both terms mean

Computerworld.com [Hacking News] - 18 July 2024 - 12:00

When generative AI popularity and marketing hype went into overdrive last year, just about every enterprise launched a wide range of genAI projects. And for various reasons, very few of them delivered the kind of return on investment that CEOs and board members had expected.

That has made 2024 the year of AI postmortems and recriminations about why projects went sour and who was to blame. What can IT leaders do now to make sure that genAI projects launched later this year and throughout 2025 fare better? Experts are suggesting a radical rethinking of how ROI should be measured in genAI deployments, as well as the kinds of projects where generative AI belongs at all.

“We have an AI ROI paradox in our sector, and we have to overcome it,” said Atefeh “Atti” Riazi, CIO for media enterprise Hearst, which reported $12 billion in revenue last year. “Although we have [years of experience] measuring the ROI for IT on lots of other projects, AI is so disruptive that we don’t really yet understand its impacts. We don’t understand the implications of it long term.”

When boards push down genAI mandates — and LOBs go rogue

After OpenAI captured the attention of the industry when consumer fascination with ChatGPT surged in early 2023, Conor Twomey observed a “wave of euphoria and fear that swept over every boardroom.” AI vendors tried to take advantage of this euphoria by marketing their own version of FUD (fear, uncertainty, and doubt), said Twomey, head of AI strategy at data management firm KX.

“Every organization went down the same path and said, ‘We don’t know what this thing is capable of.’”

That sparked a flood of genAI deployments ordered by boards of directors and, to a lesser extent, CEOs, on a scale that has not been seen since the early days of web euphoria around 1994.

“That was something different with generative AI, where a lot of the motion came top-down,” said Rajiv Shah, who manages AI strategy for Snowflake, a cloud data storage and analytics service provider. “Deep learning, for example, was certainly hyped up, but it didn’t have the same top-down pushing.”

Shah says this top-down approach colored and often complicated the traditional requirements for ROI analysis prior to major rollouts. Little wonder that those rollouts failed to meet expectations.

And mandates from above weren't the only source of pressure IT leaders faced to push through genAI projects. Many business units also brought AI ideas to IT, and when IT pointed out why they were unlikely to succeed, those departments often said, "Thanks for the input. We are doing it anyway."

Such projects tend to shift focus away from companies’ true priorities, notes Kelwin Fernandes, CEO at AI consultant NILG.AI.

“I see genAI being applied in non-core processes that won’t directly affect the core business, such as chatbots or support agents. These projects lack support and long-term engagement from the organization,” Fernandes said. “I see genAI not bringing the promised ROI because people moved their priorities from making better decisions to building conversational interfaces or chatbots.”

Inflated expectations, underestimated costs

Early genAI apps often delivered breathtaking results in small pilots, setting expectations that didn’t carry over to larger deployments. “One of the primary culprits of the cost versus value conundrum is lack of scalability,” said KX’s Twomey.

He points to an increasing number of startup companies using open-source genAI technology that is “sufficient for introductory deployments, meaning they work nicely with a couple hundred unstructured documents. Once enterprises feel comfortable with this technology and begin to scale it up to hundreds of thousands of documents, the open-source system bloats and spikes running costs,” he said.

“Same goes for usage,” he added. “When genAI is inserted into a workflow ideal for a subset of users and then exponentially more users are added, it doesn’t work as hoped.”

Patrick Byrnes, formerly senior consultant for AI at Deloitte and now an AI consultant for DataArt, attributes some of the inflated ROI expectations for generative AI projects to the impressive performance delivered by the earliest genAI applications.

“If you go into Gemini or ChatGPT and ask it something basic, you can get an incredible response right away,” he said. Expecting similar results on a larger scale, “some enterprises did not start small. Right out of the gate, they went with high-impact customer facing efforts.”

Indeed, many of the ROI shortcomings with genAI deployments are a result of executives not thinking through the rollout implications sufficiently, according to an executive in the AI field who asked that her name and affiliation not be used.

“Automation driven by AI leads to productivity gains, but often the cost to enable it is overlooked,” she said. “Enterprises focus on model development, training, and system infrastructure but don’t accurately account for cost of data prep. They spin up massive data sets for AI, but small errors can make it useless, which also leads employees to mistrust outputs, leading to costs without ROI.”

Another overlooked factor, she noted, is that many AI vendors are currently focused on customer acquisition, keeping costs down in the short term. “Then they will ratchet up prices with an eye toward profitability, which will lead to higher costs for enterprise users in the future.”

Those costs are not likely to improve meaningfully by 2025. IDC has noted that the costs of generative AI efforts are extensive.

“Generative AI requires enormous levels of compute power. NVIDIA’s workhorse chip that powers the GPUs for datacenters and the AI industry costs ~$10,000 per chip,” the analyst firm said in a September 2023 report. “Operational costs are in the range of $4 million to $5 million monthly, and businesses expect model training costs to exceed $5 million. Added to this are electricity costs and datacenter management.”
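
Plugging IDC's figures into a back-of-envelope calculation shows why TCO surprises are common. A minimal sketch; the GPU count and the miscellaneous line are illustrative assumptions, not figures from the report:

    # Back-of-envelope first-year genAI cost using IDC's published figures.
    # NUM_GPUS and MISC_ANNUAL are illustrative assumptions.
    GPU_UNIT_COST = 10_000     # ~$10,000 per chip, per IDC
    MONTHLY_OPEX = 4_500_000   # midpoint of IDC's $4M-$5M monthly range
    TRAINING_COST = 5_000_000  # "expected to exceed $5 million", per IDC
    NUM_GPUS = 1_000           # assumption: a modest training cluster
    MISC_ANNUAL = 2_000_000    # assumption: electricity + datacenter management

    first_year = (GPU_UNIT_COST * NUM_GPUS + TRAINING_COST
                  + 12 * MONTHLY_OPEX + MISC_ANNUAL)
    print(f"First-year cost estimate: ${first_year:,}")  # -> $71,000,000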

The hallucination challenge

On top of all this is the fact that genAI periodically hallucinates, meaning that the system makes things up. That will deliver a bitter surprise if the company is trusting it to analyze critical data in healthcare, finance, or aerospace — and even if it is simply relying on genAI to accurately summarize what happened during a meeting.

For business managers who are used to trusting the numbers generated by a spreadsheet projecting revenue growth, that can be unsettling. Those executives are used to the projections failing because an employee’s assumptions turned out to be too optimistic, but they are not used to Excel lying about the mathematical result of 800 numbers being multiplied.

And it cuts into ROI because all generative AI output must be closely fact-checked by a human, erasing many of the perceived productivity gains.

Hearst’s Riazi sees the genAI hallucination issue as temporary. “Hallucinations do not bother me. Eventually, it will address itself,” she said.

More importantly, she argues that business simply needs to apply the same supervision and oversight to genAI that it has for decades with its human employees, stressing that “people hallucinate as well” and coders have been known to write “buggy code.”

“Human error is already a big issue in medicine and patient care,” Riazi said. “There is a lot of bad data out there, but there is no difference [in managing hallucinations] from what we are already doing today. We see a lot of data cleansing going on.”

NILG.AI’s Fernandes is doubtful that genAI hallucinations will ever go away, but he says that shouldn’t necessarily be a dealbreaker for any application. It is simply a matter of enterprises adjusting their thinking to deal with an imperfect reality, something they already have experience doing.

“We have quality assurance to reduce production errors, but errors still exist, and that’s why we have return policies and warranties. We use the QA process as a fallback plan of the factory errors and the warranty as a fallback plan of the QA,” he said. “All those actions reduce the probability of failure to a certain point. They can still exist; we have learned to do business with those errors. We need to understand — on each application — what the right fallback action is for an AI error.”

Looking for ROI in all the wrong places

Even when genAI succeeds, its results are sometimes less valuable than anticipated. For example, generative AI is a very effective tool for creating content that is generally handled by lower-level staffers or contractors, such as tweaking existing material for use in social media or e-commerce product descriptions. That output still needs to be verified by humans, but it has the potential to cut the cost of creating low-level content.

But because it often is low level, some have questioned whether that is really going to deliver any meaningful financial advantages.

“Even before AI, the market for mediocre written and visual content was already fairly saturated, so it’s no surprise that some enterprises have discovered there is limited ROI in similar mediocre content generated by AI,” said Brian Levine, a managing director at consultant Ernst & Young.

What ROI should look like for enterprise genAI

KX’s Twomey questioned whether many senior enterprise executives have a realistic handle on what ROI should mean in a generative AI rollout, especially in the first year, when it is mostly an experiment rather than a traditional deployment.

“Enterprise deployment of genAI has slowed down — and will continue to do so — as enterprises experience an increase in costs that exceeds the value they are getting,” Twomey said. “When this happens, it tells me that enterprises aren’t understanding the ROI and they’re not appropriately controlling TCO.”

And therein lies the conundrum: How can executives appropriately control the total cost of ownership and appropriately interpret the return on investment if they have no idea what either should look like in a generative AI reality?

This gets even more difficult when secondary ROI factors are considered, such as market and customer/prospect perceptions, Twomey points out.

“This complexity with transitioning — and scaling — AI workflows in production has been prohibitive for many enterprise deployments,” he said. “The repercussions are clear losses in time, money, and effort that can also result in competitive disadvantages, reputational damage, and stalled future innovation initiatives.”

It may even be premature to measure ROI monetarily for genAI. “The value for enterprises today is to practice, to experiment,” said DataArt’s Byrnes. “That is one of the things that people don’t really appreciate. There is a strong learning component to all of this.”

Focusing genAI

But while experimentation is important, it should be done intelligently. EY’s Levine notes that some companies are inclined to trust generative AI too much when it comes to methodology, allowing the software to figure out how to obtain the desired information. 

Consider the example of a large and growing retail chain that turned to genAI to figure out the best locations for its next 50 stores. Given insufficient guidelines, the AI went off the rails and returned completely unusable results, according to inside sources.

Instead of simply telling the AI to make recommendations for the best places to launch stores, Levine suggests that the retailer would be better served by coding very extensive and very specific lists of how it currently evaluates new locations. That way, the software can follow those instructions, and the chances of it making errors are somewhat reduced.

Would an enterprise ever tell a new employee, “Figure out where our next 50 stores should be. Bye!”? Unlikely. The business would spend days training that employee on what to look for and where to look, and the employee would be shown lots of examples of how it had been done before. If a manager wouldn’t expect a new employee to figure out how to answer the question without extensive training, why would that manager expect genAI to fare any better?
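
In code, the difference is simply how much of the evaluation rubric is handed to the model. A hedged sketch of the constrained approach Levine describes; call_llm is a hypothetical stand-in for whatever LLM client is in use, and the criteria are invented for illustration:

    # Sketch: constrain a genAI site-selection request with an explicit rubric
    # rather than an open-ended question. The criteria are invented examples.
    CRITERIA = [
        "population of at least 50,000 within a 10-minute drive",
        "no existing store within 8 miles",
        "median household income within 20% of our current store average",
        "retail lease rates below $40 per square foot per year",
    ]

    def build_prompt(candidate_regions: list[str]) -> str:
        rubric = "\n".join(f"- {c}" for c in CRITERIA)
        return (
            "Evaluate the following regions for a new store STRICTLY against "
            f"these criteria:\n{rubric}\n\n"
            "Score each region on every criterion, cite the data you relied "
            "on, and answer 'insufficient data' rather than guessing.\n\n"
            f"Regions: {', '.join(candidate_regions)}"
        )

    # response = call_llm(build_prompt(["Austin, TX", "Raleigh, NC"]))  # hypothetical client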

Given that ROI boils down to value delivered relative to cost, the best way to improve value is to increase the accuracy and usability of the answers provided. Sometimes, that means not giving genAI broad requests and seeing what it chooses to do. That might work in machine learning, but genAI is a different animal.

To be fair, there absolutely are situations where it makes sense to set genAI loose and see where it chooses to go. But for the overwhelming majority of situations, IT will see far better results if it takes the time to train genAI appropriately.

Reining in genAI projects

Now that the initial hype over genAI has died down, it’s important for IT leaders to protect their organizations by focusing on deployments that will bring true value to the company, say AI strategists.

One suggestion for better controlling generative AI efforts, offered by Snowflake's Shah, is for enterprises to create AI committees consisting of specialists in various AI disciplines. Every generative AI proposal originating anywhere in the enterprise would then have to be run past this committee, which could veto or approve any idea.

“With security and legal, there are so many things that can go wrong with a generative AI effort. This would make executives go in front of the committee and explain exactly what they wanted to do and why,” he said.

Shah sees these AI approval committees as short-term placeholders. “As we mature our understanding, the need for those committees will go away,” he said.

Another suggestion comes from NILG.AI’s Fernandes. Instead of flashy, large-scale genAI projects, enterprises should focus on smaller, more controllable objectives such as “analyzing a vehicle’s damage report and estimating costs, or auditing a sales call and identifying if the person follows the script, or recommending products in e-commerce based on the content/description of those products instead of just the interactions/clicks.”

And instead of implicitly trusting genAI models, “we shouldn’t use LLMs on any critical task without a fallback option. We shouldn’t use them as a source of truth for our decision-making but as an educated guess, just like you would deal with another person’s opinion.”
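
Fernandes' rule maps directly onto code: validate the model's answer and route to a defined fallback when validation fails. A minimal sketch of his vehicle-damage example; call_llm and the sanity bounds are hypothetical placeholders:

    # Sketch of the "no LLM on a critical task without a fallback" pattern:
    # treat the model's output as an educated guess, validate it, and fall
    # back to a human when it fails. call_llm is a hypothetical client.

    def estimate_damage_cost(report: str) -> float | None:
        """Return a cost estimate, or None if the LLM answer is unusable."""
        raw = call_llm("Estimate the repair cost in USD for this damage "
                       f"report. Reply with a number only:\n{report}")
        try:
            value = float(raw.strip().lstrip("$").replace(",", ""))
        except ValueError:
            return None  # malformed or hallucinated output
        return value if 0 < value < 100_000 else None  # assumed sanity bounds

    def process_claim(report: str) -> str:
        estimate = estimate_damage_cost(report)
        if estimate is None:
            return "routed to a human adjuster"  # the fallback action
        return f"draft estimate ${estimate:,.2f} (pending human review)"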

Category: Hacking & Security

Chinese combat drone TB-001 Scorpion conducted reconnaissance near Okinawa

Živě.cz - 18 July 2024 - 11:45
On Monday, 8 July, a TB-001 Scorpion drone took off over the East China Sea • It flew between Okinawa and the Miyako Islands and turned back over the Philippine Sea • Its flight prompted an immediate response from the Japan Air Self-Defense Force
Category: IT News

SAP AI Core Vulnerabilities Expose Customer Data to Cyber Attacks

The Hacker News - 18 July 2024 - 11:33
Cybersecurity researchers have uncovered security shortcomings in the SAP AI Core cloud-based platform for creating and deploying predictive artificial intelligence (AI) workflows that could be exploited to get hold of access tokens and customer data. The five vulnerabilities have been collectively dubbed SAPwned by cloud security firm Wiz. "The vulnerabilities we found could have allowed attackers ...
Category: Hacking & Security

New UK government downplays AI regulation in program for the next year

Computerworld.com [Hacking News] - 18 July 2024 - 11:22

As Britain’s King Charles III stood up in the Houses of Parliament on Wednesday to present the new Labour government’s proposed legislative program, technology experts were primed for any mention of artificial intelligence (AI).

In the event, amidst the colorful pomp and arcane ceremony the British state is famous for in the state opening of Parliament, what the speech delivered was mostly a promise of future legislation shorn of any detail on the form this will take.

Talking head

The King’s Speech is where Britain’s elected government, in this case the recently elected Labour administration, lays out bills it plans to enact into law in the coming year.

The monarch delivers the speech, but it is written for him by the government. His role is purely constitutional and ceremonial.

It is hard to imagine a greater contrast than that between a ceremony whose origins date back hundreds of years and a topic such as AI, which embodies the promise and peril of 21st century technology.

The government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models,” announced King Charles.

Beyond the focus on regulating models used for generative AI, though, that leaves the government’s plans and their timing open to interpretation. Even so, the willingness to act marks a change of direction from the ousted Conservative administration's policy of legislating on AI only within narrow constraints.

Everyone wants to regulate AI

There had been an expectation that the new government would go further, primed by general statements of intent in the Labour Party Manifesto 2024.

“We will ensure our industrial strategy supports the development of the Artificial Intelligence (AI) sector, removes planning barriers to new datacentres,” stated the Manifesto before turning to the need for regulation.

“Labour will ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes.”

The disappearance of these modest ambitions could signal that the government has yet to work out what “binding regulation” should look like at a time when other legislation seems more pressing.

The previous government worried that too much regulation risked stifling development. Equally, no regulation at all carries the risk that by the time it becomes necessary it will be too late to act.

The EU, of course, already has its AI Act while the US is still working through a mixture of proposed legislation bolstered by the Biden administration’s executive orders describing first principles.

Still too early?

A comment by open-source industry advocate OpenUK in advance of the King’s Speech sums up the dilemma.

“There are lessons the UK can learn from the EU’s AI Act that will likely prove to be an overly prescriptive and unwieldy cautionary tale of regulatory capture with only the largest companies able to comply, stifling innovation in the EU,” said the organization’s CEO, Amanda Brock.

It was still too early, she argued, to legislate in a way that creates walls and legal restrictions.

“For the UK to stay relevant globally, and to build successful AI companies, openness is crucial. This will allow the UK ecosystem to grow its status as a world leader in open-source AI, behind only the US and China,” she added.

But not everyone is convinced that the wait-and-see approach is the right one.

“Regulation is not just about setting restrictions on AI development; it’s about providing the clarity and guidance needed to promote safe and sustainable innovation,” said Bruna de Castro e Silva of AI Governance specialist Saidot.

“As the EU moves forward with publishing its official AI Act, UK businesses have been left waiting for clear guidance on how to develop and deploy AI safely and ethically.”

This is why AI regulation is seen as a thankless task. Take an interventionist approach and experts will line up to say you’re stifling a technology with huge economic and social potential. Take a more cautious approach and others will say you’re not doing enough.

Last November, the previous Conservative administration of Rishi Sunak jumped on the theme of AI, hosting a global AI Safety Summit with symbolic flourish at the famous Second World War code-breaking facility just outside London, Bletchley Park.

At that event, several big AI names — OpenAI, Google DeepMind, Anthropic — undertook to give a new Frontier AI Taskforce early access to their models to conduct safety evaluations.

The new government inherits that promise even if to many others it will seem as if certainty about the UK’s AI legislative regime is no nearer than it was then.

Category: Hacking & Security

TAG-100: New Threat Actor Uses Open-Source Tools for Widespread Attacks

The Hacker News - 18 July 2024 - 11:10
Unknown threat actors have been observed leveraging open-source tools as part of a suspected cyber espionage campaign targeting global government and private sector organizations. Recorded Future's Insikt Group is tracking the activity under the temporary moniker TAG-100, noting that the adversary likely compromised organizations in at least ten countries across Africa, Asia, North America, ...
Category: Hacking & Security

Nvidia switches to open source GPU kernel modules

AbcLinuxu [news] - 18 July 2024 - 10:46
On its technical blog, Nvidia has announced a transition to open source GPU kernel modules (first introduced in May 2022). On the newest platforms, Grace Hopper and Blackwell, only the open source modules can be used. For Turing, Ampere, Ada Lovelace, and Hopper, switching to the open source modules is recommended; the official installer lets you choose between the proprietary and the open source module. The oldest architectures, Maxwell, Pascal, and Volta, require the proprietary drivers.
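
To check which flavor a machine is currently running, you can inspect the nvidia kernel module's license string. A sketch, assuming the open modules report a GPL-compatible license ("Dual MIT/GPL") while the proprietary ones report "NVIDIA"; the exact strings are worth verifying against your driver release:

    # Sketch: guess whether the installed nvidia kernel module is the open
    # source or the proprietary flavor from its license field. The license
    # strings are an assumption to verify against your driver version.
    import subprocess

    def nvidia_module_flavor() -> str:
        out = subprocess.run(["modinfo", "--field", "license", "nvidia"],
                             capture_output=True, text=True)
        if out.returncode != 0:
            return "nvidia module not found"
        license_str = out.stdout.strip()
        return ("open source modules" if "GPL" in license_str
                else f"likely proprietary (license: {license_str})")

    print(nvidia_module_flavor())
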
Category: GNU/Linux & BSD

Elon Musk is moving the headquarters of X and SpaceX from California to Texas, citing laws, principles, and taxes

Živě.cz - 18 July 2024 - 10:45
Elon Musk announced that he will move the headquarters of the space company SpaceX and of the social network X from California to Texas. As one of the reasons, he cited a new California law that prohibits school districts from requiring teachers to inform parents about the gender identity or sexual orientation of their ...
Category: IT News