RSS Aggregator

Will AI Take Your Job? It Depends on These 4 Key Advantages AI Has Over Humans

Singularity HUB - 23 June, 2025 - 16:00

This framework can help you understand where AI provides value.

If you’ve worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you’re safe for another day.

But the fact remains that AI already has definite advantages over even the most skilled humans, and knowing where these advantages arise—and where they don’t—will be key to adapting to the AI-infused workforce.

AI will often not be as effective as a human doing the same job. It won’t always know more or be more accurate. And it definitely won’t always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions: speed, scale, scope, and sophistication. Understanding these dimensions is the key to understanding AI-human replacement.

Speed

First, speed. There are tasks that humans are perfectly good at but are not nearly as fast as AI. One example is restoring or upscaling images: taking pixelated, noisy, or blurry images and making a crisper, higher-resolution version. Humans are good at this; given the right digital tools and enough time, they can fill in fine details. But they are too slow to efficiently process large images or videos.

AI models can do the job blazingly fast, a capability with important industrial applications. AI-based software is used to enhance satellite and remote sensing data, to compress video files, to make video games run better with cheaper hardware and less energy, to help robots make the right movements, and to model turbulence to help build better internal combustion engines.

Real-time performance matters in these cases, and the speed of AI is necessary to enable them.

Scale

The second dimension of AI’s advantage over humans is scale. AI will increasingly be used in tasks that humans can do well in one place at a time, but that AI can do in millions of places simultaneously. A familiar example is ad targeting and personalization. Human marketers can collect data and predict what types of people will respond to certain advertisements. This capability is important commercially; advertising is a trillion-dollar market globally.

AI models can do this for every single product, TV show, website, and internet user. This is how the modern ad-tech industry works. Real-time bidding markets price the display ads that appear alongside the websites you visit, and advertisers use AI models to decide when they want to pay that price—thousands of times per second.
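To make that concrete, here is a minimal sketch in Python of the decision an ad-buying model makes for each impression; the click model, the features, and the numbers are hypothetical placeholders, not any real exchange's API:

```python
# Minimal sketch of one real-time bid decision. The click model, features,
# and economics are hypothetical placeholders; a real exchange evaluates
# decisions like this thousands of times per second.

def predict_click_probability(user: dict, ad: dict) -> float:
    """Stand-in for a trained model estimating the click-through rate."""
    return 0.002  # placeholder prediction

def decide_bid(user: dict, ad: dict, floor_price_cpm: float) -> float | None:
    """Bid only when the expected value of the impression beats the floor price."""
    ctr = predict_click_probability(user, ad)
    expected_value_cpm = ctr * ad["value_per_click"] * 1000  # value per 1,000 views
    if expected_value_cpm > floor_price_cpm:
        return expected_value_cpm  # simple value-based bid
    return None  # let this impression pass

print(decide_bid({"region": "CZ"}, {"value_per_click": 1.50}, floor_price_cpm=2.0))
```

The logic itself is simple; the advantage comes entirely from running it per user, per ad, thousands of times per second.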

Scope

Next, scope. AI can be advantageous when it does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. These models may not be superior to skilled humans at any one of these things, but no single human could outperform top-tier generative models across them all.

It’s the combination of these competencies that generates value. Employers often struggle to find people with talents in disciplines such as software development and data science who also have strong prior knowledge of the employer’s domain. Organizations are likely to continue to rely on human specialists to write the best code and the best persuasive text, but they will increasingly be satisfied with AI when they just need a passable version of either.

Sophistication

Finally, sophistication. AIs can consider more factors in their decisions than humans can, and this can endow them with superhuman performance on specialized tasks. Computers have long been used to keep track of a multiplicity of factors that compound and interact in ways more complex than a human could trace. Chess-playing systems of the 1990s, such as Deep Blue, succeeded by thinking a dozen or more moves ahead.
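As a rough illustration of that style of search, here is a depth-limited minimax in Python with the game-specific parts stubbed out; real engines such as Deep Blue layered alpha-beta pruning and custom hardware on top of this basic idea:

```python
# Minimal depth-limited minimax, the family of search used by 1990s chess
# engines. The game itself is stubbed out so the search structure stays visible.

def legal_moves(state):           # stub: a real engine enumerates chess moves
    return [] if state >= 3 else [1, 2]

def apply_move(state, move):      # stub: a real engine updates the board
    return state + move

def evaluate(state) -> float:     # stub: a real engine scores material, position...
    return float(state)

def minimax(state, depth: int, maximizing: bool) -> float:
    """Search `depth` plies ahead, assuming both sides play optimally."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    children = (minimax(apply_move(state, m), depth - 1, not maximizing) for m in moves)
    return max(children) if maximizing else min(children)

print(minimax(state=0, depth=12, maximizing=True))  # "a dozen moves ahead"
```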

Modern AI systems use a radically different approach: Deep learning systems built from many-layered neural networks take account of complex interactions—often many billions—among many factors. Neural networks now power the best chess-playing models and most other AI systems.

Chess is not the only domain where eschewing conventional rules and formal logic in favor of highly sophisticated and inscrutable systems has generated progress. The stunning advance of AlphaFold 2, the AI model of structural biology whose creators Demis Hassabis and John Jumper were recognized with the Nobel Prize in chemistry in 2024, is another example.

This breakthrough replaced traditional physics-based systems for predicting how sequences of amino acids would fold into three-dimensional shapes with a 93 million-parameter model, even though it doesn’t account for physical laws. That lack of real-world grounding is not desirable: No one likes the enigmatic nature of these AI systems, and scientists are eager to understand better how they work.

But the sophistication of AI is providing value to scientists, and its use across scientific fields has grown exponentially in recent years.

Context Matters

Those are the four dimensions where AI can excel over humans. Accuracy still matters: you wouldn’t want to use an AI that makes graphics look glitchy or targets ads randomly. But accuracy isn’t the differentiator; the AI doesn’t need superhuman accuracy. It’s enough for AI to be merely good and fast, or adequate and scalable. The four S’s are also sometimes at odds. Increasing scope often comes with an accuracy penalty, because AI can generalize poorly to truly novel tasks, and with a given amount of computing power, you generally have to trade off scale for sophistication.

Even more interestingly, when an AI takes over a human task, the task can change. Sometimes the AI is just doing things differently. Other times, AI starts doing different things. These changes bring new opportunities and new risks.

For example, high-frequency trading isn’t just computers trading stocks faster; it’s a fundamentally different kind of trading that enables entirely new strategies, tactics, and associated risks. Likewise, AI has developed more sophisticated strategies for the games of chess and Go. And the scale of AI chatbots has changed the nature of propaganda by allowing artificial voices to overwhelm human speech.

It is this “phase shift,” when changes in degree transform into changes in kind, that AI’s impact on society is likely to be most keenly felt. All of this points to the places where AI can have a positive impact. When a system has a bottleneck related to speed, scale, scope, or sophistication, or when one of these factors poses a real barrier to accomplishing a goal, it makes sense to think about how AI could help.

Equally, when speed, scale, scope, and sophistication are not primary barriers, it makes less sense to use AI. This is why AI auto-suggest features for short communications such as text messages can feel so annoying. They offer little speed advantage and no benefit from sophistication, while sacrificing the sincerity of human communication.

Many deployments of customer service chatbots also fail this test, which may explain their unpopularity. Companies invest in them because of their scalability, and yet the bots often become a barrier to support rather than a speedy or sophisticated problem solver.

Where the Advantage Lies

Keep this in mind when you encounter a new application for AI or consider AI as a replacement for or an augmentation to a human process. Looking for bottlenecks in speed, scale, scope, and sophistication provides a framework for understanding where AI provides value, and equally where the unique capabilities of the human species give us an enduring advantage.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Category: Transhumanism

Above the Law: Big Tech’s Bid to Block AI Oversight

Singularity Weblog - 23 June, 2025 - 15:36
In recent years, Big Tech leaders — including Mark Zuckerberg, Tim Cook, Sundar Pichai, and Sam Altman — have made public pronouncements calling for the regulation of artificial intelligence. Some did it in glossy op-eds, while others performed moments of “humility” at congressional hearings. They spoke of responsibility. Of safety. Of the need for oversight. […]
Category: Transhumanism

GitHub’s AI billing shift signals the end of free enterprise tools era

Computerworld.com [Hacking News] - 23 June, 2025 - 15:11

GitHub began enforcing monthly limits on its most powerful AI coding models this week, marking the latest example of AI companies transitioning users from free or unlimited services to paid subscription tiers once adoption takes hold.

“Monthly premium request allowances for paid GitHub Copilot users are now in effect,” the company said in its update to the Copilot consumptive billing experience, confirming that billing for additional requests now starts at $0.04 each. The enforcement represents the activation of restrictions first announced by GitHub CEO Thomas Dohmke in April.

Category: Hacking & Security

ChatGPT in hardware form must be renamed: a court petitioned by ex-Googlers has blocked the name io

Živě.cz - 23 June, 2025 - 14:45
Jony Ive's startup io must change its name under a court order. • The name infringes a trademark of iyO, a company that also develops AI-powered devices. • iyO is developing innovative AI earbuds that will reach the market in August 2025.
Category: IT News

Two new Linux vulnerabilities make it easy to gain root access

AbcLinuxu [zprávičky] - 23 June, 2025 - 14:40
Two serious vulnerabilities have been discovered in Linux systems: CVE-2025-6018 in PAM (Pluggable Authentication Modules) and CVE-2025-6019 in the libblockdev library, which can be exploited through the udisks service. That service ships with most commonly used distributions, such as Ubuntu, Debian, and Fedora. By chaining the two vulnerabilities, an attacker can gain root access with minimal effort. Given how easy the flaws are to exploit, we recommend updating systems immediately, deploying the available patches, checking whether the default udisks configuration is in use, and closely monitoring systems for any suspicious activity.
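As a first step toward that advice, a sketch like the following reports installed package versions for comparison against your distribution's advisory; it assumes a Debian-family system, the package names vary by release, and it only reports state rather than patching anything:

```python
# Triage sketch for Debian-family systems: report installed versions of the
# packages involved in CVE-2025-6018 (PAM) and CVE-2025-6019
# (libblockdev/udisks) for comparison against the distribution's advisory.
# Package names vary by release; this script changes nothing on the system.
import subprocess

PACKAGES = ["udisks2", "libblockdev2", "libblockdev3", "libpam-modules"]

for pkg in PACKAGES:
    try:
        version = subprocess.check_output(
            ["dpkg-query", "-W", "-f=${Version}", pkg],
            text=True, stderr=subprocess.DEVNULL,
        ).strip()
        print(f"{pkg}: {version} (compare against the advisory's fixed version)")
    except subprocess.CalledProcessError:
        print(f"{pkg}: not installed")
```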
Category: GNU/Linux & BSD

Steel giant Nucor confirms hackers stole data in recent breach

Bleeping Computer - 23 June, 2025 - 14:28
Nucor, North America's largest steel producer and recycler, has confirmed that attackers behind a recent cybersecurity incident have also stolen data from the company's network. [...]
Category: Hacking & Security

OpenSSL Corporation invites you to an open house in Brno and the OpenSSL Conference in Prague

AbcLinuxu [zprávičky] - 23 June, 2025 - 14:03
OpenSSL Corporation invites you to an open house on Wednesday, 20 August in Brno and to the OpenSSL Conference from 7 to 9 October in Prague.
Category: GNU/Linux & BSD

The Snipping Tool for Windows 11 will turn your recorded video into a GIF. Microsoft knows it is an immortal format

Živě.cz - 23 June, 2025 - 13:45
The Snipping Tool for Windows 11 has supported video recording for several years. • The new version can export a recording to the GIF format. • The feature is available first in the Snipping Tool builds in the Dev and Canary test channels.
Category: IT News

Experts count staggering costs incurred by UK retail amid cyberattack hell

The Register - Anti-Virus - 23 June, 2025 - 13:29
Cyber Monitoring Centre issues first severity assessment since February launch

Britain's Cyber Monitoring Centre (CMC) estimates the total cost of the cyberattacks that crippled major UK retail organizations recently could be in the region of £270-440 million ($362-591 million).…

Category: Viruses & Worms

TikTok-style bite-sized videos are invading enterprises

Computerworld.com [Hacking News] - 23 June, 2025 - 13:21

The TikTok-ification of the corporate world is well under way, as more companies turn to video snippets to communicate information to employees and customers. But when it comes to user- and AI-generated content, the rules are different for companies than for casual TikTok or Instagram users — and enterprises need to move cautiously when implementing video-generation tools, analysts said.

“There is a definite rise in the use of short form, digestible video in the corporate workplace,” said Forest Conner, senior director and analyst at Gartner. That’s because video is a more engaging way to share corporate information with employees and a better way to manage time.

“As the storytelling axiom goes, ‘Show, don’t tell,’” Conner said. “Video provides a medium for showing in practice what may be hard to relay in writing.”

Many employees would rather view short videos that summarize a document or meeting, analysts said. As a result, employees themselves are becoming digital creators using new AI-driven video generation and editing tools.

Software from companies like Atlassian, Google, and Synthesia can dynamically create videos for use in presentations, to bolster communications with employees, or to train workers. The tools can create avatars, generate quick scripts, and draw insights using internal AI systems and can sometimes be better than email for sharing quick insights on collaborative projects. (Atlassian just last week introduced new creation tools in its own Loom software that include AI-powered script editing to make videos look better; the new feature doesn’t require re-recording a video.)

In part, the rising use of these video-creation tools is “a reaction to over-meeting,” said Will McKeon-White, senior analyst for infrastructure and operations at Forrester Research. Many employees feel meetings are a waste of time and hinder productivity. As an alternative, they can record short contextual snippets in Loom for use in workflow documents or to send to colleagues — allowing them to get up to speed on projects at their own pace.

“I’ve seen this more in developer environments where teams are building complex applications in a distributed environment without spending huge amounts of time in meetings,” McKeon-White said.

HR departments are finding Loom useful for dynamically creating personalized videos while onboarding new employees, said Sanchan Saxena, head of product for Teamwork Foundations at Atlassian. The quickly generated personalized videos — which Saxena called “Looms” — can include a welcome message with the employee’s name and position and can complement written materials such as employee handbooks and codes of conduct.

“We can all agree there is a faster, richer form of communication when the written document is also accompanied by a visual video that attaches to it,” Saxena said.

AI video generation company Synthesia made its name with a tool where users select an avatar, type in a script, add text or images, and produce a video in a few minutes. Over time, the company has expanded its offerings and is seeing more business uptake, said Alexandru Voica, head of corporate affairs and policy at Synthesia.

Its offerings now include an AI video assistant to convert documents into video summaries and an AI dubbing tool that localizes videos in more than 30 languages. “These products come together to form an AI video platform that covers the entire lifecycle of video production and distribution,” said Voica.

Voica noted how one Synthesia customer, Wise, has seen efficiency gains using the software for compliance and people training, creating “engaging learning experiences across their global workforce.”

Looking ahead, video content as a tool for corporate communications will likely be adopted at a team level, said McKeon-White. “It’s going to come down to the team or the department as for what they want to do in a given scenario,” he said.

Enterprises need to keep many things in mind when including videos in the corporate workflow. Managers, for instance, shouldn’t force videos on employees or create a blanket policy to adopt more video.

They can be useful, but videos are not for everyone, said Jeff Kagan, a technology analyst. “One big mistake companies make is following the preferences of the workers or executives…rather than considering different opinions. Not everyone is cutting edge,” he said.

Companies shouldn’t jump on the video bandwagon too soon, McKeon-White said. If they do, they run the risk of overwhelming employees.

“You don’t want to suddenly have workers scrolling through 30 hours of video,” he said. “If you are throwing videos onto a shared repository and saying, ‘Hey, go look at that!’ That sucks. That’s not good for anybody.”

There are also many security and compliance issues to keep in mind.

AI can now detect sensitive information, such as license plate numbers, addresses, or confidential documentation, without a human manually reviewing the video, Conner said. “Organizations need to ensure that any content making it out the door is scrubbed for sensitive information in advance of publication.”

And with the rise of generative AI, the problem of deepfakes remains a major concern.

The uncanny accuracy of AI video avatars creates risks for executives, whose likeness could be cloned from their available video content and then used in damaging ways, Conner said.

“This has yet to happen in practice, but my guess is it’s only a matter of time,” Conner said.

Category: Hacking & Security

Apple says the phone tests for energy labels are ambiguous, so it deliberately lowers its iPhone and iPad scores

Živě.cz - 23 June, 2025 - 13:15
Apple is the only company complaining about the testing procedures for energy labels • In its view, the tests can be interpreted in different ways • It therefore deliberately lowers the scores of its iPads and iPhones and is calling for changes
Category: IT News

Mark Russinovich invited Bill Gates, Linus Torvalds, and David Cutler to dinner

AbcLinuxu [zprávičky] - 23 June, 2025 - 12:17
A bit of IT gossip: Mark Russinovich invited Bill Gates, Linus Torvalds, and David Cutler to dinner and posted a group selfie. Until then, Linus had never met Bill or David in person. Linus and David wore red polo shirts; Mark and Bill were in dark colors [LinkedIn].
Category: GNU/Linux & BSD

The EU is investigating Musk's sale of his X network to his own startup xAI

AbcLinuxu [zprávičky] - 23 June, 2025 - 12:04
The European Union is newly investigating the deal in which American billionaire Elon Musk sold his social network X, formerly known as Twitter, to his own startup xAI for $33 billion (CZK 712 billion). EU regulators are weighing whether to fine X under the European Union's Digital Services Act (DSA).
Category: GNU/Linux & BSD

Despite its ubiquity, RAG-enhanced AI still poses accuracy and safety risks

Computerworld.com [Hacking News] - 23 June, 2025 - 12:00

Retrieval-Augmented Generation (RAG) — a method used by genAI tools like OpenAI’s ChatGPT to provide more accurate and informed answers — is becoming a cornerstone for generative AI (genAI) tools, “providing implementation flexibility, enhanced explainability and composability with LLMs,” according to a recent study by Gartner Research.

By 2028, the study predicts, 80% of genAI business apps will be developed on existing data management platforms, with RAG a key part of future deployments.

There’s only one problem: RAG isn’t always effective. In fact, RAG, which assists genAI technologies by looking up information instead of relying only on memory, could actually be making genAI models less safe and reliable, according to recent research.

Alan Nichol, CTO at conversational AI vendor Rasa, called RAG “just a buzzword” that simply means “adding a loop around large language models” and data retrieval. The hype is overblown, he said, adding that RAG’s use of “while” or “if” statements is treated like a breakthrough.

(RAG systems typically include logic that might resemble “if” or “while” conditions, such as “if” a query requires external knowledge, retrieve documents from a knowledge base, and “while” an answer might be inaccurate re-query the database or refine the result.) 
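A minimal sketch of that control flow, with every helper stubbed out; the stubs stand in for a real vector search, an LLM call, and a groundedness check, and the point is only the shape of the loop:

```python
# Sketch of the "if"/"while" retrieval loop described above. Every helper is
# a stub for a real component (vector search, LLM call, groundedness check).

def needs_external_knowledge(query: str) -> bool:
    return "?" in query                                  # stub heuristic

def retrieve(text: str, top_k: int) -> list[str]:
    return ["doc snippet 1", "doc snippet 2"][:top_k]    # stub vector search

def generate(query: str, context: list[str]) -> str:
    return f"answer to {query!r} using {len(context)} documents"  # stub LLM

def looks_unsupported(draft: str, docs: list[str]) -> bool:
    return False                                         # stub groundedness check

def answer(query: str, max_retries: int = 2) -> str:
    docs = retrieve(query, top_k=5) if needs_external_knowledge(query) else []
    draft = generate(query, context=docs)
    retries = 0
    while looks_unsupported(draft, docs) and retries < max_retries:
        docs = retrieve(draft, top_k=5)   # re-query using the draft's own claims
        draft = generate(query, context=docs)
        retries += 1
    return draft

print(answer("What is RAG?"))
```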

“…Top web [RAG] agents still only succeed 25% of the time, which is unacceptable in real software,” Nichol said in an earlier interview with Computerworld. “Instead, developers should focus on writing clear business logic and use LLMs to structure user input and polish search results. It’s not going to solve your problem, but it is going to feel like it is.”

Two studies, one by Bloomberg and another by the Association for Computational Linguistics (ACL), found that using RAG with large language models (LLMs) can reduce their safety, even when both the LLMs and the documents they access are sound. The studies highlighted the need for safety research and red-teaming designed for RAG settings.

Both studies found that “unsafe” outputs such as misinformation or privacy risks increased under RAG, prompting a closer look at whether retrieved documents were to blame. The key takeaway: RAG needs strong guardrails and researchers who are actively trying to find flaws, vulnerabilities, or weaknesses in a system — often by thinking like an adversary.

How RAG works — and causes security risks

One way to think about RAG is to compare a typical genAI model to a student answering questions from memory alone. The student may well get the answer right, but the information could be outdated or incomplete.

A RAG system is like a student who says, “Wait, let me check my textbook or notes first,” then gives you an answer based on what they found, plus their own understanding.

Iris Zarecki, CEO of data integration services provider K2view, said most organizations now using RAG augment their genAI models with internal unstructured data such as manuals, knowledge bases, and websites. But enterprises also need to include fragmented structured data, such as customer information, to fully unlock RAG’s potential.

“For example, when customer data like customer statements, payments, and past email and call interactions with the company are retrieved by the RAG framework and fed to the LLM, it can generate a much more personalized and accurate response,” Zarecki said.

Because RAG can increase security risks involving unverified info and prompt injection, Zarecki said, enterprises should vet sources, sanitize documents, enforce retrieval limits, and validate outputs.
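A sketch of what those four guardrails might look like in code, with deliberately toy checks; a real deployment would use an allow-list service, a prompt-injection scanner, and a policy-driven output validator rather than these stubs:

```python
# Toy versions of the four guardrails: vet sources, sanitize documents,
# enforce retrieval limits, validate outputs. All checks are illustrative.
from dataclasses import dataclass

ALLOWED_SOURCES = {"internal-kb", "product-manuals"}   # vetted sources only
MAX_DOCS = 5                                           # retrieval limit

@dataclass
class Doc:
    source: str
    text: str

def sanitize(text: str) -> str:
    """Toy prompt-injection scrub: drop lines that read like instructions."""
    return "\n".join(line for line in text.splitlines()
                     if "ignore previous instructions" not in line.lower())

def guarded_context(candidates: list[Doc]) -> list[str]:
    vetted = [d for d in candidates if d.source in ALLOWED_SOURCES]
    return [sanitize(d.text) for d in vetted[:MAX_DOCS]]

def validate_output(answer: str) -> bool:
    """Toy output check before anything reaches the user."""
    return "password" not in answer.lower()
```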

RAG can also create a gateway through firewalls, allowing for data leakage, according to Ram Palaniappan, CTO at TEKsystems Global Services, a tech consulting firm. “This opens a huge number of challenges in enabling secure access and ensuring the data doesn’t end up in the public domain,” Palaniappan said. “RAG poses data leakage challenges, model manipulation and poisoning challenges, securing vector DB, etc. Hence, security and data governance become very critical with RAG architecture.”

(Vector databases are commonly used in applications involving RAG, semantic search, AI agents, and recommendation systems.)

Palaniappan expects the RAG space to rapidly evolve, with improvements in security and governance through tools like the Model Context Protocol and Agent-to-Agent Protocol (A2A). “As with any emerging tech, we’ll see ongoing changes in usage, regulation, and standards,” he said. “Key areas advancing include real-time AI monitoring, threat detection, and evolving approaches to ethics and bias.”

Large Reasoning Models are also highly flawed

Apple recently published a research paper evaluating Large Reasoning Models (LRMs) such as Gemini Flash Thinking, Claude 3.7 Sonnet Thinking, and OpenAI’s o3-mini on logical puzzles of varying difficulty. Like RAG, LRMs are designed to provide better responses by incorporating step-by-step reasoning into their tasks.

Apple’s “Illusion of Thinking” study found that as the complexity of tasks increased, both standard LLMs and LRMs saw a significant decline in accuracy — eventually reaching near-zero performance. Notably, LRMs often reduced their reasoning efforts as tasks got more difficult, indicating a tendency to “quit” rather than persist through challenges.

Even when given explicit algorithms, LRMs didn’t improve, indicating they rely on pattern recognition rather than true understanding, challenging assumptions about AI’s path to “true intelligence.”

While LRMs perform well on benchmarks, their actual reasoning abilities and limitations are not well understood. Study results show LRMs break down on complex tasks, sometimes performing worse than standard models. Their reasoning effort increases with complexity only up to a point, then unexpectedly drops.

LRMs also struggle with consistent logical reasoning and exact computation, raising questions about their true reasoning capabilities, the study found. “The fundamental benefits and limitations of LRMs remain insufficiently understood,” Apple said. “Critical questions still persist: Are these models capable of generalizable reasoning, or are they leveraging different forms of pattern matching?”

Reverse RAG can improve accuracy

A newer approach, Reverse RAG (RRAG), aims to improve accuracy by adding verification and better document handling, Gartner Senior Director Analyst Prasad Pore said. Where typical RAG retrieves data and then generates a response, Reverse RAG flips that workflow: it generates an answer, retrieves data to verify it, and then regenerates the answer before passing it to the user.

First, the model drafts potential facts or queries, then fetches supporting documents and rigorously checks each claim against those sources. Reverse RAG emphasizes fact-level verification and traceability, making outputs more reliable and auditable.
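A minimal sketch of that generate-verify-regenerate flow, with every helper stubbed; in a real RRAG system the claim extraction and verification steps are themselves model- and retrieval-backed:

```python
# Sketch of Reverse RAG: draft first, verify each claim against retrieved
# documents, regenerate from the claims that survive. All helpers are stubs.

def generate(query: str, context: list[str]) -> str:
    return f"draft for {query!r} ({len(context)} supporting claims)"  # stub LLM

def extract_claims(draft: str) -> list[str]:
    return [s.strip() for s in draft.split(".") if s.strip()]  # stub splitter

def retrieve(claim: str, top_k: int) -> list[str]:
    return ["evidence passage"][:top_k]                 # stub document lookup

def supported(claim: str, evidence: list[str]) -> bool:
    return bool(evidence)                               # stub fact check

def reverse_rag(query: str) -> str:
    draft = generate(query, context=[])            # 1. draft the answer first
    claims = extract_claims(draft)                 # 2. split into checkable claims
    verified = [c for c in claims if supported(c, retrieve(c, top_k=3))]
    if len(verified) < len(claims):                # 3. some claims failed checks
        return generate(query, context=verified)   # 4. regenerate from survivors
    return draft

print(reverse_rag("What does the warranty cover?"))
```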

RRAG represents a significant evolution in how LLMs access, verify and generate information, Pore said. “Although traditional RAG has transformed AI reliability by connecting models to external knowledge sources and making completions contextual, RRAG offers novel approaches of verification and document handling that address challenges in genAI applications related to fact checking and truthfulness of completions.”

The bottom line is that RAG and LRM alone aren’t silver bullets, according to Zarecki. Additional methods to improve genAI output quality must include:

  • Structured grounding: Include fragmented structured data, such as customer info, in RAG.
  • Fine-tuned guardrails: Zero-shot or few-shot prompts with constraints, using control tokens or instruction tuning.
  • Human-in-the-loop oversight: Especially important for high-risk domains such as healthcare, finance, or legal.
  • Multi-stage reasoning: Breaking tasks into retrieval → reasoning → generation improves factuality and reduces errors, especially when combined with tool use or function calling (see the sketch after this list).
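A compact sketch of the multi-stage idea from the last bullet, with stubbed stages; the gain comes from each stage's output being inspectable before the next stage runs:

```python
# Sketch of a retrieval -> reasoning -> generation pipeline. Stages are stubs;
# splitting the task keeps every intermediate result inspectable.

def retrieve_stage(question: str) -> list[str]:
    return ["relevant passage A", "relevant passage B"]   # stub document search

def reasoning_stage(question: str, passages: list[str]) -> str:
    # stub: a real system would have the model draft a plan over the evidence
    return f"plan: answer {question!r} using {len(passages)} passages"

def generation_stage(question: str, plan: str, passages: list[str]) -> str:
    return f"final answer for {question!r} following [{plan}]"   # stub LLM call

def pipeline(question: str) -> str:
    passages = retrieve_stage(question)                 # stage 1: gather evidence
    plan = reasoning_stage(question, passages)          # stage 2: reason over it
    return generation_stage(question, plan, passages)   # stage 3: write the answer

print(pipeline("What changed in the Q2 pricing policy?"))
```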

Organizations must also organize enterprise data for genAI and RAG by ensuring privacy, real-time access, quality, scalability, and instant availability to meet chatbot latency needs.

“This means that data must address requirements like data guardrails for privacy and security, real-time integration and retrieval, data quality, and scalability at controlled costs,” Zarecki said. “Another critical requirement is the freshness of the data, and the ability of the data to be available to the LLM in split seconds, because of the conversational latency required for a chatbot.”

Category: Hacking & Security

America has given the green light to a preventive injection against HIV. It is 99.9% effective and has a Czech connection

Živě.cz - 23 June, 2025 - 11:45
A single injection once every six months replaces daily pills • The drug, which has a Czech connection, showed nearly one hundred percent preventive efficacy in trials • Its extremely high price is, for now, the main obstacle to wider global use
Category: IT News

Microsoft will clean up Windows Update and stop offering some older drivers

Živě.cz - 23 June, 2025 - 10:45
Microsoft plans to clean up the driver database of the Windows Update service. • Some older drivers may be removed from the database. • In the first phase, Redmond will target only drivers that have a newer replacement.
Category: IT News