Security-Portal.cz is an internet portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It operates a number of interesting services and supports its community in interesting projects.

Microsoft’s new genAI model to power agents in Windows 11

Computerworld.com [Hacking News] - 23 June, 2025 - 21:40

Microsoft is laying the groundwork for Windows 11 to morph into a genAI-driven OS.

The company on Monday announced a critical AI technology that will make it possible to run generative AI (genAI) agents on Windows without Internet connectivity.

Microsoft’s small language model, called Mu, is designed to respond to natural language queries within the Windows OS, the company said in a blog post Monday. Mu takes advantage of the neural processing units (NPUs) of Copilot+ PCs, Vivek Pradeep, vice president and distinguished engineer for Windows Applied Sciences, said in the post.

Three chip makers — Intel, AMD and Qualcomm — provide NPUs in Copilot+ PCs prebuilt with Windows 11.

Mu already powers an agent that handles queries in the Settings menus in a preview version of Windows 11 available to early adopters with Copilot+ PCs. The feature is available in the Windows 11 preview version 26200.5651 that shipped June 13.

The model provides a better understanding and context of queries, and “has been designed to operate efficiently, delivering high performance while running locally,” Pradeep wrote.

Microsoft is aggressively pushing genAI features into the core of Windows 11 and Microsoft 365. The company introduced a new developer stack called Windows ML 2.0 last month for developers to make AI features accessible in software applications.

The company is also developing feature- or application-specific AI models for Microsoft 365 applications.

The 330-million-parameter Mu model is designed to reduce AI computing cycles so it can run locally on Windows 11 PCs. Laptops have limited hardware and battery life, which is why AI workloads have typically relied on a cloud service.

“This involved adjusting model architecture and parameter shapes to better fit the hardware’s parallelism and memory limits,” Pradeep wrote.

The model also generates high-quality responses with a better understanding of queries. Microsoft fine-tuned a custom Mu model for the Settings menu that could respond to ambiguous user queries on system settings. For example, the model can handle queries that do not specify whether to raise brightness on a main or secondary monitor.

The Mu encoder-decoder model breaks down large queries into a more compact representation of information, which is then used to generate responses. That’s different from most large language models (LLMs), which are decoder-only and must carry all of the input text through generation.

“By separating the input tokens from output tokens, Mu’s one-time encoding greatly reduces computation and memory overhead,” Pradeep said.
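
To make that claim concrete, below is a toy Python cost model, not Mu's actual architecture: it assumes the encoder compresses the query into a short fixed-length memory (the 64-token size is an invented placeholder) and simply counts how many tokens each generated token attends to.

    # Toy attention-cost model contrasting a decoder-only LLM with an
    # encoder-decoder model. Purely illustrative: real costs depend on
    # layer counts, heads, and caching details this sketch ignores.

    def decoder_only_cost(prompt_len: int, output_len: int) -> int:
        # Each generated token attends over the full prompt plus all
        # previously generated tokens.
        return sum(prompt_len + t for t in range(output_len))

    def encoder_decoder_cost(memory_len: int, output_len: int) -> int:
        # The prompt is encoded once into a compact memory; each generated
        # token cross-attends to that memory and self-attends over prior
        # output tokens. The one-time encoding cost is omitted here.
        return sum(memory_len + t for t in range(output_len))

    if __name__ == "__main__":
        prompt, output, memory = 512, 32, 64   # memory size is a made-up placeholder
        print("decoder-only   :", decoder_only_cost(prompt, output))      # 16880
        print("encoder-decoder:", encoder_decoder_cost(memory, output))   # 2544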

The encoder-decoder approach was significantly faster than LLMs such as Microsoft’s Phi-3.5, which is a decoder-only model. “When comparing Mu to a similarly fine-tuned Phi-3.5-mini, we found that Mu is nearly comparable in performance despite being one-tenth of the size,” Pradeep said.

Those gains are crucial for on-device and real-time applications. “Managing the extensive array of Windows settings posed its own challenges, particularly with overlapping functionalities,” Pradeep said.

The response time was under 500 milliseconds, which aligned with “goals for a responsive and reliable agent in Settings that scaled to hundreds of settings,” Pradeep said.

Microsoft has many genAI technologies that include OpenAI’s ChatGPT and its latest homegrown Phi-4 model, which can generate images, video and text.

Category: Hacking & Security

Malware on Google Play, Apple App Store stole your photos—and crypto

Bleeping Computer - 23 June, 2025 - 18:44
A new mobile crypto-stealing malware called SparkKitty was found in apps on Google Play and the Apple App Store, targeting Android and iOS devices. [...]
Category: Hacking & Security

Has Apple become addicted to ‘No’?

Computerworld.com [Hacking News] - 23 June, 2025 - 18:29

In a world loaded with existential challenge, it should not surprise anyone that Apple faces its own crisis. It should do what any cornered animal will always do and fight hard and dirty to regain freedom. That’s why it’s of concern to once again learn this weekend that Apple is “considering” acquisitions in the generative AI (genAI) space, because by this time in the fight, I want that chatter to be about acquisitions that have already been made.

Look, anyone can consider making a purchase and then come up with a dozen reasons not to go through with it. That’s not hard at all; it’s the inevitable articulation of small-c conservatism, which tends to favor stasis over change. My concern is that Apple’s own growth mindset might have been replaced by a more conservative approach, which means the company becomes really good at finding reasons not to do things, and less good at identifying when it really should do something.

No can’t be the default

Apple’s history is packed with conflict between good ideas the company rejected and brilliant ideas it chose to move forward with. It is arguable that some of the ideas the company has looked at historically are only now becoming viable devices. (I’m thinking of the speculated HomePod as an idea of that kind.) Apple executives have frequently discussed how the company is just as proud of the things it doesn’t do as of those it does. It’s a company instinctively good at saying “No” — until it finds a good reason to say “Yes.”

The problem is that when it comes to genAI, it still feels like there’s a lot of mileage to be had from injecting some creative chaos into the R&D crib. To achieve that, it seems necessary that Apple find the spleen to take a few risks on the M&A journey.

The company can’t simply wander down to the genAI development shops and find reasons not to purchase things; it needs to pick up all the shiniest things it comes across, using whatever financial muscle it takes to ensure they end up in Apple’s hold rather than elsewhere. 

Why must it do this? Because genAI isn’t finished yet.

The genAI evolution continues

Sure, Apple’s widely disclosed challenges with Siri mean it is motivated to try new approaches to push that project ahead, but the truth is that no one — not even OpenAI — really has genAI that is anything other than a hint of what this tech is likely to be able to accomplish in a decade or two. We are still early in the AI race, and that means today’s winners can still lose and those at the back of the pack have an opportunity to get ahead. 

So, it makes sense for Apple to take a few expensive risks, rather than staying inside the safe zone. Does Perplexity have a few tools that could boost Apple Intelligence? Then grab them. Are there others in AI with tools that could help make Siri smart and hardware products sing? 

Bring them in. Take risks. Get hungry, be foolish. Make it happen.

It is also worth thinking about retention at this point. 

Keep them keen

Several pieces by Mark Gurman in recent years tell us that in many cases, people Apple has hired on the purchase of their companies have subsequently jumped ship, as they did not find their happiness. If that is the case, that’s a problem that needs to be fixed; it suggests at least some of the assumptions the company has concerning how it works with its employees must be challenged, and new ways found to ensure acquired staffers actually want to stick around. 

Apple has tried stock options to boost retention. That’s not enough. Money helps, but as Maslow says, agency and empowerment are more important. Steve Jobs understood this, saying during his last D: All Things Digital interview in 2010, “If you want to hire great people and have them stay working for you, [you] have to let them make a lot of decisions and you have to be run by ideas, not hierarchy…. The best ideas have to win — otherwise good people don’t want to stay.” 

I’m not saying Apple has become hierarchical, though I look with suspicion at work-from-home mandates and opposition to employee unionization as hints that hierarchy exists in some parts of the company. What I am saying is that if the old M.O. isn’t working, and if the important new recruits the company needs to tackle genAI don’t want to stick around, then something’s got to change. And if that means a lot more collaboration and empowerment and a few internal changes in approach, that’s a small price to pay in contrast to the global opportunity to lead the AI-driven tech future on a planet seemingly owned by billionaires and technocrats.

Sometimes you’ve got to play your hunches — how else are you going to find what you love?

You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.

Category: Hacking & Security

US Homeland Security warns of escalating Iranian cyberattack risks

Bleeping Computer - 23 June, 2025 - 18:22
The U.S. Department of Homeland Security (DHS) warned over the weekend of escalating cyberattack risks by Iran-backed hacking groups and pro-Iranian hacktivists. [...]
Category: Hacking & Security

AI, cybersecurity, and quantum computing shape UK’s new 10-year economic plan

Computerworld.com [Hacking News] - 23 June, 2025 - 17:43

Artificial intelligence, quantum computing and cybersecurity are “frontier technologies” the UK government plans to prioritize as part of its blueprint to overhaul the nation’s economy and industries over the next decade.

That’s according to its long-awaited industrial strategy policy paper and a separate plan going into more detail on digital and other technologies.

It would perhaps have been bigger news if the government hadn’t put AI, cybersecurity and quantum computing at the heart of its plans, given that it has already trailed this heavily in a sequence of reports, including January’s AI Opportunities Action Plan.

But the hope for the tech sector expressed in the paper, titled The UK’s Modern Industrial Strategy, is still ambitious, including that by 2035 the UK should aim to be one of the world’s top three R&D superpowers and home to a tech business worth a trillion dollars.

All that has to happen in a mere decade, in a country unaccustomed to talking about its future more than one election cycle ahead. It’s also interventionist in tone, an idea at odds with half a century of thinking in Britain, which assumed technology should be left to its own devices.

AI and quantum computing will build the companies of the future. However, because this infrastructure will be vulnerable to disruption, it will need cybersecurity innovation to ensure its operation.

“We will enable new sectors to establish themselves e.g., our rapidly growing AI sector. […] Driving investment into our internationally renowned cybersecurity sector and supporting cutting-edge innovation to address the challenges that prevent widespread technology adoption,” the government wrote in the Digital Technologies Sector Plan.

With a combined value of £1 trillion ($1.4 trillion), the UK’s tech sector is currently the world’s third most valuable, behind only the US and China, the plan calculated.

The Plan’s focus on AI in particular sets out ambitious uptake goals. As soon as 2030, the UK should have several AI growth zones, with 7.5 million people upskilled to use the technology while the country’s AI research capacity will grow twentyfold, the plan projects. By the same date, the CyberASAP accelerator program should be supporting 250 cybersecurity companies and 28 spinouts.

Big interventions

Some optimism is probably justified — the country is home to a good collection of AI expertise for example — but it wouldn’t be Britain if there weren’t doubts.

The first is that while the UK has a reasonable track record at creating AI, cybersecurity, and technology companies, its record of keeping them British is less positive. Two examples are Google getting its hands on AI specialist DeepMind at a bargain-basement price in 2014, and Softbank’s purchase of chip designer Arm two years later. Both are still based in the UK, but with their profits flowing elsewhere.

That’s not always an issue but, without a core of sovereign businesses, it’s debatable whether a country is really in charge of its technology ecosystem in the long run.

A second issue is the size of the government interventions necessary to fuel local technology businesses today from startup to unicorn and beyond. In a sector that thinks in the hundreds of billions, the UK Government’s budget, doled out in tens of millions in a variety of programs, remains more constrained.

There’s also doubt about whether the rest of the UK economy will be able to profit from AI developments.

“According to Cisco’s latest UK AI Readiness Index, only 10% of UK organisations are fully prepared to harness AI’s potential,” said Cisco’s UK and Ireland chief executive, Sarah Walker.

Cisco collaborated on the development of the Government’s plan, but Walker pointed out that its success still depends on overcoming deeper workforce challenges:

“AI adoption and implementation is primarily a people challenge. From traditional IT roles to marketing and supply chain management, almost every job will require AI literacy in the very near future,” she said.

In some parts of the UK, this would be easier than in others. “We need to ensure up-skilling is addressed with equality, to avoid exacerbating economic gaps that already exist across demographics and regions.”

Category: Hacking & Security

Canada says Salt Typhoon hacked telecom firm via Cisco flaw

Bleeping Computer - 23 June, 2025 - 17:23
The Canadian Centre for Cyber Security and the FBI confirm that the Chinese state-sponsored 'Salt Typhoon' hacking group is also targeting Canadian telecommunication firms, breaching a telecom provider in February. [...]
Category: Hacking & Security

REvil ransomware members released after time served on carding charges

Bleeping Computer - 23 June, 2025 - 17:12
Four REvil ransomware members arrested in January 2022 were released by Russia on time served after they pleaded guilty to carding and malware distribution charges. [...]
Category: Hacking & Security

McLaren Health Care says data breach impacts 743,000 patients

Bleeping Computer - 23 June, 2025 - 16:28
McLaren Health Care is warning 743,000 patients that the health system suffered a data breach caused by a July 2024 attack by the INC ransomware gang. [...]
Category: Hacking & Security

GitHub’s AI billing shift signals the end of free enterprise tools era

Computerworld.com [Hacking News] - 23 June, 2025 - 15:11

GitHub began enforcing monthly limits on its most powerful AI coding models this week, marking the latest example of AI companies transitioning users from free or unlimited services to paid subscription tiers once adoption takes hold.

“Monthly premium request allowances for paid GitHub Copilot users are now in effect,” the company said in its update to the Copilot consumptive billing experience, confirming that billing for additional requests now starts at $0.04 each. The enforcement represents the activation of restrictions first announced by GitHub CEO Thomas Dohmke in April.
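
As a worked example of that pricing, the Python sketch below computes a hypothetical monthly overage bill; the $0.04 rate comes from the article, while the 300-request allowance is an invented placeholder, not GitHub's actual quota.

    # Hypothetical Copilot overage math. PRICE_PER_REQUEST is from the
    # article; ALLOWANCE is a made-up placeholder, not GitHub's real quota.
    ALLOWANCE = 300
    PRICE_PER_REQUEST = 0.04

    def monthly_overage_cost(premium_requests_used: int) -> float:
        overage = max(0, premium_requests_used - ALLOWANCE)
        return overage * PRICE_PER_REQUEST

    print(monthly_overage_cost(450))  # 150 extra requests -> $6.00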

Category: Hacking & Security

Steel giant Nucor confirms hackers stole data in recent breach

Bleeping Computer - 23 June, 2025 - 14:28
Nucor, North America's largest steel producer and recycler, has confirmed that attackers behind a recent cybersecurity incident have also stolen data from the company's network. [...]
Category: Hacking & Security

TikTok-style bite-sized videos are invading enterprises

Computerworld.com [Hacking News] - 23 June, 2025 - 13:21

The TikTok-ification of the corporate world is well under way, as more companies turn to video snippets to communicate information to employees and customers. But when it comes to user- and AI-generated content, the rules are different for companies than for casual TikTok or Instagram users — and enterprises need to move cautiously when implementing video-generation tools, analysts said.

“There is a definite rise in the use of short form, digestible video in the corporate workplace,” said Forest Conner, senior director and analyst at Gartner. That’s because video is a more engaging way to share corporate information with employees and a better way to manage time.

“As the storytelling axiom goes, ‘Show, don’t tell,’” Conner said. “Video provides a medium for showing in practice what may be hard to relay in writing.”

Many employees would rather view short videos that summarize a document or meeting, analysts said. As a result, employees themselves are becoming digital creators using new AI-driven video generation and editing tools.

Software from companies like Atlassian, Google, and Synthesia can dynamically create videos for use in presentations, to bolster communications with employees, or to train workers. The tools can create avatars, generate quick scripts, and draw insights using internal AI systems and can sometimes be better than email for sharing quick insights on collaborative projects. (Atlassian just last week introduced new creation tools in its own Loom software that include AI-powered script editing to make videos look better; the new feature doesn’t require re-recording a video.)

In part, the rising use of these video-creation tools is “a reaction to over-meeting,” said Will McKeon-White, senior analyst for infrastructure and operations at Forrester Research. Many employees feel meetings are a waste of time and hinder productivity. As an alternative, they can record short contextual snippets in Loom for use in workflow documents or to send to colleagues — allowing them to get up to speed on projects at their own pace.

“I’ve seen this more in developer environments where teams are building complex applications in a distributed environment without spending huge amounts of time in meetings,” McKeon-White said.

HR departments are finding Loom useful for dynamically creating personalized videos while onboarding new employees, said Sanchan Saxena, head of product for Teamwork Foundations at Atlassian. The quickly generated personalized videos — which Saxena called “Looms” — can include a welcome message with the employee’s name and position and can complement written materials such as employee handbooks and codes of conduct.

“We can all agree there is a faster, richer form of communication when the written document is also accompanied by a visual video that attaches to it,” Saxena said.

AI video generation company Synthesia made its name with a tool where users select an avatar, type in a script, add text or images and can produce a video in a few minutes. Over time, the company has expanded its offerings and is seeing more business uptake, said Alexandru Voica, head of corporate affairs and policy at Synthesia.

Its offerings now include an AI video assistant to convert documents into video summaries and an AI dubbing tool that localizes videos in more than 30 languages. “These products come together to form an AI video platform that covers the entire lifecycle of video production and distribution,” said Voica.

Voica noted how one Synthesia customer, Wise, has seen efficiency gains using the software for compliance and people training, creating “engaging learning experiences across their global workforce.”

Looking ahead, video content as a tool for corporate communications will likely be adopted at a team level, said McKeon-White. “It’s going to come down to the team or the department as for what they want to do in a given scenario,” he said.

Enterprises need to keep many things in mind when including videos in the corporate workflow. Managers, for instance, shouldn’t force videos on employees or create a blanket policy to adopt more video.

They can be useful, but videos are not for everyone, said Jeff Kagan, a technology analyst. “One big mistake companies make is following the preferences of the workers or executives…rather than considering different opinions. Not everyone is cutting edge,” he said.

Companies shouldn’t jump on the video bandwagon too soon, McKeon-White said. If they do, they run the risk of overwhelming employees.

“You don’t want workers suddenly scrolling through 30 hours of video,” he said. “If you are throwing videos onto a shared repository and saying, ‘Hey, go look at that!’ That sucks. That’s not good for anybody.”

There are also many security and compliance issues to keep in mind.

AI can now detect sensitive information, such as license plate numbers, addresses, or confidential documentation, without anyone manually reviewing the video, Conner said. “Organizations need to ensure that any content making it out the door is scrubbed for sensitive information in advance of publication.”
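
A minimal sketch of that pre-publication scrubbing idea, applied to a video transcript or caption rather than to video frames; the two regex patterns are illustrative placeholders, where production systems would use trained detectors.

    import re

    # Toy pre-publication scrub for transcript/caption text. The patterns
    # are illustrative placeholders, not a complete PII detector.
    PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "plate": re.compile(r"\b[A-Z]{2,3}[- ]?\d{3,4}\b"),
    }

    def scrub(text: str) -> str:
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label}]", text)
        return text

    print(scrub("Reach jane.doe@example.com; plate ABC-1234 is visible."))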

And with the rise of generative AI, the problem of deepfakes remains a major concern.

The uncanny accuracy of AI video avatars creates risks for executives, where their likeness could be cloned from their available video content and then used in damaging ways, Conner said.

“This has yet to happen in practice, but my guess is it’s only a matter of time,” Conner said.

Category: Hacking & Security

Despite its ubiquity, RAG-enhanced AI still poses accuracy and safety risks

Computerworld.com [Hacking News] - 23 June, 2025 - 12:00

Retrieval-Augmented Generation (RAG) — a method used by genAI tools like OpenAI’s ChatGPT to provide more accurate and informed answers — is becoming a cornerstone for generative AI (genAI) tools, “providing implementation flexibility, enhanced explainability and composability with LLMs,” according to a recent study by Gartner Research.

And by 2028, 80% of genAI business apps will be developed on existing data management platforms, with RAG a key part of future deployments.

There’s only one problem: RAG isn’t always effective. In fact, RAG, which assists genAI technologies by looking up information instead of relying only on memory, could actually be making genAI models less safe and reliable, according to recent research.

Alan Nichol, CTO at conversational AI vendor Rasa, called RAG “just a buzzword” that simply means “adding a loop around large language models” and data retrieval. The hype is overblown, he said, adding that the use of “while” or “if” statements by RAG is treated like a breakthrough.

(RAG systems typically include logic that might resemble “if” or “while” conditions, such as “if” a query requires external knowledge, retrieve documents from a knowledge base, and “while” an answer might be inaccurate re-query the database or refine the result.) 
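
A minimal sketch of that loop, assuming a hypothetical llm() completion function and a toy word-overlap retriever standing in for a real embedding model and vector database:

    # Minimal retrieve-then-generate loop. llm is any text-in/text-out
    # callable; similarity() is a toy word-overlap stand-in for embeddings.

    def similarity(query: str, doc: str) -> float:
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d) / max(len(q), 1)

    def retrieve(query: str, docs: list[str], top_k: int = 2,
                 threshold: float = 0.2) -> list[str]:
        ranked = sorted(docs, key=lambda d: similarity(query, d), reverse=True)
        return [d for d in ranked[:top_k] if similarity(query, d) >= threshold]

    def answer(query: str, docs: list[str], llm, max_retries: int = 2) -> str:
        context = retrieve(query, docs)
        if not context:                    # "if": no grounding found, answer unaided
            return llm(query)
        reply = llm(f"Context: {context}\nQuestion: {query}")
        retries = 0
        # "while": toy heuristic for a weak answer; re-query more broadly and retry
        while "don't know" in reply.lower() and retries < max_retries:
            context = retrieve(query, docs, top_k=4)
            reply = llm(f"Context: {context}\nQuestion: {query}")
            retries += 1
        return reply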

“…Top web [RAG] agents still only succeed 25% of the time, which is unacceptable in real software,” Nichol said in an earlier interview with Computerworld. “Instead, developers should focus on writing clear business logic and use LLMs to structure user input and polish search results. It’s not going to solve your problem, but it is going to feel like it is.”

Two studies, one by Bloomberg and another by the Association for Computational Linguistics (ACL), found that using RAG with large language models (LLMs) can reduce their safety, even when both the LLMs and the documents they access are sound. The studies highlighted the need for safety research and red-teaming designed for RAG settings.

Both studies found that “unsafe” outputs such as misinformation or privacy risks increased under RAG, prompting a closer look at whether retrieved documents were to blame. The key takeaway: RAG needs strong guardrails and red teams, researchers who actively try to find flaws, vulnerabilities, or weaknesses in a system — often by thinking like an adversary.

How RAG works — and causes security risks

One way to think about RAG and how it works is to compare a typical genAI model to a student answering questions just from memory. The student can often answer correctly, but the information could be outdated or incomplete.

A RAG system is like a student who says, “Wait, let me check my textbook or notes first,” then gives you an answer based on what they found, plus their own understanding.

Iris Zarecki, CEO of data integration services provider K2view, said most organizations now using RAG augment their genAI models with internal unstructured data such as manuals, knowledge bases, and websites. But enterprises also need to include fragmented structured data, such as customer information, to fully unlock RAG’s potential.

“For example, when customer data like customer statements, payments, and past email and call interactions with the company are retrieved by the RAG framework and fed to the LLM, it can generate a much more personalized and accurate response,” Zarecki said.
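
A sketch of what that retrieval step might hand the LLM, with invented record fields; the point is that structured rows and unstructured snippets are folded into one grounded prompt.

    # Invented customer record and knowledge-base snippet, merged into one
    # prompt so the model can personalize its answer.
    customer = {
        "name": "Alex",
        "last_statement": "2025-05-31",
        "open_ticket": "late-payment fee dispute",
    }
    retrieved = ["Fee disputes are resolved within 5 business days."]

    prompt = (
        "Customer record:\n"
        + "\n".join(f"- {k}: {v}" for k, v in customer.items())
        + "\n\nKnowledge base:\n"
        + "\n".join(f"- {d}" for d in retrieved)
        + "\n\nWrite a personalized reply about the open ticket."
    )
    print(prompt)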

Because RAG can increase security risks involving unverified info and prompt injection, Zarecki said, enterprises should vet sources, sanitize documents, enforce retrieval limits, and validate outputs.
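
Those four controls might look like the following sketch; the injection pattern, allow-list, and limits are placeholders rather than a vetted security policy.

    import re

    # Placeholder guardrails: vet sources, sanitize documents, cap
    # retrieval, and sanity-check the output against its context.
    ALLOWED_SOURCES = {"hr-handbook", "product-manual"}
    INJECTION = re.compile(r"ignore (all|previous) instructions", re.I)
    MAX_DOCS = 3

    def sanitize(doc: str) -> str:
        return INJECTION.sub("[removed]", doc)

    def guard(docs: list[tuple[str, str]]) -> list[str]:
        # docs is a list of (source, text) pairs
        vetted = [text for source, text in docs if source in ALLOWED_SOURCES]
        return [sanitize(text) for text in vetted][:MAX_DOCS]

    def validate(output: str, context: list[str]) -> bool:
        # Crude check: every sentence should share vocabulary with the context.
        ctx = set(" ".join(context).lower().split())
        return all(set(s.lower().split()) & ctx
                   for s in output.split(".") if s.strip())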

RAG can also create a gateway through firewalls, allowing for data leakage, according to Ram Palaniappan, CTO at TEKsystems Global Services, a tech consulting firm. “This opens a huge number of challenges in enabling secure access and ensuring the data doesn’t end up in the public domain,” Palaniappan said. “RAG poses data leakage challenges, model manipulation and poisoning challenges, securing vector DB, etc. Hence, security and data governance become very critical with RAG architecture.”

(Vector databases are commonly used in applications involving RAG, semantic search, AI agents, and recommendation systems.)

Palaniappan expects the RAG space to rapidly evolve, with improvements in security and governance through tools like the Model Context Protocol and Agent-to-Agent Protocol (A2A). “As with any emerging tech, we’ll see ongoing changes in usage, regulation, and standards,” he said. “Key areas advancing include real-time AI monitoring, threat detection, and evolving approaches to ethics and bias.”

Large Reasoning Models are also highly flawed

Apple recently published a research paper evaluating Large Reasoning Models (LRMs) such as Gemini Flash Thinking, Claude 3.7 Sonnet Thinking and OpenAI’s o3-mini using logical puzzles of varying difficulty. Like RAG, LRMs are designed to provide better responses by incorporating a level of step-by-step reasoning into their tasks.

Apple’s “Illusion of Thinking” study found that as the complexity of tasks increased, both standard LLMs and LRMs saw a significant decline in accuracy — eventually reaching near-zero performance. Notably, LRMs often reduced their reasoning efforts as tasks got more difficult, indicating a tendency to “quit” rather than persist through challenges.

Even when given explicit algorithms, LRMs didn’t improve, indicating they rely on pattern recognition rather than true understanding, challenging assumptions about AI’s path to “true intelligence.”

While LRMs perform well on benchmarks, their actual reasoning abilities and limitations are not well understood. Study results show LRMs break down on complex tasks, sometimes performing worse than standard models. Their reasoning effort increases with complexity only up to a point, then unexpectedly drops.

LRMs also struggle with consistent logical reasoning and exact computation, raising questions about their true reasoning capabilities, the study found. “The fundamental benefits and limitations of LRMs remain insufficiently understood,” Apple said. “Critical questions still persist: Are these models capable of generalizable reasoning, or are they leveraging different forms of pattern matching?”

Reverse RAG can improve accuracy

A newer approach, Reverse RAG (RRAG), aims to improve accuracy by adding verification and better document handling, Gartner Senior Director Analyst Prasad Pore said. Unlike typical RAG, which uses a workflow that retrieves data and then generates a response, Reverse RAG flips it to generate an answer, retrieve data to verify that answer and then regenerate that answer to be passed along to the user.

First, the model drafts potential facts or queries, then fetches supporting documents and rigorously checks each claim against those sources. Reverse RAG emphasizes fact-level verification and traceability, making outputs more reliable and auditable.
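
A sketch of that flip, reusing the toy retrieve() from the earlier RAG sketch and a hypothetical llm() callable; splitting the draft into claims by sentence is a deliberate oversimplification.

    # Generate first, then retrieve evidence per claim, then regenerate.
    def reverse_rag(query: str, docs: list[str], llm) -> str:
        draft = llm(query)                                  # 1. draft an answer
        claims = [c.strip() for c in draft.split(".") if c.strip()]
        supported = [c for c in claims                      # 2. verify each claim
                     if retrieve(c, docs, top_k=1)]
        if len(supported) == len(claims):
            return draft                                    # every claim grounded
        return llm(                                         # 3. regenerate
            f"Question: {query}\nUse only these verified claims: {supported}"
        )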

RRAG represents a significant evolution in how LLMs access, verify and generate information, Pore said. “Although traditional RAG has transformed AI reliability by connecting models to external knowledge sources and making completions contextual, RRAG offers novel approaches of verification and document handling that address challenges in genAI applications related to fact checking and truthfulness of completions.”

The bottom line is that RAG and LRM alone aren’t silver bullets, according to Zarecki. Additional methods to improve genAI output quality must include:

  • Structured grounding: Include fragmented structured data, such as customer information, in RAG retrieval.
  • Fine-tuned guardrails: Zero-shot or few-shot prompts with constraints, using control tokens or instruction tuning.
  • Human-in-the-loop oversight: Especially important for high-risk domains such as healthcare, finance, or legal.
  • Multi-stage reasoning: Breaking tasks into retrieval → reasoning → generation improves factuality and reduces errors, especially when combined with tool use or function calling.

Organizations must also organize enterprise data for genAI and RAG by ensuring privacy, real-time access, quality, scalability, and instant availability to meet chatbot latency needs.

“This means that data must address requirements like data guardrails for privacy and security, real-time integration and retrieval, data quality, and scalability at controlled costs,” Zarecki said. “Another critical requirement is the freshness of the data, and the ability of the data to be available to the LLM in split seconds, because of the conversational latency required for a chatbot.”

Category: Hacking & Security

How to spot AI washing in vendor marketing

Computerworld.com [Hacking News] - 23 June, 2025 - 08:47
This agent is a robot

Agentic AI and AI agents are hotter than lava-fried chicken right now, and this week CIO defined how the two differ from each other. We reported that the two related technologies can work together, but CIOs should understand the difference to protect against vendor hype and obfuscation.  

And it is vendor hype that is exercising the readers of CIO, who wanted to know from Smart Answers how to spot vendor AI washing. Smart Answers may be an AI-infused chatbot, but it’s fueled by human intelligence, allowing it to know its own limitations. 

It defines AI washing as the misrepresentation of basic automation or traditional algorithms as fully autonomous AI agents. Such false agents don’t possess true independent decision-making capabilities and cannot reason through multiple steps or act independently.

Find out: What is agent washing in AI marketing? 

Windows 10: not dead yet

The imminent demise of support for Windows 10 is causing much consternation in enterprise IT. But is Microsoft really axing Windows 10? This week Computerworld reported the definitive need-to-know on the subject. This prompted readers to ask many questions of Smart Answers, all related to the end of Windows 10. Most often queried was the future of Microsoft 365 apps on Windows 10 after support ends.

It’s good news and bad news. While the apps will continue to function and receive security updates until Oct. 10, 2028, users may encounter performance issues and limited support. Microsoft encourages users to upgrade to Windows 11 to avoid these potential problems. (Well, it would.) 

Find out: What happens using Microsoft 365 apps on Windows 10 after 2025?  

You say IT, we say OT

The convergence of IT and operational technology (OT) can improve security, optimize processes, and reduce costs. This week CIO reported on how some large companies do it.

Not surprisingly, this prompted readers to ask Smart Answers how IT/OT collaboration can drive digital transformation. Within the answer lies one very salient point: some leaders believe that in certain sectors, rapid IT/OT convergence is critical to achieving transformation.

Find out: How is IT/OT convergence enabling digital transformation in different industries?  

About Smart Answers 

Smart Answers is an AI-based chatbot tool designed to help you discover content, answer questions, and go deep on the topics that matter to you. Each week we send you the three most popular questions asked by our readers, and the answers Smart Answers provides. 

Category: Hacking & Security

CoinMarketCap briefly hacked to drain crypto wallets via fake Web3 popup

Bleeping Computer - 22 June, 2025 - 23:47
CoinMarketCap, the popular cryptocurrency price tracking site, suffered a website supply chain attack that exposed site visitors to a wallet-drainer campaign designed to steal their crypto. [...]
Category: Hacking & Security

Oxford City Council suffers breach exposing two decades of data

Bleeping Computer - 22 June, 2025 - 17:17
Oxford City Council warns it suffered a data breach where attackers accessed personally identifiable information from legacy systems. [...]
Category: Hacking & Security

Austrian government wants to buy spyware and break encrypted communications. It would surveil 30 people a year

Zive.cz - security - 22 June, 2025 - 16:45
On June 18, the Austrian government agreed on a plan to surveil suspects. It wants to buy spyware capable of breaking end-to-end encryption. The government's proposal now heads to parliament.
Category: Hacking & Security

Windows Snipping Tool now lets you create animated GIF recordings

Bleeping Computer - 22 June, 2025 - 16:11
Microsoft announced that the Windows screenshot and screencast Snipping Tool utility is getting support for exporting animated GIF recordings. [...]
Category: Hacking & Security

Secure RHEL Clones Chart Diverging Paths

LinuxSecurity.com - 22 June, 2025 - 14:27
When you're architecting a secure Linux environment, understanding where your operating system stands, both in terms of hardware compatibility and security features, isn't optional. It's critical. With RHEL 10 redefining what enterprise Linux should look like and Rocky Linux 10 and AlmaLinux 10 adapting to meet the demands of downstream users, the landscape has shifted.
Category: Hacking & Security

Russian hackers bypass Gmail MFA using stolen app passwords

Bleeping Computer - 21 June, 2025 - 17:13
Russian hackers bypass multi-factor authentication and access Gmail accounts by leveraging app-specific passwords in advanced social engineering attacks that impersonate U.S. Department of State officials. [...]
Category: Hacking & Security

WordPress Motors theme flaw mass-exploited to hijack admin accounts

Bleeping Computer - 21 June, 2025 - 16:09
Hackers are exploiting a critical privilege escalation vulnerability in the WordPress theme "Motors" to hijack administrator accounts and gain complete control of a targeted site. [...]
Category: Hacking & Security