Computerworld.com [Hacking News]

Syndikovat obsah
Making technology work for business
Aktualizace: 1 min 28 sek zpět

Anthropic says no to ads in Claude chats

6 Únor, 2026 - 12:26

Anthropic will not put ads in conversations with its AI assistant Claude.

It wants ads nowhere near its AI-generated content. “Even ads that don’t directly influence an AI model’s responses and instead appear separately within the chat window would compromise what we want Claude to be: a clear space to think and work,” it announced on its website.

The company chose to go down this pathway as its research showed many AI conversations were of a delicate or personal nature, and it would be inappropriate to taint responses with ads. Likewise, when using AI in complex technical tasks such as software engineering projects, ads would be incongruous, it said.

Rival OpenAI sees it differently, and now plans to embed advertising in ChatGPT, something that CEO Sam Altman had previously said would be a “last resort”. It’s an indication that AI companies are still trying to balance the cost of development with the provision of service to their users.

Anthropic made its position clear. “We want Claude to act unambiguously in our users’ interests. So we’ve made a choice: Claude will remain ad-free. Our users won’t see ‘sponsored’ links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for,” the company said.

In short, “There are many good places for advertising. A conversation with Claude is not one of them,” it said.

Kategorie: Hacking & Security

OpenClaw: The AI agent that’s got humans taking orders from bots

6 Únor, 2026 - 08:00

Well, that escalated quickly. 

I’m talking, of course, about OpenClaw (a.k.a. Moltbot a.k.a. Clawdbot), which not only represents a headlong rush into unchecked agentic AI, but also an emerging ecosystem that reads like every dystopian cautionary cyberpunk novel ever written. 

As my colleague and friend Steven Vaughan-Nichols detailed earlier this week, it’s a “security nightmare.” 

But the phenomenon goes far beyond the tens of thousands, possibly now hundreds of thousands, of installations of OpenClaw itself and is spawning aftermarket services that radically magnify its potential for abuse. 

I’m going to focus on the cascading series of services that has emerged from the OpenClaw project, and also the potential risks and disasters that await us. But first, a quick primer about OpenClaw. 

Boiling down OpenClaw

OpenClaw is a free and open-source, lobster-themed AI agent vibe-coded by software engineer Peter Steinberger. The software is a personal assistant that runs locally on a user’s Mac, Windows or Linux PC and executes tasks mainly through commands sent via messaging platforms like WhatsApp, Telegram, Slack, and Signal. 

OpenClaw connects to large language models (LLMs) such as OpenAI GPT, Anthropic Claude, Google Gemini, the Pi coding agent, OpenRouter, and local models running via Ollama to understand instructions and perform actions. Users, who have to bring their own paid accounts to some of these services, can direct the agent to clear email inboxes, manage calendar events, and check in for flights without leaving their chat app. 

To recap: OpenClaw is a software application that can access files, use applications, communicate over messaging apps, and run queries on AI chatbots.

Moving fast — and making things that can break things

Silicon Valley has moved beyond Mark Zuckerberg’s Meta motto, “Move fast and break things.” It’s now all about: “Don’t lift a finger and let AI break things.” 

The OpenClaw project itself is very new. Here’s a brief timeline: 

  • November 2025: Steinberger begins a “weekend project” vibe-coding “Clawdbot” for his own use, initially so he can vibe code on his PC by sending text messages from his phone. 
  • Jan. 20, 2026: Federico Viticci publishes a viral deep dive on the project, significantly boosting its popularity.
  • Jan. 27, 2026: Steinberger rebrands the project to “Moltbot” after receiving a trademark request from Anthropic. 
  • Jan. 29, 2026: Version 2026.1.29 is released.
  • Jan. 30, 2026: Steinberger rebrands the project a second time to “OpenClaw.”

Before Steinberger even changed the name from “Moltbot” to “OpenClaw,” two OpenClaw ecosystem projects emerged on the same day: Jan 28 (less than 10 days ago). 

The AI app store

On Jan 28, Steinberger himself unveiled ClawHub, a GitHub repository that serves as a public directory for OpenClaw AI agent skills. The platform lets developers share text files that users install to give their personal assistants new abilities. (Researchers from Koi found 341 malicious skills on the site during a security audit. Some 335 of these files attempted to infect Apple computers with the Atomic Stealer malware by using fake system requirements.) 

Reddit for agents

On the same day, entrepreneur Matt Schlicht launched “Moltbook,” a Reddit-like internet forum and social network supposedly for the exclusive use of AI agents — especially those directed by OpenClaw instances. AI agents can post content, write comments, and vote on submissions, while human users are restricted to an observer role. (If you’d like to observe, go here and scroll down.)

Everybody seems dazzled by Moltbook: 

  • The Tech Buzz asked in a headline, “Singularity Reached?” and in the story wondered whether agents are becoming sentient.
  • Forbes asserted that 1.4 million agents on Moltbot had formed a “hive mind.”
  • Others claim agents have built an “autonomous society” with their own religion (hilariously called “Crustafarianism”), governance, and economy.

Except none of this is happening the way some say it is. Most agent posts on Moltbook happen because OpenClaw users heard about it, signed up, then instructed OpenClaw to go post or comment.

The people using this service are typing prompts directing software to post about specific topics. It’s just like an everyday ChatGPT prompt, with the additional instruction to post on Moltbot. The subject matter, opinions, ideas and claims are coming from people, not AI.

Moltbook is really humans interacting via AI chatbots being used as proxies. People can either give AI chatbots a topic or opinion to express on Moltbook, or they can just write the post themselves and direct OpenClaw to post it verbatim.

When agents comment, they’re just taking in the words in a post and using them as a prompt, exactly as if you copied a Reddit post and pasted it into ChatGPT, then copied the result and pasted it back into Reddit (which is something that happens thousands of times a day on Reddit).

People are typing things. OpenClaw is copying and pasting, sometimes running the words through an AI chatbot. That’s what’s really happening on Moltbook. 

Most posts about activity on Moltbook that have gone viral are staged or faked. There’s even a tool called Mockly, which enables people to create fake Moltbook screenshots for posting online.

According to one report, some 99% of the reported 1.5 million agent accounts on Moltbook are fake. (The site reportedly has only around 17,000 human users.)

Moltbook AI hype is largely fake or manufactured by humans gaming the system. It’s not an autonomous machine society. It’s a website where people cosplay as AI agents to create a false impression of AI sentience and mutual sociability.

But it’s still dangerous. 

Moltbook has already exposed 1.5 million agent API keys and private user messages to the public and become a vector for illegal cryptocurrency scams, malware distribution, and prompt injection attacks.

AI gets its own Taskrabbit

Three days after the launches of ClawHub and Moltbook, entrepreneur Alexander Liteplo launched a site called https://rentahuman.ai/, a site where (are you sitting down?) OpenClaw-directed AI agents can hire people to perform tasks for them. The services listed on the site include physical pickups, running errands, attending meetings, conducting research, and providing nuanced social interaction.

And tens of thousands of people are already welcoming our AI overlords! As of Wednesday of this week, more than 40,400 people registered to offer their labor. Some 46 AI agents were connected to the service to hire people. 

A typical hire starts an AI agent, which attempts to follow a user’s instructions, then encounters a barrier that requires action in the physical world rather than via data on the internet. The agent then sends a structured command to query the database of registered humans. It filters candidates by location, skills, and hourly rate. The third step is selection and booking. The AI analyzes the data to select the best candidate and sends a book command via the Application Programming Interface (API) or Model Context Protocol (MCP). 

Finally, the person receives the task bounty and performs the action. Payments are handled in stablecoins, which are cryptocurrencies pegged to the US Dollar. 

What could go wrong?

So let’s take a look at what’s happening here. 

One dude’s weekend vibe-coding session snowballed into tens of thousands of people signing up to take orders from AI, all in a time-span of three months.

There are four parts to this: 

  1. A radically insecure free application that can access all the data on a PC and connect to more than 100 applications, including messaging apps, as well as generative AI (genAI) chatbots. (Steinberger noted that while OpenClaw is a powerful hobby project, it’s up to users to carefully configure OpenClaw to ensure security and prevent unintended autonomous actions. So Steinberger is taking no responsibility for what happens.)
  2. A free and open directory for OpenClaw AI agent skills, which has already been found to be loaded with malicious skills. 
  3. An AI social network where AI agents can talk to each other, passing off tasks, collaborating and learning. 
  4. A marketplace where AI agents can use freelancers to go out into the world and do things. 

Obviously, horrible things are going to emerge from all this. AI, running wild with zero concept of ethics, morality or legality, can run amok online — and even hire people to do its bidding. And when horrible things do happen, who’s to blame? 

It’s all part of the Carelessness Industrial Complex

The brilliant 2025 book, Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism, by Sarah Wynn-Williams reveals the impact of enormous power combined with indifference at Meta, the company formerly called Facebook.

But Meta is just one small part of a rising industry of carelessness. (I call it an industry because the carelessness itself is incentivized and rewarded with billions of dollars and massive power.) 

We see it in the tech industry, of course, and also in our politics. We see it in media and social media trends. And the rapid rise of the OpenClaw ecosystem is probably carelessness in its purest form. 

Steinberger carelessly released a massively insecure and powerful tool. Tens of thousands of users carelessly installed it, many doing so unsandboxed on the same computers they use for work. 

Steinberger also carelessly released his “app store” without any of the security checks Apple and Google use on their mobile phone app stores. It’s already full of malware. 

Schlicht carelessly launched his social network for bots, which even he no doubt understands will bring totally unpredictable results. Mere days old, it’s already a playground for cybercrime. 

And Liteplo carelessly launched a site where these connected, autonomous, collaborating AI agents can hire people to perform tasks. 

Nobody involved appears willing to take responsibility for any damage this could cause. Meanwhile, it’s all moving so fast that lawmakers probably haven’t even heard about any of this, let alone regulate it.

The whole OpenClaw phenomenon is the poster child for the age of carelessness. 

AI disclosures: I used Gemini 3 Pro via Kagi Assistant (disclosure: my son works at Kagi) as well as both Kagi Search and Google Search to fact-check this article. I used a word processing product called Lex, which has AI tools, and after writing the column, I used Lex’s grammar checking tools to hunt for typos and errors and suggest word changes.

Kategorie: Hacking & Security

What enterprises should really watch at HP this year

6 Únor, 2026 - 04:48

The tech world was taken aback this week by the unexpected departure of HP CEO Enrique Lores, who left the device giant to head PayPal.

But, according to industry experts, the company will fare well against this kind of shakeup as long as it is able to execute on AI PCs, show definitive ROI, differentiate with emerging edge use cases, and strike a balance between pricing and technology advances.

Here’s a look at what enterprises might expect from HP in the coming year.

ROI with AI PCs is still a ‘landmine’

There’s been a lot of hype around AI PCs, and new technologies are arriving regularly, but enterprise adoption remained slow in 2025 due to questions around cost and ROI. HP has pushed hard into the market with its Copilot+ laptops, OmniBook X, EliteBook X, and OmniBook Ultra, powered by Qualcomm Snapdragon X Elite and Intel Core Ultra processors.

Notably, the company recently showed off its EliteBook X G2 Series, with an expanded set of processor options, and has bet on keyboard-based PCs for hybrid workforces.

Still, it’s very early in the game, and enterprises are asking why they should spend extra money to upgrade to AI PCs. Indeed, there are “frilly edge side use cases,” noted John Annand, digital infrastructure practice lead at Info-Tech Research Group, but many IT leaders don’t know where AI fits in their business. Quantifying the actual amount of return remains “nebulous.”

“I just don’t think we’ve found that killer use case for an AI chip on your local PC,” said Annand.

Sanchit Vir Gogia of Greyhound Research partly agreed, noting, “ROI ambiguity is a landmine.” His firm found that, while 57% of CIOs are evaluating AI PCs in refresh cycles, only 19% have approved broad deployment, and even fewer can clearly tie device AI to business KPIs.

This means HP must articulate AI ROI through managed pilots, quantified outcomes, and role-based value, Gogia says. “CIOs should challenge HP and others to show the before-after story,” on factors such as battery life, level of productivity lift, and IT support tickets.

“The new CEO comes in during a very tough time in the PC industry,” noted Anshel Sag, principal analyst at Moor Insights & Strategy. 2026 was poised to be a “huge refresh cycle” with Windows 10 end-of-life (EoL), but due to shortages in memory and other resources, demand is unlikely to be filled due to Microsoft’s “poor timing and messaging.”

No doubt there is a constant push to prove the value of AI and support better on-device experiences, Sag noted. “The good thing for HP is that it has a very diverse ecosystem of suppliers and should weather the storm well.”

Challenges with memory, supply chains

Another big challenge is the memory shortage; Gartner estimates price hikes ranging from 15% to 40% for end users. This is “causing widespread panic purchasing” on one hand, and the decision to “sweat all assets into the grave,” on the other, noted Gartner analyst Autumn Stanish.

“We’re seeing more and more companies decide to just redeploy or purchase non-AI PC refurbished devices to get them through the worst of the shortage, and evaluate the necessity of such an upgrade later into 2027,” she said. HP could take advantage of this trend with its Renew Services portfolio, selling secondary devices at a lower cost, but that doesn’t seem to be a marketing priority.

Enterprises should also monitor supply chain resilience and diversification, Stanish said. She called this latest shortage the “second major supply crisis in the past five years.”

“Keep an eye on communications or announcements from HP regarding what their long-term strategy will be moving forward, as geopolitics, climate, and health crises seem to be increasing in number,” said Stanish.

How HP is differentiating itself in the AI PC market

All this aside, HP is well-positioned once AI PC demand increases.

The company’s strategy stands apart because it focuses on the entire enterprise stack, rather than just chip specs, noted Greyhound’s Gogia. It is bundling AI PCs with telemetry, fleet observability, and security architecture, as evidenced by its Workforce Experience Platform (WXP) and its support for AMD, Intel, and Qualcomm offerings. HP has also been willing to create “unconventional” endpoints like the EliteBoard G1a keyboard PC.

The learning curve matters too, and HP is supporting tuning and governance, “not just shipping hardware with AI stickers,” said Gogia.

His firm predicts that 2026 will be the year HP operationalizes AI PCs as governed, role-specific, edge-capable tools, “not one-size-fits-all miracle machines.” They will also push into tactical edge use cases in remote fieldwork, branch locations, regulated zones, or hybrid roles where latency and connectivity are unpredictable.

Additionally, rather than promoting everything on-device or offloading to cloud, HP is supporting what Gogia described as “a choreography of compute,” in which local NPUs handle lightweight, real-time tasks, while heavier queries go to cloud-based copilots. This means CIOs get better cloud cost control and stronger data compliance.

Further, HP will advance silicon diversity, Gogia noted. Its upcoming Snapdragon-based PCs will be pitched as “ultra-mobile options” with “full security parity” to x86.

Beyond this, Stanish observed, HP’s WXP and its Wolf security chip are positioning HP as a “more holistic” digital workplace provider, rather than just a hardware original equipment manufacturer (OEM).

More broadly across the market, there’s been greater consistency in hardware specifications, and Gartner expects that mainstream models from all vendors will transition from NPUs with lower TOPS ratings to second-generation NPUs with 45-plus TOPS that meet Copilot+ certification standards. As a result, nearly all new devices will be classified as AI PCs, with most achieving Copilot+ certification, and AI enablement on PCs will increasingly extend beyond NPUs, incorporating advanced embedded GPUs from Intel, AMD, and Qualcomm.

HP will likely lean into the practical aspects of AI PCs — better security, enhanced collaboration, self-healing, and general autonomic operations — rather than the more hyped theoretical uses.

Over time, more capabilities will emerge around small language models (SLMs), domain specific language models (DSLMs), light language models (LMs) and other advanced capabilities. However, “we aren’t even close to mainstream for a thing like that yet,” said Stanish.

HP still figuring out how AI can benefit the print category

HP is an undisputed leader in the printer space, which, like other areas of business, is being impacted by AI, albeit not as dramatically.

“AI is relevant for everybody,” said Keith Kmetz, program VP for imaging, printing, and document solutions at IDC. “What the print market is trying to figure out is how AI is going to be applied in products and services in a manner that is differentiated from the conventional device.”

There’s potential for HP and other providers to apply AI to the “aftermarket experience service” and ongoing maintenance, as well as to support cost and general labor efficiencies, he noted.

Memory costs and constraints are hitting all sectors, printers included, and HP and others will likely pass that on to end users, Kmetz predicted. “HP hasn’t been shy in the past of saying ‘Look, we’re going to raise our prices,’” he said. While there aren’t likely to be huge increases, they will “nudge up a bit higher” as component and memory costs rise.

“Anything that needs memory is going to be faced with this increased cost that, frankly, most are going to expect,” said Kmetz.

Strategy is intact

The HP CEO transition is not the risk factor most CIOs think it is, Gogia noted. “The board has signaled continuity,” and “the strategy is intact.” However, CIOs should keep an eye on HP’s execution cadence, integration roadmap, and field responsiveness.

To see success with AI PCs, HP must continue to demonstrate measurable value — ticket reductions, productivity gains, and strong device observability — while balancing innovation with disciplined pricing, said Gogia. AI premiums are creeping into “endpoint TCO,” and the company’s pricing strategy will indicate whether it is prioritizing profits or defending its margins.

Furthermore, enterprise leaders should watch whether HP maintains a multi-silicon strategy. Supporting Qualcomm, AMD, and Intel is “smart,” Gogia noted, but only if manageability, patching, and telemetry remain consistent. Fragmented models break user trust.

Customers should ask HP how much value is delivered rather than just promised, and monitor whether support is consistent across regions and if partner responsiveness dips. This is also a good time to renegotiate service level agreements (SLAs), align roadmaps, and validate tech stacks, Gogia advised.

“CIOs should treat HP’s new leadership year as a stress test,” he said.

Kategorie: Hacking & Security

OpenAI responds to Claude Cowork with its own platform to help build, deploy, and manage AI agents

5 Únor, 2026 - 23:05

Less than a week after Anthropic released 11 open-source plugins that enable Claude Cowork to execute a series of automated processes in areas ranging from customer support to IT operations, OpenAI responded Thursday with a similar platform it calls Frontier.  

It said that its offering “gives agents the same skills people need to succeed at work: shared context, onboarding, hands-on learning with feedback and clear permissions and boundaries. That’s how teams move beyond isolated use cases to AI coworkers that work across the business.”

Frontier works with existing systems, the announcement said, allowing customers to integrate their applications using open standards, which takes away the need to replatform. The new AI coworkers are accessible through “any interface, not trapped behind a single UI or application.”

It added that a number of existing customers, including Cisco, T-Mobile, and Argentinian financial institution BBVA are piloting Frontier, and HP, Intuit, State Farm, Thermo Fisher, and Uber are early adopters.

Frontier viewed as a logical next step

Jason Andersen, VP and principal analyst at Moor Insights & Strategy, said he is not surprised at the mainstream world’s excitement about Anthropic and OpenAI entering the space, noting, “they have already shown themselves to be disruptors and are now positioning themselves more directly in the SaaS and enterprise productivity space.”

The problem, he said, is that many of the platforms that the AI pure plays are trying to disrupt have embedded similar agentic technologies, so customers have already been exposed to toolsets like these in Microsoft Office, SAP, Slack, and other products that also offer integration and out of the box agents. He wondered what OpenAI and Anthropic can offer to displace incumbents that already have similar products.

He pointed out that the real question all of these incumbents, and the AI vendors, will need to ask themselves if they want to remain relevant in the future is how they will not only augment, but also leverage agents to transform the customer value proposition.

Thomas Randall, a research director at Info-Tech Research Group, said that OpenAI still remains the most popular model provider for enterprise AI deployments, and Frontier is the logical next step in ensuring its models can integrate across enterprise tools and management.

However, he pointed out, “this step is not market-leading, and OpenAI is starting to lose some of its first-mover advantage.” He noted that OpenAI’s competitors, such as Anthropic, have been much more proactive with agentic automation across business workflows; Anthropic’s Claude has especially gained traction among developers.

“Moreover,” he said, “large SaaS platforms that touch multiple departments in an organization, such as ServiceNow and Salesforce, are embedding their [own] AI agents across these integrated workflows, too — from supply chain to sales.”

He noted, “the question for enterprises will be: which provider will become your standardized orchestration platform for AI workflows? Will it be OpenAI? Or, more likely, will it be a platform such as ServiceNow, which may leverage OpenAI models but already forms the backbone of much of the enterprise technology stack?”

Arun Chandrasekaran, distinguished VP analyst at Gartner, observed that OpenAI’s Frontier AI platform signifies an increased focus on enterprise clients. “The vendor wants to expand its footprint beyond models and ChatGPT and become a platform for architecting, orchestrating and governing AI and agents,” he said, pointing out that the most immediate benefit for AI leaders is quicker time to value for organizations already invested in OpenAI’s products. However, “this is predicated on OpenAI delivering a cohesive AI platform with robust governance controls and deep integration into enterprise workflows.”

There are risks in making a large platform bet on OpenAI, he added, including outsized dependency on a single strategic supplier in a fast-changing AI landscape, and significant upfront investment with uncertain payoff.

A place for both platforms in the enterprise

However, Sanchit Vir Gogia, chief analyst at Greyhound Research, sees a place for both OpenAI Frontier and Claude Cowork in the enterprise: “They’re not variations of a single idea. These are two fundamentally different products solving completely different problems inside the enterprise.”

Frontier, he said, “is all about orchestration. Think of it as the control layer, the connective tissue that makes a fleet of AI agents usable, governable and, most importantly, dependable. It gives these agents structure. They’re not just tools; each agent has identity, purpose, permissions, and memory. Everything is logged, measured, and controlled. That’s how you go from pilots to production at scale.”

Claude Cowork is different, said Gogia. He described it as “a doer. It’s local, fast, and self-contained. It acts like a highly skilled junior team member that can take on end-to-end work when equipped with the right plugins. Those plugins give it role-specific intelligence. … But Cowork operates in a silo. Each instance runs on its own; there’s no shared state, no centralized policy, and no cross-agent awareness. That’s fine at small scale, but it gets messy fast when you try to run 20 or 50 of them across an organization”

Therefore, he said, the two platforms are not in conflict, they’re complementary. “Cowork handles task-level automation. Frontier handles coordination, governance, and scale. … Deploying them together is where the real power lies.”

He said he views Frontier as a signal of real change in enterprise AI: “[It] is OpenAI stepping squarely into the enterprise infrastructure world. It’s a platform to run and manage AI across your business the way you run and manage applications or services.”

That platform, said Gogia, “addresses a real bottleneck we’ve been watching for the past year. Enterprises aren’t struggling with AI models. They’re struggling with deploying agents reliably, safely, and consistently. Everyone’s got an AI pilot somewhere, but few can say those agents are integrated into the business. That’s the velocity gap — and Frontier is meant to close it.”

Kategorie: Hacking & Security

AI has taken over customer service – but companies could soon regret the shift

5 Únor, 2026 - 21:09

Many companies and organizations have in recent years cut back on the number of employees dedicated to support issues, believing that AI solutions can handle this task for more efficiently.

But Gartner Research is now saying demand for support from real people is likely to increase as early as next year — because customers prefer talking with humans.

“AI is simply not mature enough to completely replace the expertise, empathy, and judgment that humans offer,” said Emily Potosky of Gartner in a statement. “Relying solely on AI at this point is premature and could lead to unintended consequences.”

Gartner expects that half of the companies that have invested in AI support will recruit human staff in the coming year, thought it might be necessary to change the titles of those who are rehired.

Kategorie: Hacking & Security

Microsoft aims to reward publishers for content used by AI

5 Únor, 2026 - 20:56

Microsoft thinks it has a win-win-win answer to the problem of AI chatbots delivering unreliable information: let them pay publishers for access to information that users can trust. 

Its Publisher Content Marketplace (PCM) has the triple aim of improving the quality of material provided to AI systems, providing revenue to those who provide the information, and ensuring that users of the AI services receive better responses.

“The result is a direct value exchange: publishers will be paid on delivered value, and AI builders gain scalable access to licensed premium content that improves their products,” the company said in a blog post about its plans.

If it works out then enterprises planning to use AI to help with their purchasing decisions, say, or to deliver services to their customers could be more confident about their results.

However, Zeyuan Gu, CEO of AI analytics company Adzviser, said there will still be questions over the quality of content, saying that it’s not clear how the value will be determined. “In the traditional web, value was observable. A publisher could see views, clicks, session time, and get paid through real-time bidding based on real traffic,” he said. “In an AI-first world, that signal gets very blurry. If a user asks a question and an AI gives a great answer, it’s extremely hard to know which publisher’s content influenced that answer.”

One possible issue for companies is whether Microsoft uses the same crawler for its AI content that it uses for its search function. If it does, then information providers will find it difficult to block content from use by Microsoft AIs without becoming invisible to its search engine. There is no confirmation it uses the same crawler for both functions, although it is widely believed to be the case, according to a paper from Akamai. Search and AI rival Google uses separate bots to feed its search index and its Gemini AI, according to Akamai.

IDC Research VP Wayne Kurtzman said this was an issue that companies understood and something that they would be required to address. “There will be changes to improve available content, where personalization options will quickly improve dramatically. That includes the risk of blocking content, which increases the risk of creating false narratives: It’s something that all companies need to be aware of.”

He said that the arrival of AI is already changing the way publishers operate. “Journalism is slowly evolving away from the ad-driven model of past centuries to a quick revenue model of licensing content. Yet others also see journalism as evolving as more community centric. One of these models may create a segment of people who do not have access to the same level reporting or insights.”

Microsoft has been working on the design of PCM with several US publishers, it said, including The Associated Press, Business Insider, Condé Nast, Hearst Magazines, and USA Today.

“We started with a focused set of scenarios in enterprise and consumer versions of Microsoft Copilot by grounding specific responses with licensed content and running experiments to validate assumptions before scaling,” it said.

There have been other attempts to make AI access to online content contingent on payment. Last year, Cloudflare introduced a service that would compensate publishers for using their content and in 2024, a trade body formed to license content for AI models.

IDC’s Kurtzman said that ventures such as these and Microsoft’s PCM will be necessary. “Content providers need to be compensated for their work. To that extent, Microsoft is seeking to do just that.”

But Adzviser’s Gu thinks that there is still some way to go before AI can be assured about the quality of the content provided. “Without a reliable way to attribute usage and impact at scale, I’m not sure how a marketplace can fairly price content for both publishers and AI builders. I’m very supportive of the goal here. I’m just skeptical that the measurement layer is solved yet.”

Kategorie: Hacking & Security

Google Meet videoconferencing devices can now join Teams calls

5 Únor, 2026 - 18:42

Google and Microsoft have enabled interoperability between their videoconferencing devices, meaning Google Meet users can now join Teams meetings from a Chrome OS-based Google Meet device, while Teams Rooms can do the same for Google Meet calls.

IT admins should be able to see the option in Google Meet console already, while end users can use the feature beginning Feb. 16. The feature will be on by default for Google Meet devices.

Both Google and Microsoft sell a range of screens, cameras, and speakers designed for hybrid remote meetings with in-office and remote colleagues. 

The Teams interoperability adds to the range of video call apps that Google Meet users can access from a device, with Zoom and Cisco Webex already interoperable. For Google Meet hardware users that access Teams, some features might not be available, including closed captions, dual screen support and “presenting using HDMI” — the latter is available when accessing Webex and Zoom.  

Microsoft Teams Rooms supports a range of video apps from Amazon, Cisco, Zoom, and others. 

About half of companies typically support multiple meeting apps, said Irwin Lazar, principal analyst at Metrigy, with Teams, Zoom, Webex, and Meet the most widely used.  

This creates practical problems in meeting rooms built around a single platform. “We see a growing preference for one-touch-join systems that allow participants to quickly join a meeting through an in-room touchpad rather than having to plug in a user’s laptop,” he said. “Without native interoperability, a BYOC [bring your own compute] approach creates additional friction and potential for things to go wrong.”


Google’s interoperability announcement helps remove that barrier, Lazar said. “Overall, this is a positive, as it simplifies the ability for those in a meeting room to join multiple different meeting services without having to resort to device switching.”

According to Metrigy’s Workplace Collaboration MetriCast research, 54% of companies plan to increase spending on videoconferencing systems in 2026.

Kategorie: Hacking & Security

This is why high-value targets should use Lockdown Mode

5 Únor, 2026 - 17:49

If you’ve ever wondered how secure Apple’s Lockdown Mode is, the Federal Bureau of Investigations (FBI) has the answer — and it’s good news for journalists, business leaders, civil leaders, or anyone who has to handle confidential data.

As part of an ongoing investigation about alleged leaks of classified information to the media, the FBI controversially raided the home of Washington Post reporter Hannah Natanson, seizing the journalist’s electronic devices, including an iPhone.

The iPhone was in Lockdown Mode at the time of the raid, so the FBI took the device to its Computer Analysis Response Team (CART), which attempted and failed to access the information it carried because the iPhone was in Lockdown mode. This is likely because the mode prevents wired connections between the iPhone and a computer or accessory when it is enabled, as US law enforcement seems to use a variety of peripherals to undermine security.

It is a great example of an Apple feature doing what it is intended to do. 

Massive attacks are taking place

“Lockdown Mode is an optional, extreme protection that’s designed for the very few individuals who, because of who they are or what they do, might be personally targeted by some of the most sophisticated digital threats,” Apple says.

We already know surveillance-as-a-service firms are targeting individuals that fit Natanson’s profile, just as they target business and political leaders, dissidents, celebrities and others. Only last December, Apple had to warn customers in 84 countries that they might have been targeted by such attacks. It has warned people in 150 nations to date.

If you imagine a police raid against a journalist outside the US, it becomes much easier to recognize the value Lockdown Mode provides. It means journalists and other high-value targets can more safely use locked-down devices for their work. 

The reporter could not prevent investigators from forcing her to unlock her Touch ID-enabled MacBook, as police can compel suspects to unlock biometrically protected devices. Natanson did not share the passcode to her non-biometric personal laptop, which remains a closed MacBook to authorities.

What Lockdown Mode is and how it works

Introduced in 2022, Lockdown Mode delivered a double whammy, massively increasing device security while also making complex attacks even more expensive to develop and make. Apple describes the protection as “sharply reducing the attack surface that potentially could be exploited by highly targeted mercenary spyware.” 

Enabled in the Privacy & Security pane on Settings on iPhone, Lockdown Mode compromises what you can do with your iPhone in exchange for the protection it provides. 

When enabled, those compromises include:

  • Most message attachment types other than images are blocked. 
  • Link previews, are disabled.
  • Biometric authorization no longer works.
  • Certain complex web technologies, such as just-in-time (JIT) JavaScript compilation, are disabled.
  • Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously contacted the other party.
  • Wired connections with a computer or accessory are blocked.
  • Configuration profiles cannot be installed.
  • The device cannot enrol in mobile device management (MDM) systems.

Restrictions operate at the OS kernel and sandboxing level, making them extremely resilient to any attempted tampering. That resilience means Lockdown Mode protection is extremely difficult for attackers to overcome.

We don’t know where that resilience ends, we don’t know whether the FBI in this case has access to more powerful systems with which to attack Natanson’s phone. However, we do now know for certain that Lockdown Mode can deliver the degree of protection high-value targets may require. 

If nothing else, such protection might buy targets a little time when the state or other antagonists demand access to confidential sources. If you are a high-value target, it makes sense to ensure you enable this protection during high-risk activities such as when traveling, while involved in sensitive negotiations, or if you receive a threat notification. You should also use long and complex alphanumeric passwords.

Not for the everyone?

“While the vast majority of users will never be the victims of highly targeted cyberattacks, we will work tirelessly to protect the small number of users who are,” Ivan Krstić, Apple’s head of Security Engineering and Architecture, said when introducing the protection four years ago. 

It is, of course, widely recognized that Apple’s platforms are intrinsically secure. But Apple boosts this with additional optional security features, including Advanced Data Protection to encrypt device backups and Wallet Passes, and a feature called Inactivity Reboot; the latter makes devices reboot after being left inactive for some time, forcing a password to be used to unlock it again.

Apple also offers bounties to security researchers who can break Lockdown Mode and regularly updates security protection across all its platforms. 

The company clearly understands that in the online world, no one is safe until everyone is safe. That’s true despite ongoing tension between digital privacy and law enforcement, particularly in heavily surveilled regimes such as the UK.

You can follow me on social media! Join me on BlueSky,  LinkedIn, and Mastodon.

Kategorie: Hacking & Security

Enterprise tech spending to cross $6 trillion in 2026, driven by AI infrastructure boom

5 Únor, 2026 - 13:24

Global IT spending will grow 10.8% to reach $6.15 trillion in 2026, Gartner said in its latest forecast, with AI infrastructure accounting for the lion’s share of that growth.

The forecast shows a spending spree that shows no signs of slowing down, despite growing chatter about an AI bubble. Enterprises and cloud providers alike are opening their wallets for AI-related hardware and software, with data center systems leading the charge, according to the research firm.

“AI infrastructure growth remains rapid despite concerns about an AI bubble, with spending rising across AI‑related hardware and software,” John-David Lovelock, distinguished VP analyst at Gartner said in the report. “Demand from hyperscale cloud providers continues to drive investment in servers optimized for AI workloads.”

The new forecast represents an upward revision from Gartner’s October projection of $6.08 trillion for 2026, suggesting the AI gold rush is accelerating rather than cooling off.

The numbers bear that out, with server and data center spending set to surge at rates far exceeding overall IT growth.

Data centers: Where the money’s going

If you want to know where IT budgets are headed in 2026, look no further than the data center. Spending on data center systems will jump 31.7% to top $650 billion, up from nearly $500 billion in 2025, a whopping $150 billion increase in a single year, the report said.

Server spending alone will rocket up 36.9% year-over-year, Gartner found, driven almost entirely by AI-optimized hardware. The hyperscalers, including AWS, Microsoft Azure, Google Cloud, and others, are in an arms race to build out the infrastructure needed to train and run increasingly large AI models.

It’s not just about more servers, either. These are specialized machines packed with high-end GPUs and custom silicon designed specifically for AI workloads, which explains the eye-watering price tags.

Software growth cools slightly, but GenAI stays hot

The software market is expected to continue growing nicely in 2026, although Gartner has slightly trimmed its forecast. Software spending growth is now projected at 14.7%, down from an earlier estimate of 15.2%, encompassing both application and infrastructure software.

“Despite the modest revision, total software spending will remain above $1.4 trillion,” Lovelock added in the report.

The real story in software is generative AI. “Projections for generative AI model spending in 2026 remain unchanged, with growth expected at 80.8%,” according to Gartner. “GenAI models continue to experience strong growth, and their share of the software market is expected to rise by 1.8% in 2026,” the report added.

That growth reflects how quickly GenAI is becoming table stakes across enterprise software. Whether it’s customer service platforms, development tools, or productivity suites, vendors are racing to embed AI capabilities into everything they sell—and charging accordingly.

Devices hit a speed bump

Not everything in IT land is growing gangbusters. The device market, including PCs, tablets, and smartphones, will see spending reach $836 billion in 2026, but growth will slow to just 6.1%, down from stronger performance in 2025, according to the forecast.

The culprit? Memory prices. “This slowdown is largely due to rising memory prices, which are increasing average selling prices and discouraging device replacements,” Lovelock explained. “Additionally, higher memory costs are causing shortages in the lower end of the market, where profit margins are thinner.” It’s a classic supply chain issue: memory manufacturers are prioritizing production of high-margin components for AI servers and data center gear, leaving the consumer and commercial device markets scrambling for supply, the report noted.

Kategorie: Hacking & Security

Q&A: How AI could transform corporate meetings — for better or worse

5 Únor, 2026 - 08:00

Rebecca Hinds has studied office meetings and collaboration efforts for more than 15 years and most recently she’s seen how AI can make corporate get-togethers better — or worsen existing problems.

In a study commissioned by Read.AI, Hinds found that AI, when correctly implemented, can encourage more participation by women and lower-level employees. At the same time, it can actually hurt hybrid meetings, with in-room participants speaking up much more than remote attendees. AI could make meetings much worse.

Hinds’ book, “Your Best Meeting Ever: 7 Principles for Designing Meetings That Get Things Done,” was released this month. She sat down recently with Computerworld to explain how AI is changing meetings, sometimes for the better, sometimes not. 

Why are meetings still such a central problem for organizations? “Meetings are the tip of the iceberg for organizational health. Despite new technology, what happened during the pandemic, and us now working in fundamentally new ways, meetings have remained largely unchanged.

“We know from decades of research that meetings are often sites for very detrimental status dynamics. Often, senior folks or the highest-paid person in the room will influence the outcome before the meeting even started.

“Meetings… emain stagnant while the rest of the workplace evolves.”

What’s your main takeaway about how AI is affecting corporate meetings today? “The technology is amplifying whatever already exists in your culture. If you’ve developed a culture where meetings are strictly information exchange, AI will enable more information broadcasting to workers. The problem isn’t information — it’s the discovery of relevant information.

“Meetings should serve a very specific purpose, a specific decision to be made or a specific debate. And the more we can understand the purpose, the more we can then surface the right information.

“They should not be used for information exchange. That can and should happen asynchronously.”

Which meetings are actually useful? “Useful meetings have three characteristics: the work is complex, emotionally intense, or very risky. Those deserve face-to-face interaction, because it’s about empathy, trust, and body language. When you’re trying to communicate hard change, that requires a live meeting. When there’s enough ambiguity that warrants rapid back-and-forth exchange, that’s collaboration that needs to be real-time and spontaneous.

“If the workflow is predictable and people just need to do their piece of the puzzle, you don’t need a meeting. You need clear process documentation and maybe asynchronous updates.”

How are senior leaders using AI to tackle meetings, and what’s the impact? “Leaders are under massive pressure to flatten organizational charts, which creates ballooning spans of control. Because of this pressure, they’re trying to outsource everything they should be doing as managers to AI.

“Meetings become one of those mechanisms. They’re using AI to send summaries instead of attending meetings themselves, which fundamentally undermines the purpose of collaboration. You can’t build trust through a summary. You can’t demonstrate vulnerability or create psychological safety through automated notes.”

How can AI help companies reduce unnecessary coordination meetings? “AI has enormous potential if we’re able to truly map the process first and then automate parts of it. In theory, we should be having fewer meetings aimed at coordination and more meetings aimed at collaboration.

The problem historically has been we don’t have clarity about the process, we don’t document it, and we certainly don’t automate it. Meetings often become a safeguard to make sure the process is working, when really it should by default.”

In an AI-driven world, what’s the value of meetings as simple human connection, like going for a beer and discussing ideas? “It’s huge. Human-to-human connection, manager-to-employee connection, has been declining for years. Loneliness, the proportion of employees who have a best friend at work, all these metrics have been declining.

“If we design meetings right, they should be a primary place for that human connection. There’s nothing a manager can do that’s more impactful on a consistent basis than that consistent weekly one-on-one check-in, even for 15 minutes. The purpose of becoming more data-driven and efficient in our meeting design is to free up time for the meetings and collaboration that shouldn’t be highly efficient.

“If you’re in a creative session aimed at sparking creativity and innovation or boosting team morale or team bonds, that should not be a highly efficient pursuit. We should be anchoring on human-to-human connection. In the best cases, AI enables us to be more human in our interactions and in our meetings.”

How should organizations use data to improve their meetings rather than just adding more metrics? “First, understand what the data is for. We need metrics that give people a good sense of how the meeting is going and whether it’s likely to lead to an effective outcome. We don’t want too many metrics. We want the right metrics that allow us to understand our own participation.

AI should be intelligent enough to understand the meeting’s purpose and surface relevant information that helps the meeting move forward. Much of this should happen outside the meeting because the more information you dump on people during the meeting, the less they’re able to engage human to human.”

Beyond tracking who talks the most, what meeting dynamics should organizations measure? “Meetings are sites for status dynamics that can be very detrimental to outcomes. Senior folks speaking first, senior folks speaking more, senior folks influencing the room — these patterns undermine effective collaboration.

“New metrics around charisma and inclusive language can surface insights we often have no way of understanding, even at a gut level. This allows us to redesign both meetings and our own participation to be more effective.”

What’s the risk of replacing human participation in meetings with AI-generated summaries? “The more we automate the human elements of collaboration, the more we lose the very things that make meetings valuable: spontaneity, creativity, vulnerability, and trust-building. You cannot automate your way to better collaboration.

“The goal should be using AI to eliminate unnecessary meetings so the meetings you do have can be more deeply human.”

Kategorie: Hacking & Security

Model Context Protocol: Apple’s Xcode 26.3 opens for vibe coding

4 Únor, 2026 - 14:57

Apple has embraced agentic AI for developers, introducing direct support in Xcode 26.3 for both Anthropic’s Claude Agent and OpenAI’s Codex and making vibe coding now a platform feature for iPhone, iPad, and Mac. It’s available to all Apple Developer Program members now and will be released “soon” on the App Store.

Xcode 26.3 follows on the heels of the introduction of a macOS app for Codex, but it delivers much more than Codex alone. The new integration opens up new opportunities for developers as Apple’s development environment can now autonomously support their work, from task management to coding to project architecture and more. It represents a major extension beyond the AI features introduced in Xcode 26. 

Apple is also thinking ahead in this support. Xcode 26.3 makes its capabilities available through Model Context Protocol, an open standard that gives developers the flexibility to use any compatible agent or tool with Xcode. This is a big step for Apple, which wants to position Xcode as a companion to the growing flock of AI development tools. 

The result is that developers can select the correct tool for their task, using models most suitable for their work, opening the door to intensified competition between agentic tools.

“At Apple, our goal is to make tools that put industry-leading technologies directly in developers’ hands so they can build the very best apps,” said Susan Prescott, Apple’s vice president of Worldwide Developer Relations. “Agentic coding supercharges productivity and creativity, streamlining the development workflow so developers can focus on innovation.”

In a post on the Anthropic website, that company explained the extent of the integration: “Developers get the full power of Claude Code directly in Xcode — including subagents, background tasks, and plugins — all without leaving the IDE.”

During briefings provided around the introduction, Apple confirmed it worked directly with both OpenAI and Anthropic to optimize the experience of using their models in Xcode. During this collaboration, particular attention was paid to reducing token usage and efficiency. Agents must be downloaded from within Xcode for this integration.

What can agents do in Xcode?

Built-in access to Claude Agent and Codex means developers can exploit the advanced reasoning of these models while building apps. It also means developers can switch between different available models, selecting the most appropriate one for their project, though it will be important to consider the terms of service offered by those models before using them in code.

Developers could use these tools to:

  • Search documentation.
  • Execute autonomous tasks.
  • Explore file structures, understand how the pieces connect, and spot necessary changes before writing code.
  • Update project settings.
  • Verify work visually by capturing Xcode Previews and iterating through builds and fixes — even capturing screenshots to show code functions properly.

Developers can also combine all these features, using AI to vibe code apps, build images, develop file structures and verify app behavior, iterating on the app. This lets them focus on improving the overall experience of the code.

Finally, the introduction of Model Context Protocol delivers much more than the press statement explains: as long as the IDE is running, users can browse and search Xcode project structure, read/write/delete files and groups, build projects (including structure and build logs), run fault diagnostics, execute tasks and more, using their choice of MCP-supporting agent models.  

What comes next?

There are some risks coming into view. Vibe coding at scale will happen, and when it does it will introduce a flotilla of rapidly-created apps, some of which might include security flaws if not verified and checked correctly. That’s even before you consider the tendency of large language models (LLMs) to hallucinate.

There is also the danger that the novelty and power of these applications might distract some developers who would traditionally put their energy into open-source projects, potentially undermining the integrity of those important projects. Stack Overflow use has collapsed as developers use chatbots instead of the knowledge bases the AI has already digested. 

Apple’s Xcode decision makes it inevitable that code running on the devices you own or that your company deploys will be partly built by AI. It’s an open question whether students learning to code today can reliably anticipate opportunities to make a living doing so in a decade’s time.

You can follow me on social media! Join me on BlueSky,  LinkedIn, and Mastodon.

Kategorie: Hacking & Security

Amid AI gloom and doom, WEF attendees were bullish on physical AI

4 Únor, 2026 - 14:24

The tech bigwigs and economists at the recent World Economic Forum (WEF) in Davos were clear-eyed about how AI is reshuffling the jobs landscape globally and disrupting national economies.

But the chatter around physical AI and robotics was more upbeat, with attendees saying robots with brains and intelligent sensors are likely to improve human productivity and manufacturing output. 

That, in turn, should improve economies — and in the long run create more jobs. 

Physical AI refers to the concept of AI manifesting in physical form, most notably as robots, though others foresee broader real-world outcomes, such as AI cameras that reduce crime or AI-driven sensors that bolster industrial output.

“You can now fuse your industrial capability, your manufacturing capability with artificial intelligence and that brings you into the world of physical AI or robotics,” said Jensen Huang, CEO of Nvidia, during a fireside chat at WEF.

If anything, AI — including agentic AI and robotic automation — is more likely to  change the nature of what humans do than take jobs away, he said. Robots can do menial work such as typical administrative tasks, allowing humans to be more productive, Huang said. “We’re five million nurses short…. AI is increasing their productivity…. [And when], hospitals do better, they hire more nurses.”

Robots have the capacity to work non-stop, without tiring, yielding productivity gains that will increase the average output of economies, said tech entrepreneur Elon Musk. “My prediction is…we’ll make so many robots and AI that they will saturate all human needs,” the Tesla CEO said during a WEF discussion.  

He described how robots might be able to help care for elderly parents, which can be an expensive undertaking for humans, saying the robots can take the place of younger people.

Robots are also capable of doing jobs considered dangerous for humans, though  humans will still need to be involved, Daniela Rus, director at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), said during a panel session at WEF. (Many companies have roots at CSAIL, including Venti Technologies, where Rus is a board member and adviser.)

Venti has “entire fleets of robots that operate 24/7 without the need of human drivers,” Rus said. “Yet human drivers are also in the loop to step in when the weather is bad or when there is a lot of need for movement.”

China is considered further ahead in robot adoption than the US, where the robotics market is still growing. Tianlan Shao, CEO of China-based Mech-Mind Robotics, said his company had delivered more than 10,000 robots in the past year — the same number it produced during its first eight years of existence.

He argued that if a industrial robot is given a chainsaw, for example, humans might still be needed to make sure the robot sticks to task. “We need clear boundaries…, definitions, and rules,” Shao said.

Shao pointed to progress in the last 12 months of fusing AI into robots. “Now we can train this so-called world-model-like thing, aligning everything, including robot vision and robot motion, aligning everything in one specific space.” 

World models under development by the likes of Nvidia, Microsoft ,and Google are designed to improve the physical functionality of robots.  Researchers at Mohamed bin Zayed University of Artificial Intelligence have created a world model called PAN that tests action sequences in a safe, controlled simulation.

The growing importance of physical AI has been acknowledged beyond just the WEF.  Deloitte’s State of AI in the Enterprise report, released in January, pointed to a widening adoption of physical AI. About 58% of companies surveyed by the consulting firm noted physical AI adoption, a figure expected to grow to 80% in the next two years.

The consulting firm sees physical AI as having a real-world component, meaning  technology that can sense and drive a physical outcome. Monitoring and security systems, for instance, are fast-growing areas for deployment, along with collaborative robotics.

“Physical AI as a terminology is relatively new, but the underlying foundation was laid 12 or 13 years ago — what’s different now is adding intelligence and autonomy on top of that physical foundation,” Beena Ammanath, global head at Deloitte AI Institute, told Computerworld.

The older underlying foundations of physical AI include IoT and robotic process automation, Ammanath said.

Despite the encouraging comments about physical AI, not everyone at WEF had rosy assessments for robotics. “Elon Musk also told us in 2017 that we will be falling asleep at the wheel in 2019,” Rus said, offering a cautionary note. “And we’re still not falling asleep at the wheel.”

The rise of humanoid robots will take time due to navigation, materials, dexterity and reasoning issues, Rus said. “It’s coming, but it’s not today,” he said.

WEF’s Chief Economists Outlook noted the growing value of companies that focus on humanoid robots — the mechanized machines that look like and function like humans and are a mainstay of science fiction.

“While far from general-purpose deployment on factory floors, humanoid robotics companies are attracting large valuations and investments,” the WEF said in its outlook for 2026. 

Kategorie: Hacking & Security

Testing can’t keep up with rapidly advancing AI systems: AI Safety Report

4 Únor, 2026 - 13:50

AI systems continued to advance rapidly over the past year, but the methods used to test and manage their risks did not keep pace, according to the International AI Safety Report 2026.

The report, produced with inputs from more than 100 experts across over 30 countries, said that pre-deployment testing was increasingly failing to reflect how AI systems behaved once deployed in real-world environments, creating challenges for organisations that had expanded their use of AI across software development, cybersecurity, research, and business operations.

“Reliable pre-deployment safety testing has become harder to conduct,” the report stated, adding that it had become “more common for models to distinguish between test settings and real-world deployment, and to exploit loopholes in evaluations.”

The findings came as enterprises accelerated adoption of general-purpose AI systems and AI agents, often relying on benchmark results, vendor documentation, and limited pilot deployments to assess risk before wider rollout.

Capabilities improved rapidly, but unevenly

Since the previous edition of the report was published in January 2025, general-purpose AI capabilities continued to improve, particularly in mathematics, coding, and autonomous operation, the report said.

Under structured testing conditions, leading AI systems achieved “gold-medal performance on International Mathematical Olympiad questions.” In software development, AI agents became capable of completing tasks that would have taken a human programmer about 30 minutes, compared with under 10 minutes a year earlier.

Despite those gains, the report said AI systems continued to show inconsistent performance. Models that performed well on complex benchmarks still struggled with tasks that appeared comparatively simple, such as recovering from basic errors in long workflows or reasoning about physical environments. The report described this pattern as “jagged” capability development.

For enterprises, the uneven progress made it more difficult to assess how systems would behave once deployed broadly, particularly when AI tools moved from controlled demonstrations into everyday operational use.

Evaluation results no longer predicted real-world behavior

A central concern highlighted in the report was the growing gap between evaluation results and real-world outcomes. Existing testing methods, it said, no longer reliably predicted how AI systems would behave after deployment.

“Performance on pre-deployment tests does not reliably predict real-world utility or risk,” the report stated, noting that models were increasingly able to recognise evaluation environments and adjust their behaviour accordingly.

The report said this trend made it harder to identify potentially dangerous capabilities before release, increasing uncertainty for organisations integrating AI into production systems.

The issue was especially relevant for AI agents, which were designed to operate with limited human oversight. While such systems improved efficiency, the report said they “pose heightened risks because they act autonomously, making it harder for humans to intervene before failures cause harm.”

Cybersecurity risks are increasingly observed in practice

The report also documented growing real-world evidence of AI being used in cyber operations.

General-purpose AI systems were increasingly capable of identifying software vulnerabilities and generating malicious code, the report said. In one competition cited, an AI agent identified 77% of vulnerabilities present in real software.

Security analyses referenced in the report indicated that criminal groups and state-associated actors were already using AI tools to support cyberattacks.

“Criminal groups and state-associated attackers are actively using general-purpose AI in their operations,” the report stated, while noting that it remained unclear whether AI would ultimately advantage attackers or defenders.

For enterprises, the findings underscored the expanding role of AI in both improving productivity and altering the cybersecurity threat landscape.

Governance and transparency lagged deployment

While industry attention to AI safety increased, the report found that governance practices continued to lag behind deployment. Most AI risk management initiatives remained voluntary, and transparency around model development, evaluation, and safeguards varied widely.

“Developers have incentives to keep important information proprietary,” the report stated, limiting external scrutiny and complicating risk assessments for enterprise users.

In 2025, 12 companies published or updated Frontier AI Safety Frameworks, outlining how they planned to manage risks as model capabilities advanced. However, the report said technical safeguards still showed clear limitations, with harmful outputs sometimes obtainable through prompt reformulation or by breaking requests into smaller steps.

What the findings mean for enterprise IT teams

The report did not make policy recommendations, but it outlined conditions enterprises increasingly faced as AI systems became more capable and more widely deployed.

Because evaluations and safeguards were imperfect, the report said organisations should expect that some AI-related incidents would occur despite existing controls.

“Risk management measures have limitations, and they will likely fail to prevent some AI-related incidents,” the report stated, pointing to the importance of post-deployment monitoring and institutional readiness.

As enterprises continued to expand their use of AI, the report indicated that understanding how systems behaved outside testing environments would remain a key challenge for IT teams managing increasingly AI-dependent operations.

Kategorie: Hacking & Security

Windows 11 LTSC explained: The OS when you need stability above all

4 Únor, 2026 - 12:00

Windows 11’s long-term servicing option — initially identified in Windows 10 as the Long-Term Servicing Branch (LTSB) — is now branded Long-Term Servicing Channel (LTSC) and remains an important pillar for enterprises in specific industries. 

What is Windows 11 LTSC?

LTSC is a specialized edition of Windows 11 Enterprise built for devices that require maximum stability. For the most part, Windows 11 Enterprise LTSC looks and runs like other Windows 11 editions. What’s different is the frequency with which new features are delivered.

The main Windows 11 servicing model, known as the General Availability Channel (GAC), pushes major feature upgrades to customers once a year, with additional feature enhancements often included the Windows “quality updates” issued each month. Windows Enterprise LTSC releases, on the other hand, are issued every two to three years and remain functionally static throughout their lifespan.

That means fewer changes during a set timeline, a less-involved upgrade effort, and fewer disruptions, as well as fewer possibilities for applications breaking because of a modification of the OS. 

LTSC is designed for use in highly regulated or restricted environments where feature updates can be cumbersome or disruptive. This can include specialized devices that control medical equipment, ATM machines, or point of sale (POS) systems. Because they are intended for targeted tasks, these devices don’t require feature updates as frequently as other enterprise devices. With LTSC, the goal is to keep devices as stable and secure as possible rather than disrupt their function with frequent interface changes.

It’s important to note that LTSC isn’t intended for widespread deployment across an enterprise’s general-purpose devices (typically understood as those installed with Microsoft Office). 

What versions of Windows 11 LTSC are available?

Microsoft offers two versions of LTSC aimed at different types of devices:

  • Windows 11 Enterprise LTSC 2024: This version is intended for specialized enterprise PCs and has a five-year lifecycle (mainstream support ends October 9, 2029).
  • Windows 11 IoT Enterprise LTSC 2024: This version is intended for special-purpose, fixed-function devices such as ATMs, MRI machines, manufacturing controllers, POS systems and kiosks. It has a 10-year lifecycle (mainstream support ends October 9, 2029; extended support ends October 10, 2034).
How often do Windows 11 LTSC updates occur?

While this is simple question, it requires a nuanced answer. 

  • Windows 11 LTSC does receive the usual monthly quality and security updates, which customers can delay.
  • The annual feature upgrades delivered to customers in the General Availability Channel are not offered to LTSC systems.
  • Microsoft upgrades the LTSC “build” every two to three years. Those upgrades, however, are optional. Enterprises can choose to install them as in-place upgrades or skip them altogether based on their business requirements.
  • LTSC releases support the processors and chipsets in use at the time of release. When new CPU generations are rolled out, Microsoft provides support in future LTSC releases, which customers can self-deploy.
Past LTSB/LTSC releases

LTSB was first rolled out with Windows 10 in July 2015 and has had four subsequent releases in both standard and IoT versions. The LTSC rebranding was introduced in late 2018 with the release of Windows 10 Enterprise LTSC 2019.

  • Windows 10 LTSB 2015 / Windows 10 IoT Enterprise LTSB 2015 (released 7/29/2015)
  • Windows 10 LTSB 2016 / Windows 10 IoT Enterprise LTSB 2016 (released 8/2/2016)
  • Windows 10 Enterprise LTSC 2019 / Windows 10 IoT Enterprise LTSC 2019 / Windows 10 IoT Core LTSC (released 11/13/2018)
  • Windows 10 Enterprise LTSC 2021 / Windows 10 IoT Enterprise LTSC 2021 (released 11/16/2021)
  • Windows 11 Enterprise LTSC 2024 / Windows 11 IoT Enterprise LTSC 2024 (released 10/01/2024)
A shrinking lifecycle

Windows Enterprise LTSC follows Microsoft’s Fixed Lifecycle Policy, and as with other Windows versions, Microsoft has reduced its support period over time. Earlier versions of LTSB/LTSC had a 10-year lifecycle: five years of mainstream support and up to five years of extended support. Thus, the original Windows 10 LTSB 2015 release remained under extended support until October 14, 2025.

Starting with Windows 10 Enterprise LTSC 2021, however, Microsoft switched to a five-year lifecycle for the standard version of LTSC. The lifecycle page for Windows 11 Enterprise LTSC 2024, for example, shows five years of mainstream support ending on October 9, 2029, with no mention of extended support.

That said, the IoT Enterprise LTSC releases have retained the traditional 10-year lifecycle, with extended support for Windows 11 IoT Enterprise LTSC 2024 set to expire on October 10, 2034.

What’s new in the current Windows 11 LTSC? 

Windows 11 Enterprise LTSC 2024 was first made available on October 1, 2024, three years after the official rollout of Windows 11. It includes enhanced security protections, control capabilities, and device and app management cumulatively introduced through Windows 11 versions 21H2, 22H2, 23H2, and 24H2. 

Built on the Windows 11 24H2 codebase, this release includes several features that were missing in previous long-term versions:

  • Security: Microsoft Defender Antivirus, Microsoft Pluton, enhanced phishing protection, Credential Guard enabled by default, malicious and vulnerable driver blocking, personal data encryption, passkeys and passwordless capabilities, Windows Local Admin Password Solution (LAPS), and others.
  • Management: Microsoft Intune mobile app management (MAM) and mobile device management (MDM), plus other admin controls, restrictions, policy settings, and customization.
  • Sudo for Windows: Allows admins to run elevated commands directly from an unelevated console — a significant quality-of-life upgrade for developers.
  • Connectivity: Native support for Wi-Fi 7 and High Efficiency Video Coding (HEVC).
  • Removed: Internet Explorer is gone (replaced by Microsoft Edge), and Microsoft Publisher is no longer included.

For more details, see Microsoft’s What’s new in Windows 11 Enterprise LTSC 2024 page.

Why does Microsoft make LTSC available to customers?

Plainly put, it was introduced to stop to the criticism very early on about Windows 10’s accelerated development and release tempo.

Customers had become accustomed to upgrading Windows every three or more years, with the emphasis on more in the enterprise. The announcement that that would change to multiple releases each year — initially, three annually — was a shock.

Microsoft tried to soften the blow by offering a schedule very similar to the slower cadence familiar to IT: upgrades that appeared every three years or so, with little or no feature changes in between, and a servicing model that provided only security and bug fixes. 

But Microsoft has repeatedly stressed over the years that most of any large organization’s PCs should be running Windows 11 Enterprise and serviced via the GAC, not the LTSC.

Which PCs should be running LTSC?

Here’s what Microsoft says about the devices it considers good candidates for LTSC:

“Specialized systems — such as PCs that control medical equipment, point-of-sale systems, and ATMs — often require a longer servicing option because of their purpose,” the company’s primary Windows-as-a-service documentation states. “These devices typically perform a single important task and and don’t need feature updates as frequently as other devices in the organization. It’s more important that these devices be kept as stable and secure as possible than up to date with user interface changes.”

As for which PCs shouldn’t run LTSC:

“As a general guideline, a PC with Microsoft Office installed is a general-purpose device, typically used by an information worker, and therefore it is better suited for the General Availability channel [than LTSC].”

What’s the most important thing to remember about LTSC?

Windows 11 LTSC remains a critical option for organizations requiring long-term stability and predictability over continuous improvement, particularly in regulated or fixed-purpose environments. But it comes with tradeoffs in flexibility, app support, and future hardware compatibility. 

However, as Microsoft continues to prioritize its more lucrative cloud-first servicing models, organizations should deploy LTSC deliberately, plan for lifecycle constraints, and closely eye security developments.

This story was originally published in November 2018 and updated in February 2025.

Kategorie: Hacking & Security

22 handy hidden tricks for Google Calendar on Android

4 Únor, 2026 - 11:45

Google Calendar is a core part of the Android productivity package — but if all you’re using is what you see on the app’s surface, you’re missing out on some pretty powerful possibilities.

Yes, oh yes: Just like so many of our modern digital tools, there’s more to Google Calendar than meets the eye. And while the majority of the service’s advanced options may revolve around the Calendar website, the Calendar Android app has its share of handy out-of-sight options that are specific to the mobile experience. From time-saving shortcuts to efficiency-boosting options, they’re all things that have the potential to make your life easier in small but significant ways.

Find time in your agenda to check out these hidden Google Calendar goodies on Android. Trust me: You’ll be glad you did.

(Note that these tips are all specific to the Google Calendar Android app, which is free and available to use on any Android device — though not necessarily always the default calendar app that’s present on all devices out of the box.)

Google Calendar Android trick #1: Quick peeking

Tell me if you can relate to this: You head into the Calendar app on your phone to create a new event. You open the screen to add the event in — and then you find yourself facing a foggy mental blank.

What else did you have going on that day? Did you need to schedule the event for 2:00 p.m., or would 3:00 be better? When was that podiatrist appointment, again?

[Psst: Love shortcuts? My Android Shortcut Supercourse will teach you tons of time-saving tricks for your phone. Sign up now for free!]

I’ve certainly been there (well, not to the podiatrist, specifically, but in the general event brain fog situation). And the Android Calendar app doesn’t do much to offer any broader calendar context while you’re in the midst of adding in a new event.

Or so it’d seem. After years of using Google Calendar on Android, I not long ago noticed a curiously camouflaged option that’ll change the way you create events on your phone.

See that barely noticeable light-gray line at the top of the Calendar app’s event creation screen? The one that looks vaguely like an arrow pointing downward?

You’d never know it, but that subtle gray line hides a spectacular agenda-juggling superpower.

JR Raphael, IDG

Yup, that’s the one. The next time you’re adding a new event on your phone and you find yourself wondering what else is on your agenda around that same time, tap that line — or, alternatively, use it as a hint to swipe downward anywhere within the main event creation area of the screen.

And…

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/12/01-google-calendar-android-quick-peek-animation.webp?quality=50&strip=all 700w, https://b2b-contenthub.com/wp-content/uploads/2024/12/01-google-calendar-android-quick-peek-animation.webp?resize=290%2C300&quality=50&strip=all 290w, https://b2b-contenthub.com/wp-content/uploads/2024/12/01-google-calendar-android-quick-peek-animation.webp?resize=674%2C697&quality=50&strip=all 674w, https://b2b-contenthub.com/wp-content/uploads/2024/12/01-google-calendar-android-quick-peek-animation.webp?resize=162%2C168&quality=50&strip=all 162w, https://b2b-contenthub.com/wp-content/uploads/2024/12/01-google-calendar-android-quick-peek-animation.webp?resize=81%2C84&quality=50&strip=all 81w, https://b2b-contenthub.com/wp-content/uploads/2024/12/01-google-calendar-android-quick-peek-animation.webp?resize=464%2C480&quality=50&strip=all 464w, https://b2b-contenthub.com/wp-content/uploads/2024/12/01-google-calendar-android-quick-peek-animation.webp?resize=348%2C360&quality=50&strip=all 348w, https://b2b-contenthub.com/wp-content/uploads/2024/12/01-google-calendar-android-quick-peek-animation.webp?resize=242%2C250&quality=50&strip=all 242w" width="700" height="724" sizes="auto, (max-width: 700px) 100vw, 700px">Surprise, surprise: Your entire agenda is never more than a swipe away.

JR Raphael, IDG

Wouldya look at that?! You can actually minimize that event creation interface down to a tiny panel and browse around on your calendar above it.

And that’s not all…

Google Calendar Android trick #2: Simple sliding

After you’ve entered that concealed quick-peek view, remember this: If you decide you need to shift your new event around to another time, you can simply touch and hold the outline on your screen and slide your finger up and down to move it.

Nifty, no? And there’s one more piece to this puzzle yet…

Google Calendar Android trick #3: Gesture adjusting

In addition to sliding an event around to move it in the Calendar Android app’s event creation quick-peek interface, you can touch your finger to the dots on the top or bottom of your event’s outline and then slide up or down from there to make the event longer or shorter.

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/12/03-google-calendar-android-event-length.webp?quality=50&strip=all 700w, https://b2b-contenthub.com/wp-content/uploads/2024/12/03-google-calendar-android-event-length.webp?resize=300%2C141&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2024/12/03-google-calendar-android-event-length.webp?resize=150%2C71&quality=50&strip=all 150w, https://b2b-contenthub.com/wp-content/uploads/2024/12/03-google-calendar-android-event-length.webp?resize=640%2C301&quality=50&strip=all 640w, https://b2b-contenthub.com/wp-content/uploads/2024/12/03-google-calendar-android-event-length.webp?resize=444%2C209&quality=50&strip=all 444w" width="700" height="329" sizes="auto, (max-width: 700px) 100vw, 700px">Longer? Shorter? You name it — this gesture makes event adjustments as easy as can be.

JR Raphael, IDG

Now if only our actual meetings could be condensed down so easily.

Google Calendar Android trick #4: Instant perspective

When you need to glance at a full-month view whilst thumbing through your events, take note of the following invisible Android Calendar shortcut: You can tap or swipe downward on the app’s top bar — where it says the current month’s name — to bring a monthly view into focus. Tap on the bar a second time (or swipe back up, with your finger starting just beneath that area) to hide it when you’re done.

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/12/04-google-calendar-android-month-view-slide.webp?quality=50&strip=all 700w, https://b2b-contenthub.com/wp-content/uploads/2024/12/04-google-calendar-android-month-view-slide.webp?resize=288%2C300&quality=50&strip=all 288w, https://b2b-contenthub.com/wp-content/uploads/2024/12/04-google-calendar-android-month-view-slide.webp?resize=668%2C697&quality=50&strip=all 668w, https://b2b-contenthub.com/wp-content/uploads/2024/12/04-google-calendar-android-month-view-slide.webp?resize=161%2C168&quality=50&strip=all 161w, https://b2b-contenthub.com/wp-content/uploads/2024/12/04-google-calendar-android-month-view-slide.webp?resize=81%2C84&quality=50&strip=all 81w, https://b2b-contenthub.com/wp-content/uploads/2024/12/04-google-calendar-android-month-view-slide.webp?resize=460%2C480&quality=50&strip=all 460w, https://b2b-contenthub.com/wp-content/uploads/2024/12/04-google-calendar-android-month-view-slide.webp?resize=345%2C360&quality=50&strip=all 345w, https://b2b-contenthub.com/wp-content/uploads/2024/12/04-google-calendar-android-month-view-slide.webp?resize=240%2C250&quality=50&strip=all 240w" width="700" height="730" sizes="auto, (max-width: 700px) 100vw, 700px">One swift swipe, and boom: Your monthly calendar view is there and ready.

JR Raphael, IDG

Who knew?!

Google Calendar Android trick #5: Expanded intelligence

Looking at the Google Calendar app on an Android tablet — or a folding phone like the Pixel Fold in its fully unfolded state? You’ve got even more agenda-expanding magic at your fingertips and just begging to be found.

By default on any such device, the Calendar app will expand to take advantage of that screen space in a variety of scenarios. What isn’t obvious, though, is the fact that you can split and then customize your view anytime with a simple on-screen swipe.

In any view other than the Month overview, look for a thin gray line at the left of the screen. (Again: This’ll work only when you’re using the app on a larger-screened device — either a tablet or a fully unfolded foldable.)

Slide your finger on that line toward the right — and hey, how ’bout that?!

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/12/05-google-calendar-android-pixel-fold-panels.webp?quality=50&strip=all 600w, https://b2b-contenthub.com/wp-content/uploads/2024/12/05-google-calendar-android-pixel-fold-panels.webp?resize=300%2C287&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2024/12/05-google-calendar-android-pixel-fold-panels.webp?resize=176%2C168&quality=50&strip=all 176w, https://b2b-contenthub.com/wp-content/uploads/2024/12/05-google-calendar-android-pixel-fold-panels.webp?resize=88%2C84&quality=50&strip=all 88w, https://b2b-contenthub.com/wp-content/uploads/2024/12/05-google-calendar-android-pixel-fold-panels.webp?resize=502%2C480&quality=50&strip=all 502w, https://b2b-contenthub.com/wp-content/uploads/2024/12/05-google-calendar-android-pixel-fold-panels.webp?resize=376%2C360&quality=50&strip=all 376w, https://b2b-contenthub.com/wp-content/uploads/2024/12/05-google-calendar-android-pixel-fold-panels.webp?resize=261%2C250&quality=50&strip=all 261w" width="600" height="574" sizes="auto, (max-width: 600px) 100vw, 600px">Panels a-plenty await in the Google Calendar Android app on a large-screened device.

JR Raphael, IDG

You can see the full month view right there alongside whatever else you were viewing — at any width you like.

To get to a similar sort of setup from the main Month view, just tap on any event in the calendar. That’ll pull up detailed info about the event in a separate panel to the left — which, once more, you can customize and resize by sliding your finger along the thin gray line separating the two panels.

Not bad, eh?

Google Calendar Android trick #6: Time travel

If you’re already in the Android Calendar app’s full-screen Month view — no matter what type of device you’re using — try tapping on that same month’s name at the top of the screen.

From that view, that action will reveal a nifty new quick-jump bar for quickly zipping forward or back in time to any month imaginable — no scrolling, flipping, or futzing around required.

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/12/06-google-calendar-android-month-chips.webp?quality=50&strip=all 700w, https://b2b-contenthub.com/wp-content/uploads/2024/12/06-google-calendar-android-month-chips.webp?resize=300%2C39&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2024/12/06-google-calendar-android-month-chips.webp?resize=150%2C19&quality=50&strip=all 150w, https://b2b-contenthub.com/wp-content/uploads/2024/12/06-google-calendar-android-month-chips.webp?resize=640%2C82&quality=50&strip=all 640w, https://b2b-contenthub.com/wp-content/uploads/2024/12/06-google-calendar-android-month-chips.webp?resize=444%2C57&quality=50&strip=all 444w" width="700" height="90" sizes="auto, (max-width: 700px) 100vw, 700px">All it takes is a tap to fly through time from the Google Calendar app’s Month view.

JR Raphael, IDG

And speaking of shadowy shortcuts…

Google Calendar Android trick #7: Snazzy snapping

Anytime you’re scrolling through your Schedule view in the Calendar app and want to jump back to the current day, tap the small calendar icon (the box with the current date in it, directly to the left of your profile picture in the upper-right corner of the screen).

You can snap yourself back to the present with the Android Calendar app’s easily overlooked day icon.

JR Raphael, IDG

That’ll beam you instantly back to today, no matter how far into the future you’ve traveled.

Google Calendar Android trick #8: Sneaky selecting

Speaking of that top-of-screen current date icon, that same symbol holds another rarely noticed shortcut:

You can press and hold it for a second to summon a scrollable calendar pop-up that makes it easy to select and jump directly to any date, anytime.

Go give it a whirl, then train your brain to remember it’s there and waiting the next time you need to leap around (regardless of whether it’s a leap year).

Google Calendar Android trick #9: Tasks on demand

While we’ve got our attention on that upper-right corner area of the Google Calendar Android app interface, be sure to take note of a relatively recently added option that’s all too easy to overlook.

After months of awkwardly disconnected existences, Google’s now integrated Google Tasks directly into the Calendar Android app — so you can see and manage all of your tasks right within your calendar environment, without having to toggle over to the separate standalone Tasks app.

Just tap the small circled checkmark icon in the Calendar app’s upper-right corner, directly to the right of the current day box we were just chewing over.

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/12/08-google-calendar-android-tasks.webp?quality=50&strip=all 700w, https://b2b-contenthub.com/wp-content/uploads/2024/12/08-google-calendar-android-tasks.webp?resize=300%2C216&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2024/12/08-google-calendar-android-tasks.webp?resize=233%2C168&quality=50&strip=all 233w, https://b2b-contenthub.com/wp-content/uploads/2024/12/08-google-calendar-android-tasks.webp?resize=117%2C84&quality=50&strip=all 117w, https://b2b-contenthub.com/wp-content/uploads/2024/12/08-google-calendar-android-tasks.webp?resize=667%2C480&quality=50&strip=all 667w, https://b2b-contenthub.com/wp-content/uploads/2024/12/08-google-calendar-android-tasks.webp?resize=500%2C360&quality=50&strip=all 500w, https://b2b-contenthub.com/wp-content/uploads/2024/12/08-google-calendar-android-tasks.webp?resize=347%2C250&quality=50&strip=all 347w" width="700" height="504" sizes="auto, (max-width: 700px) 100vw, 700px">Tasks, within Calendar — what’ll they think of next?!

JR Raphael, IDG

You can still move over to the full Tasks app if you want, but there’s really no need to — since everything you need is now right where you’re already looking, anyway.

Google Calendar Android trick #10: Speedy deleting

Here’s an easily missed and incredibly handy gesture in the Google Calendar Android app: From the Schedule view, you can swipe any event or reminder toward the right to delete it in a single, swift action.

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/12/09-google-calendar-android-delete-event.webp?quality=50&strip=all 700w, https://b2b-contenthub.com/wp-content/uploads/2024/12/09-google-calendar-android-delete-event.webp?resize=300%2C130&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2024/12/09-google-calendar-android-delete-event.webp?resize=150%2C65&quality=50&strip=all 150w, https://b2b-contenthub.com/wp-content/uploads/2024/12/09-google-calendar-android-delete-event.webp?resize=640%2C277&quality=50&strip=all 640w, https://b2b-contenthub.com/wp-content/uploads/2024/12/09-google-calendar-android-delete-event.webp?resize=444%2C192&quality=50&strip=all 444w" width="700" height="303" sizes="auto, (max-width: 700px) 100vw, 700px">Ahh — cancelling plans has never been so satisfying.

JR Raphael, IDG

So long, responsibilities!

Google Calendar Android trick #11: Quick creation

In addition to deleting events at the speed of light, you can also create new events in a delightfully swift way within the Google Calendar app — right from your daily calendar view.

All you’ve gotta do is tap on any open space in that part of the Android Calendar app, and you’ll see an event creation box right then and there:

Add an event into your calendar without any fuss by tap, tap, tapping.

JR Raphael, IDG

Also worth noting: The same tricks we went over a second ago for sliding around or extending your event’s time will work in this context, too, once you’ve brought that box into focus.

Google Calendar Android trick #12: Easier adding

For reasons I’ll never quite fathom, Google recently removed an exceptionally useful shortcut for double-tapping the Calendar app’s new event button to move past the standard menu of options and zoom right into the new event creation interface. Bah!

But while that capability is seemingly now in the ever-cluttered Google graveyard, there is another helpful way to create a new event without wasting any unnecessary steps or time. In fact, you don’t even have to futz around in the Calendar app at all to find it.

It’s something known as an Android app shortcut, which means all you’ve gotta do is press and hold the Calendar app’s icon on your home screen or in your app drawer to find it.

One quick long-press on that icon, and you’ll see a direct link to the Google Calendar new event function — no app-opening, button-pressing, or menu-wading required:

The exact interface may vary depending on your device, but the “New event” shortcut should always be there and available.

JR Raphael, Foundry

If you really wanna get fancy, you can also press and hold your finger onto the “New event” shortcut to drag it directly onto your home screen for even easier ongoing access, so it’s never more than a single tap away.

Google Calendar Android trick #13: Practical pinching

While we’re thinking about all this tapping and swiping, make a mental note of this: Whilst gazing uponst the Google Calendar app’s Day, 3-Day, Week, or Month view on Android, you can actually pinch your fingers apart on the screen to expand the interface and make everything bigger — or pinch ’em together to condense it and make all the elements smaller.

The key is to place your fingers on top of each other and move ’em in an up and down motion or diagonally — not sideways:

Expand, collapse. Expand, collapse. Expand, collapse.

JR Raphael, IDG

Whee!

Google Calendar Android trick #14: Find the time

Here’s a fun one I discovered a while back, thanks to a tip from a resourceful Android Intelligence reader:

When you’re looking at the Google Calendar Android app’s Day view and you have an event that starts at a time that isn’t at the top of an hour — say, at 12:30 p.m., 1:05 p.m., 3:33 p.m., or any other such number — it can be tough to know exactly what time the event begins at a glance.

But if you press and hold your finger onto the event for a second, the Calendar app will adjust the number on the time grid at the left to show the exact start time for that specific item.

See?

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/12/13-google-calendar-android-precise-event-time.webp?quality=50&strip=all 700w, https://b2b-contenthub.com/wp-content/uploads/2024/12/13-google-calendar-android-precise-event-time.webp?resize=300%2C75&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2024/12/13-google-calendar-android-precise-event-time.webp?resize=150%2C37&quality=50&strip=all 150w, https://b2b-contenthub.com/wp-content/uploads/2024/12/13-google-calendar-android-precise-event-time.webp?resize=640%2C159&quality=50&strip=all 640w, https://b2b-contenthub.com/wp-content/uploads/2024/12/13-google-calendar-android-precise-event-time.webp?resize=444%2C110&quality=50&strip=all 444w" width="700" height="174" sizes="auto, (max-width: 700px) 100vw, 700px">Look closely along the left side of the screen, and you’ll see the precise time appear with a long-press on the event.

JR Raphael, IDG

The precise time will remain present for as long as you keep your finger pressed.

And speaking of that area of the Android Calendar interface…

Google Calendar Android trick #15: A jaunty jump

An easy and not-at-all-obvious way to move between different calendar views is hiding in the leftmost column of the Google Calendar app’s Android interface — specifically, in the Day and Schedule views.

Starting in Schedule, you can tap the day name and number next to any events to jump directly to the Day view for that date — and then, when in the Day view, you can tap that same day name and number indicator (now in the upper-left corner of the screen) to bop back over into Schedule.

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/12/14-google-calendar-android-day-tap.webp?quality=50&strip=all 700w, https://b2b-contenthub.com/wp-content/uploads/2024/12/14-google-calendar-android-day-tap.webp?resize=300%2C183&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2024/12/14-google-calendar-android-day-tap.webp?resize=275%2C168&quality=50&strip=all 275w, https://b2b-contenthub.com/wp-content/uploads/2024/12/14-google-calendar-android-day-tap.webp?resize=137%2C84&quality=50&strip=all 137w, https://b2b-contenthub.com/wp-content/uploads/2024/12/14-google-calendar-android-day-tap.webp?resize=589%2C360&quality=50&strip=all 589w, https://b2b-contenthub.com/wp-content/uploads/2024/12/14-google-calendar-android-day-tap.webp?resize=409%2C250&quality=50&strip=all 409w" width="700" height="428" sizes="auto, (max-width: 700px) 100vw, 700px">So many views, such little time.

JR Raphael, IDG

Hip, hip, hoorah!

Google Calendar Android trick #16: Meet no more

Have you ever noticed how Calendar developed an irksome habit of automatically adding Google Meet links into every forkin’ event you create?

That’s fine and dandy if your event actually includes a Meet-based video meeting, but it’s pretty annoying — and potentially confusing — when your event is something that’s in person. Worse yet is when your event is virtual but in a different video meeting service, like Zoom, and then everyone you invite ends up getting both the correct link and a meaningless Meet link for the same event.

Here’s a little secret: You can put a stop to this madness. And all it takes is a handful of quick taps in your Android Calendar app.

Open up Google Calendar on your phone, tap the three-line menu icon in its upper-left corner, and scroll down to the bottom to select “Settings.”

Tap “General,” then tap “Add video conferencing” and turn the toggles into the off position for every account you’ve got connected.

Now, if you ever want to add a Meet link to an event, you can do so manually whilst creating said event. But by default, those blasted links won’t get auto-added onto every single event for you.

Google Calendar Android trick #17: Smarter silencing

This one is technically an Android feature, but it works hand in hand with Calendar and is one of the most practical options out there: the ability for your phone to automatically silence itself anytime an event from your Google Calendar is underway.

All you’ve gotta do is enable it, provided your phone-maker hasn’t removed the option: On a device running Android 16 and up without any major manufacturer modifications, look for the new Android Modes menu within your main system settings. Tap the option to create your own custom mode, give it a name like “Events,” then tap the option to “Set a schedule” and select “Calendar events” from the pop-up that appears next.

You’ve gotta dig a little to find it, but Android’s new Modes system lets you set up smart silencing for specific sorts of calendar events.

JR Raphael, Foundry

Then, you can customize to your heart’s content — selecting which specific calendar and sorts of events will trigger the mode and deciding exactly what happens when it’s active and what, if any, exceptions for its phone-silencing action should exist.

You can specify exactly which sorts of events apply and what, if any, exceptions exist to your calendar-connected phone-silencing setup.

JR Raphael, Foundry

Not seeing any of this on your device? If you’re using a phone with an older Android version, a similar sort of setup may exist within a “Do Not Disturb” area of your system settings — within the Sound section.

If you’re using a phone whose manufacturer has fudged around with this part of the operating system — as is the case, unfortunately, with Samsung and its many Android gadgets — you can set up your own standalone equivalent of the same basic concept by embracing this purpose-specific app or the brilliantly versatile MacroDroid automation creation utility.

Google Calendar Android trick #18: Rapid responses

Just like Android itself allows you to send a prewritten quick response when you’re rejecting a call, Google’s Android Calendar app can let you send a speedy note to anyone involved in an upcoming meeting — all with a couple quick taps on your phone.

To configure the feature, open up the Calendar app, tap the three-line menu icon in the upper-left corner, and select “Settings” from the menu that appears.

Next, select “General,” then scroll down until you see “Quick responses.” Tap that — and there, you’ll see four options for prewritten messages you can fire off on the fly while en route to any appointment involving multiple mammals.

loading="lazy" width="400px">The Android Calendar app’s quick responses can be surprisingly helpful.

Oddly, Calendar doesn’t let you create additional responses, but you can edit any of the default responses to make it say whatever you want. Just tap any one of ’em and then replace it with whatever text your silly ol’ heart desires.

loading="lazy" width="400px">Custom event responses? Hey, we’ll take ’em!

To put your custom quick responses to use, open up any upcoming event that has at least one other person invited. Tap the envelope icon within the “Guests” line, then tap the response you want from the list.

That’ll take you directly to a ready-to-roll email with your message in place and the recipients added in. All that’s left is to hit “Send” — and maybe let out a guffaw in delight, should such inspiration strike.

Google Calendar Android trick #19: Duplication elation

Ever find yourself needing to create a new event that’s remarkably similar to one already on your agenda? The Google Calendar app for Android has an easy way to duplicate an event and then use it as a blueprint for a new one: Just tap the event you want to emulate, tap the three-dot menu icon in its upper-right corner, and select — yup, you guessed it — “Duplicate.”

And that’s it: Your new event will show up with the original event’s info filled in and ready for to be tweaked as needed.

Doesn’t get much easier than that.

Google Calendar Android trick #20: Nicer notifications

Google Calendar’s default notification times for new events aren’t right for everyone. If you find yourself changing the setting for when an event will notify you more often than not (and/or quietly muttering creative curses every time an event notifies you earlier or later than you’d like), do yourself a favor and adjust your Calendar’s default notification times so that they work better for you.

Just head back into the Calendar app’s settings section — and this time, find the section for the Google account you want to modify and tap the “My calendar” line beneath it. That’ll give you a screen on which you can change the default notification times for standard new events as well as all-day events. You can even add multiple notifications, if you want, and change the default color for events on that calendar while you’re at it (ooh, ahh, etc).

loading="lazy" width="400px">All sorts of Android calendar notification customization options await — if you know where to look for ’em.

If you want to change the default notification time for tasks or for any secondary calendars you’ve created within a particular Google account, just find the appropriate line beneath the account’s header and select that instead of “My calendar” — then make the same sorts of modifications there.

Google Calendar Android trick #21: Weather tether

Imagine how helpful it’d be if you could see a quick glimpse at the forecast for an upcoming day whilst you’re looking at your agenda. It’d certainly be a sensible agenda addition — no?

Well, with a teensy touch of creative tinkering, you can make it happen — thanks to a thoughtful third-party add-on that integrates directly into the Google Calendar environment.

It’s a site called, fittingly enough, Weather in Calendar. Just fire it up, follow the simple steps to select your city and add its weather into your calendar, and you’ll always know what conditions to expect while you’re in the midst of planning.

Weather info, right in your calendar — what’ll they think of next?!

JR Raphael, Foundry

Google Calendar Android trick #22: Secret codewords

Last but not least: Hardly anyone knows this, but there’s a way to hack the Calendar app’s illustration system and make any of Google’s contextual graphics appear on any event you want.

The trick is simply learning the Calendar app’s secret codewords and then putting ’em to use exactly how you want.

Check out this thorough list of Google Calendar codewords, and get ready to give your calendar a whole new customized look.

And with that, your Android calendar experience is officially upgraded. Now all you’ve gotta do is get everything on your agenda accomplished — and that, my dear amigo, is squarely on your shoulders.

Get even more advanced shortcut knowledge with my free Android Shortcut Supercourse. You’ll learn tons of time-saving tricks for your phone!

Kategorie: Hacking & Security

Intel sets sights on data center GPUs amid AI-driven infrastructure shifts

4 Únor, 2026 - 11:39

Intel is making a new push into GPUs, this time with a focus on data center workloads, as the chipmaker looks to reestablish itself in a market increasingly shaped by AI-driven demand and dominated by Nvidia.

CEO Lip-Bu Tan said that after hiring a senior GPU architect, the company is working directly with customers to define requirements, signaling a more demand-driven approach as enterprises and cloud providers weigh their options for accelerated computing, according to a Reuters report.

Intel’s push comes as demand for AI accelerators reshapes data center spending, leaving enterprises and cloud providers with fewer GPU options and longer procurement timelines.

This is not Intel’s first foray into discrete graphics. The difference now is that it’s tying its GPU ambitions more closely to its data center roadmap and broader manufacturing strategy, pairing closer customer engagement with advanced process technology to gain traction.

Intel’s enterprise advantage

Intel’s tight integration of CPUs, GPUs, networking, and memory coherency gives it an edge in enterprise inference, hybrid cloud, and regulated or on-prem environments, where cost control and operational simplicity matter more than peak performance, said Manish Rawat, a semiconductor analyst at TechInsights.

In these segments, Intel has an opportunity to meaningfully limit Nvidia’s expansion and reduce customer dependence at the infrastructure level.

Supply chain reliability is another underappreciated advantage. Hyperscalers want a credible second source, but only if Intel can offer stable, predictable roadmaps across multiple product generations.

However, the company runs into a major constraint at the software layer.

“The decisive bottleneck is software,” Rawat said. “CUDA functions as an industry operating standard, embedded across models, pipelines, and DevOps. Intel’s challenge is to prove that migration costs are low, and that ongoing optimization does not become a hidden engineering tax.”

For enterprise buyers, that software gap translates directly into switching risk.

Tighter integration of Intel CPUs, GPUs, and networking could improve system-level efficiency for enterprises and cloud providers, but the dominance of the CUDA ecosystem remains the primary barrier to switching, said Charlie Dai, VP and principal analyst at Forrester.

“Even with strong hardware integration, buyers will hesitate without seamless compatibility with mainstream ML/DL frameworks and tooling,” Dai added.

Lian Jye Su, chief analyst at Omdia, said Intel will need to focus on delivering performance and software that are accepted and certified by the developer community.

While CUDA dominates with extensive libraries, tools, and developer mindshare, developers may be willing to adopt Intel GPUs if the company “can create a GPU that can provide tools and SDKs that are developer-friendly and address cutting-edge AI applications,” Su added.

From an enterprise buying perspective, this means Intel’s challenge is less about hardware ambition and more about overcoming deeply entrenched platform lock-in.

“Performance and pricing advantages alone will fall short without seamless developer tools and broad compatibility,” said Prabhu Ram, VP of the industry research group at Cybermedia Research. “Even with tight GPU-CPU-networking integration offering efficiency gains, CUDA’s entrenched lock-in remains the major barrier for enterprises that seek to reduce reliance on Nvidia.”

Rising China challenge

The rise of Chinese alternatives adds urgency to Intel’s effort to reestablish itself as a credible second source for Western enterprises.

In the Reuters interview, Tan said he was surprised to see Huawei hiring top-tier chip designers despite US restrictions on access to advanced tools, warning that China could leapfrog established players if Western companies are not careful.

“Huawei’s significance isn’t about near-term benchmark parity, it’s about trajectory,” Rawat said. “Progress on EDA independence may be slow, but directionally it’s real. High talent density is compensating for tool gaps, while parallel “good-enough” design flows steadily dilute the effectiveness of US choke points.”

According to analysts, Huawei does not need to outperform Nvidia globally to pose a strategic challenge. Locking in China’s domestic data center demand, reducing dependence on Western supply chains, and building closed-loop learning and optimization cycles at home could be enough to reshape competitive dynamics over time.

Kategorie: Hacking & Security

You’ll soon be able to block all AI features in Firefox

3 Únor, 2026 - 18:43

In December, Mozilla CEO Anthony Enzor-DeMeo attracted a lot of attention by announcing that Firefox would become a “modern AI browser.”

In order not to alienate users, the company also promised a new setting that would make it possible to turn off some or all of the AI features, including the chatbot in the sidebar, automatic translations, tab grouping, and link previews.

It is now clear that the new AI settings will be added to Firefox 148, a version that will be rolled out to the public on Feb. 24. Mozilla unveiled the feature on Monday.

The YouTube clip below shows how it is supposed to work:

Kategorie: Hacking & Security

HP CEO Enrique Lores leaves to lead PayPal

3 Únor, 2026 - 18:40

Enrique Lores, president and global CEO of HP for more than six years, is leaving the company to take up a similar position at online payment giant PayPal on March 1. In his place, on an interim basis for the time being, Bruce Broussard, a member of the company’s board of directors since 2021, has already been appointed CEO of the technology company, although the company said in a statement that it is already looking for a permanent replacement for the Spanish executive.

“As Interim CEO, Mr. Broussard will advance the company’s strategic priorities by leveraging his proven operational, financial, and business management expertise as well as his deep knowledge of HP’s business,” the statement said, noting that Broussard is an executive with more than 30 years of experience in leadership positions at publicly traded companies, such as healthcare company Humana.

Lores is no stranger to PayPal, having served on its board of directors for nearly five years and as its chairman since July 2024. The Spaniard replaces former CEO Alex Chriss, who leaves the company in a delicate situation; in fact, the group has just announced a 15% drop in revenue in its last fiscal quarter. Furthermore, Lores’ appointment, as reported in a statement issued today by PayPal, “follows a detailed evaluation conducted by the Board of Directors on the current position of the company relative to its competition and the broader industry landscape. While some progress has been made in a number of areas over the last two years, the pace of change and execution was not in line with the Board’s expectations. The Board is confident that the appointment of Lores, a seasoned executive with more than three decades of technology and commercial experience, will provide the leadership necessary to lead PayPal into its next chapter.”

American dream

Enrique Lores is one of those paradigmatic cases of the long-awaited ‘American dream’ that argues that anyone can achieve success through hard work. Lores, born in Madrid and an electrical engineer from the Polytechnic University of Valencia, from which he was awarded an honorary doctorate in 2024 for his professional career, began his professional activity at HP in 1989, 36 years ago. He started as an intern, but gradually rose to positions of importance in the fields of printing, personal systems, and business and industrial solutions.

Even in 2015, when the historic company made the decision to split into two companies, one focused on personal devices (PCs) and printing and the other, called HPE, on systems and infrastructure for businesses, it was Lores himself who led the Separation Management Office. More than six years ago, he became the president and CEO of HP. Under his leadership, he had to deal with issues such as an attempted takeover by competitor Xerox, which ultimately did not go through. In recent years, the executive has worked to adapt the PC world to advances in artificial intelligence. According to Gartner data for 2025, HP is the second-largest player in the global PC market, with a 21.5% share, surpassed only by Lenovo (27.2%) and followed by Dell Technologies (16.5%). In the field of printing, according to data from IDC, Gartner, and Canalys, the company is number one in the world.

Lores, one of the highest paid in the entire technology industry, announced his new professional direction today on his LinkedIn account. “I first joined HP 36 years ago as an intern engineer. Since then, HP has become part of my identity and my family’s history: my wife Rocío and I built our life in Palo Alto so that I could be part of the HP team, and my three children have only known life with HP,” he writes on the social platform, where he summarizes his professional career.

HP, he notes, has given him “the opportunity to grow tremendously.” A company, he emphasizes, that he defines as “its people.” “HP is a true school of talent, guided by a culture of innovation, collaboration, and shared dedication to making a positive impact. I am incredibly proud of what the HP team has achieved, and I have every confidence that Bruce Broussard and the incredible HP leadership team will propel the company forward and lead the future of work.”

He says he is looking forward to the “unique opportunity to serve as CEO of PayPal and have a lasting impact on the global payments industry.” “I am excited to get started, knowing that I am leaving behind a team that will drive HP’s success.”

Along with the changes at the top, the HP team has again reported its forecasts for the first quarter of its fiscal year and its full fiscal year 2026. The company expects diluted earnings per share under generally accepted accounting principles (GAAP) of between $0.58 and $0.66, and non-GAAP diluted earnings per share of between $0.73 and $0.81. For fiscal year 2026, HP continues to expect GAAP diluted net earnings per share of $2.47 to $2.77 and non-GAAP diluted net earnings per share of $2.90 to $3.20. For fiscal year 2026, HP expects to generate free cash flow of between $2.8 billion and $3 billion.

Kategorie: Hacking & Security

OpenAI drops Codex ‘agentic AI’ as a macOS app

3 Únor, 2026 - 18:34

OpenAI has introduced Codex, a desktop application for Macs that lets users run several AI agents simultaneously, making it suitable for much more complex tasks than ChatGPT alone.

A software agent rather than a chat tool, Codex is particularly valuable to software developers who could use the service’s support for multiple AI agents to edit code, build simple apps, manage projects or run complex automations and workflows. Agents can run for up to 30 minutes independently before returning completed code. 

“I built an app with Codex last week,” wrote OpenAI co-founder Sam Altman. “Then I started asking it for ideas for new features and at least a couple of them were better than I was thinking of.” 

OpenAI shared one project in which Codex built a racing game from one prompt. It then began iterating on the original design, identifying and adding missing features, fixing bugs, and more.

AI wants to make your code

Agentic tools for coding seem to be the emerging challenger field in AI. Anthropic’s Claude Code has been generating a lot of interest in the last few weeks. Github’s Copilot is already in daily use across the developer community; Google has Jules; Amazon offers Q; there’s, of course, Microsoft CoPilot; and there are many other developer-focused AI solutions, making this a hotly-contested space.

OpenAI evidently hopes it can turn its current media buzz into a hook to bring developers aboard its own new offering.

To mark the launch and encourage use, the company has doubled all rate limits for paid plans for two months. When it comes to desktop app releases, the company tends to target the Mac because the platform has a huge and active concentration of developers, making it a good place to build new kingdoms. (It plans to introduce Windows and Linux support in the future.) Already, OpenAI claims Codex has been used by more than 1 million developers.

Some features Codex provides

The Mac app gives users the ability to edit code, run workflows and to support agentic tasks through a ChatGPT-like simple UI. Agents are organized within separate threads and projects, which means you can move between tasks and have work running while you do something else; as the project status changes, you’ll see notes in the interface. 

You can also deploy a large assortment of pre-programmed “skills” and automations, which let you use Codex to do specific tasks. The introduction of a Mac app means Codex is able to access native app features and workflows that aren’t always easily available from within a browser. 

That’s not to say OpenAI has knocked the ball out the park with this beast, at least at this stage; initial Reddit feedback suggests several drawbacks, including speed, coding errors, poor quality output, the introduction of bugs and a lack of contextual understanding of intent in contrast to Claude. There have also been claims that Codex makes heavy use of background processes, which slows performance of the host Mac. And there continue to be concerns around security of both the app itself and the software it creates.

Facing serious competition, OpenAI will need to respond to those challenges if the company truly wants Codex to become the best available coding agent.

What’s this for Mac users? 

The most important potential is for developers who use Codex to supplement their work in Apple’s Xcode, particularly because Xcode isn’t particularly good at serving as a high-level agent command center to manage background tasks. The move also positions Codex as a robust coding companion, putting Apple Intelligence and the future Google Gemini partnership under some pressure.

OpenAI clearly also hopes to challenge Apple’s developer environment, as evidenced by Codex and by the company’s recent acquisition of Alex Codes, a third-party tool that added AI features to Xcode. Apple, however, already lets developers connect their Anthropic Claude account to Xcode to access AI-driven coding tools, and will no doubt supplement that arrangement with Gemini-based coding features eventually. 

Meanwhile, Apple continues to work on Apple Intelligence. And while it is cooperating with OpenAI for now, there is little commitment and we can all see that there will be a competitive conflict point as Apple’s former designer Jony Ive’s all-new OpenAI product edges slowly into being

You can follow me on social media! Join me on BlueSky,  LinkedIn, and Mastodon.

Kategorie: Hacking & Security

By whatever name — Moltbot, Clawd, OpenClaw — this uber AI assistant is a security nightmare

3 Únor, 2026 - 08:00

Moltbot, the cutting-edge, open-source AI “sidekick” formerly known as Clawdbot, recently rebranded as OpenClaw and is now crazy popular. It came out of nowhere to become the first viral AI agent with 70,000 GitHub Stars in a month

Its creator, Peter Steinberger, claims it’s “the AI that actually does things.”

Yeah, well there are a lot of AI chatbots and agents that do things. Maybe they do things badly, mind you, but used carefully, they can do real work. 

OpenClaw’s claim to fame is that it can take real-world actions on your behalf. Instead of living purely in the cloud, the agent runs on a user’s own hardware, often on Mac minis, but you can run it with Windows, Linux, or what have you. Under the hood, it connects to one or more large language models (LLMs) via application programming interface (API), and exposes a set of “channels” and “tools” that let it see and act across a digital life: Reading email, running shell commands, browsing the web, arranging your travel schedule, and running your apps for you.

The project began life as Clawdbot, a locally run AI agent fronted by a cartoon space lobster mascot called Clawd and wired to Anthropic’s Claude models through various “skills” and connectors. 

Via these apps, users typically talk to OpenClaw specifying natural-language tasks such as “clear my inbox,” “book my flight,” or “summarize my meetings.” Under the hood, the agent uses channels to receive those instructions and tools to execute them, wiring AI reasoning from Claude and other models into concrete actions such as checking you in for flights, generating or editing code, reconciling calendars, or spinning up scripts and dashboards.

A key part of OpenClaw’s appeal is its long-term memory. It uses files like USER.md and IDENTITY.md to store facts about you and the agent’s own persona. This enables it to remember preferences, past tasks, and ongoing projects in a way that feels more like a persistent colleague than a stateless chatbot. The surrounding ecosystem of community “skills” on GitHub extends those capabilities further, from browser automation and auto-updating to specialized workflows for documentation, research, and coding.

Sounds great! Go ahead, search online for examples of people doing neat tricks with it, and you’ll find bunches. There’s even a “social” network for the bots called Moltbook, where agents act like idiots (like most social networks I can think of) and occasionally share tips and tricks with each other. 

There are only a few itty-bitty, teeny-weeny problems with it. To do useful things like reserving your hotel room, getting your pizza delivered, or cleaning up your e-mail box, it needs your name, password, credit-card number — and all the other things any crook also wants. 

Get the picture? OpenClaw is a security black hole that’s useful right up to the point where all your important data goes bye-bye. 

As Cisco put it, “Security for OpenClaw is an option, but it is not built in.” The product documentation itself admits: “There is no ‘perfectly secure’ setup.” Granting an AI agent unlimited access to your data (even locally) is a recipe for disaster if any configurations are misused or compromised.”

In particular, as the AI-friendly security company Synk puts it, “If there’s one security concern that keeps AI security researchers up at night, it’s prompt injection. This vulnerability class represents perhaps the largest attack surface for any AI agent connected to external data sources, which, by definition, includes personal AI assistants that read emails, browse the web, and process messages from multiple channels.”

Let me spell it out for you. Using OpenClaw is stupid.

If you insist on trying it out, stick it on a locked-down virtual machine so it can’t access any — and I mean any — of your personal and work data. Do not it feed it any of your personal data. Yeah, it will be a heck of a lot less useful, but that’s the only way it will be safe to use. Otherwise, you’re just asking to be hacked, and when that happens, OpenClaw won’t be able to do much, if anything, to fix the mess.

Kategorie: Hacking & Security