Microsoft makes all new accounts passwordless by default
Apple advises how to charge your iPhone overnight correctly. Don't lie on the charger or tuck the phone under your pillow
Microsoft Sets Passkeys Default for New Accounts; 15 Billion Users Gain Passwordless Support
Google now injects hyper-personalized ads into third-party AI chats
Facing the potential loss of ad revenue after being ruled a monopoly, and looking to maintain an edge in the digital ad space as generative AI use soars, Google is reportedly now injecting ads into third-party chatbot conversations.
It’s not a surprising move, particularly given Google’s antitrust loss that could eventually lead to the breakup of its ad business (although there are likely years of appeals to come before any tangible changes). The tech giant is also in fierce competition with the likes of OpenAI, Perplexity, Meta, Microsoft, Salesforce, and a slew of others to get enterprise users to adopt its genAI platforms.
“Google knows its long-dominant search funnel is leaking,” said Julie Geller, principal research director at Info-Tech Research Group. “If conversational AI becomes the way people discover, decide, and buy, Google needs a revenue engine ready before regulators or rivals box it out.”
More than a money move
According to reports, the Google AdSense network expanded to include chatbot conversations earlier this year, after the company tested the capability last year. Specifically, according to anonymous sources, Google is working with AI search startups iAsk and Liner.
The move comes as Google grapples with the fallout of not just one, but two, antitrust trials in which it was found guilty of establishing “monopoly power.” Most recently, in April, a federal judge in Virginia ruled that the company monopolized two online advertising markets: publisher ad servers and the ad exchanges that sit between buyers and sellers. Google reportedly earned nearly $265 billion in 2024 alone through ad placement and sales.
Incorporating ads into generative AI is a “risky move at a fragile moment,” noted Ria Delamere, chief technology and product officer at Traject Data. “This isn’t just about making money. It’s about trying to hold onto ground as Google faces pressure from AI-native competitors and antitrust regulators.”
An opportunity for hyper-personalization
Of course, Google isn’t the first to do this. Meta, for one, shows ads in “private” messenger chats, David B. Wright, president and chief marketing officer at W3 Group Marketing, pointed out.
“Google and other companies inserting ads in AI chatbots are just jumping into the next available space to place ads,” he said.
Geller pointed out that, in 2022, company leaders acknowledged that about 40% of Gen Z in the US were turning to TikTok or Instagram, not Google, for local recommendations on where to eat or shop, and that behavior has only accelerated since. Incorporating ads into generative AI sets the stage for “hyper-local, persona-level targeting” which could pull advertisers back from social platforms and keep both discovery and dollars inside Google’s walls, she said.
Enterprises will be able to deliver more relevant ads “at the right time to the right person,” Wright agreed. Consider conversing with an AI chatbot as a very long-tail search, he said: Ad servers can take the data from the conversation and use it to craft hyper-targeted ads.
“In an ideal world, we’d only see the ads we want to see when we’d want to see them: when we’re at the right stage of a buying decision for something we want to buy,” he said. “This could be a step closer to that.”
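To picture the mechanics Wright describes, consider a minimal sketch of how an ad server might score a chatbot conversation against an ad inventory. Everything here — the keyword extraction, the Ad structure, the scoring — is a hypothetical illustration, not a description of Google’s actual AdSense integration.

```python
# Hypothetical sketch: matching ads to intent signals in a chat transcript.
# None of this reflects Google's actual AdSense-for-chat implementation.
from collections import Counter
from dataclasses import dataclass

STOPWORDS = {"the", "a", "an", "for", "and", "to", "is", "my", "i", "in", "of"}

@dataclass
class Ad:
    advertiser: str
    keywords: set[str]
    creative: str

def extract_keywords(transcript: list[str]) -> Counter:
    """Count non-stopword terms across the user's turns in the conversation."""
    terms: Counter = Counter()
    for turn in transcript:
        for token in turn.lower().split():
            token = token.strip(".,!?'\"")
            if token and token not in STOPWORDS:
                terms[token] += 1
    return terms

def pick_ad(transcript: list[str], inventory: list[Ad]) -> Ad | None:
    """Return the ad whose keywords best overlap the conversation's terms."""
    if not inventory:
        return None
    signals = extract_keywords(transcript)
    scored = [(sum(signals[k] for k in ad.keywords), ad) for ad in inventory]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    best_score, best_ad = scored[0]
    return best_ad if best_score > 0 else None

if __name__ == "__main__":
    chat = [
        "I'm adopting a puppy next month.",
        "What should I look for in pet insurance for a new dog?",
    ]
    ads = [
        Ad("PawCover", {"pet", "insurance", "dog", "puppy"}, "Compare pet insurance plans"),
        Ad("TechDeals", {"laptop", "gpu", "monitor"}, "Spring laptop sale"),
    ]
    print(pick_ad(chat, ads))
```

In practice, the signals would feed a far more elaborate auction and personalization pipeline, but the basic idea — conversation text as a very long-tail query — is the same.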
Preserving trust will be paramount moving forward
Experts emphasize that trust is imperative to all this. Notably, it hinges on “knowing when money changes the message,” said Geller. If users suspect an answer is ranked by revenue over relevance, “confidence tanks,” and they may migrate to more “neutral” assistants.
Google will need to flag sponsored content in real-time, explain why it surfaced the ads it did, and prove that organic answers aren’t “quietly demoted,” she said. “Anything less invites skepticism and churn.”
Delamere agreed that when ads start showing up as part of an “answer,” it gets harder to tell where information ends and influence begins. When AI is driving decisions, transparency and explainability aren’t optional, she said.
“This may help Google in the short term, but credibility is hard to earn and easy to lose,” Delamere emphasized.
From a user interface perspective, if ads distract or cause delays, consumers won’t go near the app, noted Melissa Copeland of Blue Orbit Consulting. “Consumers may try it, but if they don’t get the efficient and effective answer they are looking for, they will abandon the channel or brand.”
Meanwhile, when it comes to privacy, Geller pointed out that chat transcripts are stored and, at times, reviewed by humans, creating a “durable record of anything sensitive a user blurts out.”
“While Google offers opt-outs and deletion tools, they’re far from intuitive,” she said, emphasizing that enterprises must secure contract-level clarity on retention windows, human-review policies, and encryption standards, and should also insist on audit rights to verify compliance.
Look beyond a single-vendor strategy
This type of capability could help companies offer chatbot functionality at a lower cost, and potentially surface new insights about customers, noted Neil Chilson, head of AI policy with the Abundance Institute.
Like all ad media, he said, when considering the volume and type of ads, companies will need to balance short-term incentives to monetize against long-term financial incentives to keep customers coming back.
“Google is good at helping companies evaluate those trade-offs in other advertising channels; it will be interesting to see how that expertise translates to this new area,” said Chilson.
Info-Tech’s Geller pointed out that the search and discovery landscape is evolving too quickly for a single-vendor strategy. She advised enterprise leaders to stay agile, demand full transparency around data use and monetization, and keep an eye on how AI-driven personalization opens new micro-market opportunities.
It’s also important to build flexibility into customer experience and marketing roadmaps, as ads are “only the opening salvo,” she noted. Further, companies should watch for new revenue models from app providers, such as subscription tiers or usage fees, and potentially embrace the benefits of hyper-local targeting. At the same time, she said, “keep exit routes open and your data governance questions sharp.”
Hacker 'NullBulge' pleads guilty to stealing Disney's Slack data
Google rolls out ‘AI Mode’ to improve search results
Google is making changes to its venerable search interface so users can more naturally interact with its AI features.
“AI Mode,” a project brewing in Google’s Search Labs, will slowly roll out to general users within the company’s current search interface. (A new “AI Mode” tab will appear alongside its search box.)
“With AI Mode, you can truly ask Search anything — from complex explanations about tech and electronics to comparisons that help with really specific tasks, like assessing insurance options for a new pet,” Soufi Esmaeilzadeh, director of product management for Google’s search products, said in a blog post.
The new features migrate from the experimental version of AI Mode already being tested by users in Google’s Search Labs, where Google has also been adding capabilities to improve search results.
With AI Mode now ready for the real world, Google promises the tool will offer more than AI Overviews, which provides basic insights for questions plugged into the search box. AI Mode is based on the Gemini 2.0 AI model.
“Because our power users are finding it so helpful, we’re starting a limited test outside of Labs. In the coming weeks, a small percentage of people in the US will see the AI Mode tab in Search,” Esmaeilzadeh said.
Google’s experimental AI Mode app had been available only to a limited set of users. The app is available for Apple’s iOS and Android.
A Google spokesperson declined to offer further details about AI Mode search.
Google has been talking about integrating AI more comprehensively into search results since the day it launched Bard, its first chatbot built on a large language model. The early models hallucinated and malfunctioned, so Google has been cautious about rolling AI into its general search features.
But the company had to roll out core AI features to its search tools as soon as possible, said Jack Gold, principal analyst at J Gold Associates.
OpenAI and Anthropic have built search into their AI interfaces, and Meta recently launched its own chatbot based on Llama 4. Microsoft was already ahead of Google in integrating AI search into Bing results.
“It’s seeing increasing competition for AI from companies like Meta and OpenAI that could take some share away from them…, but it’s not clear that a competent AI model couldn’t essentially duplicate and enhance search functions for many users — see Perplexity, as an example,” Gold said.
Attaching Gemini more closely to its search tools offers Google several benefits, including feedback from users on how well the answers resonate. Enhancing search with AI could also drive down Google’s compute and infrastructure costs, because it could limit the number of searches needed before users get the results they want.
“It can better tune its models for accuracy. It also enhances their ability to target ads at users, as AI will show complementary topics that can then be advertised about,” Gold said.
The experimental AI Mode in Search Labs already delivered useful information about products and places. Google is now adding richer results and multimedia features, so searches for destinations and products will display results in a more organized format.
“Rolling out over the coming week, you’ll begin to see visual place and product cards in AI Mode with the ability to tap to get more details,” Esmaeilzadeh said.
Shopping, dining, and services results will have more options, real-time pricing, promotions, and ratings. And a new left-side panel on the desktop will make it easier to jump back into past searches on longer-running tasks and projects.
Typically, Google requires consent to record search history to understand user trends. A Google spokesperson declined to comment on whether AI Mode would require that.
Pro-Russia hacktivists bombard Dutch public orgs with DDoS attacks
Ukrainian extradited to US for Nefilim ransomware attacks
Harrods the next UK retailer targeted in a cyberattack
What is an AI PC? The power of artificial intelligence locally
Unlike traditional computers, an artificial intelligence PC, or AI PC, comes with AI capabilities built in by design. AI runs locally, right on the machine, allowing it to essentially learn, adapt, reason and problem-solve without having to connect to the cloud. This greatly increases the performance, efficiency and security of computing while enhancing user experience.
How are AI PCs different from traditional PCs?
Traditional PCs run on CPUs and GPUs (most use a GPU integrated into the CPU for everyday tasks), and their essential components include a motherboard, input devices like keyboards and mice, long-term storage, and random-access (short-term) memory (RAM). While they excel at tasks such as everyday web searching, data processing and content streaming, they typically don’t come with many built-in AI features — and they struggle to perform complex AI tasks due to limitations with latency, memory, storage and battery life.
[ Related: What is a GPU? Inside the processing power behind AI ]
AI PCs, by contrast, come preloaded with AI capabilities so that users can get started with the technology right out of the box. They feature integrated processors, accelerators and software specifically designed to handle complex AI workloads. While they also incorporate GPUs and CPUs, AI PCs contain a critical third engine: the neural processing unit (NPU).
5 things you should know about AI PCs
- Local AI processing: AI PCs handle AI tasks on-device with specialized hardware (NPUs) for improved performance, privacy, and lower latency.
- Enhanced productivity: AI PCs boost efficiency and enable new capabilities like improved collaboration, personalized experiences, and advanced content creation.
- Robust security is imperative: AI PCs require a strong security framework, including hardware, data, software, and supply chain considerations.
- The market is growing: The AI PC market is expanding rapidly, with increasing availability, decreasing costs, and a growing software ecosystem.
- Big IT impact: AI PCs will require updates to IT infrastructure and management practices, including device management, application development, network infrastructure, and cost analysis.
NPUs perform parallel computing in a way that simulates the human brain, processing large amounts of data all at once — at trillions of operations per second (TOPS). This allows the machine to perform AI tasks much faster and more efficiently than regular PCs — and locally on the machine itself.
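To make on-device dispatch concrete, the sketch below uses ONNX Runtime to prefer an NPU-backed execution provider and fall back to the CPU when one isn’t present. The model file and provider names are assumptions for illustration; which providers actually appear depends on the machine and the runtime build installed (for example, the Qualcomm or DirectML packages).

```python
# Minimal sketch: run a local model on an AI PC's NPU where available,
# falling back to CPU. Provider names vary by platform and runtime build;
# "QNNExecutionProvider" (Qualcomm) and "DmlExecutionProvider" (DirectML)
# are examples, not guarantees.
import numpy as np
import onnxruntime as ort

MODEL_PATH = "model.onnx"  # hypothetical local model file

preferred = ["QNNExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available]

session = ort.InferenceSession(MODEL_PATH, providers=providers)
print("Running on:", session.get_providers()[0])

# Feed a dummy input matching the model's first declared input
# (assumes a float32 tensor; dynamic dimensions are set to 1).
inp = session.get_inputs()[0]
shape = [dim if isinstance(dim, int) else 1 for dim in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)
outputs = session.run(None, {inp.name: dummy})
print("Output shapes:", [o.shape for o in outputs])
```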
The key components of AI PCs
The generally agreed-upon definition of an AI PC is a PC embedded with an AI chip and algorithms specifically designed to improve the experience of AI workloads across the CPU, GPU and NPU.
All of the major PC and chip vendors — Microsoft, Apple, Intel, AMD, Dell, HP, Lenovo — are building their own versions of AI PCs. Microsoft, which offers a line of Copilot+ AI PCs powered by Snapdragon X Elite and Snapdragon X Plus processors, has set a generally accepted baseline for what constitutes an AI PC. Required components include the following:
- Purpose-built hardware: An NPU works in tandem with CPUs and GPUs. NPU speed is measured in TOPS, and the machine should be able to handle at least 40 TOPS to support on-device AI workloads.
- System RAM: An AI PC must have at least 16GB of RAM. That’s the minimum; having twice as much (or more) improves performance.
- System storage: AI PCs should have a minimum of 256GB of solid-state drive (SSD) storage — preferably non-volatile memory express (NVMe) — or universal flash storage (UFS). (A quick check of these minimums is sketched below.)
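As a rough illustration of that baseline, a short script can verify the RAM and storage minimums on a candidate machine; NPU throughput (TOPS) has no portable query interface, so it still has to be read off the vendor’s spec sheet. The helper below is a hypothetical sketch, not a vendor tool, and relies on the third-party psutil package.

```python
# Hypothetical helper: check a machine against the commonly cited AI PC
# baseline (>= 16GB RAM, >= 256GB storage). NPU throughput (TOPS) cannot be
# queried portably, so compare it against the vendor's spec sheet instead.
import shutil
import psutil  # third-party: pip install psutil

GIB = 1024 ** 3
MIN_RAM_GIB = 16
MIN_STORAGE_GIB = 256

def meets_ai_pc_baseline(path: str = "/") -> bool:
    """Report total RAM and disk capacity and compare them to the baseline."""
    ram_gib = psutil.virtual_memory().total / GIB
    storage_gib = shutil.disk_usage(path).total / GIB
    print(f"RAM: {ram_gib:.1f} GiB (need >= {MIN_RAM_GIB})")
    print(f"Storage: {storage_gib:.1f} GiB (need >= {MIN_STORAGE_GIB})")
    return ram_gib >= MIN_RAM_GIB and storage_gib >= MIN_STORAGE_GIB

if __name__ == "__main__":
    print("Meets RAM/storage baseline:", meets_ai_pc_baseline())
```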
Benefits of AI PCs
AI PCs represent a movement beyond traditional static machines that require constant human input and offer these benefits:
Enhanced productivity and computing that is truly personalized
AI has the capability to learn from what it sees and evolve based on that information; it is also increasingly agentic, meaning it can perform some approved tasks autonomously.
With AI directly integrated into a device and across various workflows, users can automate routine and repetitive tasks — such as drafting emails, scheduling meetings, compiling to-do lists, getting alerts about urgent messages, or sourcing important information from websites and databases.
Beyond that, AI PCs can support advanced content creation and real-time data processing; perform financial analysis; compile reports; enhance collaboration through voice recognition, real-time translation and transcription capabilities; and provide predictive text and writing help. Over time, AI PCs can adapt to individual workflows and eventually anticipate needs and make decisions based on user habits.
As AI agents become ever more intuitive and complex, they can serve as on-device coworkers, answering intricate business questions and helping with corporate strategy and business planning.
Reduced cloud costs, reduced latency
Building, training, deploying and maintaining AI models requires significant resources, and costs can quickly add up in the cloud. Running AI locally can significantly reduce cloud costs. On-device processing can also improve speed and lower latency, as data does not need to be transferred back and forth to the cloud.
Users can perform more complex tasks on-device involving natural language processing (NLP), generative AI (genAI), multimodal AI (for more advanced content generation such as 3D modeling, video, audio) and image and speech recognition.
Enhanced security
Security is top of mind for every enterprise today, and AI PCs can help bolster cybersecurity posture. Local processing means data stays on the device (instead of being sent to cloud servers) and users have far more control over what data gets shared.
Further, AI PCs can run threat detection algorithms right on the NPU, allowing them to flag potential issues and respond more quickly. AI PCs can also be continually updated based on the latest threat intel, allowing them to adapt as cyberattackers change tactics.
Longer battery life, energy savings
While some AI workloads have been feasible on regular PCs, they quickly drain the battery because they require so much power. NPUs can help preserve battery life as users run more complex AI algorithms. They are also more sustainable: every query or prompt is estimated to use roughly one-tenth the energy of running it in the cloud.
Important considerations when evaluating AI PCs
Even as they represent the state of the art, AI PCs are not (yet) for every enterprise. There are important factors IT buyers should consider, including:
- Higher up-front cost: Because they incorporate specialized hardware (NPUs) and have higher memory and power requirements, AI PCs are generally more expensive than regular PCs (even if they save on cloud costs in the long-run).
- Increased technical knowledge: Users well-versed with everyday PCs might struggle to use built-in AI features at first, requiring more training resources. Also, more advanced technical knowledge is required to train AI models and develop applications. Further, genAI is still in its early phases, so enterprise leaders have many concerns about AI misuse (whether unintentional or not).
- Not-yet-proven business use cases beyond nifty gadgets: There has yet to be a “killer app” for AI PCs that makes them a must-have across enterprises. If a business’s primary computing requirements are everyday tasks — think email, web searching, simple data processing — AI PCs may be too much muscle, making the increased cost difficult to justify.
While the question of whether you need an AI PC might be relevant now, that won’t be the case for much longer. “The debate has moved from speculating which PCs might include AI functionality, to the expectation that most PCs will eventually integrate AI NPU capabilities,” Ranjit Atwal, senior director analyst at Gartner, said last September. “As a result, NPU will become a standard feature for PC vendors.”
Gartner forecasts AI PCs will represent 43% of all PC shipments by the end of the year, up from 17% in 2024. The demand for AI laptops is projected to be higher than that of AI desktops, with shipments of AI laptops to account for 51% of total laptops in 2025.
AI PCs – what’s there to think about?
AI PCs represent the next generation of computing, and experts predict they will soon be the only choice of laptop available to large businesses looking to refresh. But they are still in their early proving phases, and IT buyers have important considerations to keep in mind when it comes to cost, relevance and necessity.
Malicious PyPI packages abuse Gmail, websockets to hijack systems
Fake Security Plugin on WordPress Enables Remote Admin Access for Attackers
Download the ‘AI-Savvy IT Leadership Strategies’ Enterprise Spotlight
Download the May 2025 issue of the Enterprise Spotlight from the editors of CIO, Computerworld, CSO, InfoWorld, and Network World.
Apple could face criminal contempt charges over ‘Apple tax’
Apple’s salad days are over.
The company sits on the precipice of reinvention, and may become even more inward-facing in response to a damning US court judgement that may yet see it face criminal charges for contempt of court.
US District Judge Yvonne Gonzalez Rogers has ruled that Apple wilfully violated a 2021 court injunction that required it to change some of its business practices in terms of permitting developers to offer customers ways to purchase digital products outside of Apple’s App Store.
‘Will not be tolerated’
The judge told the company to stop preventing developers from sharing external purchasing options and barred it from imposing commissions on transactions made outside its stores. “Apple’s continued attempts to interfere with competition will not be tolerated,” the judge wrote in her decision, which found Apple in contempt of court and noted that Apple’s Vice President of Finance, Alex Roman, lied under oath.
Documents shared during the trial reveal “that Apple knew exactly what it was doing and at every turn chose the most anticompetitive option,” Rogers wrote.
She also noted that Apple CEO Tim Cook ignored Apple Fellow Phil Schiller’s advice that Apple should comply with the injunction, and permitted former CFO, Luca Maestri, to convince him not to do so. “Cook chose poorly,” said the judge.
An Epic win
This is just the latest instalment in the long-running dispute between Apple and the developer Epic. The latter has been engaged in a multi-year, international campaign against Apple’s so-called “Apple tax,” and Apple, at least for now, appears to have come out on the losing side.
Apple said in a statement it will appeal the judgement: “We strongly disagree with the decision. We will comply with the court’s order, and we will appeal.”
But what’s gone wrong here is Apple’s alleged contempt of court. That is a very serious finding, and it means a criminal component to the case has now emerged.
The judge referred the matter to the US Attorney for the Northern District of California who will consider pressing contempt charges, presumably against Roman and the company that employs him. Could this conceivably put Apple CEO Tim Cook in the dock?
A global challenge
It’s well-known by now that Apple has faced steady and unrelenting attacks against the business model that evolved around its App Store. The company has been subject to anti-trust litigation across the planet, and regulators seem to be settling on a position that will outlaw those practices — even in Apple’s key target market of India.
Apple will not be able to prevent app developers from offering their software and services via external stores and will not be able to take a percentage of sales made via those stores.
These victories might well please some developers for a while. But they will likely come at a cost to platform security and ease-of-use and could generate confusion as consumers find themselves drawn to multiple stores, not all of which will prove to be as heavily curated or as secure as Apple’s.
For Apple, the consequences of the case could see billions wiped off its revenue as developers find sales outside of its store, and as it sees some of its App Store-related payments disappear. This will damage Apple’s services segment, and as that part of its business is important to keeping the company sailing in a difficult consumer hardware sales sea, it means the company will need to turn its ship.
What will Apple do?
Apple is not without resources. It still has its hardware, platform, and services, and may now choose to compete on more equal terms with external stores — though how that might be implemented is uncertain. In the short term, it seems most likely Apple will need to charge developers more for access to the developer resources it provides. One way the company might do this is to offer a “Pro” tier to developers who want to sell software or services via its own store or any store, or to charge fees for APIs that enable external services.
It has to build those APIs, after all, which means they are a product it could conceivably try to profit from. (Whether this is permitted is unclear.)
For now, the direction of travel is obvious: the company must now swiftly change course to safely traverse these turbulent, shark-infested seas.
Why top SOC teams are shifting to Network Detection and Response
Claude AI Exploited to Operate 100+ Fake Political Personas in Global Influence Campaign
Inside Microsoft’s plans to reshape M365 apps with AI
Microsoft had an early software hit when it released Office for Windows 3.0 nearly 35 years ago. The graphical user interface got users away from command-line interfaces and into a new world of productivity.
Since then, the company has kept up with the changing way people work. Office 365 (now Microsoft 365) was its adaptation to the collaborative era of the cloud. And now, another major transition is under way — into the generative AI (genAI) era, with Copilot at the center of the strategy to work smarter.
Microsoft is positioning Copilot as a tool (or series of tools) for users to create, tap into and act on insights at individual, team, and organizational levels. The company also sees genAI as a way to break down the barriers between Word, Excel, PowerPoint and other apps; help users create their own apps; and declutter app user interfaces so features are easier to access.
Computerworld sat down with Microsoft’s Aparna Chennapragada, chief product officer of experiences and devices, to get an inside look at how the company is integrating genAI throughout its productivity apps.
Aparna Chennapragada, chief product officer of experiences and devices for Microsoft.
What stage are you at with Copilot in Microsoft’s productivity suites? “Our first wave brought AI to existing apps like Word, Excel, PowerPoint, Outlook, Teams for tasks like summarizing documents, prioritizing emails, recapping meetings, writing summaries, action items for meetings. As models improved at reasoning, they can now connect insights within a 200-page document way beyond human cognition.
“The second wave is M365 Copilot as a hub beyond the apps you use today. We’re building the AI hub — the productivity browser for the AI world, the one place you start and end your day. It’s a digital chief of staff, a digital assistant to stay on top of information, ask questions, and get coherent answers from the entire data of your organization and the world.”
Is there a fundamental redesign in bringing Copilot into the interface? If I’m using my desktop app, how are you thinking about that? “We’re going for ‘hub and spokes’ — that’s the model we’re going for. You have a productivity hub, a full app that gives all the power [of Copilot], and then embedded AI in each app that you work with.
“You will have an app on Windows, on Mac, on your phone, and of course the website. That is your hub. Think about this as the full power of Copilot, the best of AI that Microsoft brings to work.
“When you’re in a document or meeting, you’ll have a narrow Sidekick presence that focuses on relevant tasks that will surface information in a thoughtful way. You have the embedded AI in each of these apps you work with.
“In a Word document, you’re unlikely to ask, ‘What’s the weather?’ But if you had this almost like a narrow Sidekick presence, then you say, ‘I’m going to act on this document. Help me with this.’”
We’ve thought of Office apps as being separate. How do you break those classic Word/Excel/PowerPoint walls? How are you coupling them? “For folks in an organization, some subsets are deeply specialized. If you’re a coder, you live in GitHub; if you’re a lawyer, you live in Word; if you’re an analyst, you live in Excel. For those cases, we want to bring the AI to where you are.
“The lines are blurred and you start with your goal. That’s why we built M365 Copilot app as a hub. You start by saying, ‘I want to write a report. I want to riff on it’ — almost thought processing versus word processing.
“Then you go back and forth. The great thing here is that at the last mile, you can turn that into a Word document, a deck, an email, and all of the above.”
Is it time to change the user interface with the closer coupling of Word, Excel, PowerPoint via Copilot? Looking into the future, will these individual apps still exist or merge into something else? “We think about this like a pyramid structure. There’s going to be a broad base where every employee would use the universal UI — we call Copilot the UI for AI.
“As models get better and product making gets better, as we harness work data — we think about how to securely and compliantly bring work and world data together. You get 70% of the way there in most cases. Then you have a higher value artifact — you create it through chat.
“In M365 Copilot, we launched Pages. Think of this as a universal file format across Word, Excel, PowerPoint, and…these are AI-forward documents. Once created, you can parlay that into specialized apps and tools.”
It sounds like Pages is similar to XML that will help bring Word, Excel and PowerPoint closer. “It’s very interesting actually. We GAed this four months ago and we see people create these. Initially, when you start conversations with Copilot, you’re riffing ideas and then hit on some version. You’ve co-processed with AI and now have something of high value.
“We’re seeing people want to move that into a high-value artifact they can go back to. No one does their work in a day. You have these long-lasting things.
“Then folks ask, ‘Is this persistent? Can I share it with my team?’ So, we introduced Pages and said, ‘You don’t have to be prescriptive about the format.’ This is just a canvas as a holder, a universal AI. What is a document in the AI era? From there you can branch off into any file format.”
How are you moving beyond chat interfaces? “While chat offers zero learning curve, some things are more efficient with GUIs. We’re introducing ‘Notebooks’ as an AI canvas for projects — gathering information, iterating on outlines, and working in the background to deliver insights rather than relying on chat. This represents a shift from ‘DOS to GUI’ in the AI world.”
Will Copilot and AI features be available offline? Can I use it on local hardware if I’m somewhere with poor connectivity? “We want universal features to be accessible to everyone. We’re looking at Copilot PCs with local models running on NPUs to provide an acceptable offline feature set.
“We’re working on three key factors: retaining feature power to be useful with spotty connections, managing the model footprint, and creating a seamless orchestration layer that switches between offline capabilities and cloud resources when you’re back online.
“All of these we’re working on. The idea is that as a user, you should have access to intelligence and products wherever you are, at least in these specific ecosystems.”
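One way to picture the orchestration layer Chennapragada describes is as a thin router that sends a prompt to a local model when the machine is offline and to a cloud model otherwise. The sketch below is purely illustrative: the connectivity probe and the model callables are placeholder assumptions, not Microsoft’s implementation.

```python
# Illustrative sketch of an offline/online orchestration layer: route a prompt
# to a local model when there is no connectivity, otherwise to a cloud model.
# All callables here are placeholders, not Microsoft APIs.
import socket
from typing import Callable

def is_online(host: str = "8.8.8.8", port: int = 53, timeout: float = 1.5) -> bool:
    """Cheap connectivity probe: can we open a TCP socket to a public resolver?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def route_prompt(
    prompt: str,
    local_model: Callable[[str], str],
    cloud_model: Callable[[str], str],
) -> str:
    """Prefer the cloud model when online; fall back to the on-device model."""
    if is_online():
        try:
            return cloud_model(prompt)
        except Exception:
            pass  # cloud hiccup: degrade gracefully to the local model
    return local_model(prompt)

if __name__ == "__main__":
    # Stand-in models for demonstration only.
    local = lambda p: f"[local NPU model] summary of: {p[:40]}"
    cloud = lambda p: f"[cloud model] detailed answer to: {p[:40]}"
    print(route_prompt("Summarize this week's project status notes.", local, cloud))
```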
When you talk about offline productivity, is it a localized model like Microsoft’s homegrown Phi Silica? Or is the AI built into Windows via a driver interface like DirectML or similar? “It’s an ensemble of local models. The first era of local models was small footprint models. What we are doing is our teams are also looking at post-training these models for specific use cases.
“For example, for writing-versus-analysis-versus-something else or image creation, we’re making sure these things fit the needs of core users for that situation. We use pipeline, we use open-source models.”
It seems like users might create their own apps without seeing Excel or Word working in the background. Is that capability coming — bringing GitHub’s power to users — and when? “You have a good prediction model of our roadmap. Once we can generate code, you can create lightweight apps on the fly.
“For example, in the Analyst agent we just rolled out, which is a data scientist in a box, I asked it to analyze F1 stats from 2024. I didn’t provide a dataset — I just said search the web for World Bank data and NBA stats, then tell me what’s insightful.
“While that’s an idle pursuit for me, imagine turning that towards work. Internally, we’ve used it to connect to sales data and identify anomalies in scatter plots. As a customer, you don’t care if it’s writing an app, using Excel, or something else — that’s how we’re designing it.
“We see Copilot as the browser with specialized agents working for you. Then there’s a whole slew of invisible tools. We’re building many Office assets as tools so you can simply request a properly cited Word document in a particular format without thinking about the underlying technology.”
Outlook is a daily starting point for many users, but it is confusing — it seems separate from 365 and comes in the Classic and New versions. How are you planning to integrate Copilot there? “Watch this space. After this meeting, my next session is with the Outlook team where we are going deep into useful scenarios.
“If you think about what’s happening today, even [CEO] Satya [Nadella] mentioned this — no one grew up dreaming they’d wake up every morning to sort through 500 emails and mark 30 as spam. These are gears and mechanisms we use to get work done.
“One of our motivating principles is recognizing that modern work involves 30%-40% core productive work and then a lot of work to manage the work itself — coordination, communication, figuring stuff out, scheduling meetings. Right now, we’re all turning these gears manually, but AI should be able to handle much of that overhead for us.
“We’re looking at how Copilot can help with Outlook by advising on how you’re using time throughout your week, highlighting the most important items requiring your attention, and in some cases helping draft response points or summaries of complex email threads.
“The goal is to remove that administrative overhead so you can focus on the meaningful work rather than the work about work.”
Is your goal with AI to ultimately declutter interfaces like Word and Excel that have too many icons? I get confused sometimes when using Classic Outlook or Word. “One-hundred percent. Today, there’s such depth in these apps from years of adding valuable features for enterprise users. Traditionally, we faced a trade-off between learning curve and power usage — feature discovery is hard, but once you do, it’s very powerful.
“With AI, we can eliminate that trade-off. You’ll still need to learn how to ask for things, but it’s much easier than learning tools from the ground up.
“Power users can keep their muscle memory while we democratize that power to everyone through Copilot, making those accumulated enterprise features accessible to everybody.”
How are you approaching third-party plugins like Adobe Express? And regarding security, how do you manage appropriate data access with AI handling so much information? ”For plugins, we think there will be high-value tools Microsoft builds, but also many created by others. We’ve seen over 100,000 agents and flows built with Microsoft Copilot Studio already. We aim to provide an agent store where you can discover, install, and pin these tools — making it not just a product but a platform. Some plugins might serve just three people, while others like Adobe Express will reach millions.
“Regarding security, our unique responsibility has three aspects: First, bringing work data together with world knowledge, like combining latest competitor news with internal company data for sales prep. Second, integrating into existing workflows people already use. And third — most importantly — doing this securely, with privacy preservation and compliance.
“Our Copilot control system gives IT admins a complete view of all activities and deployed agents, with controls to manage everything and strong guarantees on security and compliance.”
New Research Reveals: 95% of AppSec Fixes Don’t Reduce Risk
DarkWatchman, Sheriff Malware Hit Russia and Ukraine with Stealth and Nation-Grade Tactics
Half of the apps on Google Play disappeared within a year. Finding quality content is now easier
