RSS aggregator
Ubuntu Linux impacted by decade-old 'needrestart' flaw that gives root
You’ll soon be able to clone your voice to speak other languages in Teams
In connection with this year’s Ignite conference, Microsoft has unveiled a new interpretation tool that will be added to Teams in the spring. What makes the voice cloning tool — currently called “Interpreter In Teams” — special is that users will be able to use their own voice to speak in other languages in real time.
According to TechCrunch, users will need a Microsoft 365 subscription to access the technology.
Initially, the tool will support nine languages: English, French, Italian, Portuguese, Spanish, German, Japanese, Korean and Mandarin. More languages are likely to be added over time.
Aliens could be hiding in parallel universes, scientists claim
Mega US healthcare payments network restores system 9 months after ransomware attack
Still reeling from its February ransomware attack, Change Healthcare confirms its clearinghouse services are back up and running, almost exactly nine months since the digital disruption began.…
We’ve picked the best mobile games you can play this year. Most of them are free | Christmas
Google's AI bug hunters sniff out two dozen-plus code gremlins that humans missed
Google's OSS-Fuzz project, which uses large language models (LLMs) to help find bugs in code repositories, has now helped identify 26 vulnerabilities, including a critical flaw in the widely used OpenSSL library.…
El Capitan is the most powerful supercomputer in the world (TOP500 11/2024)
Leveling Up Fuzzing: Finding more vulnerabilities with AI
Recently, OSS-Fuzz reported 26 new vulnerabilities to open source project maintainers, including one vulnerability in the critical OpenSSL library (CVE-2024-9143) that underpins much of internet infrastructure. The reports themselves aren’t unusual—we’ve reported and helped maintainers fix over 11,000 vulnerabilities in the 8 years of the project.
But these particular vulnerabilities represent a milestone for automated vulnerability finding: each was found with AI, using AI-generated and enhanced fuzz targets. The OpenSSL CVE is one of the first vulnerabilities in a critical piece of software that was discovered by LLMs, adding another real-world example to a recent Google discovery of an exploitable stack buffer underflow in the widely used database engine SQLite.
This blog post discusses the results and lessons over a year and a half of work to bring AI-powered fuzzing to this point, both in introducing AI into fuzz target generation and expanding this to simulate a developer’s workflow. These efforts continue our explorations of how AI can transform vulnerability discovery and strengthen the arsenal of defenders everywhere.
In August 2023, the OSS-Fuzz team announced AI-Powered Fuzzing, describing our effort to leverage large language models (LLMs) to improve fuzzing coverage and find more vulnerabilities automatically—before malicious attackers could exploit them. Our approach was to use the coding abilities of an LLM to generate more fuzz targets, which are similar to unit tests that exercise relevant functionality to search for vulnerabilities.
The ideal solution would be to completely automate the manual process of developing a fuzz target end to end:
Drafting an initial fuzz target.
Fixing any compilation issues that arise.
Running the fuzz target to see how it performs, and fixing any obvious mistakes causing runtime issues.
Running the corrected fuzz target for a longer period of time, and triaging any crashes to determine the root cause.
Fixing vulnerabilities.
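The artifact at the center of steps 1–4 is the fuzz target itself. A minimal libFuzzer-style sketch of what such a target looks like (parse_record here is a hypothetical stand-in for a real project API, not code from OSS-Fuzz):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical function-under-test: parses a tiny length-prefixed record.
 * Stands in for whatever project API a real harness would exercise. */
int parse_record(const uint8_t *data, size_t size) {
    if (size < 1) return -1;
    size_t len = data[0];
    if (len > size - 1) return -1;   /* reject truncated payloads */
    uint8_t payload[256];
    memcpy(payload, data + 1, len);  /* bounded copy of the payload */
    return (int)len;
}

/* A minimal libFuzzer-style entry point: hand the fuzzer's raw bytes to
 * the function-under-test and let the sanitizers flag any misbehavior. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;   /* non-crashing inputs always return 0 */
}
```

The fuzzing engine repeatedly calls LLVMFuzzerTestOneInput with mutated inputs; the target's only job is to route those bytes into interesting code paths.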
In August 2023, we covered our efforts to use an LLM to handle the first two steps. We were able to use an iterative process to generate a fuzz target with a simple prompt including hardcoded examples and compilation errors.
In January 2024, we open sourced the framework that we were building to enable an LLM to generate fuzz targets. By that point, LLMs were reliably generating targets that exercised more interesting code coverage across 160 projects. But there was still a long tail of projects where we couldn’t get a single working AI-generated fuzz target.
To address this, we’ve been improving the first two steps, as well as implementing steps 3 and 4.
We’re now able to automatically gain more coverage in 272 C/C++ projects on OSS-Fuzz (up from 160), adding 370k+ lines of new code coverage. The top coverage improvement in a single project was an increase from 77 lines to 5434 lines (a 7000% increase).
This led to the discovery of 26 new vulnerabilities in projects on OSS-Fuzz that already had hundreds of thousands of hours of fuzzing. The highlight is CVE-2024-9143 in the critical and well-tested OpenSSL library. We reported this vulnerability on September 16 and a fix was published on October 16. As far as we can tell, this vulnerability has likely been present for two decades and wouldn’t have been discoverable with existing fuzz targets written by humans.
Another example was a bug in the project cJSON, where even though an existing human-written harness existed to fuzz a specific function, we still discovered a new vulnerability in that same function with an AI-generated target.
One reason that such bugs could remain undiscovered for so long is that line coverage is not a guarantee that a function is free of bugs. Code coverage as a metric isn’t able to measure all possible code paths and states—different flags and configurations may trigger different behaviors, unearthing different bugs. These examples underscore the need to continue to generate new varieties of fuzz targets even for code that is already fuzzed, as has also been shown by Project Zero in the past (1, 2).
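A toy illustration of that point (hypothetical code, not from cJSON or OpenSSL): a fuzz target that only ever passes strict == 1 can execute every line of the function below, yet never trigger the bug, because the crash requires a flag-and-input combination that line coverage does not capture.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical decoder with a mode flag. Fuzzing only strict == 1
 * inputs reaches 100% line coverage here, yet the bug survives:
 * the out-of-bounds read needs strict == 0 AND an empty input. */
int checksum(const uint8_t *data, size_t size, int strict) {
    int sum = 0;
    if (strict && size == 0)
        return -1;               /* strict mode rejects empty input */
    for (size_t i = 0; i < size; i++)
        sum += data[i];
    sum += data[size - 1];       /* bug: reads data[-1] when size == 0 */
    return sum;
}
```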
To achieve these results, we’ve been focusing on two major improvements:
Automatically generate more relevant context in our prompts. The more complete and relevant the information we can provide the LLM about a project, the less likely it is to hallucinate the missing details in its response. This meant providing more accurate, project-specific context in prompts, such as function and type definitions, cross-references, and existing unit tests for each project. To generate this information automatically, we built new infrastructure to index projects across OSS-Fuzz.
Emulate a developer’s workflow. LLMs turned out to be highly effective at emulating a typical developer’s entire workflow of writing, testing, and iterating on a fuzz target, as well as triaging the crashes it finds. This made it possible to automate more parts of the fuzzing workflow, and the additional iterative feedback in turn produced a greater number of correct, higher-quality fuzz targets.
The workflow in action
Our LLM can now execute the first four steps of the developer’s process (with the fifth soon to come).
1. Drafting an initial fuzz target
A developer might check the source code, existing documentation and unit tests, and usages of the target function when drafting an initial fuzz target. An LLM can fill this role if we provide a prompt with this information and ask it to come up with a fuzz target.
Prompt:
Your goal is to write a fuzzing harness for the provided function-under-test signature using <code>LLVMFuzzerTestOneInput</code>. It is important that the provided solution compiles and actually calls the function-under-test specified by the function signature:
<function signature>
unsigned char * buffer_append_base64_decode(buffer *, const char *, size_t, base64_charset)
</function signature>
Here is the source code of the function being tested:
<code>
unsigned char* buffer_append_base64_decode(buffer *out, const char* in, size_t in_length, base64_charset charset) {
const size_t reserve = 3*(in_length/4) + 3;
unsigned char * const result = (unsigned char *)
buffer_string_prepare_append(out, reserve);
const size_t out_pos =
li_base64_dec(result, reserve, in, in_length, charset);
buffer_commit(out, out_pos);
return (out_pos || !in_length) ? result : NULL;
}
</code>
Here is the source code for functions which reference the function being tested:
<code>
…
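A draft the LLM produces for a prompt like this typically wraps the raw fuzz input into the function’s arguments. The sketch below keeps that shape but substitutes minimal stand-ins for lighttpd’s buffer type and base64 decoder, so it is illustrative rather than the real harness:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Stand-ins for the project types the prompt references; a real draft
 * would include lighttpd's buffer.h and call li_base64_dec instead. */
typedef struct { unsigned char *ptr; size_t used; } buffer;

static size_t stub_base64_dec(unsigned char *out, size_t out_sz,
                              const char *in, size_t in_len) {
    size_t n = in_len < out_sz ? in_len : out_sz;  /* placeholder decode */
    memcpy(out, in, n);
    return n;
}

unsigned char *append_decode(buffer *b, const char *in, size_t in_len) {
    size_t reserve = 3 * (in_len / 4) + 3;         /* mirrors the target */
    unsigned char *dst = realloc(b->ptr, b->used + reserve);
    if (!dst) return NULL;
    b->ptr = dst;
    size_t n = stub_base64_dec(dst + b->used, reserve, in, in_len);
    b->used += n;
    return (n || !in_len) ? dst : NULL;
}

/* The drafted target: hand the fuzzer's bytes straight to the decoder. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    buffer b = {0};
    append_decode(&b, (const char *)data, size);
    free(b.ptr);
    return 0;
}
```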
2. Fixing any compilation issues that arise.
Once a developer has a candidate target, they would try to compile it and look at any compilation issues that arise. Again, we can prompt an LLM with details of the compilation errors so it can provide fixes.
Prompt:
Below is the error to fix:
The code has the following build issues:
<error>
server_fuzzer.cc:12:46: error: only virtual member functions can be marked 'override'
12 | bool OnData(const char* data, size_t size) override {
| ^~~~~~~~
server_fuzzer.cc:22:39: error: no type named 'MultiPartFile' in namespace 'httplib'
22 | const httplib::MultiPartFile& file) override {
| ~~~~~~~~~^
server_fuzzer.cc:12:27: warning: unused parameter 'data' [-Wunused-parameter]
12 | bool OnData(const char* data, size_t size) override {
| ^
server_fuzzer.cc:37:5: error: use of undeclared 'operator()'
37 | operator()(request_data.data(), receiver);
| ^
server_fuzzer.cc:27:23: warning: private field 'provider_' is not used [-Wunused-private-field]
27 | FuzzedDataProvider& provider_;
| ^
</error>
Example of compilation errors that an LLM was able to fix
3. Running the fuzz target to see how it performs, and fixing any obvious mistakes causing runtime issues.
Once all compilation errors are fixed, a developer would try running the fuzz target for a short period of time to see if there were any mistakes that led it to instantly crash, suggesting an error with the target rather than a bug discovered in the project.
The following is an example of an LLM fixing a semantic issue with the fuzzing setup:
LLM Response:
<explanation>
The issue is that `sqlite3_blob_open` requires the table to exist before you can open a blob on it. The fix ensures that the table exists by executing a `CREATE TABLE IF NOT EXISTS` statement before calling `sqlite3_blob_open`.
Here's a breakdown of the changes:
1. **Create Table Query:** A string `createTableQuery` is constructed to hold the SQL statement for creating the table. It uses the fuzzed values for the table and column names.
2. **Prepare Statement:** `sqlite3_prepare_v2` is used to prepare the SQL statement from `createTableQuery`. This compiles the SQL into a byte-code program that SQLite can execute.
3. **Execute Statement:** `sqlite3_step` executes the prepared statement. Since it's a `CREATE TABLE` statement, it doesn't return any data, but it creates the table if it doesn't exist.
4. **Finalize Statement:** `sqlite3_finalize` destroys the prepared statement object when it's no longer needed.
By adding these steps before calling `sqlite3_blob_open`, we ensure that the table exists, preventing the null pointer access issue.
</explanation>
4. Running the corrected fuzz target for a longer period of time, and triaging any crashes.
At this point, the fuzz target is ready to run for an extended period of time on a suitable fuzzing infrastructure, such as ClusterFuzz.
Any discovered crashes would then need to be triaged, to determine the root causes and whether they represented legitimate vulnerabilities (or bugs in the fuzz target). An LLM can be prompted with the relevant context (stacktraces, fuzz target source code, relevant project source code) to perform this triage.
Prompt: Given the following crash report, fuzz driver code and relevant project function code, analyze the cause of the crash.
First, only answer “Crash is caused by bug in fuzz driver” or “Crash is caused by bug in project”. Second, offer a succinct and to-the-point analysis.
Below is crash report:
<log>
AddressSanitizer: heap-use-after-free on address 0x502000000630 at pc 0x560036939242 bp 0x7ffd65dca850 sp 0x7ffd65dca010
READ of size 1 at 0x502000000630 thread T0
SCARINESS: 40 (1-byte-read-heap-use-after-free)
#0 0x560036939241 in __asan_memcpy /src/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cpp:63:3
#1 0x56003697d30b in EmitLiteral<false> /src/snappy/snappy.cc:661:5
#2 0x56003697d30b in snappy::internal::CompressFragment(char const*, unsigned long, char*, unsigned short*, int) /src/snappy/snappy.cc:954:10
In this example, the LLM correctly determines this is a bug in the fuzz target, rather than a bug in the project being fuzzed.
LLM Response: Crash is caused by bug in fuzz driver.
The fuzz driver creates a vector of `iovec` structures, each pointing to a chunk of memory owned by the `FuzzedDataProvider`. However, the `data_chunk` vector, which holds the actual data pointed to by `iov_base`, is deallocated at the end of each fuzzer iteration when it goes out of scope. This leaves the `iov_base` pointers dangling. When `snappy::CompressFromIOVec` tries to access the memory pointed to by these dangling pointers, a heap-use-after-free occurs.
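The lifetime mistake described above is a classic harness pattern: an iovec array whose iov_base pointers outlive the buffers they reference. A minimal sketch of the corrected pattern (hypothetical code, not the actual snappy driver; consume_iovec stands in for snappy::CompressFromIOVec):

```c
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

/* Stand-in for the library call that reads through the iovec array. */
static size_t consume_iovec(const struct iovec *iov, int iovcnt, char *out) {
    size_t total = 0;
    for (int i = 0; i < iovcnt; i++) {
        memcpy(out + total, iov[i].iov_base, iov[i].iov_len);
        total += iov[i].iov_len;
    }
    return total;
}

/* Correct pattern: the chunk storage outlives every iov_base pointer.
 * The buggy driver freed its chunks before the library call ran, which
 * is exactly the heap-use-after-free ASan reported. */
size_t split_and_consume(const char *data, size_t size, char *out) {
    char *chunks = malloc(size ? size : 1);   /* storage kept alive ... */
    if (!chunks) return 0;
    memcpy(chunks, data, size);
    struct iovec iov[2];
    size_t half = size / 2;
    iov[0].iov_base = chunks;        iov[0].iov_len = half;
    iov[1].iov_base = chunks + half; iov[1].iov_len = size - half;
    size_t n = consume_iovec(iov, 2, out);
    free(chunks);                    /* ... until after the call returns */
    return n;
}
```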
5. Fixing vulnerabilities.
The goal is to fully automate this entire workflow by having the LLM generate a suggested patch for the vulnerability. We don’t have anything we can share here today, but we’re collaborating with various researchers to make this a reality and look forward to sharing results soon.
Up next, we’re working on several fronts. Improving automated triaging: getting to a point where we’re confident enough in the results to not require human review. This will help automatically report new vulnerabilities to project maintainers. There are likely more vulnerabilities hiding in our results beyond the 26 we’ve already reported upstream.
Agent-based architecture: which means letting the LLM autonomously plan out the steps to solve a particular problem by providing it with access to tools that enable it to get more information, as well as to check and validate results. By providing LLM with interactive access to real tools such as debuggers, we’ve found that the LLM is more likely to arrive at a correct result.
Integrating our research into OSS-Fuzz as a feature: to achieve a more fully automated end-to-end solution for vulnerability discovery and patching. We hope OSS-Fuzz will be useful for other researchers to evaluate AI-powered vulnerability discovery ideas and ultimately become a tool that will enable defenders to find more vulnerabilities before they get exploited.
For more information, check out our open source framework at oss-fuzz-gen. We’re hoping to continue to collaborate on this area with other researchers. Also, be sure to check out the OSS-Fuzz blog for more technical updates.
Microsoft confirms game audio issues on Windows 11 24H2 PCs
Apple admins: Update your hardware now
Among the first things Apple IT admins woke up to this morning was news of a pair of actively exploited zero-day attacks in the wild targeting Intel Macs, iPhones, iPads, and even Vision Pro users. Apple has already released software patches for the flaws, which is why the second thing admins realized is that they must rush through any necessary software verification process required before expediting installation of the update.
In these days of remotely managed devices and increasingly effective MDM systems, that’s far less a problem than it was in the past. You can usually make a policy change and push out updates to all your managed devices quickly.
Companies that don’t use these systems, or those that have employees using their own personal devices to access potentially sensitive internal data, must work harder to convince users to install security updates. So, what can they tell people about the latest threat that might help motivate them to install the patch today?
Why you should update immediately
First, Apple says it believes the attack is being actively used, which means any Intel system — including systems used by other people you interact with — is a potential target. “Apple is aware of a report that this issue may have been exploited,” the company said.
Second, it slips in using flaws in software you use daily, including JavaScript and WebKit, the rendering engine that powers the Safari browser on Apple devices. In other words, everyone using Apple’s devices is a potential target.
Finally — and perhaps best of all — Apple has already shipped a fix for the problem, maintaining its reputation for being ahead of threats, rather than echoing the approach taken by some other platforms and racing to keep up with attacks. It’s almost as if Apple’s systems remain more secure for a reason. The company addressed 20 zero-day attacks in 2023 and has guarded against just six so far this year.
Apple also shipped security patches for iOS 17 and iPad OS 17 systems and patches for Safari on macOS Ventura and Sonoma.
What the experts say
Michael Covington, vice president for portfolio strategy at Jamf, thinks all users should update at once.
“While Apple has warned that the vulnerabilities, also present in macOS, may be actively exploited on Intel-based systems, we recommend updating any device that is at risk,” he said. “With attackers potentially exploiting both vulnerabilities, it is critical that users and mobile-first organizations apply the latest patches as soon as they are able.”
What are these attacks?
The attack vector makes use of two vulnerabilities found in macOS Sequoia JavaScriptCore (CVE-2024-44308) and WebKit (CVE-2024-44309). The first lets attackers achieve remote code execution (RCE) through maliciously crafted web content; the second lets attackers engage in cross-site scripting attacks.
As admins will recognize, RCE exploits can enable attackers to install malware surreptitiously on infected machines, perform denial-of-service attacks, or access sensitive information, while a cross-site scripting attack can help hackers grab personal data for identity theft and other nefarious ends. No one wants to be a victim of either form of attack.
Who is using these attacks?
No information has been shared about who has been using these flaws in their attacks. That said, it’s worth noting that the flaws were identified by researchers at Google’s Threat Analysis Group (TAG), which works to counter government-backed attacks. That suggests that whoever has been weaponizing these vulnerabilities is connected to a national entity of some kind.
If that is the case, recent reports from TAG suggest an upsurge in such attacks, so users in some industries and professions might want to consider locking down their devices with Apple’s Lockdown Mode to minimize their attack surface. IT, meanwhile, should review security compliance, particularly among those using older iPhones, iPads, or Intel Macs.
New Ghost Tap attack abuses NFC mobile payments to steal money
Microsoft Ignite 2024 — get the latest news and insights
Microsoft Ignite 2024 kicks off in Chicago and runs Nov. 19-22. If you can’t make it to Chicago, no worries. First, the physical event is sold out, according to the Ignite event page. Second, it’s a hybrid event, so you can attend Ignite virtually.
Whether you’re there physically or online, expect to learn more about the latest technologies from Microsoft — everything from artificial intelligence (AI) to cloud computing, security, productivity tools, and more. In the keynote address, Microsoft CEO Satya Nadella and Microsoft leaders — including Charlie Bell, executive vice president of Microsoft Security, and Scott Guthrie, executive vice president of the Microsoft Cloud + AI Group — will share how the company is creating new opportunities across its platforms in this rapidly evolving era of AI.
You can also network with industry experts and Microsoft’s team, IT leaders, and other tech enthusiasts; gain hands-on experience and learn from experts at technical sessions; and learn about new products and services. (Microsoft often announces new products and features at Ignite.)
Here are highlights from the 2024 show, followed by a look back at some of our previous Ignite coverage, as well as recent articles that touch on related topics. Remember to check this page often for more on Ignite 2024.
Microsoft Ignite 2024 news and insights
Microsoft upgrades Copilot Studio agent builder tools
Nov. 20, 2024: Microsoft unveiled new Copilot Studio features aimed at both expanding the functionality of AI agents created with the application and improving the accuracy of outputs. Customers will be able to connect Copilot Studio agents to third-party apps, and tools for building autonomous agents are now available in a public preview.
Microsoft partners with industry leaders to offer vertical SLMs
Nov. 20, 2024: Teaming up with industry partners such as Bayer and Rockwell Automation, Microsoft is adding pre-trained small language models to its Azure AI catalog aimed at highly specialized use cases.
Microsoft brings automated ‘agents’ to M365 Copilot
Nov. 19, 2024: Microsoft has introduced a new tool in Microsoft 365 Copilot to automate repetitive tasks, part of a drive to make the genAI assistant more useful to users. Copilot Actions features a simple trigger-and-action interface that Microsoft hopes will make the workflow automations accessible to a wide range of workers.
Microsoft extends Entra ID to WSL, WinGet
Nov. 19, 2024: Microsoft has added new security features to Windows Subsystem for Linux (WSL) and the Windows Package Manager (WinGet), including integration with Microsoft Entra ID (formerly Azure Active Directory) for identity-based access control. The goal is to enable IT admins to more effectively manage the deployment and use of these tools in enterprises.
Microsoft looks to genAI, exposure management, and new bug bounties to secure enterprise IT
Nov. 19, 2024: Microsoft announced a host of new security measures at its annual Ignite conference, with the goal of strengthening its existing data protection, endpoint security, and extended threat detection and response capabilities. Notable improvements include the introduction of a dedicated exposure management tool, an upgrade to insider risk management (IRM) tailored to genAI usage, new data loss prevention (DLP) features, and integration of genAI into security operations center (SOC) processes.
Microsoft and Atom Computing claim breakthrough in reliable quantum computing
Nov. 19, 2024: The companies have announced what they claim is a significant step forward in reliable quantum computing, unveiling a commercial quantum machine built with 24 entangled logical qubits. The system, achieved through a combination of Atom Computing’s neutral-atom hardware and Microsoft’s qubit-virtualization technology, aims to address the critical challenge of error detection and correction in quantum computation.
Microsoft adds major upgrades to Power Apps at Ignite
Nov. 19, 2024: The company announced a series of low-code product enhancements, targeted at developers, that ranged from new agent-building capabilities in Power Apps and Power Pages to new AI and governance features in the codeless automation tool Microsoft Power Automate.
Microsoft’s Windows 365 Link is a thin client device for shared workspaces
Nov. 19, 2024: Microsoft will start selling a thin client device that lets workers boot directly to Windows 365 “in seconds,” the company announced on Tuesday.
Microsoft reimagines Fabric with focus on AI
Nov. 19, 2024: The company announced a slate of enhancements to its data analytics platform, including Fabric Databases, which can provision auto-optimizing and auto-scaling AI databases in seconds.
Microsoft rebrands Azure AI Studio to Azure AI Foundry
Nov. 19, 2024: The toolkit for building generative AI applications has been packaged with new updates to form the Azure AI Foundry service.
From MFA mandates to locked-down devices, Microsoft posts a year of SFI milestones at Ignite
Nov. 19, 2024: The company shared a progress report on its Secure Future Initiative (SFI), introduced a year ago, which included significant measures such as enforcing multifactor authentication (MFA) by default for new tenants, isolating close to 100,000 work devices under conditional access policies, and blocking GitHub secrets from exposure.
Previous Microsoft Ignite coverage
Microsoft to launch autonomous AI at Ignite
Oct. 21, 2024: Microsoft will let customers build autonomous AI agents that can be configured to perform complex tasks with little or no input from humans. Microsoft announced that tools to build AI agents in Copilot Studio will be available in a public beta that begins at Ignite on Nov. 19, with pre-built agents rolling out to Dynamics 365 apps in the coming months.
Microsoft Ignite 2023: 11 takeaways for CIOs
Nov. 15, 2023: Microsoft’s 2023 Ignite conference might as well be called AIgnite, with over half of the almost 600 sessions featuring AI in some shape or form. Generative AI (genAI), in particular, is at the heart of many of the product announcements Microsoft is making at the event, including new AI capabilities for wrangling large language models (LLMs) in Azure, new additions to the Copilot range of genAI assistants, new hardware, and a new tool to help developers deploy small language models (SLMs) too.
Microsoft partners with Nvidia, Synopsys for genAI services
Nov. 16, 2023: Microsoft has announced that it is partnering with chipmaker Nvidia and chip-designing software provider Synopsys to provide enterprises with foundry services and a new chip-design assistant. The foundry services from Nvidia will be deployed on Microsoft Azure and will combine three of Nvidia’s elements — its foundation models, its NeMo framework, and Nvidia’s DGX Cloud service.
As Microsoft embraces AI, it says sayonara to the metaverse
Feb. 23, 2023: It wasn’t just Mark Zuckerberg who led the metaverse charge by changing Facebook’s name to Meta. Microsoft hyped it as well, notably when CEO Satya Nadella said, “I can’t overstate how much of a breakthrough this is,” in his keynote speech at Microsoft Ignite in 2021. Now, tech companies are much wiser, they tell us. It’s AI at the heart of the coming transformation. The metaverse may be yesterday’s news, but it’s not yet dead.
Microsoft Ignite in the rear-view mirror: What we learned
Oct. 17, 2022: Microsoft treated its big Ignite event as more of a marketing presentation than a full-fledged conference, offering up a variety of announcements that affect Windows users, as well as large enterprises and their networks. (The show was a hybrid affair, with a small in-person option and online access for those unable to travel.)
Related Microsoft coverage
Microsoft’s AI research VP joins OpenAI amid fight for top AI talent
Oct. 15, 2024: Microsoft’s former vice president of genAI research, Sebastien Bubeck, left the company to join OpenAI, the maker of ChatGPT. Bubeck, a 10-year veteran at Microsoft, played a significant role in driving the company’s genAI strategy with a focus on designing more efficient small language models (SLMs) to rival OpenAI’s GPT systems.
Microsoft brings Copilot AI tools to OneDrive
Oct. 9, 2024: Microsoft’s Copilot is now available in OneDrive, part of a wider revamp of the company’s cloud storage platform. Copilot can now summarize one or more files in OneDrive without needing to open them first; compare the content of selected files across different formats (including Word, PowerPoint, and PDFs); and respond to questions about the contents of files via the chat interface.
Microsoft wants Copilot to be your new AI best friend
Oct. 9, 2024: Microsoft’s Copilot AI chatbot underwent a transformation last week, morphing into a simplified pastel-toned experience that encourages you…to just chat. “Hey Chris, how’s the human world today?” That’s what I heard after I fired up the Copilot app on Windows 11 and clicked the microphone button, complete with a calming wavy background. Yes, this is the type of banter you get with the new Copilot.
Strava has changed its API terms of use. It could kill off a number of useful services
Download our Microsoft Copilot for Writing Cheat Sheet
Download the PDF Computerworld Cheat Sheet today.
D-Link tells users to trash old VPN routers over bug too dangerous to identify
Owners of older models of D-Link VPN routers are being told to retire and replace their devices following the disclosure of a serious remote code execution (RCE) vulnerability.…
Extremely large discount on an excellent 55" Philips OLED TV with Google TV. And it’s this year’s EISA award winner
Review of S.T.A.L.K.E.R. 2: Heart of Chornobyl. The return to the terrifying and mysterious Zone is a retro action adventure
Amazon and Audible flooded with 'forex trading' and warez listings
Google wants to compete with MacBooks again. After years, it is reviving the Pixelbook, this time with Android
Microsoft upgrades Copilot Studio agent builder tools
Microsoft at this week’s Ignite conference unveiled new Copilot Studio features aimed at both expanding the functionality of AI agents created with the application and improving the accuracy of outputs.
Copilot Studio was unveiled at last year’s event as a way to customize Microsoft’s generative AI (genAI) “copilot” assistants for different business use cases. Since then, the company has stepped up its messaging around AI agents that can perform a wider variety of tasks on behalf of workers.
Among the latest updates to Copilot Studio is the ability to connect agents to third-party applications such as Salesforce, ServiceNow, and Zendesk. The goal is to provide access to “real-time knowledge” that helps answer complex questions, Microsoft said. That feature is now in preview.
In addition, Copilot Studio now integrates with the new Azure AI Foundry to enable access to a wider range of data within an organization, Omar Aftab, vice president of conversational AI at Microsoft, said in a blog post. “By connecting all their data sources, organizations can see that agents are more grounded in their business data and provide specific, high-quality responses,” he said.
There are also new “multimodal” AI enhancements to Copilot Studio agents. Users can embed an agent built in Copilot Studio into an interactive voice system (used in automated voice calls for customer service, for example) to create “speech enabled agents,” said Aftab. These can also be embedded in various “applications, standalone kiosks, concierge systems, and more,” he said. And Copilot Studio agents can now analyze images, allowing users to upload files and ask questions about them.
Microsoft has also opened access — in a public preview — to autonomous agent builder tools in Copilot Studio, as announced last month. “Makers can now build agents that work on their behalf, without having to prompt the agent, saving human hours and increasing efficiency,” said Aftab. “They can create these agents from scratch or configure agents that are prebuilt in Copilot Studio.”
There’s also an agent library (in public preview) to help users get started, with pre-built agents tailored to common work processes such as leave management, sales orders, and deal acceleration, Microsoft said.
Among the other announcements Tuesday is the ability to build customized agents with a “streamlined Copilot Studio experience” that’s now embedded in the BizChat interface of Microsoft 365 Copilot. These agents are created using natural language directions, and can be given access to enterprise data held in apps such as Dynamics 365 and SharePoint. There are also pre-built agents, including an Employee Self-Service agent.
Copilot Studio can address some of the shortcomings of a “horizontal” tool such as Microsoft 365 Copilot, which often requires a lot of guidance to access the right data, and may produce hallucinations, said J.P. Gownder, vice president and principal analyst at Forrester.
“The Copilot Studio tools help to fill this gap by allowing organizations to create more finely tuned solutions that nevertheless are a lot easier and cheaper than training a model from scratch,” he said.
Improved tuning and sourcing in Copilot Studio allows more retrieval augmented generation (RAG)-based approaches, said Gownder, which specifies data more precisely, reducing the likelihood of “both vague outputs and hallucinations.” The ability to use custom Azure AI Search indexes as a knowledge source for custom RAG scenarios — another of the Copilot Studio updates at Ignite — allows for more “specific, contextual, and accurate outcomes,” he said.
“Being able to then take these Copilot Studio agents and plug them into Microsoft 365 Copilot could democratize some of these innovations, allowing employees to tap into them right in their flow of work,” said Gownder. “This heightened context, accuracy, and specificity could solve some of the problems that enterprise leaders have cited as downsides to M365 Copilot.
“Microsoft has rolled out a lot of Copilot solutions with sunny story lines that enterprises aren’t always able to replicate in their own environments,” said Gownder. “So, while the Copilot Studio announcements sound promising, we must wait and see if they truly work as advertised to create value.”