Insecure use of Signal app part of wider Department of Defense problem, suggests Senate report
The Signalgate scandal that enveloped US Secretary of Defense Pete Hegseth in March appears to be symptomatic of a wider lax attitude towards the use of non-approved messaging apps by officials and employees, a Senate Committee has concluded.
In March, the US Senate Committee on Armed Services set out to examine the issues raised by the Signalgate incident: clarifying the existing rules on using “non-controlled” apps, determining whether Defense Secretary Hegseth adhered to them in his use of Signal, and assessing whether his actions were evidence of a wider culture of insecure app usage within the Department of Defense (DoD).
This week’s dual reports have come back with a mixed assessment of these points. Broadly, what Hegseth was accused of doing – communicating sensitive information using a third-party messaging app – appears to have been happening at the DoD in less serious contexts since at least 2020.
This mirrors issues familiar to enterprises: unsanctioned or unmanaged messaging apps, including ones touting end-to-end encryption (E2EE) security, quickly become an IT backchannel that can invisibly undermine carefully-assembled security, compliance, and data retention policies.
Shadow communications
The first report, an assessment of the Defense Secretary’s use of the Signal app to communicate with senior colleagues in advance of a military operation against Yemen on March 15, is used to illustrate the point. It confirms the widely reported fact that two hours before the raid, Hegseth revealed details of the operation to a Signal group of 19 people, including a journalist who had been added to it in error.
In doing so, the report agrees, he violated security policies by sending sensitive information from a personal device, and by using the non-approved Signal app in a way that revealed important operational details in advance of the strike. The report ducks the issue of whether this information was classified at the time it was sent, noting that Hegseth was senior enough to determine this for himself.
The second background report has uncovered evidence of a more general culture of shadow communications in the DoD, including widespread use of video-conferencing apps during the Covid-19 pandemic.
The evidence gathered is sparse and partly redacted, making it difficult to assess the seriousness of any breaches. Because the scope of its remit was limited to evidence from previous audits, one of the committee’s recommendations is to undertake a more comprehensive assessment of unsanctioned app usage inside the DoD. There’s also a question mark over how historical audits analyzed by a Senate committee could accurately measure something that, by its nature, is hidden and recorded only on personal devices.
Nevertheless, the report says it is certain that Hegseth’s actions were not an isolated example, noting that staff had “used non-DoD-controlled electronic messaging systems for a variety of reasons. For example, some personnel used them because of the systems’ perceived appearance of security. As a result, DoD personnel increased the risk of exposing sensitive DoD information to our adversaries and did not comply with the legal obligation to retain and preserve official records.”
In short, while there was no evidence that unsanctioned app use is routine or normalized, it is likely that enough staff use such apps to make a serious breach possible at some point. The report concludes that one reason staff have taken to these messaging apps is that they lack convenient alternatives. It recommends developing approved apps to remove this need, implementing a training program to ensure existing communication regulations are complied with, and limiting the authority to use messaging apps to senior staff in specific circumstances.
What’s surprising about this is that it has taken a major political row at government level to raise an issue that enterprise CISOs have been grappling with for years: the effects of BYOD, shadow IT (and now shadow AI), and unsanctioned apps that creep into organizations without anyone realizing it.
Over the last two decades, the rise of mobile devices, the cloud, and apps has radically decentralized IT in ways that top-down management models struggle to control. Meanwhile, little has changed since the scandal broke: the Signal app at its center remains hugely popular on both sides of the political divide, despite further reported issues with the technology.
This article originally appeared on CSOonline.
Who would listen to AI ‘music’?
Music giant Warner Music Group announced last week that it had reached a “groundbreaking partnership agreement” with Suno, the AI startup at the forefront of AI-generated music, which it had sued for copyright infringement. After settling that fight, Warner Music agreed to new licensing terms that allow Suno users to continue creating “music.”
Similar agreements have previously been inked with competitor Udio, and it seems highly likely that the other music giants will reach similar agreements. Whether it’s because record companies don’t want to risk making the “Napster mistake” again or they truly believe this is the future, AI services are here to stay.
Suno is undeniably popular. According to the company’s own figures, as reported by Billboard, users generate music equivalent to Spotify’s entire catalog every fortnight. The service has an admittedly high “wow” factor when tested. The results are impressive, technically speaking. But because it’s music, the question remains: who will listen to this?
I can understand the users, the people finding ways to express themselves creatively, even if it’s via prompts. If you think it’s fun to create AI-generated music, do it. Similarly, playing with Nano Banana for pictures, Sora for videos, or letting ChatGPT write a bedtime story is harmless. But just as no one wants to read an AI-generated book or be drowned in AI-generated images and clips, I don’t think music listeners are as keen on this.
If the services were intended solely for the creators themselves, the problem would be smaller. But unfortunately, the ambitions do not stop there. In its pitch deck to investors, Suno highlights the AI-created band Velvet Sundown, which became a talking point this summer: “Suno songs go viral outside the platform.”
And it’s that dream, to go viral or make a buck, that’s driving rivers of AI music to fill up streaming platforms. Spotify’s French competitor Deezer reports that more than 50,000 AI-generated songs are uploaded to the platform every day. And Spotify itself announced in September that it had removed more than 75 million songs that were considered pure AI garbage.
Sometimes it works. In addition to the aforementioned Velvet Sundown, country artist Breaking Rust recently gained attention when their song “Walk My Walk” topped Billboard’s country download chart and made it onto Spotify’s viral chart. (I hope these are the exceptions that prove the rule.)
A couple of weeks ago, I scrolled into a drama on TikTok — as one tends to do on that platform — about the unknown artist Haven, whose song “I Run” topped TikTok’s list of most popular songs. It had been revealed that the song was AI-generated, and people were going crazy. Not because the song was bad, but because they felt cheated. Because it felt inauthentic. Because the music suddenly lost a lot of its value.
Authenticity should not be underestimated when it comes to music and other media. It may be that 97% of people can’t tell the difference between AI-generated and human-created music. But the feeling of being deceived is the same, whether it is disinformation in text and video or “good songs” created by AI. Artistic works also tend to be about an emotional connection to the work or to the creator, something I think a clear “AI labeling” of music would effectively kill.
Is there no use for AI-generated music then? Well, perhaps where that connection and authenticity doesn’t matter. A company like Sweden’s Epidemic Sound should be a little concerned. It licenses background music, what used to be called elevator music or muzak, for sound design in areas such as television and advertising. For those uses, AI music could be a cost-effective solution, just as AI-generated content is popular with marketers.
Haven had her song taken down from streaming platforms, and has now had to record a new version with real vocals instead of AI-generated ones. The singer whose voice was imitated, Jorja Smith, has demanded royalties through her record label.
How was the AI song made? With Suno. Maybe something for the next pitch deck.
This column is taken from CS Weekly, a personalized newsletter with reading tips, recommended links, and analysis sent directly from editor-in-chief Marcus Jerräng’s desk. Would you like to receive the newsletter on Fridays? Sign up for a free subscription here.
Barts Health NHS discloses data breach after Oracle zero-day hack
Zero-Click Agentic Browser Attack Can Delete Entire Google Drive Using Crafted Emails
We call for anonymity, yet we perform a digital striptease. Worst of all are the boomers on Facebook (Podcast Živě)
M365 customers should explore alternatives, plan to dicker as price hikes loom — analysts
Microsoft 365 customers will pay more for subscriptions next year, with price hikes across most subscription plans set to begin July 1. The changes will affect customers with Business, E3/E5, Frontline, and Government subscriptions.
Microsoft said in a blog post Thursday that the increases reflect new features being added to several plans. This includes expanded Copilot Chat functionality, Microsoft Defender for Office (Plan 1) in E3, Security Copilot in E5, and additional Intune tools such as Remote Help and Advanced Analytics for E3 and E5.
The new prices are:
- Microsoft 365 Business Basic, up $1 to $7 per user each month.
- Microsoft 365 Business Standard, up $1.50 to $14 per user each month.
- Office 365 E3, up $3 to $26 per user each month.
- Microsoft 365 E3, up $3 to $39 per user each month.
- Microsoft 365 E5, up $3 to $60 per user each month.
- Microsoft 365 F1, up 75 cents to $3 per user each month.
- Microsoft 365 F3, up $2 to $10 per user each month.
Two plans will remain the same: Microsoft 365 Business Premium and Office 365 E1 ($22 and $10 per user each month, respectively). Microsoft 365 Government customers will see increases of between 5% and 10%, depending on their plan. (There’s more information in a related blog post). All prices include Microsoft Teams; lower rates apply without the collaboration app included.
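For a sense of scale, those flat-dollar increases translate into quite different percentage jumps per plan. The quick calculation below infers each plan’s old price from the listed increase (a back-of-the-envelope sketch; the old prices are derived from the deltas above, not quoted by Microsoft):

```python
# Back-of-the-envelope percentage increases implied by the new prices and
# per-user increases listed above. "Old" prices are inferred by subtracting
# each increase from the new price; they are derived, not quoted.
plans = {
    "Microsoft 365 Business Basic": (7.00, 1.00),
    "Microsoft 365 Business Standard": (14.00, 1.50),
    "Office 365 E3": (26.00, 3.00),
    "Microsoft 365 E3": (39.00, 3.00),
    "Microsoft 365 E5": (60.00, 3.00),
    "Microsoft 365 F1": (3.00, 0.75),
    "Microsoft 365 F3": (10.00, 2.00),
}

for plan, (new_price, increase) in plans.items():
    old_price = new_price - increase
    print(f"{plan}: ${old_price:.2f} -> ${new_price:.2f} "
          f"(+{increase / old_price:.1%})")
```

By that arithmetic, the frontline F1 plan sees the steepest relative jump (roughly 33%) despite carrying the smallest dollar increase.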
“These recent price actions will further intensify customer concerns and pricing fatigue,” said Gartner analysts Zach Nagle and Stephen White in a “first take” report in response to the changes.
Microsoft last hiked M365 prices in 2022, increasing subscriptions by between 9% and 25%. It recently made changes to its Enterprise Agreement terms for products including Microsoft 365, phasing out volume-based discounts that previously lowered per-seat pricing for large customers.
The Gartner analysts said Microsoft 365 customers should “prepare to mitigate the financial impact” of the increases by “leveraging negotiation strategies, exploring alternatives and optimizing license allocations.”
They also suggest that, where possible, customers consider renewing contracts early ahead of the July 1 price changes, as that would defer the increase until the next renewal.
A recent Gartner survey of 215 IT leaders showed that 17% of M365 customers are considering alternatives, and only 5% felt they get sufficient value from their subscription.
There are now more than 430 million commercial M365 users globally, Microsoft said during an earnings call earlier this year.
Jack Gold, analyst at J. Gold Associates, said it’s not uncommon for Microsoft to raise prices periodically and “given the amount of additional processing it needs to do with its AI features, it makes sense to try and recover the costs of running a larger cloud footprint to enable those products.
“I don’t think the price increase will have a detrimental effect on customer numbers, as most are already locked in and will just go along with this,” he said. “While there is more price competition these days from Google, I’m not seeing a large shift away from Microsoft to Google office in enterprises, but Google is doing well in SMB/mid-tier.”
The new Apple thinks different?
The latest Apple leadership changes set the stage for a new approach from the company on a range of issues, including international relations, the environment, and beyond. If great artists steal, great leaders reflect the spirit of their age.
Apple’s current general counsel, Kate Adams, will leave late next year, following a transition to a new general counsel — Meta Chief Legal Officer Jennifer Newstead — in March 2026. Apple also announced that Lisa Jackson, vice president for environment, policy, and social initiatives, will retire in late January.
The new faces
Newstead’s appointment is perhaps the biggest hint of the potential for change. She brings with her extensive experience in foreign policy and international affairs, both topics of major importance to Apple in the shifting sands of current global politics and alliances. She also served as a top lawyer in the first Trump administration before leaving that position to move to Meta.
A second sign of change might be the departure of Jackson, who served as US Environmental Protection Agency head in the Obama administration. She has been instrumental in guiding Apple toward its oft-stated target of being completely carbon neutral by 2030. Apple CEO Tim Cook praised her for “advocating for the best interests of our users on a myriad of topics, as well as advancing our values, from education and accessibility to privacy and security.”
What’s unusual here is that there doesn’t appear to be a clear succession path for this role, which seems odd given the depth of talent Apple already has within Jackson’s departments. Apple says Jackson’s Environment and Social Initiatives team will report to Apple’s Chief Operating Officer (COO), Sabih Khan, while work on government affairs will transition to outgoing general counsel Kate Adams for a while, until it becomes part of Newstead’s domain.
Who will advocate now?
It is important not to read too much into this, but these leadership changes do make it easier to think there might be no one at Apple’s top table to advocate on the environmental, social, and government-affairs issues the company has regularly shown leadership on over the last decade.
Newstead will have enough on her plate handling international regulators, particularly in Europe, while Apple’s COO will be more focused on manufacturing and supply – particularly around how Apple’s $600 billion investment in US manufacturing can boost US employment.
That doesn’t mean Apple won’t hit its 2030 goals, but it does suggest that those ambitions might take second place to operational concerns. (It is also notable that Cook praised Adams as having been an important advocate for Apple’s push for privacy. Will Newstead prize privacy as much?)
Given Apple’s recent warnings that meeting regulatory obligations in the EU will force it to remove privacy safeguards from products sold there, is the company preparing to give up that struggle? After all, if people continue to vote for authoritarian governments in different color shades, at what point must private enterprise concede state surveillance is what people want?
Apple’s new order
If nothing else, the changing of the guard strongly suggests that there is plenty at stake, and while Cook may now always be known for giving a lump of gold to a President to protect Apple’s shareholders, who knows what other compromises the company has had to make? Speculation that Cook might himself retire also opens the gate to a new approach from company leadership. The complex political realities we face will inevitably be reflected across corporate boardrooms, eager to foster permanent growth for their shareholders.
None of this could be true, of course. But what is true is that while the departure of Alan Dye means Apple’s products might in the future work differently, Apple’s latest board-level leadership changes — coming mere months after COO Jeff Williams left the company — make it possible the company will think different, too.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
FBI warns of virtual kidnapping scams using altered social media photos
Critical XXE Bug CVE-2025-66516 (CVSS 10.0) Hits Apache Tika, Requires Urgent Patch
OpenAI prompts AI models to ‘confess’ when they cheat
OpenAI’s research team has trained its GPT-5 large language model to “confess” when it doesn’t follow instructions, providing a second output after its main answer that reports when the model didn’t do as it was told, cut corners, hallucinated, or was uncertain of its answer.
“If we can surface when that happens, we can better monitor deployed systems, improve training, and increase trust in the outputs,” OpenAI said in a statement.
The confession mechanism addresses a problem in reinforcement learning where models optimize for multiple goals simultaneously — correctness, helpfulness, safety, and user preferences.
OpenAI trained a version of GPT-5 Thinking to produce the confessions and tested the technique on stress-test datasets designed to elicit problematic behaviors including hallucinations, reward hacking, and instruction violations. It described the work as a proof of concept rather than a production-ready feature.
How the confession mechanism works
The confession reports include three elements: a list of explicit and implicit instructions the answer should satisfy, an analysis of whether the answer met those objectives, and a list of uncertainties or judgment calls the model encountered. The system evaluates confessions on honesty alone, separate from the main answer’s performance metrics.
“If the model honestly admits to hacking a test, sandbagging, or violating instructions, that admission increases its reward rather than decreasing it,” OpenAI said. It compared this to the Catholic Church’s seal of confession: “Nothing the model reveals in the confession can change the reward it receives for completing its original task,” the researchers wrote in the technical paper.
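OpenAI hasn’t published a schema for these reports, but the structure it describes maps naturally onto a small data type. The sketch below is purely illustrative (all names are hypothetical, not OpenAI’s actual API); it models the three elements of a confession and the key property that honesty is scored separately from the task reward:

```python
from dataclasses import dataclass

@dataclass
class Confession:
    """Hypothetical model of the confession report described in the paper."""
    instructions: list[str]      # explicit and implicit instructions the answer should satisfy
    compliance_analysis: str     # whether the answer met those objectives
    uncertainties: list[str]     # judgment calls, gaps, possible hallucinations

def total_reward(task_reward: float, confession_honesty: float) -> float:
    # The "seal of confession": nothing the model admits can reduce its task
    # reward. An honest admission of cheating increases the honesty term
    # rather than decreasing the task term.
    return task_reward + confession_honesty
```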
OpenAI tested confessions on multiple question-answering datasets, including TruthfulQA and medical QA benchmarks. In experiments where models were trained to exploit weak reward signals, confessions became more honest over time even as the models’ main outputs became more deceptive.
Enterprise applications for error reduction
The confession system targets use cases where unknowingly providing incorrect responses carries risks, including medical diagnosis support, legal research, and financial analysis.
Manufacturing and healthcare organizations deploying language models face liability when AI systems generate inaccurate specifications or treatment recommendations. A confession-enabled model could refuse to answer instead of providing unreliable information in these scenarios.
For such deployments, knowing when an AI’s answer should be set aside is valuable in itself.
Gartner principal analyst Rekha Kaushik noted that many organizations prioritize accuracy over completeness in AI-driven decision support, especially in government, healthcare, and finance sectors. “Workflows involving compliance checks, legal document review, or customer support for sensitive topics benefit most from this approach, where ‘no answer’ is safer than a potentially misleading one,” she said.
The OpenAI research team tested confessions across domains including science, history, and current events. Science questions triggered confessions at higher rates than factual recall questions, indicating the system detects domain-specific uncertainty patterns.
Integration with existing safety measures
OpenAI positions confessions as complementary to techniques such as retrieval-augmented generation (RAG) and Constitutional AI. Organizations can combine confessions with external knowledge bases, using the uncertainty signal to trigger document retrieval or human review.
“These should be used by organizations within their AI governance frameworks, using uncertainty flags to trigger human review or external knowledge base lookups,” Kaushik said. “By combining confessions with retrieval-augmented generation and other safety measures, organizations can create robust escalation paths, automatically routing flagged responses to experts or trusted data sources.”
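As a concrete illustration of such an escalation path, a deployment might route answers on the confession’s uncertainty signal roughly as follows. Everything here is a hypothetical sketch: the threshold, function names, and review hooks are placeholders, not any vendor’s API.

```python
UNCERTAINTY_THRESHOLD = 0.4  # placeholder cut-off, tuned per deployment

def escalate_to_human_review(answer: str) -> str:
    # Placeholder: queue the answer for expert sign-off.
    return f"[PENDING HUMAN REVIEW] {answer}"

def retry_with_retrieval(answer: str) -> str:
    # Placeholder: re-run the query grounded in retrieved documents (RAG).
    return f"[RETRYING WITH RETRIEVAL] {answer}"

def route_response(answer: str, uncertainty: float, domain: str) -> str:
    """Route a model answer based on its confession's uncertainty signal."""
    if uncertainty < UNCERTAINTY_THRESHOLD:
        return answer  # confident enough to return directly
    if domain in {"legal", "medical", "finance"}:
        # "No answer" is safer than a misleading one in these sectors.
        return escalate_to_human_review(answer)
    return retry_with_retrieval(answer)
```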
The method works across model sizes and architectures without requiring changes to training procedures, OpenAI said.
Kaushik said that confession signals can empower enterprises to operationalize AI safety, turning uncertainty into actionable governance. “IT leaders can build trust and reduce liability, positioning AI as a responsible partner rather than a risk factor,” she said.
OpenAI plans to integrate confession mechanisms into future API releases, though the company hasn’t announced a specific timeline or availability details.
A Practical Guide to Continuous Attack Surface Visibility
EU fines X $140 million over deceptive blue checkmarks
With Workspace Studio, Google wants workers to build their own AI agents
Google this week launched Workspace Studio, promising to let a wide range of employees build and use their own AI agents.
Workspace Studio — it was called Workspace Flows during a preview earlier this year — is a no-code application that lets users create and customize agents with natural-language descriptions and multi-step actions.
“Studio puts the full potential of agentic AI into the hands of everyone, not just specialists, by removing the friction of coding and making it easy for anyone to design agents that automate their unique business processes in minutes,” Farhaz Karmali, product director at Google Workspace Ecosystem, said in a blog post.
Agents can perform tasks such as drafting weekly project updates, Google said, or notifying a user in Google Chat when they receive an email on a certain topic.
Studio combines Google’s Gemini 3 language model with rules-based automation to enable agents to reason and adapt to new information. This could mean flagging emails where the agent identifies a negative tone from the sender, for instance.
Studio agents can access data in Google Workspace apps such as Gmail, Drive and Sheets, as well as from the web and third-party apps including Asana, Jira, and Salesforce.
The Workspace Studio app offers a catalogue of prebuilt agents that can be customized to a user’s needs. The tool will roll out to customers “over the next few weeks,” the company said.
Google places a limit on the number of agents customers can create (up to 100) and how many can run each day. Agents are allowed a maximum of 20 steps. Workspace customers will get “promotional access” to higher usage limits during the initial launch period, with additional details coming with a future update in 2026.
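Workspace Studio itself is no-code, but it helps to picture what an agent amounts to underneath: a natural-language trigger plus a bounded sequence of steps over Workspace and third-party data. The sketch below is conceptual only; the structure and every field name are hypothetical, not Google’s API:

```python
from dataclasses import dataclass

MAX_STEPS = 20  # Google caps Studio agents at 20 steps

@dataclass
class AgentStep:
    action: str   # e.g., "search_gmail", "summarize", "post_to_chat"
    params: dict

@dataclass
class Agent:
    name: str
    trigger: str  # natural-language condition, e.g., "an email on topic X arrives"
    steps: list[AgentStep]

    def __post_init__(self) -> None:
        if len(self.steps) > MAX_STEPS:
            raise ValueError(f"Studio agents are limited to {MAX_STEPS} steps")
```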
Google is not alone in developing simple agent-builder tools; Microsoft’s Copilot Actions provides similar functionality.
As AI adoption continues to grow in the workplace, organizations are investigating the potential of agents to solve common tasks in systematic ways, said J. P. Gownder, Forrester vice president and principal analyst. “With an agent, an employee doesn’t have to write yet another prompt to give the AI directions. And, in theory, agents are more action-oriented than what you get from prompting,” he said.
While there’s interest in agentic AI among tech-savvy employees, “creating agents is beyond the skill level of most employees today,” said Gownder. He cited a 2024 Forrester survey that showed only 26% of employees said they know what prompt engineering is and how to use it.
“In 2025, that number didn’t budge — it was again 26%,” he said.
The survey also found that most (58%) employees have received no formal training on how to use AI at work. “Given this lack of skills, the prospect that most employees will be able to create AI agents is premature,” he said.
Assuming adoption does grow, “agentic sprawl” could become a headache for IT teams tasked with managing those agents. “Google attempts to deal with this by giving IT management tools for agents,” Gownder said. “It remains to be seen how much time and staffing will be required to use these tools, however.
“Ultimately, Google is moving in a smart direction by introducing agentic tools into Workspace. It’s just that we can expect a need for hand-holding and curation from IT teams for the next few years.”
Chinese Hackers Have Started Exploiting the Newly Disclosed React2Shell Vulnerability
Cloudflare blames today's outage on React2Shell mitigations
Pharma firm Inotiv discloses data breach after ransomware attack
Intellexa Leaks Reveal Zero-Days and Ads-Based Vector for Predator Spyware Delivery
"Getting to Yes": An Anti-Sales Guide for MSPs
Critical React2Shell flaw actively exploited in China-linked attacks
Cloudflare down, websites offline with 500 Internal Server Error