Computerworld.com [Hacking News]
Google: Gemini 2.5 is the company’s ‘most intelligent AI model yet’
Google is beating the drum for Gemini 2.5, a new AI model that reportedly offers better performance than similar reasoning models from competitors such as OpenAI, Anthropic and DeepSeek. Google calls it its “most intelligent AI model yet.”
According to a post on The Keyword blog, Gemini 2.5 can, among other things, analyze information, draw logical conclusions, take context into account, and make informed decisions. It can also interpret text, audio, images, video and code, which means it can be used to create apps and games, for example.
In a demonstration video, Google shows a game being created from a simple text prompt.
Gemini 2.5 can be tested using the Google AI Studio. The AI model is also available through the Gemini Advanced subscription service.
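If you want to poke at the model programmatically rather than through the AI Studio UI, here is a minimal sketch using Google’s google-generativeai Python SDK. The model identifier below is an assumption for illustration; check Google AI Studio for the exact model names available to your account.

```python
# Minimal sketch of calling Gemini from Python via the google-generativeai SDK.
# The model name "gemini-2.5-pro" is an assumption; confirm the identifier in AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key issued through Google AI Studio

model = genai.GenerativeModel("gemini-2.5-pro")
response = model.generate_content(
    "Design a tiny browser game: describe the rules, then outline the code structure."
)
print(response.text)
```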
Signalgate: There’s an IT lesson here
You know how IT admins are always warning employees about best practices for security? They’re always mandating which apps to use, which to avoid and which devices can safely connect to corporate networks.
You know why they do that? To keep idiot workers from going rogue and endangering corporate data and secrets.
Case in point: Secretary of Defense Pete Hegseth, who’s under fire this week for — and it’s almost too stupid to be true, but it is — setting up a high-level chat using Signal for top National Security officials to discuss a military attack. And then somehow, some way, a journalist — Jeffrey Goldberg, editor-in-chief of the liberal publication The Atlantic — was invited to join the secretaries of State and Treasury, the director of the CIA, and the Vice President of the United States, JD Vance, for the discussion.
Now, I like serious spy shows. Give me Gary Oldman as George Smiley in Tinker Tailor Soldier Spy to keep me on the edge of my seat. But I can’t watch those now, because the real world has gotten so stupid I can no longer suspend my disbelief.
I still have trouble believing what Hegseth and company did. So does Goldberg: “I could not believe that the national-security leadership of the United States would communicate on Signal, [the popular, secure messaging service] about imminent war plans. I also could not believe that the national security adviser to the president would be so reckless as to include the editor-in-chief of The Atlantic in such discussions with senior U.S. officials, up to and including the vice president.”
Believe it. Goldberg was added to the Houthi PC small group. The virtual group’s purpose was to talk about planning a military strike on Houthi rebels in Yemen. Goldberg wasn’t asked if he wanted to be involved; he was just added. If there was a group administrator, he or she paid no attention whatsoever to what they were doing.
At first, Goldberg thought this might be some kind of elaborate joke. Who would add him, of all people, to such a group? Then the bombs, as discussed in the group, started falling on rebels in Yemen.
Goldberg asked, essentially, what in the world these officials thought they were doing.
Brian Hughes, spokesman for the National Security Council, replied: “This appears to be an authentic message chain, and we are reviewing how an inadvertent number was added to the chain.”
He went on: “The thread is a demonstration of the deep and thoughtful policy coordination between senior officials. The ongoing success of the Houthi operation demonstrates that there were no threats to troops or national security.”
Oh, really?
What if, say, a spy were in the group instead of an editor and told the Houthis to aim what anti-air missiles they had in X direction at Y time? Or moved some school kids or hospital patients into the targeted areas so they could claim that the real terrorists were the Americans for killing helpless civilians?
For that matter, we know from Goldberg that some things were let slip in the conversation that could have compromised American intelligence agents (read, spies) in the Middle East. Do you know what happens to spies in the Middle East? They get a date with a 7.62mm bullet, if they’re lucky.
As Rep. Seth Moulton (D-MA), a Marine veteran, tweeted: “Hegseth is in so far over his head that he is a danger to this country and our men and women in uniform. Incompetence so severe that it could have gotten Americans killed.”
President Donald J. Trump said he knew nothing about what happened and downplayed it. Of course, The Atlantic then published more details of the chat, undermining Trump and what national security officials told Congress just yesterday. Oops.
Sure, Signal is a relatively secure, open-source encrypted messaging service, but it’s not approved for government use. It encrypts messages from end to end. That means only you and the people you’re sending messages to see decrypted messages. That is, of course, when it works perfectly.
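To make the end-to-end idea concrete in general terms, here is a toy sketch using the PyNaCl library. It is not Signal’s actual protocol (which layers forward secrecy and much more on top); the point is simply that only the holders of the right private keys can read the plaintext.

```python
# Toy public-key encryption sketch with PyNaCl. This is NOT Signal's protocol;
# it only illustrates the end-to-end idea: the plaintext is readable solely by
# the devices holding the right private keys, never by the server in between.
from nacl.public import PrivateKey, Box

sender_key = PrivateKey.generate()      # lives only on the sender's device
recipient_key = PrivateKey.generate()   # lives only on the recipient's device

# The sender encrypts with their private key and the recipient's public key...
ciphertext = Box(sender_key, recipient_key.public_key).encrypt(b"keep this off the group chat")

# ...and only the matching private key on the other end can decrypt it.
plaintext = Box(recipient_key, sender_key.public_key).decrypt(ciphertext)
print(plaintext)
```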
But, you see, there’s this little problem. It doesn’t always work perfectly. Indeed, the National Security Agency (NSA) alerted its employees in February that Signal has vulnerabilities. The NSA also warned its employees not to send “anything compromising over any social media or Internet-based tool or application” and to not “establish connections with people you do not know.”
Someone should tell the people who are, theoretically, in charge of defending the United States about this.
On top of that, Google researchers have found that Russians have recently been attempting to compromise Signal accounts. I wonder who they might be targeting?
I use Signal myself. But, in no way, shape, or form should it ever be used for covert government work.
There is so much wrong with this, it’s impossible to overestimate how bad the whole incident looks. By sheer dumb luck, no Americans were hurt by this exercise in total operations security incompetence. We can’t count on always being so lucky.
But I bet we can count on certain government officials to ignore the experts on security and do whatever they want.
The Apple rumor machine cranks into gear for iOS 19
Apple’s WWDC event marketing is intensifying, and as we head toward the event in a little over two months, it looks as if we’re being told to expect a new paint job (aka user interface changes) for iOS.
But will those tweaks really be enough to move the needle on flatlining iPhone sales?
Is Apple concerned in case these changes don’t impress? Is there any reason the big names in Apple rumor all jumped into this freshwater pool of speculation at more or less the same time, like synchronized swimmers?
Somewhere in the Apple Universe there must now be a place where all rumors go to die. There must also be at least one place where they all get created in the first place.
Across the universe
If you’ve been watching Apple over the last few years, you will have seen that the vast majority of its news announcements seem to get leaked in advance, with only a tiny minority being tales that never get officially teased but could still happen. I’ve not added it up, but I suspect the number of times any given Apple rumor has been shown to be outright false could be counted on one hand. Even the rumors that don’t pan out in one time frame tend to come true later.
The connection between speculation and fact seems so strong it’s hard not to think Apple is planting at least some of these rumors to seed speculation. Well, it’s that, or at some high-level point within Apple there is a civil war going on and rumor has become a weapon to undermine company leadership. Apple is made up of some of the most talented and competitive people on the planet.
Perhaps controlling clickbait is just another string to the company’s bow?
It is also quite amusing that Apple has managed to carve out a global reputation for secrecy at the same time as leaking just about every step it takes. Can both things be true?
Words like rain
Speculation is such fun; however, this is what we’re currently being told to expect in iOS 19 – and, no, it’s not about AI.
Changes across the interface might include:
- A more rounded aesthetic (no, I don’t know what that really means, either).
- Glassy reflective surfaces.
- An interface similar to visionOS for apps, buttons, and more.
- A new interface for the Camera app — again, more in tune with visionOS.
- A sense of what it looks like in existing tools, including the new Apple Sports and Invites apps.
Bloomberg’s Mark Gurman has previously promised iOS 19 will be the “most significant upgrade” in years, and now says the latest crop of speculation misses key details and Apple has even more planned.
I certainly hope so. I imagine one surprise might be the addition of more Accessibility options, potentially including gesture and movement-based controls brought over to iOS from visionOS. We know these work on Apple’s headsets; can some also logically work on iPhones? Accessibility isn’t just about doing the right thing at Apple; again and again, the tools Apple provides there end up feeding into its products, too.
Change my world
It is also interesting how much of the work Apple did on visionOS is now feeding outwards across the company — Apple Intelligence is, after all, now run by the former leaders of Vision Pro development, and if the ideas they had around user interfaces are now to be deployed across the rest of the company’s products, then this reflects the importance of spatial computing to Apple’s future.
However, if all Apple is promising does turn out to be some slight user interface changes, then 2025 may yet go down as a slightly fallow year. That could hurt Apple financially, though the company is big enough to take a little headwind, and it may still benefit in the long term from having more time to bring Apple Intelligence up to speed. A few months in calmer waters could also give Apple’s teams a little breathing space as they prepare the biggest iPhone redesign yet.
But no doubt we’ll know all about all of these announcements well before they are officially announced, thanks to the Apple speculation machine.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
No application can eliminate human error: Signal’s head defends the app
When the editor-in-chief of The Atlantic, Jeff Goldberg, was accidentally added to a Signal conversation, things took a surprising turn. The journalist initially could not believe the invitation was authentic, but the chat, apparently involving high-ranking US politicians and government officials, discussed specific targets for attacking Houthi forces in Yemen — and a few hours later, airstrikes did indeed take place.
Given the nature of the information exchanged, his doubts were fueled both by the fact that top-secret plans were being discussed over an app not designed to transmit classified data, and by the politicians’ free-form comments, including those of Vice President JD Vance. The messages even included emoji celebrating the operation that was carried out.
The editor-in-chief of The Atlantic reacted
Goldberg refrained from publishing details about specific targets and weaponry in his article about the chat, fearing that the safety of those involved would be compromised.
His description of the leaked news shows that Vice President JD Vance, one of the participants in the conversation, was critical of President Donald Trump’s decision to carry out the attacks, stressing that their effects could benefit Europe more than the United States.
The event instantly sparked a wave of discussion about security rules and possible violations of laws protecting classified information. Legal experts pointed out that transmitting secret data in this way could violate at least the Espionage Act, especially if the app’s configuration provides for automatic deletion of messages.
Trump, however, defended the use of Signal, explaining that access to secure devices and premises is not always possible at short notice.
Meredith Whittaker defends Signal app
Signal’s CEO, Meredith Whittaker, defended the app in an interview with Polish media, stressing that Signal maintains full end-to-end encryption and prioritizes user privacy.
She pointed out that while WhatsApp also uses encryption technologies designed by Signal, it does not protect metadata to the same extent and does not guarantee such a strict policy against collecting or sharing user information.
At the same time, Whittaker pointed out that no application can eliminate human error. The accidental addition of a journalist to a government chat is precisely the kind of risk that cannot be excluded by technological measures alone.
(This story was originally published by Computerworld Poland.)
Microsoft’s newest AI agents can detail how they reason
If you’re wondering how AI agents work, Microsoft’s new Copilot AI agents provide real-time answers on how data is being analyzed and sourced to reach results.
The Researcher and Analyst agents, announced on Tuesday, take a deeper look at data sources such as email, chat or databases within an organization to produce research reports, analyze strategies, or convert raw information into meaningful data.
In the process, the agents give users a bird’s-eye view of each step of how they’re thinking and analyzing data to formulate answers. The agents are integrated with Microsoft 365 Copilot.
The agents combine Microsoft tools with OpenAI’s newer models, which don’t answer questions right away, but can reason better. The models think deeper by generating additional tokens or drawing more information from outside sources before coming up with an answer.
The Researcher agent takes OpenAI’s reasoning models, checks the efficacy of the model, pokes around by pulling data from sources via Microsoft orchestrators and then builds up the level of confidence in the retrieval and results phases, according to information provided by Microsoft.
A demonstration video provided by Microsoft shows the Copilot chatbot interface publishing its “chain of thought” — for example, the step-by-step process of searching enterprise and domain data, identifying product lines, opportunities and more — with the ultimate output being the final document.
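As a rough illustration of that pattern (this is a generic sketch, not Microsoft’s actual Researcher or Analyst code), an agent can simply record each planning and retrieval step it takes and return that trace alongside its answer:

```python
# Generic sketch of surfacing an agent's "chain of thought": log every planning
# and retrieval step, then return the trace with the final answer.
# Illustrative only -- not Microsoft's Researcher/Analyst implementation.
def research(question, data_sources):
    trace = [f"Plan: break '{question}' into one sub-query per data source"]
    findings = []
    for source in data_sources:
        trace.append(f"Search: looking for '{question}' in {source}")
        findings.append(f"[summary of {source} results]")  # stand-in for real retrieval
    trace.append("Synthesize: combine findings into a final report")
    return " ".join(findings), trace

answer, steps = research("untapped product-line opportunities", ["email", "chat", "CRM"])
for step in steps:
    print(step)           # the user-visible reasoning steps
print("ANSWER:", answer)  # the final output
```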
The approach is a major benefit for Microsoft since most models operate as a black box, said Jack Gold, principal analyst at J. Gold Associates.
Accountability and the ability to see how models are getting their results are important to assure users that the technology is safe, effective and not hallucinating, Gold said.
“Much of AI today is a ‘black hole’ when it comes to being able to figure out how it got to its results — most cite references, but not the logic on how they got to the end result,” Gold said. “Any transparency you can offer is about making users feel more comfortable.”
The Copilot Researcher agent can take a deeper look at internal data to develop business strategies or identify unexplored market opportunities — typical tasks for researchers. It provides the kind of highly technical research and strategy work that you’d otherwise expect to pay a highly skilled consultant, researcher, or analyst to do, a Microsoft spokeswoman said.
“Its ability to combine a user’s work data and web data means its responses are current, but also contextually relevant to every user’s personal needs,” the spokeswoman said.
For example, within the Researcher agent, a user can query the chatbot on exploring new business opportunities. In the process of analyzing data, the agent shares how the model is approaching the query. It will ask clarifying questions, publish a plan to reach an answer, show the data sources it is drawing information from, and explain how the data is collated, categorized, and analyzed.
The Analyst agent takes raw data and generates insights — typically the job of a data scientist. The tool is designed for workers using data to derive insights and make decisions without knowledge of advanced data analysis like Python coding, the spokeswoman said.
For example, the Analyst agent can take a spreadsheet with charts of unstructured data and share insights. Similar to the Researcher agent, the Analyst agent takes in a question via the Copilot interface, creates a plan to analyze the data, and determines the Python tools to generate insights. The agent shares its step-by-step process of how it is responding to the query and even shares the Python code used to generate the answer.
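To give a feel for it, here is the sort of short pandas snippet such an agent might hand back for a simple spreadsheet question. The file and column names (“region”, “revenue”) are invented for the example, not taken from Microsoft’s demo.

```python
# Example of the kind of Python an analyst-style agent might show its work with:
# load a spreadsheet, aggregate, and surface a simple insight.
# The file name and column names here are invented for illustration.
import pandas as pd

df = pd.read_excel("quarterly_sales.xlsx")
by_region = df.groupby("region")["revenue"].sum().sort_values(ascending=False)

print("Revenue by region:")
print(by_region)
print("Top region:", by_region.idxmax())
```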
Microsoft has had a number of documented “misses” related to problematic generative AI (genAI) tools, such as Windows Recall, a Copilot feature that uses snapshots to log the history of activity on a PC, Gold said.
Giving users a sense of security helps get them to try Copilot, Gold said. “Think of it as having the safest car on the road when you go to select a new car for your family,” he said.
Will Microsoft be laid low by the feds’ antitrust probe?
Microsoft is on top of the world right now, riding its AI dominance to become the world’s second-most valuable company, worth somewhere in the vicinity of $3 trillion, depending on the day’s stock price.
But that could easily change — and not because competitors have found a way to topple it as king of AI.
A federal antitrust investigation threatens to do to the company what was done to it 35 years ago by a US Justice Department suit that tumbled the company from its perch as the world’s top tech company. It also led to a lost decade in which Microsoft lagged in the technologies that would transform the world — the internet and the rise of mobile.
The current investigation was launched last year by the Federal Trade Commission (FTC) under the leadership of Chair Lina Khan. Khan was ousted by President Donald J. Trump when he re-took office in January, and there’s been a great deal of speculation about whether his administration would kill the investigation or let it proceed.
That speculation ended this month, when new FTC Chair Andrew Ferguson asked the company for a boatload of information about its AI operations dating back to 2016, including detailed requests about its training models and how it acquires the data for them.
The investigation isn’t just about AI. It also covers Microsoft’s cloud operations, cybersecurity efforts, productivity software, Teams, licensing practices, and more. In other words, just about every important part of the company.
More details about the investigation
Although the investigation is a broad one, the most consequential parts focus on the cloud, AI, and the company’s productivity suite, Microsoft 365. It will probably dig deep into the way Microsoft uses its licensing practices to push or force businesses to use multiple Microsoft products.
Here’s how The New York Times describes it: “Of particular interest to the FTC is the way Microsoft bundles its cloud computing offerings with office and security products.”
The newspaper claims the investigation is looking at how Microsoft locks customers into using its cloud services “by changing the terms under which customers could use products like Office. If the customers wanted to use another cloud provider instead of Microsoft, they had to buy additional software licenses and effectively pay a penalty.”
That’s long been a complaint about the way the company does business. European Union regulators last summer charged that Microsoft broke antitrust laws by the way it bundles Teams into its Microsoft 365 productivity suite. Teams’ rivals like Zoom and Slack don’t have the ability to be bundled like that, the EU says, giving Microsoft an unfair advantage. Microsoft began offering some versions of the suite without Teams, but an EU statement about the suit says the EU “preliminarily finds that these changes are insufficient to address its concerns and that more changes to Microsoft’s conduct are necessary to restore competition.”
AI is a target, too
Microsoft’s AI business is also in the legal crosshairs, though very few details have come out about it. However, at least part of the probe will likely center on whether Microsoft’s close relationship with OpenAI violates antitrust laws by giving the company an unfair market dominance.
The investigation could also focus on whether Microsoft uses its licensing practices for Microsoft 365 and Copilot, its generative AI chatbot, in ways that violate antitrust laws. In a recent column, I wrote that Microsoft now forces customers of the consumer version of Microsoft 365 to pay an additional fee for Copilot — even if they don’t want it. In January, Microsoft bundled Copilot into the consumer version of Microsoft 365 and raised prices on the suite by $3 per month or $30 for the year. Consumers are given no choice — if they want Microsoft 365, they’ll have to pay for Copilot, whether they use it or not.
Microsoft also killed two useful features in all versions of Microsoft 365, for consumers as well as businesses, and did it in a way to force businesses to subscribe to Copilot. The features allowed users to do highly targeted searches from within the suite. Microsoft said people could instead use Copilot to do that kind of searching. (In fact, Copilot can’t match the features Microsoft killed.) But business and educational Microsoft 365 users don’t get Copilot bundled in, so they’ll have to pay an additional $30 per user per month if they want the search features, approximately doubling the cost of the Office suite.
Expect the feds to file suit
It’s almost certain that the FTC, the Justice Department, or maybe both will file at least one suit against Microsoft. After all, federal lawsuits against Amazon, Apple, Google, and Meta launched by the Biden administration have been continued under Trump. There’s no reason to expect he won’t target Microsoft as well.
There’s another reason the feds could hit Microsoft hard. Elon Musk is suing OpenAI and Microsoft, claiming their relationship violates antitrust laws. He’s also spending billions to compete against them. Given that he’s essentially Trump’s co-president — as well as being Trump’s most important tech advisor — it’s pretty much a slam dunk that one more federal suit will be filed.
As one piece of evidence that suits are coming, the FTC weighed in on Musk’s side in his suit against the company and OpenAI, saying antitrust laws support his claims. In a wink-wink, nudge-nudge claim that no one believes, the agency says it’s not taking sides in the Musk lawsuit.
The upshot
Expect the investigations into Microsoft to culminate in one or more suits filed against the company. After that, it’s anyone’s guess what might happen. The government could ask that Microsoft be broken into pieces — perhaps lopping off its AI arm. It could even ask that the cloud as well as AI be turned into their own businesses. Or it could go a softer route by fining the company billions of dollars and forcing it to change its business practices.
Whatever happens, hard times are likely ahead for Microsoft. The big question will be whether CEO Satya Nadella can weather the turbulence better than Bill Gates and Steve Ballmer did when the previous federal suit against the company laid it low for a decade.
The secret to using generative AI effectively
Do you think generative AI (genAI) sucks? I did. The hype around everything genAI has been over the top and ridiculous for a while now. Especially at the start, most of the tools were flashy, but quickly fell apart if you tried to use them for any serious work purposes.
When ChatGPT started really growing in early 2023, I turned against it hard. It wasn’t just a potentially interesting research product. It was a bad concept getting shoved into everything.
Corporate layoffs driven by executives who loved the idea of replacing people with unreliable robots hurt a lot of workers. They hurt a lot of businesses, too. With the benefit of hindsight, we can now all agree: genAI, in its original incarnation, just wasn’t working.
At the end of 2023, I wrote about Microsoft’s then-new Copilot AI chatbot and summed it up as “a storyteller — a chaotic creative engine that’s been pressed into service as a buttoned-up virtual assistant, [with] the seams always showing.”
You’d probably use it wrong, as I noted at the time. Even if you used it right, it wasn’t all that great. It felt like using a smarter autocomplete.
Much has changed. At this point in 2025, genAI tools can actually be useful — but only if you use them right. And after much experimentation and contemplation, I think I’ve found the secret.
Ready to turn up your Windows productivity — with and without AI? Sign up for my free Windows Intelligence newsletter. I’ll send you free Windows Field Guides as a special welcome bonus!
The power of your internal dialogue
So here it is: To get the best possible results from genAI, you must externalize your internal dialogue. Plain and simple, AI models work best when you give them more information and context.
It’s a shift from the way we’re accustomed to thinking about these sorts of interactions, but it isn’t without precedent. When Google itself first launched, people often wanted to type questions at it — to spell out long, winding sentences. That wasn’t how to use the search engine most effectively, though. Google search queries needed to be stripped to the minimum number of words.
GenAI is exactly the opposite. You need to give the AI as much detail as possible. If you start a new chat and type a single-sentence question, you’re not going to get a very deep or interesting response.
To put it simply: You shouldn’t be prompting genAI like it’s still 2023. You aren’t performing a web search. You aren’t asking a question.
Instead, you need to be thinking out loud. You need to iterate with a bit of back and forth. You need to provide a lot of detail, see what the system tells you — then pick out something that is interesting to you, drill down on that, and keep going.
You are co-discovering things, in a sense. GenAI is best thought of as a brainstorming partner. Did it miss something? Tell it — maybe you’re missing something and it can surface it for you. The more you do this, the better the responses will get.
It’s actually the easiest thing in the world. But it’s also one of the hardest mental shifts to make.
Let’s take a simple example: You’re trying to remember a word, and it’s on the tip of your tongue. You can’t quite remember it, but you can vaguely describe it. If you were using Google to find the word, you’d have to really think about how to craft the perfect search term.
In that same scenario, you could rely on AI with a somewhat rambling, conversational prompt like this:
“What’s the word for a soft kind of feeling you get — it’s warm, but a little cold. It’s sad, but that’s not quite right. You miss something, but you’re happy you miss it. It’s not melancholy, that’s wrong, that’s too sad. I don’t know. It reminds me of walking home from school on a sunny fall afternoon. The sun is setting and you know it will be winter soon, and you miss summer, and you know it’s over, but you’re happy it happened.”
And the genAI might respond: wistful. That’s your answer. More likely, the tool will return a list of possible words. It might not magically know you meant wistful right away — but you will know the moment you see the word within its suggestions.
This is admittedly an overwrought example. A shorter description of the word — “it’s kind of like this, and it’s kind of like that” — would also likely do the trick.
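The same principle carries over if you’re scripting against a model instead of typing into a chat app: hand over the whole ramble as context. A minimal sketch with the OpenAI Python SDK follows; the model name is just a placeholder.

```python
# Minimal sketch: send a long, context-rich ramble to a model via the OpenAI
# Python SDK. The model name is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ramble = (
    "What's the word for a soft kind of feeling you get -- warm, but a little cold. "
    "Sad, but not quite. You miss something, but you're happy you miss it. "
    "It reminds me of walking home from school on a sunny fall afternoon, knowing "
    "summer is over but being glad it happened."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": ramble}],
)
print(response.choices[0].message.content)
```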
Ramble on
The best way to sum up this strategy is simple: You need to ramble.
Try this, as an experiment: Open up the ChatGPT app on your Android or iOS phone and tap the microphone button at the right side of the chat box. Make sure you’re using the microphone button and not the voice chat mode button, which does not let you do this properly.
(Amusingly enough, the ChatGPT Windows app doesn’t support this style of voice input, and Microsoft’s Copilot app doesn’t, either. This shows that the companies building this type of product don’t really understand how it’s best used. If you want to ramble with your voice, you’ll need to use your phone — or ramble by typing on your keyboard.)
This is the easiest way to get started with true stream-of-consciousness rambling.
Chris Hoffman, Foundry
After you tap the microphone button, ramble at your phone in a stream-of-consciousness style. Let’s say you want TV show recommendations. Ramble about the shows you like, what you think of them, what parts you like. Ramble about other things you like that might be relevant — or that might not seem relevant! Think out loud. Seriously — talk for a full minute or two. When you’re done, tap the microphone button once more. Your rambling will now be text in the box. Your “ums” and speech quirks will be in there, forming extra context about the way you were thinking. Do not bother reading it — if there are typos, the AI will figure it out. Click send. See what happens.
Just be prepared for the fact that ChatGPT (or other tools) won’t give you a single streamlined answer. It will riff off what you said and give you something to think about. You can then seize on what you think is interesting — when you read the response, you will be drawn to certain things. Drill down, ask questions, share your thoughts. Keep using the voice input if it helps. It’s convenient and helps you really get into a stream-of-consciousness rambling state.
Did the response you got fail to deliver what you needed? Tell it. Say you were disappointed because you were expecting something else. Say you’ve already watched all those shows and you didn’t like them. That is extra context. Keep drilling down.
You don’t have to use voice input, necessarily. But, if you’re typing, you need to type like you’re talking to yourself — with an inner dialogue, stream-of-consciousness style, as if you were speaking out loud. If you say something that isn’t quite right, don’t hit backspace. Keep going. Say: “That wasn’t quite right — I actually meant something more like this other thing.”
The beauty of back-and-forth
Let’s say you want to use genAI to brainstorm the perfect marketing tagline for a campaign. You’d start by rambling about your project, or maybe just speaking a shorter prompt. Ask for a bunch of rough ideas so you can start contemplating and take it from there.
But then, critically, you keep going. You say you like a few ideas in particular and want to go more in that direction. You get some more possibilities back. You keep going, on and on — “Well, I like the third one, but I think it needs more of [something], and the sixth one is all right but [something else].” Keep talking, postulating, refining, following paths of concepts to something that feels more right to you.
If the tool doesn’t seem to be on the right wavelength, don’t get frustrated and back out. Tell it: “No, you don’t understand. This is for a major clothing company. I need it to sound professional but also catch people’s eyes. That’s why your suggestions are all too much.”
Just as the long stream-of-consciousness ramble lays down context that pushes genAI in a useful direction, this back-and-forth lays down context as groundwork. Your entire conversation up to that point forms the scaffolding of the exchange and affects the future responses in the thread. As you keep adding to and continuing the conversation, you can make it more attuned to what you’re looking for.
Crucially, genAI is not making decisions. You are making all the decisions. You are exercising the taste. You can push it in this or that direction to get ideas. If it lands on something you disagree with, you can push back: “No, that’s not right at all. We really got off track. How about…?”
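For anyone wiring this up in code rather than a chat window, the same back-and-forth is just a growing message list: each reply and each piece of pushback gets appended, and that accumulated thread steers the next response. A sketch, again with the OpenAI SDK and a placeholder model name:

```python
# Sketch of iterative refinement: the growing message list is the scaffolding
# that shapes each new response. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Brainstorm taglines for a major clothing brand."}]

feedback_rounds = [
    "I like the third one, but it needs to sound more professional.",
    "No, you don't understand -- it's for a major label. Eye-catching, not flashy.",
]

for feedback in feedback_rounds:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    assistant_text = reply.choices[0].message.content
    print(assistant_text)

    # Keep the whole conversation: the model's answer, then your pushback.
    messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": feedback})
```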
Is this silly? Well, brainstorming doesn’t normally mean sitting in an empty room meditating while staring at paint drying. It often means searching Google, seeing what other people say, poking around for inspiration. This can be similar — but faster.
Maybe you still use Google for brainstorming sometimes — or go for a walk and be alone with your thoughts! That’s fine, too. GenAI is meant to be another tool in your toolbox. It isn’t meant to be the end-all answer.
The bigger AI picture
To be clear: I’m not here to sell you on the idea of embracing genAI. I’m here to tell you that companies peddling these tools right now are selling you the wrong thing. The way they talk about the technology is not how you should use it. It’s no wonder so many smart people are bouncing off it and being rightfully critical of what we’re being sold.
GenAI should not be a replacement for thinking. More than anything, it is a tool for exploring concepts and the connections between them. You can use it to write a better email. You can use it to put together a marketing plan. It will do things you don’t expect, and that’s the point.
Yes, it might hallucinate and make things up. (That’s why you need to keep your brain engaged.) You might want to just opt out. You might decide to keep plugging away looking for answers. Just remember: If you’re using genAI, try to use it to be more human, not less. That will help you write better emails — and accomplish much more beyond that.
Let’s stay in touch! Sign up for my free Windows Intelligence newsletter today. I’ll send you three new things to try each Friday.
Gee whiz, Gboard! An invaluable new Android typing upgrade
Ahh — don’t you just love the feeling of a fresh ‘n’ zesty virtual upgrade?
Here in the land o’ Android, that doesn’t mean only major new Android versions (and, alas, the woeful waiting that often accompanies ’em). Few mere mortals realize it, but Google’s quietly created a system of ongoing under-the-hood upgrades that affect all Android-appreciating animals equally — thanks to the way system-level components have bit by bit been pulled out of the operating system itself and positioned instead as individual apps in the Play Store. That means they’re updated instantly and universally numerous times a year, without the need for any manufacturer involvement.
And that, in turn, means even decade-old Android phones get updates every month that are equivalent to an entire Apple operating system rollout. Those updates just aren’t packaged neatly or presented cohesively, and most people don’t consider how all of the small-seeming pieces add up.
This current month of ours is the perfect example. While we’ve been gawking and giggling and guffawing (and whatever other “g”-oriented actions you’ve been up to), Google’s been giving us an incredible upgrade to the Android typing experience. It’s been showing up on Android devices worldwide all week, without any fanfare or announcement. And it’s up to you to dig it up, put it into a place where you’ll see it, and then start putting it to practical use.
The good news is that (a) it’s brilliantly useful and easy to use — and (b) it couldn’t be much simpler to find, once you know where to look.
Lemme show ya.
[Psst: Love shortcuts? My free Android Shortcut Supercourse will teach you tons of time-saving tricks. Come start your adventure!]
Your new Android keyboard treasure
Ladies, gentlemen, gerbils, and geckos, the upgrade of which we speak revolves around Google’s Gboard keyboard — the de facto default Android typing experience and the best all-around Android keyboard app for most people nowadays.
(If you aren’t using Gboard already, you can download it for free from the Play Store and then open it once to get going, no matter what type of Android device you’re using.)
What’s new this week is the presence of a long-under-development added button for undoing any text-related actions you’ve taken — whether that’s an errant edit, an over-the-top addition, or an egregious erasing error.
Whatever the case may be, Gboard now at long last offers a super-simple one-tap way to undo anything that’s happened in any text field, anywhere on Android.
And all you’ve gotta do is uncover it — then drag it into a spot where it’s easily accessible for ongoing use.
Here’s all there is to it:
- Fire up Gboard, by tapping into any open text field anywhere on your device.
- Tap the four-square menu icon in the keyboard’s upper-left corner.
- Look for the newly added Undo button somewhere in the screen of choices.
- Press and hold that button, then drag it up into any position you like within the bar of shortcuts at the top of the keyboard.
And that’s it: Now, anytime you want to undo any sort of text-related action, you can simply tap that button at the top of your keyboard. (And if you ever aren’t seeing that row of Gboard shortcuts, you can tap the four-square menu icon again to reveal it.)
From there, you’ll just tap-a-tap-tap-tap to undo further and further into your Android typing history — or tap the “Redo” option that appears on Gboard’s top row to undo your undo, even.
The new Gboard Undo button makes it laughably easy to undo and redo any keyboard action.
JR Raphael, Foundry
Fresh ‘n’ zesty — just the way we like it.
Now, if you aren’t seeing that Undo button anywhere within Gboard, don’t fret your freckly little ferret-face. Google’s in the midst of sending this upgrade out to everyone as we speak, and while it seems to be fairly widespread already, it’s entirely possible it hasn’t reached every single corner of the universe quite yet.
Check the Play Store for any pending app updates, then set yourself a reminder to repeat that same process in another few days. No matter what device you’re using, the new Undo button should show up for you soon — any day now, if it hasn’t already.
And once it does, my goodness, are you in for one heck of an Android typing upgrade.
Whee!
Get six full days of advanced Android knowledge with my free Android Shortcut Supercourse. You’ll learn tons of time-saving tricks!
Apple’s Worldwide Developers Conference set for June 9
Apple will host its annual Worldwide Developers Conference (WWDC) online from June 9-13, with a small number of developers and students invited to attend in person for the keynote and state of the union announcements at Apple Park.
As it has been since the Covid-19 pandemic, WWDC will be an online event. That’s great for developers, as it gives everyone more equal access to the tools, developer contacts and advice offered to attendees. Apple says the event will give developers unique access to Apple experts, and insight into new tools, frameworks, and features, which is what this show is normally about.
“We’re excited to mark another incredible year of WWDC with our global developer community,” said Susan Prescott, Apple’s vice president of Worldwide Developer Relations. “We can’t wait to share the latest tools and technologies that will empower developers and help them continue to innovate.”
Why WWDC matters
WWDC is a critical event in the Apple calendar. It’s where the company reveals upcoming enhancements to its operating systems and developer tools, and hints at hardware news. It’s also the most revealing glimpse we get into the company’s strategic approach to the coming months.
This is going to be of particular importance this year, as the company works to repair the reputational damage it has taken from not yet delivering all the Apple Intelligence features it promised in 2024.
You’ll be able to watch the keynote online, via Apple TV, and on YouTube. Registered developers will also be able to access all the resources Apple makes available at WWDC using the Apple Developer app, Apple Developer website, and Apple Developer YouTube channel. This year’s conference will include video sessions and opportunities to connect with Apple engineers and designers in online labs.
The Swift Student Challenge takes place along with the show. Successful applicants will be announced March 27 and given a chance to join the special event at Apple Park. In addition, 50 Distinguished Winners, who are recognized for outstanding submissions, will be invited to join WWDC in California.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
Apple’s CEO says DeepSeek AI is ‘excellent’
If you’ve been interested in or enthusiastic about China’s DeepSeek AI, you’re not alone – Apple CEO Tim Cook also seems impressed with the tech, which he calls “excellent” – though he stopped short of sharing any plans to integrate these models within his company’s own AI efforts.
Cook was in China to woo government and suppliers, as the company remains anxious to maintain its second-largest business geography even in the face of growing politically driven tensions between China and the US. Apple shareholders would expect nothing less from company leadership than to work to preserve company revenue. Apple’s long-standing manufacturing alliance with China also counts for something, even as iPhone sales have declined 25% in the region.
Deep local partnerships
Apple Intelligence, the company’s so-far disappointing take on generative AI (genAI), is not yet available in China, but it’s thought the company has been speaking with local partners and officials to find some way to make it available. Apple is required to work with a Chinese provider of AI services in the market. To that end, it is expected to introduce support for AI models from Baidu and/or the Alibaba Group to replace ChatGPT in Apple’s implementation. (The assumption about why DeepSeek was not selected is that it is not yet ready to scale to meet the needs of Apple’s huge customer base.)
Apple is holding a developer conference in China this week, where it is expected to announce additional plans for Apple Intelligence there.
Cost and scale
But the capacity to scale, or lack of it, doesn’t mean DeepSeek isn’t impressive. It is, particularly as its powerful R1 model, with its estimated development price of just $5.6 million, compares really well to more costly models from US-based genAI firms. During the last financial call, Cook discussed DeepSeek’s low cost and high performance, characterizing the achievement as proof that “innovation that drives efficiency is a good thing.”
It certainly contrasts with the hundreds of millions of dollars Apple presumably spent on developing Apple Intelligence features it still can’t bring to market. Given the chaos that has hit the Siri team since he made those statements, I’m in little doubt he’d quite like to have seen the Apple Intelligence team forge a similar path to success. But perhaps the challenges of linking legacy Siri technologies with advanced AI remain too great for this to happen. Perhaps there’s a solution available?
Tariff troubles
The other challenge in China is the scale to which the current US administration will apply tariffs against Apple products imported from China. With new tariffs as high as 20% being discussed, Apple will want to find some way to navigate between the two nations, maintaining business in both regions while minimizing the impact of tariffs on product prices. It will, after all, be Apple’s US customers who end up paying more for the products taxed in this way, and the company will want to manage the impact on them.
Fundamentally, there is one very big reason Apple makes so many of its products in China — the distribution of skills. Apple continues to invest in efforts to educate tomorrow’s generations of developers, but China can already field them by the thousands.
It’s not just cost
Cook explained this a few years back: “In the US, you could have a meeting of tooling engineers, and I’m not sure we could fill the room,” Cook said. “In China, you could fill multiple football fields.” He also explained that labor costs are only part of the equation; the “quantity of skill” is also important. The upshot is that until the US solves the fundamental challenge of building workforces fully skilled up for advanced technology manufacturing, no number of tariffs will force jobs to move there. It needs to invest before US leaders can expect that to happen.
Apple has made big attempts to build business outside China since 2018. Today, just over 15% of its iPhones are made in India, and it has factories in locations across the world. It is also investing heavily in US manufacturing, including its $10 billion Advanced Manufacturing Fund, an academy in Detroit, and its new server factory in Houston.
But one thing it hasn’t got — at least, not yet — is its own slick, small and economical answer to DeepSeek. Not for lack of trying.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
Otter.ai’s voice-activated AI agent can answer questions during online meetings
Otter.ai, which is primarily known for its voice-to-text transcription service, is rolling out a voice-activated AI agent that will participate in and answer questions during online calls.
The “Otter Meeting Agent,” launched Tuesday, activates a voice agent when called upon, understands questions put to it, and pulls information from the public web or internal documents to answer queries.
Otter.ai already has a web interface and an app that transcribes voice notes to text, generating summaries of notes and allowing users to query the system via text.
The system can now do the same via voice during meetings, which can come in handy when people need immediate information at their fingertips and don’t want to type in queries, said Sam Liang, CEO of Otter.ai.
“This new AI Meeting Agent is actually going to be able to answer questions or participate in meetings and even take some actions to perform some tasks,” Liang said.
For example, in a demonstration during a Zoom videoconference, Liang asked the AI agent, participating as an attendee, “Hey Otter, who invented the audio recorder and in what year?” and the agent responded instantly: “The audio recorder was invented by Thomas Edison in 1877.”
The voice-activated agent works with Zoom and will integrate into Google Meet and Microsoft Teams software in the coming weeks.
The agent is also able to answer questions by sourcing information from corporate data. For example, it can respond to a query about subscriber numbers or growth rate by selectively pulling data from internal databases.
The AI agent can also take other actions such as scheduling a follow-up meeting or drafting an email.
The system uses Retrieval-Augmented Generation (RAG) to analyze user queries, which it then breaks down into subtasks, deciding which functions to call or which internal system to send a query to. “It could pull data from multiple places and then generate the answer to summarize the results,” Liang said.
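In general terms (this is a generic sketch, not Otter’s actual pipeline), a RAG flow retrieves the internal snippets most relevant to the spoken question and hands the model only that context along with the question:

```python
# Generic retrieval-augmented generation sketch (not Otter's actual pipeline):
# rank internal snippets by relevance to the question, then build a prompt that
# asks the model to answer using only that retrieved context.
def retrieve(query, documents, top_k=2):
    # Stand-in for a real vector search: rank snippets by word overlap with the query.
    words = set(query.lower().split())
    return sorted(documents, key=lambda d: len(words & set(d.lower().split())), reverse=True)[:top_k]

docs = [
    "Q1 report: subscribers grew 12% quarter over quarter.",
    "Facilities: the office lease was renewed through 2027.",
    "Q1 report: churn fell below 3%.",
]
question = "How fast did subscribers grow in Q1?"

context = "\n".join(retrieve(question, docs))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then go to the language model for the spoken answer
```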
At a recent GTC panel discussion, tech leaders stressed that customers prefer to talk to humans rather than machines. But Liang said some voice AI agents can supplement human agents and meet basic needs, such as answering simple questions.
For now, the voice-activated meeting agent is an on-demand feature answering questions from internal documents, much like text-based requests. The agent is only activated on voice requests and doesn’t interrupt conversations.
However, Otter.ai is moving in a direction where the agent will soon be an active participant, Liang said. It will even be able to correct misinformation.
The idea that agents would take part in meetings isn’t new. More than a decade ago, in 2013, Forrester analyst J.P. Gownder predicted AI tools would participate in meetings proactively, in addition to surfacing data and insights from internal systems.
“I published a report predicting that AI would participate in meetings proactively, surfacing data and insights from internal systems,” he said this week. “I was obviously a good bit ahead of time on that one. The logic here is that meetings often don’t take advantage of the vast stores of enterprise data that we have: We make decisions without consulting our own data, insights, documents, and past conversations, which is suboptimal.”
But there are several issues at play before AI meeting participants are likely to be welcome, including accuracy and social acceptance. Participants might not like being interrupted by AI agents. And if the tool makes even one or two mistakes, humans might quickly come to distrust it.
“Can these AI meeting participants find the right data, documents, and insights out of our vast enterprise SharePoint sites to surface during a meeting?” Gownder said. “I would suggest that it will be quite a few years before this becomes mainstream, even though it offers a very logical and potentially powerful value proposition.”
Otter has multiple monthly and yearly subscription plans, including a free tier and a $30 monthly plan for business users. The subscription service also includes an enterprise plan, for which a price wasn’t given.
Otter has 25 million subscribers but declined to detail the number of paid subscribers.
Microsoft launches AI agents to automate cybersecurity amid rising threats
Microsoft has introduced a new set of AI agents for its Security Copilot platform, designed to automate key cybersecurity functions as organizations face increasingly complex and fast-moving digital threats.
The new tools focus on tasks such as phishing detection, data protection, and identity management — areas where attackers continue to exploit vulnerabilities at scale.
20 advanced Android 15 tips
Like many recent Android versions, Android 15 may seem subtle on the surface.
And, to be fair, it certainly isn’t the sort of dramatic interface reinvention we’ve seen from Google in the past. But that doesn’t mean it’s short on efficiency-enhancing treats just waiting to be found.
In fact, most of Android 15’s most noteworthy advancements are things you’d never even notice if you didn’t know where to look. Once they’re on your radar, though, you’d better believe they’ll make your life — and your work — a heck of a lot easier.
Here, then, are 20 advanced Android 15 treasures. Whether you’ve had Android 15 in your life for ages already (hiya, Pixel pals!) or you’re keeping an eye out for the update to reach you hopefully any day now (oh, Samsung…), this will serve as your roadmap to tracking down the software’s best surprises — whenever they become relevant to you.
Note that these features are presented as they apply to Google-made Pixel phones and Samsung-made Galaxy devices, specifically. Different device-makers modify Android in different ways, so if you’re using a phone made by any other company, the availability and exact presentation of some items may vary.
Android 15 tips, part I: Notification nuance
1. As of the most recent Android 15 quarterly update, Google’s Pixel devices have a new Notification Cooldown feature that lets you avoid being annoyed by incessant incoming alerts. Once you flip the switch — within the Notification section of your system settings — your phone will automatically lower its volume anytime you get a bunch of back-to-back notifications from the same app.
Google’s Android 15 Notification Cooldown makes notifications less annoying.
JR Raphael / Foundry
No Pixel? No problem: You can set up something similar and even more customizable on any Android device, no matter what Android version it’s running.
2. Also specific to our Pixel-preferring persons, Android 15 adds in a nifty new option that lets your phone’s vibration behavior automatically adjust itself based on your environment. If your phone is in your pocket, for instance, it’ll vibrate at full blast — but if it’s sitting out on a table, it’ll calm its buzz considerably.
All you’ve gotta do is flip the switch to activate your new Adaptive Vibration advantage. Look in the Sound & Vibration section of your system settings, then tap “Vibration & haptics” followed by “Adaptive vibration” to find the toggle.
3. Another recent Android 15 addition is the introduction of a slightly perplexing new system called Modes. Modes is essentially an expansion of Android’s existing Do Not Disturb setting that bundles options like Bedtime and Driving into a broader interruption-controlling umbrella.
It also allows you to create your own custom modes for how you want to be notified in different scenarios — if, say, you’d like to set up a mode for work in which only work-related apps are allowed to bother you and a mode for weekends where only personal messaging apps can make any sounds.
Android 15’s Modes system provides a new way to think about controlling your phone’s behavior.
JR Raphael / Foundry
You can set up modes however you want by looking for the new Modes section within the main system settings menu. Just note that in Google’s version of Android, Do Not Disturb is now part of Modes — meaning your device’s Do Not Disturb status is actually one of the modes you can select — whereas in Samsung’s Android interface, Do Not Disturb continues to exist separately, as its own independent option outside of the main modes list.
4. Speaking of Samsung-specific changes, Android’s keeper of the Galaxy has introduced a major shift with the way you view notifications in Android 15. Instead of having Android’s notifications and Quick Settings exist in a single combined panel that’s summoned with a swipe down from the top of the screen, Samsung now has those two areas divided out into completely different panels.
That means on any Samsung device where this change is present, you’ll swipe down from the left half of your screen’s top edge to see your notifications — and swipe down from the right half to see your Quick Settings.
See?
With Android 15, Samsung’s Quick Settings and notification panels are in two separate places.
JR Raphael / Foundry
5. With Samsung’s new split notifications-Quick-Settings setup, you also have the ability to swipe between those two areas of Android whenever they’re visible. Just slide your finger horizontally — toward the left, if you’re seeing notifications, and toward the right, from the Quick Settings view — to switch from one to the other.
6. If you find that split-apart setup to be more work than it’s worth, you can go back to the standard combined panel interface. First, swipe down from the top-right of the screen to open the Samsung version of Quick Settings, then tap the pencil-shaped editing icon, select “Panel settings,” and tap “Together” in the menu that pops up.
Android 15 tips, part II: Apps, multitasking, and on-demand info
7. Android 15 adds in a way to create an extra layer of privacy — a “Private Space” designation that increases the protection around any especially important (or maybe just sensitive) apps and info.
Any apps in your device’s Private Space won’t show up in your app drawer, recent apps view, notifications, or even settings. And they’ll always require authentication — a pattern, PIN, password, or biometric verification — to be seen or opened.
On a Pixel phone or another device that follows Google’s standard Android setup, you can activate and set up your Private Space by looking for the “Private Space” option within the Security & Privacy section of your system settings. Once you’ve got it going, you’ll be able to find and interact with your Private Space at the bottom of your standard app drawer (which you can always access with a single swipe up on your home screen).
Once activated, Private Space adds a special area of protected apps into your standard Android app drawer.
JR Raphael / Foundry
Samsung didn’t bring Private Space into its heavily modified Android interface, but Galaxy devices have had a similar equivalent for some time now. If you have a Samsung device, search your system settings for Secure Folder to find and enable that feature.
8. Android’s been in a league of its own with multitasking since the platform’s earliest days, but one of the software’s most useful features is also one of the toughest to find — and to remember to keep using.
I’m talkin’ about the ability to split your screen in half and see (and use!) two apps at once, on screen together at the same time. With Android 15, that action gets easier to manage and keep front and center. That’s because of a new option that empowers you to create a shortcut for a specific preset pair of apps — say, Gmail and Chrome or maybe Docs and Outlook — and then open those two apps into a split together with a single swift tap.
To start, you’ll first need to fire up a regular split-screen with the two apps you want to use for your shortcut:
- Open up your phone’s Overview mode (by swiping up about an inch from the bottom of the screen and then stopping, if you’re using the current Android gesture system, or by tapping the square- or three-line Overview button along your device’s bottom edge, if you’re still stickin’ with the legacy three-button nav setup).
- Tap the icon above any app in that area, select “Split screen” (with standard Android) or “Open in split screen view” (in Samsung’s Android vernacular), and select another app to pair with it.
- Head back into that same Overview area, tap one of the icons above your newly created split, and tap the option to “Save app pair” — or “Add app pair,” with Samsung’s different-for-the-sake-of-different equivalent.
Android 15’s app pair feature makes it easy to use two specific apps together at the same time.
JR Raphael / Foundry
That’ll put a single shortcut to open that specific pair of apps together in a split right on your home screen — so the next time you want to make that happen, you’ll need only that one fast tap. (If you aren’t seeing the option to create a new app pair, by the way, try switching back to your device’s default launcher. It seems that support for this feature may be limited to that environment.)
9. I’m a big proponent of uninstalling stuff you aren’t actively using. That keeps unused apps from needlessly burning up resources on your device and potentially creating unnecessarily opened windows into your data. But if you aren’t gonna stay on top of that task regularly, Android 15 has a new next-best-thing option that’ll help.
It’s a feature that automatically archives apps for you when they haven’t been used in a while. And it could actually be advantageous in certain scenarios even if you’re pretty good about eliminating unused apps on your own, as it makes it easy to restore an app later — with its previous data and settings still in place — without having to start over from square one.
Any apps put into that pile are temporarily unavailable and compressed down to take up less space. But all it takes is a couple taps to bring ’em back to life from there, should the desire ever strike.
This feature is on by default in most Android 15 environments, so you shouldn’t have to do anything to have it available. If you don’t want certain specific apps to ever be archived — even if they’re unused — head into the Apps area of your system settings, tap the line to see all apps, and select any app from the list. Then, within its settings screen, look for the “Manage app if unused” toggle and flip it into the off position.
10. Pixel owners, take note: Android 15 introduces a simple but supremely welcome new switch that’ll keep your apps’ names from being cut off in your app drawer.
Traditionally, apps with long names have been truncated in that area — leaving you with awkward and occasionally confusing abbreviations like “American Ai…” (for American Airlines). With Android 15, you can opt to instead have any such titles split into two lines and shown in their entirety.
All you’ve gotta do is flip a switch: Provided you’re using the standard Pixel Launcher and not a custom Android home screen setup, long-press on any open space on your Pixel’s home screen and select “Home settings” followed by “Apps list settings.” Then, nudge the toggle next to “Show long app names” into the on position.
Just note that this change doesn’t apply to the home screen — only the app drawer. But hey, that’s what custom Android launchers are for!
11. While we’re on the subject of understated but appreciated adjustments, with its take on Android 15, Samsung is at long last giving Galaxy owners the ability to have their app drawers scroll in a sensible vertical style — by swiping up and down, in other words, instead of being forced to clumsily swipe through multiple pages just to see all of your apps.
This one may or may not be activated by default, but it isn’t at all difficult to dig up — once you know where to look. On any Galaxy gadget with Android 15 in place:
- Open up your app drawer (by swiping upward on your home screen).
- Tap the three-dot menu icon within the search bar at the bottom.
- Select “Sort,” then change that option from “Custom order” to “Alphabetical order.”
Flip one switch, and boom: Samsung’s Android app drawer suddenly makes sense.
JR Raphael / Foundry
12. Here’s one for any large-screen Pixel appreciators out there: Your device’s dynamic taskbar — that handy little app-switching bar that pops up when you swipe up gently from the bottom of your screen — can now be kept visible and at your fingertips all the time instead of only when you summon it.
The secret lies within an easily overlooked new option deep inside Android 15’s innards. So if you’re toting a Pixel Tablet or a folding Pixel, try this: Summon the taskbar by swiping up gently from the bottom of the screen (while your phone is in its unfolded state, on a Fold). Then press and hold the little vertical line on the taskbar’s left side, between the app drawer icon and the first app in the list. That’ll reveal the newly added option to keep the taskbar permanently in place.
13. I wouldn’t be surprised if you’d never once thought about Android’s screen saver system. It’s one of the platform’s most potential-packed possibilities, though — and with Android 15, it’s getting even more powerful.
Specifically: Android 15 takes the once-Pixel-Tablet-exclusive ability to show a connected-device control panel anytime your device is docked or charging and makes it available on any Android product, thereby making said contraption significantly more useful in its idle state.
Just march on over to the Display section of your system settings and tap the “Screen saver” option. Turn the system on, if it isn’t already, then look for the “Home Controls” choice — or “Device Control,” in Samsung vernacular — to set and select your smart new setup.
Don’t let its name fool you: The new Home Control screen saver can be incredibly useful for any kind of connected-device control.
JR Raphael / Foundry
And if you aren’t seeing that option, make your way to the Play Store and install the Google Home app. It’s the hub for all connected-device controls, whether they’re in your home or in an office (or even in a fully colonized frog palace). Once you’ve opened it and signed into it a single time, the new screen saver option should appear and be available for you.
14. Google’s next-gen Gemini Android assistant may be overly complicated, inconsistent, and at times dangerously inaccurate, but it does have a handful of potentially useful tricks. And with Android 15, it gains a big one: the ability to perform multipart actions across multiple apps on your behalf.
So, for instance, you might ask Gemini to find all the best lunch spots in a specific area and then text ’em to a particular client or colleague — or to look up the address of a certain restaurant and then add it into an event in your calendar for dinner tonight at 6 p.m.
It’s fantastically useful, when it works. The tricky thing is knowing its limitations and which apps are compatible and then getting it to do what you want it to do.
But it’s well worth your while to play around with and see how it might save you time. And while this capability is currently being framed as a feature that’s exclusive to the new Samsung Galaxy S25 phone, I was able to get it working both on that device and on a Pixel 9 Pro — so while it’s hard to say exactly how far it’s spread at this point, it certainly isn’t limited to any one Android-15-running model.
Just fire up Gemini to give it a go for yourself. Depending on your device, you might be able to do that by saying Hey Google or pressing and holding your physical power button — or, if you haven’t yet opted into using Gemini (and it wasn’t already present by default for you), try downloading the official Google Gemini app and then opening it once to get it up and running.
Android 15 tips, part III: Smarter sounds
15. With Android 15 on your device, you’ve got a snazzy new look for your volume panel — and with Pixel devices, that means you now have the ability to control exactly where any audio playing from your phone or tablet gets directed.
Just tap the three-dot menu icon within the volume panel (after pressing either of your device’s physical volume buttons to make it appear), then tap the big “Audio will play on” button at the top of the panel’s expanded view to explore your options.
The newly expanded Android 15 volume panel has a handy option for determining where your device’s audio is sent.
JR Raphael / Foundry
16. Another one for the Pixel adorers among us: The next time you temporarily turn off Bluetooth on your device, you can have it automatically come back on a day later — without having to remember yourself.
You’ll see the option appear anytime you disable Bluetooth, or you can also find the associated toggle in the Connected Devices section of your system settings — under “Connection preferences” and then “Bluetooth.”
17. If you’re packin’ a Pixel 8 or higher, Android 15 gives you some spectacular new superpowers around improving the sound in any videos you capture. It’s an expansion to your Pixel’s existing Magic Audio Eraser system, which lets you select specific individual sounds to remove from a video.
With Android 15, Magic Audio Eraser can identify distinct sounds — like one particular person’s voice or even wind noise — and make it easy for you to lower or erase everything associated with that specific source.
Tap the Edit button beneath any video on your phone (in either the Camera app or Photos), then select “Audio Eraser” to get started.
18. Got a Galaxy gizmo? Get yourself over into your Samsung Phone app settings to surface the newly present option for recording and creating an instant transcript and summary of the conversation. Just look for the “Record calls” section in the Phone app’s settings section (accessible via the three-dot menu icon in the app’s upper-right corner).
A similar feature has been available with Google’s Pixel 9 models for a while now as well. And if you want to record a call on any other Android device — well, where there’s a will, there’s a way.
Android 15 tips, part V: The finer touches
19. Android has long offered settings to boost your screen’s contrast and make it a teensy bit easier on the eyes. As of Android 15, you can take that customization up a notch with a sophisticated new series of contrast controls.
Look for the “Color contrast” option within the Display section of your system settings — or, if you’re using a Samsung-made device, go instead to the Accessibility section of your settings and then tap “Vision enhancements.”
Android 15’s expanded contrast controls, as seen in Google’s standard Android interface (left) and in Samsung’s version of the software (right).
JR Raphael / Foundry
Either way you get there, you’ll find yourself facing a freshly expanded array of contrast-related choices, with plenty of room to figure out which precise path looks most pleasing to you.
20. Last but not least is a new touch of nuance for your device’s charging habits. While Google recently released an Adaptive Charging system for Pixels that lets you cap your phone’s charge at 80% in order to prolong its battery health, Samsung has stepped things up in that department and will now let you choose exactly where your phone should stop its charge.
That means you can enjoy the same stamina-stretching advantage while having a little more flexibility — if, say, you find 80% just isn’t quite enough to make it through the day and you’d rather charge up to 85% or 90% instead.
You can find the new options within the Battery section of your system settings on a Samsung Android device, under the “Battery protection” submenu.
And there you have it: 20 bits of Android 15 brilliance to seek out and explore. Find something new and useful — or if you’re still waiting for Android 15 to arrive on your device, save this page for later and head over to my collection of Android-15-inspired features you can bring to any Android device today.
Review: The new M4 MacBook Air is even better for business
Beneath the hyperbole around Apple Intelligence and intensifying regulation, you might have missed that Apple recently introduced a new MacBook Air equipped with an M4 chip. It’s a compelling upgrade to the world’s best-selling laptop, and I’ve had time to put it through its paces.
It’s an important Mac, given that the Air is probably Apple’s biggest-selling computer. Its journey began back when Apple’s then-CEO, Steve Jobs, drew the first-ever model out of a brown paper envelope, stressing the power and portability of the system. “We think that this is the future of notebooks…. All notebooks will be like this someday,” he said at the time.
He was right, and also wrong, because I don’t think there’s anything else quite like the MacBook Air at this price.
You see, these computers (from $999) deliver almost the same computational performance, as measured in Geekbench 6, as the M1 Max Mac Studio Apple introduced three years ago.
I don’t know how you see that, but to me, given the extent to which everyone was blown away by just how powerful those Studio Macs were, the fact that you can now put near-equivalent performance under your arm for less than $1,000 should make the M4 MacBook Air popular with consumers and business users alike.
Ready for an upgrade?
That’s something that matters as Microsoft ends support for Windows 10 later this year, giving lots of PC users a really good opportunity to upgrade to the Mac. Many already are: Canalys recently reported that US Mac shipments in Q4 2024 increased 25.9% year-on-year, even as the PC industry as a whole grew just 5.7%.
“The Windows refresh cycle provides fertile ground for Apple to target both consumers and businesses that may be open to switching operating systems,” Canalys said.
The fertility of that ground cannot have been far from Apple’s mind when it decided to add a 10-core M4 chip to the MacBook Air — especially given the speed and energy efficiency that define every M-series Mac released so far. That it managed to reduce the entry-level price by $100 just adds to the appeal.
Performance comes at lower cost
Let’s roll out the Geekbench 6 benchmarks. They show that, iteration by iteration, Apple is delivering compelling performance improvements that cement its reputation as the leading PC maker.
- M1 MacBook Air: 2,346 single-core; 8,356 multi-core.
- M2 MacBook Air: 2,588 single-core; 9,691 multi-core.
- M3 MacBook Air: 3,065 single-core; 11,959 multi-core.
- M4 MacBook Air: 3,833 single-core; 14,871 multi-core.
Look at that data.
Not only does it illustrate the extent to which Apple Silicon has transformed the Mac, it also shows us just how solid the company’s processor development road map has turned out to be. Seeing is believing, and since the first M1 machines appeared, Apple has managed to deliver compelling upgrades on a near-annual basis. It will continue to do so, even as competitors flail in their attempts to catch up on machines that can match Apple on performance, price, and energy consumption.
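If you want to sanity-check that trajectory yourself, here’s a small Python sketch that works out the generation-over-generation multi-core gains from the scores listed above (these are the figures quoted in this review, not new benchmark runs):

```python
# Generation-over-generation multi-core gains for the MacBook Air line,
# computed from the Geekbench 6 scores quoted in this review.
scores = {"M1": 8356, "M2": 9691, "M3": 11959, "M4": 14871}

prev = None
for chip, score in scores.items():
    if prev:
        gain = (score / prev[1] - 1) * 100
        print(f"{prev[0]} -> {chip}: +{gain:.0f}% multi-core")
    prev = (chip, score)

# Cumulative improvement from M1 to M4
print(f"M1 -> M4: +{(scores['M4'] / scores['M1'] - 1) * 100:.0f}% multi-core")
```

Run as written, it shows roughly a 16%, 23%, and 24% jump per generation, and about a 78% cumulative gain from M1 to M4, which is the same story the raw scores tell.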
Thanks for the memory
One big upgrade in this iteration is that Apple doubled the memory inside these systems. Ostensibly to run artificial intelligence in the form of Apple Intelligence, the inclusion of 16GB as the base memory makes for improved performance in everything else you do on your Mac. Don’t underestimate the grunt here — not only can it handle existing Apple Intelligence operations, but it is plain that once Apple does deliver contextual smarts in Siri, the new Air will be able to handle it, in part thanks to that memory.
I also think the combination of a better processor with more built-in memory makes this model a compelling upgrade proposition, even if you are running an M2 MacBook Air, and certainly for M1 users. Of course, if you are using an Intel-based Mac of any kind, then any M-series Mac is a significant upgrade. (You also get a 12-megapixel Center Stage camera for those inevitable video meetings that punctuate most days.)
What’s like blue, but not as heavy?
Light blue is the new color to show off. It’s quite a subtle blue — nothing like a sky blue — but pleasing all the same. To my mind, it’s more a blue-tinged silver. But the votes in my house are broadly positive for the new shade, which will no doubt appear in student halls, coffee shops, meeting rooms, and offices near you in the coming months.
However, every silver lining comes wrapped up in its very own cloud, and the M4 MacBook Air is no different. The design hasn’t shifted one iota — it’s still aluminum, still beautiful, and now features more than 55% recycled content, including a 100% recycled aluminum enclosure. That also means a 13.6-in. display on my test unit, a 2,560-x-1,664-pixel resolution, two Thunderbolt 4 ports, a headphone jack, and one MagSafe 3 port. If you want more display space, you’ve got it: the M4 Air will drive up to two external displays in addition to the built-in Mac display.
Once again, this means it can be the heart of all kinds of complex professional tasks; if you want even more performance you’ll choose a MacBook Pro.
So, is this a Pro Mac at a consumer price?
Yes, and no. First, it is worth noting that Geekbench currently lists the M4 MacBook Pro as delivering 3,750 single-core and 14,707 multi-core, which basically means the new Air offers close to pro performance, but there are some compromises.
Those compromises include a smaller display (compared to the Pro’s 14.2-in. screen), a slightly slower chip with significantly fewer CPU and GPU cores than in the Pro, lower maximum memory and storage capacities, and battery life that is four hours shorter. (You’ll get 22 hours from a 14-in. M4 MacBook Pro compared to 18 hours from the new Air.) The MacBook Pro also offers much higher memory bandwidth, which means it can handle tougher tasks.
The differences basically come down to this: If you are doing computationally intensive work in any profession, you’ll want to go pro. But if you occasionally need to do something demanding you’ll be able to achieve it on a MacBook Air, though it might take a bit longer.
So, if you have a need for the most robust professional tasks, you should go up the scale; if you want something that can handle most challenging tasks, and is more than equal to most of the things most people use their computers for, then this is it. It is faster than other PCs in its class, with outstanding battery life and speakers smart enough to wrap your ears in a cocoon of spatially balanced sound when you need it.
That focus on the user has always been a hallmark of the Air line-up, and the experience of using this one remains just as compelling as it was when that first one jumped out of that brown envelope back in 2008 — one of the best in its class.
Buying advice
We’re running out of superlatives. This is a significant upgrade to an already excellent machine and the addition of an M4 chip just means the world’s best-selling notebook is an even better value than before. I don’t believe a PC that offers the combined appeal of these Macs really exists, which means that if you are in the market for a computer upgrade, you should certainly take a look at this. The M4 MacBook Air maintains Apple’s tradition of notebook excellence.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
ChatGPT use makes you feel lonely
Two new studies from ChatGPT developer OpenAI and MIT show that increased use of the chatbot correlates with increased loneliness and less social time with other people, according to Bloomberg.
ChatGPT users in the study also reported feeling increasingly emotionally dependent on the chatbot the more often they used it — and they perceived their use as increasingly problematic.
The first study followed close to 1,000 people with varying levels of ChatGPT experience for more than a month. They were either given a text version of the chatbot or one of two voice-based versions, which they used for at least five minutes a day.
Some were instructed to have open conversations about anything, while others were instructed to have either non-personal or personal conversations with the service. The researchers reportedly saw no difference in results between text-based and voice-based chatbots.
The second study analyzed 3 million user conversations from ChatGPT and surveyed how users interacted with the AI tool. The results show that very few people use ChatGPT for emotional conversations.
Neither study has yet been peer-reviewed by other researchers.
Evaluating AI agents? Early adopters outline practical challenges
Non-tech companies trying out AI agents in the field said the technology still has a lot of challenges to overcome before it can be used practically.
Some of the early uses of agentic AI are in the area of customer service, which was part of a panel discussion at Nvidia’s GTC conference, held last week in San Jose.
The main problem is humans still prefer talking to humans because they don’t trust machines, said Cameron Davies, chief data officer at Yum Brands, which includes brands such as Pizza Hut, Taco Bell and KFC.
The company hopes to get to 100% digital ordering in five years, with agentic AI playing a role, Davies said.
Specifically, Yum Brands wants to automate orders at drive-throughs, Davies said. One of the most challenging service jobs is running drive-through windows — to take, fill and cash out orders — and AI agents could lighten that load.
But the main issue isn’t technological, Davies said. “What’s the greatest challenge in putting this into place, is that nobody wants to talk to a machine right now. And you have to ask yourself why is that the case?”
Yum Brands is also eyeing AI agents to reduce the “cognitive” load on human servers and employees. Scripted agents can do the role of “upselling and asking about charity donations,” Davies said.
“You do these things, then that person can now focus on being happier, making sure the order is right, getting the change right, etc.,” Davies said, adding that Yum Brands has been testing agents for back-office and HR functions, with mixed results.
Beyond trusting AI systems, compliance and accuracy are concerns for Craig Daniels, the head of Mayo Clinic’s Smart Hospital and Unbound Project.
Healthcare, by its nature, tends to lag other industries in adopting technology; Daniels is looking at the progress of AI agents at companies like Yum Brands to see what works.
For Mayo, the challenge is for doctors and patients to gain a high level of trust in AI to assist in diagnosis and treatment. Then they can consider the role of agents in helping doctors and patients.
Patients need to trust AI models just as they trust MRI machines without having to worry about the underlying technology, Daniels said.
Mayo Clinic is creating its own data platform with anonymous patient information gathered from 61 different healthcare organizations on four continents. For example, one Mayo AI system trained on 7 million electrocardiograms can detect and diagnose heart failure.
“There’s a point at which we have to trust the model works,” Daniels said. “It generates novel insights and we’ve researched and we trust that and we’re using it with the human to make the final decision. That’s a wonderful advancement.”
The US Food and Drug Administration (FDA) will ultimately regulate AI in healthcare as “software as a medical device,” which will require proven research, Daniels said.
More than in other industries, Mayo will require a variety of guardrails be in place to deliver trustworthy results as there’s no allowance for AI or agents to hallucinate. Many panelists also mentioned they were still in the process of figuring out those guardrails. “We want to be safe,” Daniels said.
The panelists said that while customer service and chatbots aren’t new, they’re being revisited in light of the arrival of AI agents.
Agents are expected to grow more human-like in reaction and voice, and they’re more flexible. Agents don’t follow scripts like chatbots, and depending on the customer, they can connect to other agents to better serve customers.
“I can make it talk like a person. I can change dialects. I can do those things, and I want to control it,” Yum Brands’ Davies said.
Agents won’t fully replace people any time soon, as human ingenuity is still required in many areas — to supervise AI agents, modify them to be more effective, and be able to verify AI output and results.
U.S. Bank’s first AI agents are augmenting human knowledge to service customers.
Human agents for banking, mortgage or investments are subject matter experts that need information at their fingertips; this is where the bank is testing AI agents, said Sumitri Kolavennu, head of AI research and senior vice president at U.S. Bank.
“Keeping the human-in-the-loop in many [regulated] industries is … really paramount,” Kolavennu said. “We love the advantages and autonomy aspects of AI agents, but we do want to keep the human in the loop.”
U.S. Bank is testing the effectiveness of agents by seeing how quickly problems are resolved.
“The biggest thing we are seeing is resolution on first call. When you call, you don’t want the agent to say, ‘Let me call you back tomorrow’ or something like that. Those are some of the things that we are seeing [AI agents] being able to do,” Kolavennu said.
How AI agents work
When generative AI (genAI) suddenly burst onto the tech scene with the arrival in late 2022 of OpenAI’s ChatGPT, companies quickly embraced its potential for automating tasks such as answering customer inquiries, handling support tickets, and generating content.
A slew of rival chatbots followed ChatGPT. But they tended to be static tools; they didn’t learn from user interactions or application integrations. Only their foundational large language models (LLMs) could be trained.
Enter agentic AI. By leveraging technologies such as machine learning, natural language processing (NLP), and contextual understanding, AI agents can operate independently, even partnering with other agents to perform complex tasks.
“Think virtual coworkers able to complete complex workflows,” McKinsey & Co. explained in a report. “The technology promises a new wave of productivity and innovation.”
According to the 2025 Connectivity Benchmark Report from Mulesoft and Deloitte, 93% of IT leaders plan to introduce autonomous AI agents within two years — and nearly half have already implemented them.
Like chatbots, AI agents have existed since the 1960s. However, it wasn’t until advances in AI, ML, deep learning, and transformer models (such as GPT-3 and ChatGPT) that they became capable of adapting to tasks and learning from data. That dramatically expanded their use cases.
Agentic AI systems typically use a transformer-based LLM as the core, enhanced with reasoning, memory, reinforcement learning, and tool integrations. The LLM’s understanding of language allows it to interpret instructions and generate responses.
In the simplest terms, an AI agent is the combination of an LLM and a traditional software application that can act independently to complete a task. They can operate autonomously, make decisions, plan, and take actions to achieve specific goals without constant human oversight.
“This is a way to deliver business value, and I think that is where the focus should be, to think about how you’re going to disrupt the business process,” said Samta Kapoor, a principal on Ernst & Young’s tech consulting team.
For example, if an employee requests vacation time, an AI agent can automate the process of entering the dates into the HR system and ensuring all other systems are aware that the employee will be away for the specified time. If the employee changes his or her mind and enters new dates, the agent can reschedule everything in the HR system autonomously. All it takes is a simple set of commands and away the AI agent goes, Kapoor said.
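As a concrete illustration, here’s a minimal, hypothetical sketch of that vacation-request flow in Python. Everything in it is a stand-in: `call_llm` fakes the model call, and `submit_pto` and `block_calendar` stand in for whatever HR and calendar APIs an organization actually exposes.

```python
import json
import re
from datetime import date

# Stand-ins for real systems; a production agent would call an LLM and HR/calendar APIs here.
def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; here it simply pattern-matches two ISO dates."""
    found = re.findall(r"\d{4}-\d{2}-\d{2}", prompt)
    return json.dumps({"start": found[0], "end": found[1]})

def submit_pto(employee_id: str, start: date, end: date) -> None:
    print(f"HR system: booked PTO for {employee_id} from {start} to {end}")

def block_calendar(employee_id: str, start: date, end: date) -> None:
    print(f"Calendar: {employee_id} marked as away {start} to {end}")

# The agent itself: interpret the free-form request, then act on the connected systems.
def handle_vacation_request(employee_id: str, request_text: str) -> None:
    prompt = ("Extract the vacation start and end dates from this request as JSON "
              '{"start": "...", "end": "..."}: ' + request_text)
    dates = json.loads(call_llm(prompt))
    start, end = date.fromisoformat(dates["start"]), date.fromisoformat(dates["end"])
    submit_pto(employee_id, start, end)
    block_calendar(employee_id, start, end)

handle_vacation_request("emp-42", "I'd like vacation from 2025-07-07 to 2025-07-18")
```

If the employee later changes the dates, the same flow simply runs again with the new request; no human has to touch each downstream system by hand.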
AI agents can also autonomously write software code and offer that base code to a developer, who can then review it for accuracy and modify it if necessary. There are agents that can perform the code review as well. And, best of all, it can all be done in seconds, not hours or days.
AI-assisted code generation tools are increasingly prevalent in software engineering and, somewhat unexpectedly, have become low-hanging fruit for most organizations experimenting with generative AI (genAI) tools. Adoption rates are skyrocketing, because even if they only suggest a baseline of code for a new application, automation tools can eliminate hours that otherwise would have been devoted to manual code creation and updates.
By 2027, 70% of professional developers are expected to be using AI-powered coding tools, up from less than 10% in September 2023, according to Gartner Research. And within three years, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering tool chain — a significant increase from approximately 15% early last year, Gartner said.
Beyond coding, AI agents are designed to perceive their surroundings, make decisions based on that information, take actions, and sometimes learn and adapt over time to perform tasks autonomously. Reinforcement learning is key to agentic AI’s ability to continue to grow in sophistication when performing tasks.
“If you’re playing a game, you either win or lose. If you lose, you go back and evaluate why, and then play again but do it differently,” Kapoor said. “With agentic AI, there are a very defined set of KPIs that you’re asking it to meet, so it would know whether it has met them or not. And then it goes back and it reinforces itself to do this task differently.”
For agentic AI, decision-making is structured around autonomy and goal-orientation. “There is a reward system within agentic AI and this is frequently based on reinforcement learning, where the AI learns to maximize rewards through interactions with its environment,” said Arun Gururajan, NetApp’s vice president of research and data science.
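One toy way to picture that reward loop is a simple bandit-style learner: the agent earns a reward of 1 whenever its defined KPI is met and gradually shifts toward the strategy that meets it most often. The sketch below is illustrative only and isn’t tied to any particular agent framework.

```python
import random

# Toy reward loop: the agent comes to prefer strategies whose KPI success rate is higher.
strategies = {"strategy_a": 0.5, "strategy_b": 0.5}      # agent's estimated success rates
true_kpi_rate = {"strategy_a": 0.6, "strategy_b": 0.9}   # unknown to the agent
learning_rate = 0.1

for step in range(500):
    # Mostly exploit the best-known strategy, occasionally explore.
    if random.random() < 0.1:
        choice = random.choice(list(strategies))
    else:
        choice = max(strategies, key=strategies.get)

    # Reward is 1 if the KPI was met on this attempt, else 0.
    reward = 1.0 if random.random() < true_kpi_rate[choice] else 0.0

    # Nudge the estimate toward the observed outcome.
    strategies[choice] += learning_rate * (reward - strategies[choice])

print(strategies)  # strategy_b's estimate should end up clearly higher
```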
The sense-think-act process and agent types
Agentic AI, Gururajan said, follows a cyclical sense-think-act process, which has the following steps (a bare-bones code sketch of the loop follows the list):
- Perception: The system gathers input from its environment and/or the user.
- Reasoning and Planning: The central brain of the agent, typically a powerful LLM, reasons through the task and generates and evaluates possible actions.
- Decision-making: Reinforcement learning strategies, often supplemented by human feedback as well as the memory of past interactions, help select the optimal action.
- Execution: The chosen action is carried out, possibly by calling on internal/external tools via API integrations.
- Feedback loop: Outcomes are assessed and used to refine future decisions, creating a continual learning process.
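Here is that loop reduced to its bare bones. Every function is a stub: in a real agent, `plan` would be an LLM reasoning over the task and memory, and `act` would call external tools or APIs.

```python
# A skeletal sense-think-act loop. Each helper stands in for the perception,
# reasoning, execution, and feedback layers described above.

def perceive(environment: dict) -> dict:
    return {"observation": environment["state"], "goal": environment["goal"]}

def plan(observation: dict, memory: list) -> str:
    # In a real agent, an LLM would reason over the task and past context here.
    return "increase" if observation["observation"] < observation["goal"] else "hold"

def act(action: str, environment: dict) -> None:
    if action == "increase":
        environment["state"] += 1  # e.g., call an external tool or API

def agent_loop(environment: dict, max_steps: int = 10) -> list:
    memory = []
    for _ in range(max_steps):
        obs = perceive(environment)                          # Perception
        action = plan(obs, memory)                           # Reasoning and decision-making
        act(action, environment)                             # Execution
        memory.append((obs, action, environment["state"]))   # Feedback loop
        if environment["state"] >= environment["goal"]:
            break
    return memory

print(agent_loop({"state": 0, "goal": 3}))
```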
There are several types of AI agents that can be employed based on the complexity of the task. They include:
- Reactive agents: These only respond to their environment based on predefined rules. They don’t store history or learn from it (e.g., simple game AI). The most basic of agents, they’re used in customer service bots or smart home devices that can adjust themselves automatically.
- Deliberative agents: These use an internal model and reasoning to make informed, long-term decisions. They’re used in applications such as autonomous vehicles, supply chain management, and medical decision systems.
- Hybrid agents: These combine reactive and deliberative approaches for more efficient decision-making. For instance, a robot might react to immediate obstacles and plan its path to a goal simultaneously. Hybrid AI is used in automating business tasks, where reactive agents handle routine actions (e.g., responding to emails) while deliberative agents plan and optimize workflows for efficiency over time.
In short, hybrid agents integrate both immediate reaction and thoughtful planning in their decision-making.
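A deliberately simplified hybrid sketch makes the division of labor clear: a fast reactive rule handles the immediate obstacle, while a slower deliberative planner keeps working toward the long-term goal. The scenario and function names are invented for illustration.

```python
# Simplified hybrid agent: a reactive rule overrides the deliberative plan when needed.

def reactive_layer(sensor: dict) -> str | None:
    # Immediate, rule-based response; no planning involved.
    if sensor["obstacle_ahead"]:
        return "turn_left"
    return None

def deliberative_layer(position: int, goal: int) -> str:
    # Slower, model-based planning toward a long-term goal.
    return "move_forward" if position < goal else "stop"

def hybrid_agent(sensor: dict, position: int, goal: int) -> str:
    reaction = reactive_layer(sensor)
    return reaction if reaction is not None else deliberative_layer(position, goal)

print(hybrid_agent({"obstacle_ahead": True}, position=2, goal=10))   # turn_left
print(hybrid_agent({"obstacle_ahead": False}, position=2, goal=10))  # move_forward
```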
“Traditional AI — or predictive AI — is often tuned to solve a narrow and specific problem — for example, predicting drive failure in storage systems,” Gururajan said. “Agentic AI is more dynamic; it can adapt, reason and strategize.”
Imagine, for example, agents managing a data storage system by monitoring dashboards, identifying bottlenecks, predicting failures, and proactively taking action to prevent errors, ensuring system SLAs are met.
NetApp, for instance, sets up reward models based on objectives (such as maximizing uptime or minimizing energy use) that combine human preferences, real-time data, and instructions, enabling AI to optimize behavior and improve performance over time, according to Gururajan.
Reasoning techniques like Chain-of-Thought prompting, which mirrors human thought, or ReAct prompting help break down tasks and plan actions. Memory modules store context and intermediate results for tasks requiring continuity. Reinforcement learning with human feedback fine-tunes the system’s outputs to align with human values. Additionally, tool integrations enable the AI to perform complex tasks beyond text generation, such as web search and interacting with APIs.
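At bottom, ReAct is a prompt-and-parse pattern: the model is asked to emit a Thought, then an Action the surrounding code can execute, then reads back an Observation. The sketch below shows only that shape; the prompt wording is illustrative and the model call is stubbed out.

```python
REACT_PROMPT = """Answer the question using this format:
Thought: reason about what to do next
Action: search[<query>] or finish[<answer>]
Observation: result of the action

Question: {question}
"""

def fake_llm(prompt: str) -> str:
    # Stub for the model; a real agent would call an LLM here.
    return "Thought: I should look this up.\nAction: search[Aardvark Weather]"

def parse_action(model_output: str) -> tuple[str, str]:
    # Pull the Action line out of the model's reply and split it into tool and argument.
    line = [l for l in model_output.splitlines() if l.startswith("Action:")][0]
    name, _, arg = line.removeprefix("Action: ").partition("[")
    return name, arg.rstrip("]")

output = fake_llm(REACT_PROMPT.format(question="What is Aardvark Weather?"))
tool, argument = parse_action(output)
print(tool, "->", argument)   # search -> Aardvark Weather
```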
The growing use of API integrations
API integrations with AI agents are currently the pinnacle of use cases. In agentic AI, tools via API integrations allow agents to interact with the real world. When a task requires external information, the agent generates an API call, formats parameters, authenticates, and processes the response to complete the task or take further action.
“When an agent needs to perform a task that requires external information, such as searching a database, sending an email, executing another ML model,” Gururajan said, “it generates an API call based on its understanding of the task and the API’s documentation.”
Executing the call involves formatting it with the correct parameters and authenticating with the API, which in turn returns data (or performs actions); the agent then processes the response and completes the task or takes subsequent actions if needed, Gururajan explained.
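In code, that format-authenticate-call-process cycle is ordinary tool-calling plumbing. The following sketch uses only Python’s standard library against a made-up internal endpoint; the URL, the bearer-token handling, and the response fields are assumptions for illustration, not a real service.

```python
import json
import os
import urllib.parse
import urllib.request

def call_search_api(query: str) -> dict:
    """Format parameters, authenticate, execute the call, and parse the response."""
    base_url = "https://internal.example.com/search"           # hypothetical endpoint
    params = urllib.parse.urlencode({"q": query, "limit": 5})  # format parameters
    request = urllib.request.Request(
        f"{base_url}?{params}",
        headers={"Authorization": f"Bearer {os.environ.get('API_TOKEN', '')}"},  # authenticate
    )
    with urllib.request.urlopen(request) as response:          # execute the call
        return json.loads(response.read())                     # process the response

def agent_step(task: str) -> str:
    # The agent decides it needs external information, calls the tool,
    # then uses the result to complete the task or take further action.
    results = call_search_api(task)
    return results.get("top_result", "no result found")
```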
Looking ahead, there are still improvements needed for agentic AI to mature, such as addressing challenges with API discoverability and adaptation, and dealing with issues such as a lack of standardization and documentation, Gururajan said.
Change management also makes it difficult for agents to select the right APIs. And API security and authentication remain crucial, requiring robust protocols and access control to protect sensitive data. Implementing service-level credentials could provide more granular control, such as restricting agents to read-only access or specific actions.
There is emerging research involving agents, such as multi-objective optimization, which focuses on solving conflicting task goals using goal-based programming. Additionally, system-level heuristics can be created as general rules reflecting core principles, constraints, or safety measures.
Heuristics can be incorporated into the agentic framework by: (a) filtering goals (such as removing goals requiring restricted data), (b) modifying objectives (ensuring safety overrides efficiency), and (c) integrating reinforcement learning to weight goals.
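Those three hooks can be pictured as small functions applied before the agent commits to a set of goals. The sketch below is schematic: the field names, the restricted-data list, and the weights are invented purely to show the filter, override, and weighting steps.

```python
# Schematic of the three heuristic hooks: filter goals, modify objectives, weight them.

RESTRICTED_DATA = {"payroll_db"}

def filter_goals(goals: list[dict]) -> list[dict]:
    # (a) Drop goals that would touch restricted data.
    return [g for g in goals if not (set(g.get("data", [])) & RESTRICTED_DATA)]

def modify_objectives(goal: dict) -> dict:
    # (b) Safety overrides efficiency: never trade safety margin for speed.
    if goal.get("safety_critical"):
        goal["optimize_for"] = "safety"
    return goal

def weight_goals(goals: list[dict]) -> list[dict]:
    # (c) Weight goals so the learner prefers higher-priority ones.
    weights = {"safety": 1.0, "cost": 0.5, "speed": 0.3}
    for g in goals:
        g["weight"] = weights.get(g.get("optimize_for", "speed"), 0.1)
    return sorted(goals, key=lambda g: g["weight"], reverse=True)

goals = [
    {"name": "summarize payroll", "data": ["payroll_db"]},
    {"name": "reroute workload", "safety_critical": True, "optimize_for": "speed"},
    {"name": "compress old logs", "optimize_for": "cost"},
]
print(weight_goals([modify_objectives(g) for g in filter_goals(goals)]))
```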
Looking ahead, there’s a need for agents to autonomously create their own APIs for tasks, as most agents currently rely on pre-existing ones. “This would be a positive step towards Artificial General Intelligence or AGI,” Gururajan said.
How to win the race for IT talent
Your weekly round-up of the questions asked by readers of CIO, Computerworld, CSO, and Network World. Smart Answers to questions around How to win the race for IT talent; how to use FinOps to find savings beyond the cloud; and what competitive advantages come from hiring a Chief AI Officer (CAIO)?
Tempting Talent
With the pace of generative AI (genAI) adoption accelerating across all business sectors and functions, required skills are increasingly in demand. This week we reported on how IT leaders are tackling this issue when the people who can meet these needs are in such short supply.
This sent the readers of CIO.com rushing to Smart Answers to seek insights into how to win the battle for IT talent. The answer, gleaned from decades of human reporting, centers around rethinking job descriptions to elevate human roles with AI capabilities, rather than simply replacing humans.
And it’s important to focus on skill assessment and gap analysis and develop talent rather than rigid adherence to traditional roles and career paths. There’s much more, but for that you need to ask Smart Answers yourself:
Find out: How can IT leaders address the AI talent shortage?
Making Savings
Also this week, we reported that FinOps is expanding into optimizing outcomes for private cloud, SaaS, licensing, AI, and even traditional data centers. Thus, FinOps is no longer confined to being a key methodology for cost control and workload optimization of public cloud services.
Everyone likes saving money, right?
Readers of CIO.com certainly do; they were keen to ask whether they can use FinOps to make savings beyond the cloud. According to Smart Answers, they can indeed. Large organizations are using FinOps to manage spiraling AI costs — and Smart Answers can show you how.
Find out: Can I use FinOps for IT cost savings beyond the cloud?
H-AI-l to the Chief
The only constant is change, and CIOs are learning to accommodate the new role of Chief AI Officer.
This week we wrote about how many organizations might assume that newly minted CAIOs’ responsibilities should fall under those of the CIO, but the reality of the role likely calls for a different approach.
So is it a good thing to have a CAIO? How does that role play with the CIO position, and do CAIOs add value? You asked; Smart Answers has the facts.
Find out: What competitive advantages come from hiring a Chief AI Officer (CAIO)?
About Smart Answers
Smart Answers is an AI-based chatbot tool designed to help you discover content, answer questions, and go deep on the topics that matter to you. Each week we send you the three most popular questions asked by our readers, and the answers Smart Answers provides.
Developed in partnership with Miso.ai, Smart Answers draws only on editorial content from our network of trusted media brands—CIO, Computerworld, CSO, InfoWorld, and Network World—and was trained on questions that a savvy enterprise IT audience would ask. The result is a fast, efficient way for you to get more value from our content.
New AI weather forecasting system called a huge step forward
Researchers at the University of Cambridge, together with the Alan Turing Institute, Microsoft Research, and the European Centre for Medium-Range Weather Forecasts, have developed a new AI-based weather forecasting system — Aardvark Weather — that could revolutionize the field of meteorology.
Aardvark Weather can apparently provide weather forecasts that are tens of times more accurate, while requiring dramatically less computing power than modern systems. “Aardvark is thousands of times faster than all previous methods of weather forecasting,” Professor Richard Turner of Cambridge’s Department of Engineering, who led the research, said in a statement.
The AI system achieved this by replacing the entire process of weather forecasting with a single machine-learning model; it can take in observations from satellites, weather stations and other sensors and then generate both global and local forecasts.
In the past, a forecast required several different models, each of which relied on a supercomputer and a support team to run. With Aardvark Weather, the same work can be completed in just a few minutes on a standard desktop computer.
The results were published in the journal Nature.
Related reading: New European AI weather model to improve forecasts
Siri’s ears are burning as iPhone fold goes liquid metal
We’re now in the century sci-fi writers made their fortunes writing about — and hardware manufacturers seem keen to explore new products and processes that make some of those predictions a reality. Just as Minority Report comes pretty close to predicting visionOS and Spatial Computing, so too will a whisper of Ringworld be reflected in the hard but soft metal substance Apple might use in future devices, principally the folding iPhone.
Liquid Metal has been around in the Apple-verse for a long time. It’s a zirconium- and titanium-based alloy stronger than steel and more flexible than aluminum; the company has licensed it since around 2010 for use as the SIM card removal tool that looks like it wants to be a paper clip and was once found in the box with iPhones. While it doesn’t share all the same qualities as Larry Niven’s “Unobtanium” used to make the Ringworld space station, it does at least deliver resilience and flexibility, and it’s the latter that matters, according to Apple analyst Ming-Chi Kuo.
He says Apple will use liquid metal to make the hinge in the so-far-unannounced and unconfirmed folding iPhone it hopes to introduce next year.
Pour me another
Kuo explains that Apple wants to use the substance to build a folding device that is flatter and more durable than existing devices of its type, as well as having a hardly discernible crease. Bloomberg’s Mark Gurman has previously told us that Apple really wants to build a fold mechanism that isn’t visible when unfolded and does not deteriorate in use.
To achieve this, key iPhone fold components, including hinges, will be crafted from liquid metal. Kuo predicts Android device makers will soon follow suit, which is good news for exclusive liquid metal supplier Dongguan EonTec. Apple’s 21st-century take on a folding device is likely to be thin, like the iPhone 16e, and almost certainly built to the high-end design aesthetic Apple maintains across its product range. It will also have a high-end price to match, which implies this will be the device to slam ostentatiously on the table during board meetings.
Et tu, Siri?
While the hardware seems to be coming into view, it’s clear that one essential component isn’t yet in place, and that’s Siri. Only two weeks ago, Kuo told us Apple wants to position the device as a true AI-driven iPhone, with the power of artificial intelligence artfully combined with the large display. The snag? Siri isn’t ready yet, which has made for big staffing changes within Apple’s Siri team. Reflecting the strategic importance of AI to Apple, CEO Tim Cook has put Apple’s best product designers in to sort Siri out, moving former Siri boss John Giannandrea aside to make way for genius engineer Mike Rockwell, who led Vision Pro development. (Giannandrea hasn’t left the company, incidentally, but seems to have been given a more limited sphere of responsibility and is no longer reporting directly to Cook, who has lost confidence in his ability to execute on product development.)
Apple’s Game Of Thrones is interesting, but what it truly represents is the importance the company attaches to Siri on iPhones — and not just iPhones. Apple has a number of additional products in the pipeline, including a home control device, some of which have allegedly also been delayed due to contextual Siri’s no-show. In that light, the fact that a folding iPhone is expected to become a poster child for AI on a mobile device depends on what the new Siri team can build.
No pressure, then.
What else do we know about the folding iPhone? The device has been in development intermittently since at least 2014, and Apple’s hardware engineers now seem to feel the tech and the timing are right. Kuo tells us to expect a 7.8-in. display when unfolded and a 5.5-in. display at rest. You’ll get rear and front cameras, eSIM, Apple’s C2 5G modem, and Touch ID as a side button. The device should be as slim as 9mm when closed but isn’t expected to appear until next year’s main iPhone refresh cycle in late 2026. Final specifications will be set in stone later this year, and Apple will likely follow that up by developing and optimizing the manufacturing process in preparation for mass production next year.
But a lot depends on getting those Siri improvements in place.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.