US wants to nix the EU AI Act’s code of practice, leaving enterprises to develop their own risk standards
The European Union (EU) AI Act may seem like a done deal, but stakeholders are still drafting the code of practice that will lay out rules for general-purpose AI (GPAI) models, including those with systemic risk.
Now, though, as that drafting process approaches its deadline, US President Donald Trump is reportedly pressuring European regulators to scrap the rulebook. The US administration and other critics claim that it stifles innovation, is burdensome, and extends the bounds of the AI law, essentially creating new, unnecessary rules.
The US government’s Mission to the EU recently reached out to the European Commission and several European governments to oppose the code’s adoption in its current form, Bloomberg reports.
“Big tech, and now government officials, argue that the draft AI rulebook layers on extra obligations, including third party model testing and full training data disclosure, that go beyond what is in the legally binding AI Act’s text, and furthermore, would be very challenging to implement at scale,” explained Thomas Randall, director of AI market research at Info-Tech Research Group.
Onus is shifting from vendor to enterprise
On its web page describing the initiative, the European Commission said, “the code should represent a central tool for providers to demonstrate compliance with the AI Act, incorporating state-of-the-art practices.”
The code is voluntary, but the goal is to help providers prepare to satisfy the EU AI Act’s regulations around transparency, copyright, and risk mitigation. It is being drafted by a diverse group of general-purpose AI model providers, industry organizations, copyright holders, civil society representatives, members of academia, and independent experts, overseen by the European AI Office.
The deadline for its completion is the end of April. The final version is set to be presented to EU representatives for approval in May, and will go into effect in August, one year after the AI Act came into force. It will have teeth; Randall pointed out that non-compliance could draw fines of up to 7% of global revenue, or heavier scrutiny by regulators, once it takes effect.
But whether Brussels, the de facto capital of the EU, relaxes the current draft or enforces it as written, the weight of ‘responsible AI’ is already shifting from vendors to the customer organizations deploying the technology, he noted.
“Any organization conducting business in Europe needs to have its own AI risk playbooks, including privacy impact checks, provenance logs, or red-team testing, to avoid contractual, regulatory, and reputational damages,” Randall advised.
He added that if Brussels did water down its AI code, it wouldn’t just be handing companies a free pass, “it would be handing over the steering wheel.”
Clear, well-defined rules can at least mark where the guardrails sit, he noted. Strip those out, and every firm, from a garage startup to a global enterprise, will have to chart its own course on privacy, copyright, and model safety. While some will race ahead, others will likely have to tap the brakes because the liability would “sit squarely on their desks.”
“Either way, CIOs need to treat responsible AI controls as core infrastructure, not a side project,” said Randall.
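For illustration, here is a minimal sketch of one such control, a provenance log for model calls, in Python. The record schema, the JSONL format, and the log_model_call helper are hypothetical choices for this example; neither the AI Act nor the draft code prescribes any particular implementation.

```python
import hashlib
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class ProvenanceRecord:
    """One provenance-log entry for a single model call (illustrative schema)."""
    model_name: str       # which model produced the output
    model_version: str    # vendor or internal version tag
    prompt_sha256: str    # hash only, so the log never stores raw prompts
    timestamp_utc: float  # when the call happened
    reviewer: str         # person or process that approved this use case

def log_model_call(path: str, model_name: str, model_version: str,
                   prompt: str, reviewer: str) -> None:
    """Append one record to a JSON Lines file, a simple append-only audit trail."""
    record = ProvenanceRecord(
        model_name=model_name,
        model_version=model_version,
        prompt_sha256=hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        timestamp_utc=time.time(),
        reviewer=reviewer,
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical usage: record a contract-summarization call for later audits.
log_model_call("provenance.jsonl", "some-gpai-model", "v2.1",
               "Summarize this supplier contract...", reviewer="ai-risk-team")
```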
A lighter-touch regulatory landscape
If other countries were to follow the current US administration’s approach to AI legislation, the result would likely be a lighter-touch regulatory landscape with reduced federal oversight, noted Bill Wong, AI research fellow at Info-Tech Research Group.
He pointed out that in January, the US administration issued Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” Right after that, the National Institute of Standards and Technology (NIST) updated its guidance for scientists working with the US Artificial Intelligence Safety Institute (AISI), removing references to “AI safety,” “responsible AI,” and “AI fairness” and instead placing new emphasis on “reducing ideological bias to enable human flourishing and economic competitiveness.”
Wong said: “In effect, the updated guidance appears to encourage partners to align with the executive order’s deregulatory stance.”
How to win fake friends and influence fake people
We’re all talking to fake people now, but most people don’t realize that interacting with AI is a subtle and powerful skill that can and should be learned.
The first step in developing this skill set is to acknowledge to yourself what kind of AI you’re talking to and why you’re talking to it.
AI voice interfaces are powerful because our brains are hardwired for human speech. Even babies’ brains are tuned to voices before they can talk, picking up language patterns early on. This built-in conversational skill helped our ancestors survive and connect, making language one of our most essential and deeply rooted abilities.
But that doesn’t mean we can’t think more clearly about how to talk when we speak to AI. After all, we already speak differently to other people in different situations. For example, we talk one way to our colleagues at work and a different way to our spouses.
Yet people still talk to AI like it’s a person, which it’s not; like it can understand, which it cannot; and like it has feelings, pride, or the ability to take offense, which it doesn’t.
The two main categories of talking AI
It’s helpful to break the world of talking AI (both spoken and written) into two categories:
- Fantasy role playing, which we use for entertainment.
- Tools, which we use for some productive end, either to learn information or to get a service to do something useful for us.
Let’s start with role-playing AI.
AI for pretending
You may have heard of a site and app called Status AI, which is often described as a social network where everyone else on the network is an AI agent.
A better way to think about it is that it’s a fantasy role-playing game in which the user can pretend to be a popular online influencer.
Status AI is a virtual world that simulates social media platforms. Launched as a digital playground, it lets people create online personas and join fan communities built around shared interests. It “feels” like a social network, but every interaction—likes, replies, even heated debates—comes from artificial intelligence programmed to act like real users, celebrities, or fictional characters.
It’s a place to experiment, see how it feels to be someone else, and interact with digital versions of celebrities in ways that aren’t possible on real social media. The feedback is instant, the engagement is constant, and the experience, though fake, is basically a game rather than a social network.
Another basket of role-playing AI comes from Meta, which has launched AI-powered accounts on Facebook, Instagram, and WhatsApp that let users interact with digital personas — some based on real celebrities like Tom Brady and Paris Hilton, others entirely fictional. These AI accounts are clearly labeled as such, but (thanks to AI) can chat, post, and respond like real people. Meta also offers tools for influencers to use AI agents to reply to fans and manage posts, mimicking their style. These features are live in the US, with plans to expand, and are part of Meta’s push to automate and personalize social media.
Because these tools aim to provide make-believe engagements, it’s reasonable for users to pretend they’re interacting with real people.
These Meta tools attempt to cash in on the wider and older phenomenon of virtual online influencers. These are digital characters created by companies or artists, but they have social media accounts and appear to post just like any influencer. The best-known example is Lil Miquela, launched in 2016 by the Los Angeles startup Brud; the character has amassed 2.5 million Instagram followers. Another is Shudu, created in 2017 by British photographer Cameron-James Wilson and presented as the world’s first digital supermodel. These characters often partner with big brands.
A post by one of the major virtual influencer accounts can get hundreds or thousands of likes and comments. The content of these comments ranges from admiration for their style and beauty to debates about their digital nature. Presumably, many people think they’re replying to real people, but most probably engage with a role-playing mindset.
By 2023, there were hundreds of these virtual influencers worldwide, including Imma from Japan and Noonoouri from Germany. They’re especially popular in fashion and beauty, but some, like FN Meka, have even released music. The trend is growing fast, with the global virtual influencer market estimated at over $4 billion by 2024.
AI for knowledge and productivity
We’re all familiar with LLM-based chatbots like ChatGPT, Gemini, Claude, Copilot, Meta AI, Mistral, and Perplexity.
The public may be even more familiar with non-LLM assistants like Siri, Google Assistant, Alexa, Bixby, and Cortana, which have been around much longer.
I’ve noticed that most people make two general mistakes when interacting with these chatbots or assistants.
The first is that they interact with them as if they’re people (or role-playing bots). And the second is that they don’t use special tactics to get better answers.
People often treat AI chatbots like humans, adding “please,” “thank you,” and even apologies. But the AI doesn’t care or remember, and it is not significantly affected by these niceties. Some people even say “hi” or “how are you?” before asking their real questions. They also sometimes ask for permission, like “Can you tell me…” or “Would you mind…,” which adds no value. Some even sign off with “goodbye” or “thanks for your help,” but the AI doesn’t notice or care.
Politeness to AI wastes time — and money! A year ago, Wharton professor Ethan Mollick pointed out that people using “please” and “thank you” in AI prompts add extra tokens, which increases the compute power needed by the LLM chatbot companies. This concept resurfaced on April 16 of this year, when OpenAI CEO Sam Altman replied to another user on X, saying (perhaps exaggerating) that polite words in prompts have cost OpenAI “tens of millions of dollars.”
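The back-of-the-envelope arithmetic is easy to run yourself. Every figure below is an assumption chosen for illustration, not a number OpenAI has published:

```python
# Rough cost of politeness tokens at scale. All inputs are assumptions.
extra_tokens_per_prompt = 4        # roughly what "please" plus "thank you" adds
prompts_per_day = 1_000_000_000    # assumed daily volume for a large provider
dollars_per_million_tokens = 2.50  # assumed blended processing cost

daily_cost = extra_tokens_per_prompt * prompts_per_day / 1_000_000 * dollars_per_million_tokens
print(f"~${daily_cost:,.0f}/day, ~${daily_cost * 365:,.0f}/year")
# Prints: ~$10,000/day, ~$3,650,000/year
```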
“But wait a second, Mike,” you say. “I heard that saying ‘please’ to AI chatbots gets you better results.” And that’s true — sort of. Several studies and user experiments have found that AI chatbots can give more helpful, detailed answers when users phrase requests politely or add “please” and “thank you.” This happens because the AI models, trained on vast amounts of human conversation, tend to interpret polite language as a cue for more thoughtful responses.
But prompt engineering experts say that clear, specific prompts — such as giving context or stating exactly what you want — consistently produce much better results than politeness.
In other words, politeness is a tactic for people who aren’t very good at prompting AI chatbots.
The best way to get top-quality answers from AI chatbots is to be specific and direct in your request. Always say exactly what you want, using clear details and context.
Another powerful tactic is something called “role prompting” — tell the chatbot to act as a world-class expert, such as, “You are a leading cybersecurity analyst,” before asking a question about cybersecurity. This method, supported by research such as Sander Schulhoff’s 2025 review of over 1,500 prompt engineering papers, can lead to more accurate and relevant answers because it steers the chatbot toward content in its training data produced by experts, rather than lumping expert opinion in with less-informed viewpoints.
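Here’s what role prompting plus a specific, direct request can look like in practice, using OpenAI’s Python SDK as one example (the model name and the prompt itself are placeholders; the same pattern works with any chat-style API):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever model you use
    messages=[
        # Role prompt: tell the model which expert voice to adopt.
        {"role": "system",
         "content": "You are a leading cybersecurity analyst."},
        # Specific, direct request with context. No pleasantries required.
        {"role": "user",
         "content": ("List the top three ransomware entry points for a "
                     "500-person company running Microsoft 365, with one "
                     "mitigation for each. The audience is a non-technical CFO.")},
    ],
)
print(response.choices[0].message.content)
```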
Also: Give background if it matters, like the audience or purpose.
(And don’t forget to fact-check responses. AI chatbots often lie and hallucinate.)
It’s time to up your AI chatbot game. Unless you’re into using AI for fantasy role playing, stop being polite. Instead, use prompt engineering best practices for better results.
Apple plans to make all US iPhones in India by end of 2026
Apple is on track to source all the iPhones it sells in the US from India by the end of next year as politically driven tensions drive a wedge between the US and China. This is a major move that the company has been building toward for some time, but the recent tariffs announcements may have accelerated the plan.
Designed by Apple in California, Made in India
If you’ve been following my work, you’ll already be aware of the importance India now has for Apple. India is expected to be the manufacturing hub for 25% of all the iPhones sold globally by the end of the year, and now the Financial Times reports that Apple aims to make 60 million iPhones in India “as soon as next year” — though other sources suggest that target may prove too ambitious.
To support this transition, Apple and its partners have invested billions in building their businesses there. Foxconn is currently building its second-largest factory outside China in India. At a cost of $2.5 billion, the facility will create 40,000 jobs and double its manufacturing capacity in India. India’s biggest conglomerate, the Tata Group, also makes iPhones for Apple using facilities formerly owned by Pegatron and Wistron.
To support the project, Apple is also encouraging component manufacturers to set up shop in India, with India’s government recently announcing a range of incentives to encourage them to do so.
The idea behind this is, of course, to ensure that the iPhones assembled in India make use of components that are also manufactured there in order to minimize the cost of any tariffs. These investments are accompanied by a range of external improvements, including improved infrastructure.
While there have been no major signals to this effect as yet, it is becoming increasingly likely that Apple will eventually commence manufacturing other products in India at some point.
This didn’t — and couldn’t — happen overnight
What’s important to note is that none of this happened suddenly; the shift has already taken years.
Apple has been working on its journey to India for almost a decade, presumably since before Apple CEO Tim Cook made his first disclosed visit to the nation. During that visit, Cook stressed that his company intended to build a lasting presence in India, saying: “We’re not here for a quarter, or two quarters, or the next year, or the next year. We’re here for a thousand years.”
Apple had originally intended simply to set up retail stores there and build a business from India’s burgeoning middle class, but very swiftly saw the sense of transitioning some production to the nation, accelerating these plans once the COVID-19 pandemic threw international supply chains into chaos.
In other words, while the company’s move to make iPhones for the US market in India may seem sudden, it is something that has taken years. That effort proves that shifting manufacturing to new nations is not an overnight task; it takes time, a lot of investment, an available and accessible skilled workforce, and more.
With that in mind, it is foolish to expect manufacturing infrastructure to migrate across territorial boundaries any faster than Apple — with all its advantages — has been able to achieve in India.
The overall impact of Apple’s more diversified approach is a decreased reliance on China, its longtime manufacturing partner, a shift from which India and other nations, including Brazil, Vietnam, and Thailand, are benefiting as Apple increases its manufacturing capabilities in those countries.
Doing the business
The company’s moves into India aren’t just about tariff avoidance.
The decision to base more manufacturing in the country has helped Apple capture hearts and minds among Indian consumers, translating into accelerating business results. Driven by the iPhone 16 series, Apple shipped three million iPhones in India in Q1 2025, its first time reaching that mark and its largest-ever shipment in the nation, according to IDC. “Apple achieved its best-ever Q1 in India, driven by strong iPhone 16 series momentum,” said Canalys.
In other words, it looks as if Apple’s shrewd decision to create business in India will pay a double benefit, giving the company access to a growing and developing economy even as the so-called ‘First World’ economies tumble into existential decline.
You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.
Freelancers now represent more than one in four US workers
As AI integration accelerates, businesses are facing widening skills gaps that traditional employment models struggle to address, and so more companies are choosing to hire freelancers to fill the void, according to a new report.
The report, from freelance work platform Upwork, claims there’s a major workforce shift as well, with 28% of skilled knowledge workers now freelancing for greater “autonomy and purpose.” An additional 36% of full-time employees are considering switching to freelance, while only 10% of freelancers want to return to traditional jobs, according to the report, which is based on a recent survey of 3,000 skilled, US-based knowledge workers.
Gen Z is leading the shift, as 53% of skilled Gen Z professionals already freelance, and they’re expected to make up 30% of the freelance US workforce by 2030.
“The traditional 9-to-5 model is rapidly losing its grip as skilled talent chooses flexibility, financial control, and meaningful work over outdated corporate structures,” Kelly Monahan, managing director of the Upwork Research Institute, said in a statement. “Companies that cling to old hiring and workforce models risk falling behind.”
US businesses have ramped up their freelance hiring by 260% between 2022 and 2024, according to a report from Mellow.io, an HR platform that manages and pays freelance contractors.
Increasingly, US businesses have turned to freelancers overseas — most frequently Eastern Europe — to fill their tech talent void, particularly for web developers, programmers, data analysts, and web designers, according to Mellow.io.
“This trend shows no signs of slowing,” Mellow.io’s report stated. “The region offers an unparalleled balance of cost efficiency and highly skilled talent.”
The US is a freelancer haven
Gig workers, earning through short-term, flexible jobs via apps or platforms, are also thriving, according to career site JobLeads.
JobLeads analyzed data from the Online Labour Observatory and the World Bank Group to reveal the countries dominating online gig work. The United States is leading in the number of online freelancers with 28% of the global online freelance market.
Software and tech roles dominate in the US, representing 36.4% of freelancers, followed by creative/multimedia (21.1%) and clerical/data entry jobs (18.2%).
Globally, Spain and Mexico rank second and third in freelancer share, with 7.0% and 4.6%, respectively. Among full-time online gig workers, 52% have a high school diploma, while 20% hold a bachelor’s degree, according to JobLeads.
“The gig economy is booming worldwide, with the number of gig workers expected to rise by over 30 million in the next year alone,” said Martin Schmidt, JobLeads’ managing director. “This rapid growth reflects a fundamental shift in how people approach work — flexibility and autonomy are no longer just perks but non-negotiables for today’s workforce.”
Gen Z and younger professionals are embracing gig work for its flexibility and control, while businesses gain access to a global pool of skilled freelancers, Schmidt said.
“As the sector continues to evolve, both workers and employers need to adapt to a new reality where traditional employment models may no longer meet the needs and expectations of the modern workforce,” he said.
Confidence in freelancing is high, with 84% of freelancers and 77% of full-time workers viewing its future as bright, according to Upwork’s report. Freelancers are also seeing more opportunities, with 82% reporting more work than last year, compared to 63% of full-time employees, the report said.
Freelance workers generated $1.5 trillion in earnings in 2024 alone, Upwork said. The trend is gaining momentum, particularly among Gen Z, with many full-time employees eyeing independent work, according to Upwork.
Freelancers are leading in AI, software, and sustainability jobs, demonstrating higher adaptability and continuous learning, according to the report, which focused exclusively on skilled knowledge workers, not gig workers. It also included “moonlighters,” or workers who have full-time employment but freelance on the side.
More than half (54%) of freelancers report advanced AI proficiency compared to 38% of full-time employees, and 29% have extensive experience building, training, and fine-tuning machine learning models (vs. 18% of full-time employees), the report stated.
Those who earn exclusively through freelance work report a median income of $85,000, surpassing their full-time employee counterparts at $80,000, the report stated.
Upwork’s Future Workforce Index is the company’s first such report, and so it said it is unable to provide freelance employment numbers from previous years that would indicate a rising or falling trend.
“However, what we can confidently say, based on multiple studies conducted by the Upwork Research Institute over the past several years, is that freelancing isn’t a passing trend,” an Upwork spokesperson said. “It continues to hold steady and accelerate, emerging as a vital and intentional component of the skilled workforce.”
A silver tsunami
Emily Rose McRae, a Gartner Research senior director analyst, said she’s seeing growing interest in freelancing from professionals who desire more flexible work, oftentimes as a safety net for people who lost their jobs during economic turmoil “and also as a way to build up a network of clients when starting a new business or looking to expand your small business.”
Organizations are also facing an impending “silver tsunami” of older workers retiring and leaving a talent gap in their wake.
“Many clients I speak with on this topic are trying to identify the best strategy for addressing this expertise gap, whether it is upskilling more junior employees, bringing retired experts back to serve as freelance mentors or coaches, contracting out critical projects to experts on a freelance market, or even redesigning roles and workflows to reduce the amount of expertise needed,” she said.
“Being able to bring past employees back as freelancers can be critical for knowledge management and training,” McRae said. “This is especially critical when increasingly AI tools are being deployed on the basic and repetitive tasks that were previously the training ground used for employees to create a pipeline of future experts within the employee base.”
Despite the occurrence of layoffs — and sometimes because of them — organizations often face skills gaps exacerbated by the rise of AI, according to Forrester Research.
Skills intelligence tools, often powered by AI, can help organizations identify and manage the skills and gaps in their workforce and predict future skill needs, including recommending needed recruiting, upskilling and reskilling, and talent mobility. Companies must also be able to scale up or down rapidly in on-demand talent markets, which include contractors, freelancers, gig workers, and service providers. On-demand talent increases the adaptability of your workforce but works best for non-core functions and for specialized skills that are needed for a limited period.
Companies, however, can’t simply replace employees with freelancers without facing significant risks, McRae noted. Freelancers are best used for defined projects with clear deliverables. Using them to do the same work as former employees, without changing the role or workflow, can lead to legal and operational issues, she said. As reliance on non-employees grows, so do risks like worker misclassification, dual employment, compliance problems, and costly mistakes such as rehiring underperforming contractors or overpaying for services.
“I’ll see this at organizations that instituted hiring freezes, so business leaders turned to contractors to continue to be able to meet their goals,” she said. “It can also create financial risks — when there isn’t much transparency or data collection going on, organizations may find that they are paying the same contractor service provider or freelancer different rates in different departments, for the same set of tasks.”
There’s also a risk that third-party contractors are not vetting temp workers, who may not meet the necessary certifications and trainings to comply with local or national regulations, McRae added.
“Or that contractors and freelancers have not been fully offboarded after completing their assignments and still retain access to the organization’s systems and data,” she said.
14 ways Google Lens can save you time on Android
Psst: Come close. Your Android phone has a little-known superpower — a futuristic system for bridging the physical world around you and the digital universe on your device. It’s one of Google’s best-kept secrets. And it can save you tons of time and effort.
Oh — and no, it isn’t Gemini.
It’s a little somethin’ called Google Lens, and it’s been lurking around on Android and quietly getting more and more capable for years — since long before “AI” became part of our popular vernacular. Google doesn’t make a big deal about it, weirdly enough, and you really have to go out of your way to even realize it exists. But once you uncover it, you’ll feel like you have a magic wand in your pocket.
At its core, Google Lens is best described as a search engine for the real world. It uses (yes…) artificial intelligence to identify text and objects both within images and in a live view from your phone’s camera, and it then lets you learn about and interact with those elements in all sorts of interesting ways.
But while Lens’s ability to, say, identify a flower, look up a book, or give you info about a landmark is certainly impressive, it’s the system’s more mundane-seeming productivity powers that are far more likely to find a place in your day-to-day life.
So grab your nearest Android gadget, go install the Google Lens app, if you haven’t already — or take your pick from any of the other smart Google-Lens-launching shortcuts — and get ready to teach your phone some spectacularly useful new tricks.
[Hey — love shortcuts? My free Android Shortcut Supercourse will teach you tons of time-saving tricks for your phone. Sign up now and start learning!]
Google Lens trick #1: Dive deep into your screen
In a mildly wild twist, the first and newest Google Lens goody in our list is also the oldest and most familiar one of all — at least, if you’ve been paying attention in this arena for long.
It’s a snazzy new feature that lets you indirectly have Lens analyze whatever’s on your screen and then give you helpful extra context around it.
This one can actually be accessed via Google’s next-gen Gemini virtual assistant. Just summon Gemini, using the “Hey Google” hotword or any other method you like, then look for the tappable “Ask about screen” button within its overlay interface.
The “Ask about screen” button within Gemini is a hidden way to access a powerful Lens feature. JR Raphael, Foundry
While the answer is wrapped in Gemini, the technology powering it is the same stuff that’s been present within Lens for ages. And it’s every bit as impressive.
Answers, on demand — from anywhere on Android. JR Raphael, Foundry
And if you’re feeling a pesky sense of déjà vu around this, well, you should be: Google first announced this latest iteration of the on-demand screen searching system more than two years ago, for the previous-gen Google Assistant system. Prior to that point, Assistant had briefly offered a similar sort of setup without Lens’s involvement. And prior to that, Google had a spectacularly useful native Android feature called Now on Tap, way back in 2015’s Android 6.0 (Marshmallow) era — though amusingly, we haven’t quite caught back up to that level of search intelligence just yet.
Hey, what can we say? It’s the Google way.
Google Lens trick #2: Copy text from the real world
From the virtual world to the physical world around us: Google Lens’s most potent power, and the one I rely on most frequently, is its ability to grab text from a physical document — a paper, a book, a whiteboard, a suspiciously wordy tattoo on your rumpus, or anything else with writing on it — and then copy that text onto your phone’s clipboard. From there, you can easily paste the text into a Google Doc, a note, an email, a Slack chat, or anywhere else imaginable.
To do that, just open up Google Lens, point your device’s camera at any text around you, then tap the big circular search icon — and you’ll be able to use your finger to select the exact portion of text you want as if it were regular ol’ digital text on a website.
All that’s left is to hit the “Copy” command in the pop-up alongside it, and every last word will be on your system clipboard and ready to paste wherever your thumpy little heart desires.
You can copy text from anywhere — virtual or physical — with a little help from Lens. JR Raphael, Foundry
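Lens itself isn’t scriptable, but if you ever want the same kind of text grabbing in your own tooling, Google’s Cloud Vision API offers comparable OCR. A minimal sketch, assuming you have Cloud credentials configured and the google-cloud-vision package installed (the image filename is a placeholder):

```python
from google.cloud import vision  # pip install google-cloud-vision

def extract_text(image_path: str) -> str:
    """OCR a local image, similar in spirit to Lens's copy-text feature."""
    client = vision.ImageAnnotatorClient()  # uses GOOGLE_APPLICATION_CREDENTIALS
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)
    annotations = response.text_annotations
    # The first annotation contains the full detected text block.
    return annotations[0].description if annotations else ""

print(extract_text("whiteboard.jpg"))  # placeholder filename
```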
Google Lens trick #3: Connect text to your computer
Let’s face it: Most of us aren’t working only from our Android phones. If you need to get some real-world text onto your computer, Lens can handle that for you, too.
Just go through the same steps we did a second ago, but this time, look for the “Copy to computer” option in the same pop-up menu. (You might have to tap a three-dot icon within that pop-up to reveal it.) As long as you’re actively signed into Chrome with the same Google account on a computer — any computer, whether it’s Windows, Mac, Linux, or ChromeOS — that option should appear. And when you tap it, you’ll get a list of all available destinations.
Get any text onto your computer’s clipboard in an instant with Lens’s clever copying commands. JR Raphael, Foundry
Pick the device you want, and just like magic, the text from the physical document will be on that computer’s clipboard — ready and waiting to be pasted wherever you want it. Hit Ctrl-V (or Cmd-V, on a Mac), and shazam! It’ll pop into any text field, in any app or process where pasting is supported.
Google Lens trick #4: Hear anything out loud
Maybe you’ve just been handed a long memo, a printed-out brief of some sort, or a letter from your dear Aunt Sally. Whatever it is, give your eyes a breather and let Lens read it for you while you’re on the go and between meetings.
Just point your phone at the paper, exactly as we did before, and select some specific text within the image once more. This time, look for the little “Listen” option in the pop-up panel atop your image.
Pound your pinky down on that bad boy, and the Google Lens app will actually read the selected text out loud to you, in a soothingly pleasant voice.
Hey, Google: How ’bout a nap-time story while we’re at it?!
Google Lens trick #5: Ask, ask, ask away
In a move that now seems like foreshadowing, Lens has the completely concealed ability to let you chat out loud and ask anything imaginable about whatever your device’s camera is showing.
All you’ve gotta do to try it is open up Lens, aim your camera at something, and then press and hold Lens’s big search button. Then, you can simply speak aloud and ask anything on your mind in a completely natural, conversational way.
Pressing and holding the Lens search button lets you talk and ask questions in a completely natural way. JR Raphael, Foundry
Google’s official introduction of the feature involved asking questions about why some sort of product isn’t working as expected or how you can fix some common real-world maintenance issue, but it can be every bit as helpful for practically any purpose — anytime you find yourself facing a question about something in front of you.
This one, for now, seems to be available only within the U.S. and in English.
Google Lens trick #6: Interact with text from an image
In addition to the live stuff, Lens can pull and process text from images — including both actual photos you’ve taken and screenshots you’ve captured.
That latter part opens up some pretty interesting possibilities. Say, for instance, you’ve just gotten an email with a tracking number in it, but the tracking number is some funky type of text that annoyingly can’t be copied. (This seems to happen to me way too often.) Or maybe you’re looking at a web page or presentation where the text for some reason isn’t selectable.
Well, grab a screenshot — by pressing your phone’s power and volume-down buttons together — then make your way over to the Google Lens app. Tap the image icon in Lens’s lower-left corner, look for your screenshot in the gallery that appears, then tap it. And from there, you can simply touch your finger anywhere on the screen to select any text you want. (The same capability is also now present in the newer Android Circle to Search system, by the by, though that feature is still much more limited in its availability.)
You can then copy the text, send it to a computer, or perform any of Lens’s other boundary-defying tricks. Speaking of which…
Google Lens trick #7: Search for any text, anywhere
After you’ve selected any manner of text within the Google Lens app, look for the “Search” option in the pop-up panel that appears atop it. It’s all too easy to overlook, but alongside the other options we’ve gone over sits that simple and supremely useful command.
Keep that option in mind as a super-easy way to get info on text from any physical document or captured image without having to manually peck in the words on your own. (Sometimes, Lens will even put related results right within a panel beneath your image, without any additional searching required.)
And on a related note…
Google Lens trick #8: Search for similar visuals
We already know that Lens can search for the text from an image. But the app is also capable of searching the web for other images — images that match the actual objects within whatever photo or screenshot you’re viewing. It’s a fantastic way to find visually similar images or even identify something like a specific phone model or product seen within a photo.
To pull off this slice of Googley sorcery, open up an image of anything within Lens — or just point your phone at an object in the real world — then swipe up on the panel that appears beneath it and look for the “Visual matches” section.
Searching for similar visuals is a smart way to get extra context about anything around you. JR Raphael, Foundry
Google Lens trick #9: Save someone’s contact info
If you find yourself holding a business card and thinking, “Well, blimey, I sure as heckfire don’t want to type all of this into my contacts app,” first, congratulate yourself on the excellent use of blimey — and then sit your beautiful person-shell back and let Lens handle the heavy lifting for you.
Open Lens, point your phone’s camera at the card, and tap on the person’s phone number or email address. The Google Lens app should recognize the nature of the info and prompt you to add a contact.
One more tap, and the deed is done.
Google Lens trick #10: Email, call, text, or navigate
Got an address or number you need to get onto your phone for a specific sort of action? It could be on a business card, on a letter, or even on the front of a random business’s door. Whatever the case, just open the Google Lens app, point your phone at it, and tap the text. (Or, option B: Snap a photo of the info in question and then pull it up in the Lens app later.)
Once Lens sees it, it’ll offer to do whatever’s most appropriate for the sort of info involved. Then, with a single tap, you’ll have the address ready to roll in a new email draft, the number ready to call or text in your dialer or messaging app, or the website pulled up and ready for your viewing in your browser — no time-wasting typing required.
Google Lens trick #11: Translate text from the real world
If you ever find yourself staring at a sign in another language and wondering what in the world it says, remember that the Google Lens app has a built-in translation feature. To find it, open the app, aim your phone at the text, and tap the word “Translate” along the bottom edge of the screen.
Before you know it, Lens will replace the words on your screen with their English equivalents (or with a translation in whatever language you select, if English isn’t your tasse de thé) — practically in real time. It’s almost spooky how fast and effective it is.
Translation on demand, courtesy of Google Lens. JR Raphael, Foundry
Pas mal, eh?
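For the same trick outside the Lens app, Google’s Cloud Translation API does the underlying job. A quick sketch, assuming the google-cloud-translate package and Cloud credentials are set up (the sample string is arbitrary):

```python
from google.cloud import translate_v2 as translate  # pip install google-cloud-translate

client = translate.Client()  # uses GOOGLE_APPLICATION_CREDENTIALS
result = client.translate("Défense d'afficher", target_language="en")
print(result["translatedText"])  # e.g., "Post no bills"
```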
Google Lens trick #12: Calculate quickly
The next time you’ve got a numerical challenge in front of your weary peepers, give your musty ol’ brain a break and let Lens do some good old-fashioned solvin’ for ya.
Just open up Lens and point your phone at the equation in question — whether it’s on a whiteboard, a physical piece of paper, or even a screen in front of you. Scroll over along the line at the bottom of the Lens viewfinder screen until you see the word “Homework” (and don’t worry: Despite what that label implies, you don’t have to be an annoyingly youthful and bushytailed student to use it).
Tap that, then tap the big Lens search icon. And with everything from basic equations to advanced math, chemistry, physics, and biology, Lens will eagerly do your calculation for you and spit back an answer in the blink of an eye.
I won’t tell if you don’t.
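Lens doesn’t expose its solver programmatically, but for anyone curious what that kind of equation solving looks like in code, the SymPy library (a stand-in here, not what Lens actually uses) handles it in a few lines:

```python
import sympy as sp  # pip install sympy

x = sp.symbols("x")
# Solve 3x + 7 = 22, the sort of equation you'd point Lens at.
print(sp.solve(sp.Eq(3 * x + 7, 22), x))  # prints [5]
```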
Google Lens trick #13: Scan your skin
Here’s a weird one: If you ever have some mysterious marking on your mammal skin and find yourself fretting over whether it’s a freckle or something more nefarious, take matters into your own hands and let Lens play the role of dermatologist for you.
Just fire ‘er up on whatever Android phone you’re holding and point the viewfinder at your wretch-inducing wart (er, perfectly natural epidermal abnormality), then tap that Lens search button. Before you can spit out the words “Holy moley,” Lens will give you a best guess at what you’ve got goin’ on.
Its results aren’t scientific, of course, but they are based on matching your marking to an endless array of examples Lens seeks out on the web — so they’re a fine way to put your mind at ease while you wait for an actual doctor to examine your suspiciously spotted outer layer.
Google Lens trick #14: Crack the codes
‘Twas a time when Android code-reading apps were all the rage — and plenty of folks still have ’em hangin’ around today. So long as the Google Lens app is on your phone, though, guess what? You don’t need anything more.
Just open up Lens, aim your camera at any barcode or QR code, and poof: Lens will offer to show you whatever that code contains faster than you can ask “What does QR stand for, anyway?”
Who needs a QR code reader when Lens is ready and waiting? JR Raphael, Foundry
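And if you’d rather decode codes in a script than in an app, OpenCV bundles a QR detector that does the same job. A tiny sketch (the image path is a placeholder):

```python
import cv2  # pip install opencv-python

img = cv2.imread("sign_with_qr.png")  # placeholder path
data, points, _ = cv2.QRCodeDetector().detectAndDecode(img)
if data:
    print("QR payload:", data)  # typically a URL or plain text
else:
    print("No QR code found")
```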
Being a mobile-tech magician has never been so satisfying.
Get six full days of advanced Android knowledge with my free Android Shortcut Supercourse. You’ll learn tons of time-saving tricks for your phone!