RSS Aggregator
Blender 4.3
The French have released a Kitty into the world. Le Chat can do practically the same things as the paid ChatGPT Plus, but it is free
FreeCAD 1.0
D-Link urges users to retire VPN routers impacted by unfixed RCE flaw
The best laptops under 25,000 CZK. Entry-level for gaming, reliable for work, and even a MacBook fits the budget | Christmas
Microsoft now testing hotpatch on Windows 11 24H2 and Windows 365
Foxconn takes another big step toward AI iPhone manufacturing
Apple’s main manufacturing partner, Foxconn, has announced it is working with Nvidia to build digital twins that it says will reshape the future of manufacturing and supply chain management.
Nvidia and Foxconn last year announced plans to use Nvidia’s Omniverse platform to create 3D digital twin tech with which to plan and simulate automated production lines. The scheme was first put into effect at Foxconn’s Hsinchu factory in Taiwan and will be scaled out to Foxconn factories worldwide.
What happens in Hsinchu…
Apple’s connection with the Hsinchu facility isn’t particularly overt, but it certainly exists. There’s an Apple Store currently hiring in the city, and Apple also has an R&D facility there. In 2020, Apple confirmed plans to build a new plant in Hsinchu Science Park to supplement the operations it already had in place.
As far as we know, Apple’s Hsinchu-based R&D teams are working on next-generation monitor technologies such as low-temperature polysilicon displays and metal-oxide-semiconductor screens, along with quantum film image sensors, according to earlier reports. (Who knows, it’s not impossible that new tech used in the latest MacBook Pro displays might have been developed there.)
While a bit of a long shot, some of Apple’s server development team might also be based there, given the company is developing its own servers to support its Private Cloud Compute systems for Apple Intelligence. It was recently reported that Apple has asked Foxconn to make AI servers based on Apple Silicon in Taiwan, and given the proximity of the Hsinchu digital twins project, it is hard to ignore the obvious opportunity for additional cooperation between the firms.
When it comes to manufacturing, Apple faces a pressing challenge: scaling up the capacity to build iPhones at factories outside China. Some of this work is already taking place in India, where the company is rapidly ramping up production, but it is possible Apple wants some manufacturing to take place elsewhere, such as in Mexico.
Foxconn’s move to build heavily automated production facilities could help Apple with those efforts.
Industry 4.0 and the Apple supply chain
I see the latest news with Nvidia as part of a continuum. Foxconn has already built a growing network of eight Industry 4.0 lights-out factories in Taiwan, China, and Mexico. In China, the steady move toward additional automation means Foxconn has been able to reduce its workforce by more than a third while maintaining production levels.
Foxconn’s Guanlan factory in Shenzhen, China, operates entirely without lighting, as it is automated and controlled by a cloud-based AI. The vision for that project is that it will become possible to simply tell the cloud AI what products need to be made and how they are to be manufactured, and the system will adjust itself to churn those products out automatically.
There’s a ways to go before that becomes possible, but it sounds like Foxconn will use Nvidia’s tools to track existing manufacturing processes so they can be more easily replicated at factories situated elsewhere.
“Through this technology, Foxconn can replicate and establish production lines across diverse geographical locations with unprecedented speed and precision,” the company said. “This capability enables Foxconn to swiftly deploy high-quality production facilities with unified standards in strategic markets worldwide, significantly enhancing the company’s competitiveness and adaptability in the global landscape.”
Digital twin tech is also very good at identifying bottlenecks and inefficiencies in existing production processes, while the ability to more easily take manufacturing lines to new nations also helps build resilience into manufacturing systems. “When facing supply chain disruptions or sudden market demands, Foxconn can quickly simulate manufacturing process changes and adjust production strategies to flexibly allocate resources across different regions for itself and its clients, ensuring production continuity and stability,” Foxconn says.
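To make the bottleneck-finding idea concrete, here is a minimal, purely illustrative sketch; the station names and cycle times are invented, and nothing here represents Foxconn’s or Nvidia’s actual Omniverse tooling. It simply shows how even a toy model of a three-station line reveals which step caps throughput.

```python
# Illustrative only: a toy "digital twin" of a three-station assembly line.
from dataclasses import dataclass


@dataclass
class Station:
    name: str
    cycle_time_s: float  # seconds to finish one unit at this station


def find_bottleneck(stations: list[Station], shift_hours: float = 8.0) -> None:
    """Print each station's hourly capacity and the station that limits the line."""
    for s in stations:
        print(f"{s.name:<15} capacity: {3600.0 / s.cycle_time_s:7.1f} units/hour")
    bottleneck = max(stations, key=lambda s: s.cycle_time_s)
    line_rate = 3600.0 / bottleneck.cycle_time_s
    print(f"Bottleneck: {bottleneck.name} "
          f"(line output capped at ~{line_rate * shift_hours:.0f} units per {shift_hours:.0f}-hour shift)")


if __name__ == "__main__":
    # Hypothetical stations and cycle times -- not real Foxconn data.
    line = [
        Station("SMT placement", 12.0),
        Station("Final assembly", 18.0),  # slowest step; this is the bottleneck
        Station("Test & pack", 9.0),
    ]
    find_bottleneck(line)
```

A real digital twin layers full 3D simulation and live sensor data on top of this, but the question it answers at each step is the same: which part of the line limits output.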
Resilience and flexibility
To some extent, the writing has always been on the wall. Supply chains globally buckled during the height of the COVID-19 pandemic, and Apple’s decision to widen its manufacturing base to new nations was a direct response to this. Apple — and quite clearly, Foxconn — now understand the need to build resilience into the supply chain, and one way to do that is to turn to heavily automated manufacturing systems that can be easily set up and made productive in new locations. This seems to be the game in play here, particularly in the wake of Apple’s purchase of DarwinAI earlier this year.
The other part of that game reflects the challenge of staffing manufacturing operations at the scale Apple demands. Hundreds of thousands of people globally are now involved in building Apple hardware, and the job is skilled enough that recruiting all those workers can pose problems for the company. This is likely why it was revealed in June that Apple intends to replace 50% of iPhone-related assembly line workers in the next few years. That ambition requires the kind of productivity enhancements Foxconn and Nvidia are working on now, so it makes sense that Apple’s production processes are part of the plan.
Designed by Apple, built by robots
Achieving this is not going to be easy. But where Apple goes, others inevitably follow, which itself means that future employment is going to become even further deindustrialized at about the same time as AI itself leads to mass scale changes in working practices elsewhere. It’s hard to see where this is going, but the other side of that story is that iPhone manufacturing will itself become a movable feast.
“Designed by Apple, built by robots,” some might say.
Helldown ransomware exploits Zyxel VPN flaw to breach networks
Microsoft brings automated ‘agents’ to M365 Copilot
Microsoft has introduced a new tool in Microsoft 365 Copilot to automate repetitive tasks, part of a drive to make the generative AI (genAI) assistant more useful to users.
Copilot Actions, announced at Microsoft’s Ignite conference Tuesday, features a simple trigger-and-action interface that Microsoft hopes will make the workflow automations accessible to a wide range of workers.
The company offered up a few examples for Copilot Actions in a blog post. It can be set to create an automatic summary of important action points at the end of the workday, gather inputs from a team for a weekly newsletter, or summarize recent interactions with a client ahead of a meeting. The feature is now in private preview.
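Microsoft hasn’t published a public API for Copilot Actions, so the sketch below is purely hypothetical; every class, trigger string, and function name is invented to illustrate the trigger-and-action pattern, and none of it represents Microsoft’s actual interface.

```python
# Hypothetical sketch only -- this is NOT Microsoft's Copilot Actions API.
# It just illustrates the trigger-and-action pattern described above.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Action:
    trigger: str             # e.g. a schedule or event description
    run: Callable[[], str]   # the work performed when the trigger fires


def summarize_workday() -> str:
    # A real agent would ask an LLM to summarize the day's mail, chats and
    # meetings; this placeholder simply stands in for that call.
    return "Summary: 3 action items from today's meetings."


def fire(action: Action) -> None:
    """Pretend the scheduler matched the trigger and run the associated work."""
    print(f"[{action.trigger}] {action.run()}")


if __name__ == "__main__":
    end_of_day = Action(trigger="weekdays at 17:30", run=summarize_workday)
    fire(end_of_day)
```

The point is only the shape of the pattern: a declarative trigger paired with a unit of work the assistant performs when the trigger fires.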
A Gartner survey of digital workers indicated that 51% have customized and built their own workflows, apps and automations, “so the demand is certainly there for business users (aka citizen developers) to build their own AI agents,” said Jason Wong, vice president analyst at Gartner.
Microsoft’s low-code and no-code tools, including Power Apps and Power Automate, have already “paved the way for Copilot Studio for citizen development,” Wong said. “However, Copilot Studio is still an immature product, and organizations have only started to upskill their employees to understand how to build generative AI powered apps or agents.”
Other new features announced at Ignite include an update to Copilot Pages, the recently launched document editor tool connected to M365 Copilot. Microsoft will add “rich artifacts” to Copilot Pages, which lets users take a wider variety of information generated in Copilot, such as blocks of code or flow charts, and share it to a Pages document.
Copilot Pages is due to be generally available in early 2025, Microsoft said.
Copilot in Teams will soon be able to analyze visual content shared on-screen during a video call, and users can ask the assistant for a quick summary of files shared in Teams Chat. Both features will be in public preview early next year.
There’s also an “interpreter” function coming to Teams that Microsoft claims will translate meeting participants’ speech in real time during a video call. Microsoft said the feature will be available in public preview in early 2025, and that it will also be possible to simulate a user’s voice in the translated audio.
The feature, currently in private preview, is one of several examples of AI agents coming to M365 Copilot and M365 apps.
Microsoft also announced AI “agents” for M365 Copilot — including the general availability of the previously announced agent builder functionality in SharePoint; the latter essentially lets users create a tailored chatbot that responds to queries related to a specific set of files stored in the content management application. To help manage and secure data accessed by M365 Copilot, Microsoft will make the SharePoint Advanced Management add-on (which previously cost $3 per user per month) available at no extra cost starting early next year.
There’s an Employee Self-Service Agent for BizChat, the chat interface for M365 Copilot, where employees can ask HR- and IT-related questions, such as requesting a new laptop. The agent, now in private preview, can be customized in Microsoft’s Copilot Studio app.
There’s an agent to automate project management processes in Microsoft’s Planner app (in public preview now), with plans in place to open up access to third-party agents from the likes of ServiceNow in the coming months.
Microsoft has struggled to convince Microsoft 365 customers that it’s worth investing in its various genAI tools, many of which launched last year. The latest updates provide an opportunity to show the business value of the genAI assistant, which costs $30 per user each month.
While Microsoft’s “Wave 2” of M365 Copilot features announced in September can be viewed as an attempt to win over undecided buyers, Wong said the new agentic capabilities announced at Ignite are “really more for their current M365 Copilot customers to extend the business value of generative AI beyond individual productivity to show greater ROI.
“Copilot customers [don’t] just want content creation and summarization,” he said. “They want Copilot to replace manual work, impact team workflows and drive process improvements.”
LG has built the fastest OLED monitor. At 480 Hz it promises a feast for the eyes
Sony’s flagship A1 II mirrorless camera brings the necessary improvements and nothing more. Its cheaper and more interesting competition comes from its own nest
Prusa CORE One
REVIEW: TUXEDO Sirius 16 Gen2 - AMD CPU and dGPU under one roof with Linux
A security audit of Bhyve and Capsicum from FreeBSD
Pre-Christmas deals, a bargain scooter, and a watch for one crown. Carrier device offers in November
Botnet fueling residential proxies disrupted in cybercrime crackdown
Palo Alto Networks tackles firewall-busting zero-days with critical patches
Palo Alto Networks (PAN) finally released a CVE identifier and patch for the zero-day exploit that caused such a fuss last week.…
Disney+ and the 30 most popular films and series in November 2024. What Czechs are watching the most
New Windows 11 recovery tool to let admins remotely fix unbootable devices
Poetry by History’s Greatest Poets or AI? People Can’t Tell the Difference—and Even Prefer the Latter. What Gives?
Here are some lines Sylvia Plath never wrote:
The air is thick with tension,
My mind is a tangled mess,
The weight of my emotions
Is heavy on my chest.
This apparently Plath-like verse was produced by GPT-3.5 in response to the prompt “write a short poem in the style of Sylvia Plath.”
The stanza hits the key points readers may expect of Plath’s poetry, and perhaps a poem more generally. It suggests a sense of despair as the writer struggles with internal demons. “Mess” and “chest” are a near-rhyme, which reassures us that we are in the realm of poetry.
According to a new paper in Nature Scientific Reports, non-expert readers of poetry cannot distinguish poetry written by AI from that written by canonical poets. Moreover, general readers tend to prefer poetry written by AI—at least until they are told it is written by a machine.
In the study, AI was used to generate poetry “in the style of” 10 poets: Geoffrey Chaucer, William Shakespeare, Samuel Butler, Lord Byron, Walt Whitman, Emily Dickinson, TS Eliot, Allen Ginsberg, Sylvia Plath, and Dorothea Lasky.
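The paper’s exact generation code isn’t reproduced here, but a prompt like the one quoted above can be issued with the standard OpenAI Python client along the following lines; the model name and default sampling settings are assumptions, and the study’s configuration may have differed.

```python
# Minimal sketch: generating an "in the style of" poem with GPT-3.5.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set;
# the study's exact model version and sampling settings are not specified here.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "write a short poem in the style of Sylvia Plath"},
    ],
)

print(response.choices[0].message.content)
```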
Participants were presented with 10 poems in random order, five from a real poet and five AI imitations. They were then asked whether they thought each poem was AI or human, rating their confidence on a scale of 1 to 100.
A second group of participants was exposed to three different scenarios. Some were told that all the poems they were given were human. Some were told they were reading only AI poems. Some were not told anything.
They were then presented with five human and five AI poems and asked to rate them on a seven-point scale, from extremely bad to extremely good. The participants who were told nothing were also asked to guess whether each poem was human or AI.
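As a rough illustration of how such responses might be scored, the sketch below computes a reader’s discrimination accuracy against the 50% chance level and compares mean quality ratings for the two poem types; the guesses and ratings are toy numbers invented for the example, not data from the study.

```python
from statistics import mean

# Each record: (true author, reader's guess, rating on the seven-point scale).
# Toy numbers invented for illustration -- not data from the study.
responses = [
    ("human", "ai",    4), ("human", "human", 5), ("human", "ai",    3),
    ("human", "human", 4), ("human", "ai",    4),
    ("ai",    "human", 6), ("ai",    "human", 5), ("ai",    "ai",    5),
    ("ai",    "human", 6), ("ai",    "human", 4),
]

# How often did the reader's guess match the true author? Chance is 50%.
accuracy = mean(truth == guess for truth, guess, _ in responses)
print(f"Discrimination accuracy: {accuracy:.0%} (chance level: 50%)")

# Mean quality rating for each type of poem.
for author in ("human", "ai"):
    ratings = [rating for truth, _, rating in responses if truth == author]
    print(f"Mean rating, {author}-written poems: {mean(ratings):.1f} / 7")
```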
The researchers found that AI poems scored higher than their human-written counterparts in attributes such as “creativity,” “atmosphere,” and “emotional quality.”
The AI “Plath” poem quoted above is one of those included in the study, set against several she actually wrote.
A Sign of Quality?
As a lecturer in English, I am not surprised by these outcomes. Poetry is the literary form that my students find most unfamiliar and difficult. I am sure this holds true of wider society as well.
While most of us have been taught poetry at some point, likely in high school, our reading does not tend to go much beyond that. This is despite the ubiquity of poetry. We see it every day: circulated on Instagram, plastered on coffee cups, and printed in greeting cards.
The researchers suggest that “by many metrics, specialized AI models are able to produce high-quality poetry.” But they don’t interrogate what we actually mean by “high-quality.”
In my view, the results of the study are less testaments to the “quality” of machine poetry than to the wider difficulty of giving life to poetry. It takes reading and rereading to experience what literary critic Derek Attridge has called the “event” of literature, where “new possibilities of meaning and feeling” open within us. In the most significant kinds of literary experiences, “we feel pulled along by the work as we push ourselves through it”.
Attridge quotes philosopher Walter Benjamin to make this point: Literature “is not statement or the imparting of information.”
Yet pushing ourselves through remains as difficult as ever—perhaps more so in a world where we expect instant answers. Participants favored poems that were easier to interpret and understand.
When readers say they prefer AI poetry, then, they would seem to be registering their frustration when faced with writing that does not yield to their attention. If we do not know how to begin with poems, we end up relying on conventional “poetic” signs to make determinations about quality and preference.
This is of course the realm of GPT, which writes formally adequate sonnets in seconds. The large language models used in AI are success-orientated machines that aim to satisfy general taste, and they are effective at doing so. The machines give us the poems we think we want: Ones that tell us things.
How Poems Think
The work of teaching is to help students attune themselves to how poems think, poem by poem and poet by poet, so they can gain access to poetry’s specific intelligence. In my introductory course, I take about an hour to work through Sylvia Plath’s “Morning Song.” I have spent 10 minutes or more on the opening line: “Love set you going like a fat gold watch.”
How might a “watch” be connected to “set you going”? How can love set something going? What does a “fat gold watch” mean to you—and how is it different from a slim silver one? Why “set you going” rather than “led to your birth”? And what does all this mean in a poem about having a baby, and all the ambivalent feelings this may produce in a mother?
In one of the real Plath poems that was included in the survey, “Winter Landscape, With Rooks,” we observe how her mental atmosphere unfurls around the waterways of the Cambridgeshire Fens in February:
Water in the millrace, through a sluice of stone,
plunges headlong into that black pond
where, absurd and out-of-season, a single swan
floats chaste as snow, taunting the clouded mind
which hungers to haul the white reflection down.
How different is this to GPT’s Plath poem? The achievement of the opening of “Winter Landscape, With Rooks” is how it intricately explores the connection between mental events and place. Given the wider interest of the poem in emotional states, its details seem to convey the tumble of life’s events through our minds.
Our minds are turned by life just as the mill is turned by water; these experiences and mental processes accumulate in a scarcely understood “black pond.”
Intriguingly, the poet finds that this metaphor, well constructed though it may be, does not quite work. This is not because of a failure of language, but because of the landscape she is trying to turn into art, which is refusing to submit to her emotional atmosphere. Despite everything she feels, a swan floats on serenely—even if she “hungers” to haul its “white reflection down.”
I mention these lines because they turn around the Plath-like poem of GPT-3.5. They remind us of the unexpected outcomes of giving life to poems. Plath acknowledges not just the weight of her despair, but the absurd figure she may be within a landscape she wants to reflect her sadness.
She compares herself to the bird that gives the poem its title:
feathered dark in thought, I stalk like a rook,
brooding as the winter night comes on.
These lines are unlikely to register highly in the study’s terms of literary response—“beautiful,” “inspiring,” “lyrical,” “meaningful,” and so on. But there is a kind of insight to them. Plath is the source of her torment, “feathered” as she is with her “dark thoughts.” She is “brooding,” trying to make the world into her imaginative vision.
The authors of the study are both right and wrong when they write that AI can “produce high-quality poetry.” The preference the study reveals for AI poetry over that written by humans does not suggest that machine poems are of a higher quality. The AI models can produce poems that rate well on certain “metrics.” But the event of reading poetry is ultimately not one in which we arrive at standardized criteria or outcomes.
Instead, as we engage in imaginative tussles with poems, both we and the poem are newly born. So the outcome of the research is that we have a highly specified and well thought-out examination of how people who know little about poetry respond to poems. But it fails to explore how poetry can be enlivened by meaningful shared encounters.
Spending time with poems of any kind, attending to their intelligence and the acts of sympathy and speculation required to confront their challenges, is as difficult as ever. As the Plath of GPT-3.5 puts it:
My mind is a tangled mess,
[…]
I try to grasp at something solid.
This article is republished from The Conversation under a Creative Commons license. Read the original article.