Singularity HUB

News and Insights on Technology, Science, and the Future from Singularity Group

Industry’s Influence on AI Is Shaping the Technology’s Future—for Better and for Worse

March 5, 2023 - 17:00

The enormous potential of AI to reshape the future has seen massive investment from industry in recent years. But the growing influence of private companies in the basic research that is powering this emerging technology could have serious implications for how it develops, say researchers.

The question of whether machines could replicate the kind of intelligence seen in animals and humans is almost as old as the field of computer science itself. Industry’s engagement with this line of research has fluctuated over the decades, leading to a series of AI winters as investment has flowed in and then back out again as the technology has failed to live up to expectations.

The advent of deep learning at the turn of the previous decade, however, has resulted in one of the most sustained runs of interest and investment from private companies. This is now beginning to yield some truly game-changing AI products, but a new analysis in Science shows that it’s also leading to industry taking an increasingly dominant position in AI research.

This is a double-edged sword, say the authors. Industry brings with it money, computing resources, and vast amounts of data that have turbo-charged progress, but it is also refocusing the entire field on areas that are of interest to private companies rather than those with the greatest potential or benefit to humanity.

“Industry’s commercial motives push them to focus on topics that are profit-oriented. Often such incentives yield outcomes in line with the public interest, but not always,” the authors write. “Although these industry investments will benefit consumers, the accompanying research dominance should be a worry for policy-makers around the world because it means that public interest alternatives for important AI tools may become increasingly scarce.”

The authors show that industry’s footprint in AI research has increased dramatically in recent years. In 2000, only 22 percent of presentations at leading AI conferences featured one or more co-authors from private companies, but by 2020 that had hit 38 percent. But the impact is most clearly felt at the cutting edge of the field.

Progress in deep learning has to a large extent been driven by the development of ever larger models. In 2010, industry accounted for only 11 percent of the biggest AI models, but by 2021 that had hit 96 percent. This has coincided with growing dominance on key benchmarks in areas like image recognition and language modeling, where industry involvement in the leading model has grown from 62 percent in 2017 to 91 percent in 2020.

A key driver of this shift is the much larger investments the private sector is able to make compared to public bodies. Excluding defense spending, the US government allocated $1.5 billion for spending on AI in 2021, compared to the $340 billion spent by industry around the world that year.

That extra funding translates to far better resources—both in terms of computing power and data access—and the ability to attract the best talent. The size of AI models is strongly correlated with the amount of data and computing resources available, and in 2021 industry models were 29 times larger than academic ones on average.

And while in 2004 only 21 percent of computer science PhDs who had specialized in AI went into industry, by 2020 that had jumped to almost 70 percent. The rate at which AI experts have been hired away from universities by private companies has also increased eight-fold since 2006.

The authors point to OpenAI as a marker of the increasing difficulty of doing cutting-edge AI research without the financial resources of the private sector. In 2019, the organization transformed from a non-profit to a “capped for-profit organization” in order to “rapidly increase our investments in compute and talent,” the company said at the time.

This extra investment has had its perks, the authors note. It’s helped to bring AI technology out of the lab and into everyday products that can improve people’s lives. It’s also led to the development of a host of valuable tools used by industry and academia alike, such as software packages like TensorFlow and PyTorch and increasingly powerful computer chips tailored to AI workloads.

But it’s also pushing AI research to focus on areas with potential commercial benefits for its sponsors and, just as importantly, on data-hungry, computationally expensive AI approaches that dovetail nicely with the kinds of things big technology companies are already good at. As industry increasingly sets the direction of AI research, this could lead to the neglect of competing approaches to AI and other socially beneficial applications with no clear profit motive.

“Given how broadly AI tools could be applied across society, such a situation would hand a small number of technology firms an enormous amount of power over the direction of society,” the authors note.

There are models for how the gap between the private and public sectors could be closed, say the authors. The US has proposed the creation of a National AI Research Resource made up of a public research cloud and public datasets. China recently approved a “national computing power network system.” And Canada’s Advanced Research Computing platform has been running for almost a decade.

But without intervention from policymakers, the authors say that academics will likely be unable to properly interpret and critique industry models or offer public interest alternatives. Ensuring they have the capabilities to continue to shape the frontier of AI research should be a key priority for governments around the world.

Image Credit: DeepMind / Unsplash 

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through March 4)

March 4, 2023 - 17:00

Microsoft Unveils AI Model That Understands Image Content, Solves Visual Puzzles
Benj Edwards | Ars Technica
“On Monday, researchers from Microsoft introduced Kosmos-1, a multimodal model that can reportedly analyze images for content, solve visual puzzles, perform visual text recognition, pass visual IQ tests, and understand natural language instructions. The researchers believe multimodal AI—which integrates different modes of input such as text, audio, images, and video—is a key step to building artificial general intelligence (AGI) that can perform general tasks at the level of a human.”


Figure Promises First General-Purpose Humanoid Robot
Evan Ackerman | IEEE Spectrum
“Over the past year, the company has hired more than 40 engineers from institutions that include IHMC, Boston Dynamics, Tesla, Waymo, and Google X, most of whom have significant prior experience with humanoid robots or other autonomous systems. ‘It’s our view that this is the best humanoid robotics team out there,’ Adcock tells IEEE Spectrum.”


Ethereum Moved to Proof of Stake. Why Can’t Bitcoin?
Amy Castor | MIT Technology Review
“A single Bitcoin transaction uses the same amount of energy as a single US household does over the course of nearly a month. But does it have to be that way? The Bitcoin community has historically been fiercely resistant to change, but pressure from regulators and environmentalists fed up with Bitcoin’s massive carbon footprint may force them to rethink that stance.”


The Inside Story of How ChatGPT Was Built From the People Who Made It
Will Douglas Heaven | MIT Technology Review
“When OpenAI launched ChatGPT, with zero fanfare, in late November 2022, the San Francisco–based artificial-intelligence company had few expectations. Certainly, nobody inside OpenAI was prepared for a viral mega-hit. The firm has been scrambling to catch up—and capitalize on its success—ever since. …To get the inside story behind the chatbot—how it was made, how OpenAI has been updating it since release, and how its makers feel about its success—I talked to four people who helped build what has become one of the most popular internet apps ever.”


Face Recognition Software Led to His Arrest. It Was Dead Wrong
Khari Johnson | Wired
“The Alonzo Sawyer case adds to just a handful of known instances of innocent people getting arrested following investigations that involved face recognition misidentification—all have been Black men. Three cases came to light in 2019 and 2020 and another last month in which Georgia resident Randal Reid was released from jail after a judge recalled an arrest warrant linking him to thefts of designer purses in Louisiana.”


As AI Booms, Lawmakers Struggle to Understand the Technology
Cecilia Kang and Adam Satariano | The New York Times
“The problem is that most lawmakers do not even know what AI is, said Representative Jay Obernolte, a California Republican and the only member of Congress with a master’s degree in artificial intelligence. ‘Before regulation, there needs to be agreement on what the dangers are, and that requires a deep understanding of what AI is,’ he said. ‘You’d be surprised how much time I spend explaining to my colleagues that the chief dangers of AI will not come from evil robots with red lasers coming out of their eyes.’”


Key Steps in Evolution on Earth Tell Us How Likely Intelligent Life Is Anywhere Else
Adam Frank | Big Think
“There are trillions of planets where life could form. But what are the odds that intelligence could evolve on any of them? The Hard Steps Model identifies the unlikely accidents that led to intelligent life on Earth. It allows for the possibility of mathematically modeling the possibility of life emerging elsewhere. The model makes it seem like intelligence in the cosmos will be really, really rare.”


Stability AI, Hugging Face and Canva Back New AI Research Nonprofit
Kyle Wiggers | TechCrunch
“Developing cutting-edge AI systems like ChatGPT requires massive technical resources, in part because they’re costly to develop and run. While several open source efforts have attempted to reverse-engineer proprietary, closed source systems created by commercial labs such as Alphabet’s DeepMind and OpenAI, they’ve often run into roadblocks—mainly due to a lack of capital and domain expertise. Hoping to avoid this fate, one community research group, EleutherAI, is forming a nonprofit foundation.”

Image Credit: Fernand De Canne / Unsplash


Scientists Thought the First Hunter-Gatherers in Europe Disappeared During the Last Ice Age. Now, Ancient DNA Analysis Says Otherwise

March 2, 2023 - 17:00

Hunter-gatherers took shelter from the ice age in Southwestern Europe but were replaced on the Italian peninsula, according to two new studies published today in Nature and Nature Ecology & Evolution.

Modern humans first began to spread across Eurasia approximately 45,000 years ago, arriving from the Near East. Previous research claimed these people disappeared when massive ice sheets covered much of Europe around 25,000–19,000 years ago. By comparing the DNA of various ancient humans, we show this was not the case for all hunter-gatherer groups.

Our new results show the hunter-gatherers of Central and Southern Europe did disappear during the last ice age. However, their cousins in what is now France and Spain survived, leaving genetic traces still visible in the DNA of Western European peoples nearly 30,000 years later.

Two Studies With One Intertwining Story

In our first study in Nature, we analyzed the genomes—the complete set of DNA a person carries—of 356 prehistoric hunter-gatherers. In fact, our study compared every available ancient hunter-gatherer genome.

In our second study in Nature Ecology & Evolution, we analyzed the oldest hunter-gatherer genome recovered from the southern tip of Spain, belonging to someone who lived approximately 23,000 years ago. We also analyzed three early farmers who lived roughly 6,000 years ago in southern Spain. This allowed us to fill an important sampling gap for this region.

By combining results from these two studies, we can now describe the most complete story of human history in Europe to date. This story includes migration events, human retreat from the effects of the ice age, long-lasting genetic lineages, and lost populations.

Post-Ice-Age Genetic Replacement

Between 32,000 and 24,000 years ago, hunter-gatherer individuals (associated with what’s known as Gravettian culture) were widespread across the European continent. This critical time period ends at the Last Glacial Maximum. This was the coldest period of the last ice age in Europe, and took place 24,000 to 19,000 years ago.

Our data show that populations from Southwestern Europe (today’s France and Iberia), and Central and Southern Europe (today’s Italy and Czechia), were not closely genetically related. These two distinct groups were instead linked by similar weapons and art.

We could see that Central and Southern European Gravettian populations left no genetic signal after the Last Glacial Maximum—in other words, they simply disappeared. The individuals associated with a later culture (known as the Epigravettian) were not descendants of the Gravettian. According to one of my Nature co-authors, He Yu, they were “genetically distinct from the area’s previous inhabitants. Presumably, these people came from the Balkans, arrived first in northern Italy around the time of the Last Glacial Maximum, and spread all the way south to Sicily.”

In Central and Southern Europe, our data indicate people associated with the Epigravettian populations of the Italian peninsula later spread across Europe. This occurred approximately 14,000 years ago, following the end of the ice age.

Climate Refuge

While the Gravettian populations of Central and Southern Europe disappeared, the fate of the Southwestern populations was not the same.

We detected the genetic profile of Southwestern Gravettian populations again and again for the next 20,000 years in Western Europe. We saw this first in their direct descendants (known as Solutrean and Magdalenian cultures). These were the people who took refuge and flourished in Southwestern Europe during the ice age. Once the ice age ended, the Magdalenians spread northeastward, back into Europe.

Remarkably, the 23,000-year-old remains of a Solutrean individual from Cueva de Malalmuerzo in Spain allowed us to make a direct link to the first modern humans that settled Europe. We could connect them to a 35,000-year-old individual from Belgium, and then to hunter-gatherers who lived in Western Europe long after the Last Glacial Maximum.

Sea levels during the ice age were lower, making it only 13 kilometers from the tip of Spain to Northern Africa. However, we observed no genetic links between individuals in southern Spain and northern Morocco from 14,000 years ago. This showed that while European populations retreated south during the ice age, they surprisingly stopped before reaching Northern Africa.

Our results show the special role the Iberian peninsula played as a safe haven for humans during the ice age. The genetic legacy of hunter-gatherers would survive in the region after more than 30,000 years, unlike their distant relatives further east.

Post Ice-Age Interaction

Some 2,000 years after the end of the ice age, there were again two genetically distinct hunter-gatherer groups. There was the “old” group in Western and Central Europe, and the “more recent” group in Eastern Europe.

These groups showed no evidence of genetic exchange with southwestern hunter-gatherer populations for approximately 6,000 years, until roughly 8,000 years ago.

At this time, agriculture and a sedentary lifestyle had begun to spread with new peoples from Anatolia into Europe, forcing hunter-gatherers to retreat to the northern fringes of Europe.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Mauricio Anton/Wikimedia Commons


This Steak Is Tender, Marbled, Meaty—and 100% Vegan

March 1, 2023 - 17:00

From burgers to sausages and steak tips to chicken nuggets, there’s no shortage of plant-based “meat” products on grocery store shelves and in restaurants these days. Companies like Beyond Meat and Impossible Foods have done an impressive job diversifying their offerings, with almost any processed meat you can think of now on their lists (even beef jerky and popcorn chicken). But a key cut of meat is still missing from these big names’ menus: a good old-fashioned filet, just like the cows make ‘em.

A lesser-known player in the industry has been working to fill this gap. Slovenian startup Juicy Marbles was co-founded in early 2021 by Y Combinator alums Luka Sincek, Tilen Travnik, and Maj Hrova. The company launched its first product, a thick-cut filet mignon, in early 2022, and more recently started selling a whole-cut loin as well.

I received a whole-cut loin to sample; it arrived on dry ice, uncooked, and I was instructed to freeze or cook it within ten days. As a lifelong and unrepentant carnivore, I was wary but curious. It looked like meat: light red, fibrous, slightly moist. It felt like meat: dense, not overly pliable, requiring some pressure to slice through. But how was it going to taste?

Vegan Venture

Slicing up the steak

I cut the steak into one-inch-thick slices and seasoned them with salt, pepper, paprika, and garlic powder, then pan-fried them in a little bit of vegetable oil for four minutes on each side. Upon flipping them over I was surprised to see that the edges had browned, much like real meat does.

I served the steak with quinoa and sauteed veggies, and after a few bites, I couldn’t deny it was tasty and had a pleasant texture. Did it taste or feel like a real steak? Not really. The real meat it most reminded me of was rib meat, the kind that easily pulls off the bone when the ribs have been slow-cooked; soft and tender, but not dried out. The plant-based steak had a distinctly fatty-like mouthfeel without the excessive oiliness you sometimes get from animal fat.

Marbling Mystery

Achieving this texture, and a realistic “marbling” effect, has been one of the biggest challenges for plant-based meat companies. How do you replicate—with plants—animal tissue that has thin ribbons of fat running through it?

Though they can’t reveal too many details of their proprietary technology, Juicy Marbles has disclosed that unlike many plant-based meat companies, they don’t use 3D printing in their production process. Rather, they use a grinder they call the Meat-O-Matic 9000, which layers plant protein fibers on top of each other in a way that resembles muscle fibers. Deposits of hardened sunflower oil help add a realistic fat marbling texture and mouthfeel.

“Our business is based around the concept of protein texture—this is the defining factor that draws people to steak, when compared to a cheaper cut,” the company told TechCrunch. “In the plant-based meat vertical, there has not been as much innovation in the whole cuts space, and no one has come close to inventing a steak that resembles anything high-end.”

Health Highlights

The first few ingredients listed on the whole-cut loin package are plant structure (70 percent, made up of water, soy protein concentrate, and wheat protein isolate), sunflower oil, natural flavors, beetroot powder, and thickener. In terms of nutritional value, a four-ounce serving has 200 calories, 8 grams of fat, 7 grams of dietary fiber, no cholesterol, and 26 grams of protein.

A three-ounce filet mignon has 227 calories, 15 grams of fat, 6 grams of saturated fat, 82 milligrams of cholesterol, and 22 grams of protein. Accounting for the different serving sizes, the plant-based loin is leaner ounce for ounce, with fewer calories, less than half the fat, no cholesterol, and comparable protein.
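Because the two servings differ in size (four ounces of plant-based loin versus a three-ounce filet), a fair comparison needs per-ounce normalization. A minimal sketch using the figures above:

```python
# Nutrition per serving, taken from the figures quoted above.
# Serving sizes differ (4 oz vs. 3 oz), so normalize per ounce.
plant = {"oz": 4, "calories": 200, "fat_g": 8, "protein_g": 26}
filet = {"oz": 3, "calories": 227, "fat_g": 15, "protein_g": 22}

def per_ounce(item):
    """Return calories, fat, and protein per ounce, rounded to one decimal."""
    return {k: round(v / item["oz"], 1) for k, v in item.items() if k != "oz"}

print(per_ounce(plant))  # {'calories': 50.0, 'fat_g': 2.0, 'protein_g': 6.5}
print(per_ounce(filet))  # {'calories': 75.7, 'fat_g': 5.0, 'protein_g': 7.3}
```

Per ounce, the plant-based loin comes out meaningfully lower in calories and fat, with protein in the same range.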

Image Credit: Juicy Marbles

One motivator for carnivores to go plant-based is the healthier profile of vegan meat. Juicy Marbles’ filet mignon got some press a little under a year ago when Lizzo posted a video to TikTok of herself cooking the filet with vegan eggs for breakfast, proclaiming after taking a bite, “It’s good!”

Though I enjoyed the steak and think it’s a high-quality product, I’m not certain I’d purchase it and regularly incorporate it into my meals, even if its price substantially undercut real steak. The problem with plant-based meat (and really any plant-based or vegan product that imitates an animal product, from eggs to milk to bacon) is that people who like the real thing aren’t likely to switch over to an imitation of the real thing, no matter how good it is. If I want meat, a plant-based approximation isn’t going to cut it.

Meanwhile, people who’ve made the choice to exclude meat from their diets may not be looking for a protein source that tastes and feels like meat.

Vladimir Mićković, Juicy Marbles’ Chief Business Officer, doesn’t feel that the company needs to target either meat-eaters or vegetarians. “It feels limiting to see people merely through the lens of their diet, so we don’t like to put them in such groups at all,” he told me in an email. “It doesn’t matter what diet our customers may follow, their taste buds and bodies will be the final judge.”

Food of the Future?

Could plant-based meat make a difference for the environment in terms of water, land, and emissions? Sure, but it would need to be adopted on a massive scale, and that could take a while. The industry has seen some fluctuation: fast food giants like Burger King, McDonald’s, and Kentucky Fried Chicken jumped on the bandwagon over the last three years, but plant-based meat has more recently been called a fad and a flop as companies like Beyond Meat and Impossible Foods see declining sales.

If consumers didn’t have some amount of appetite for meat substitutes, though, we wouldn’t be seeing more of them appear on the market, so there must be something to the plant-based movement. And it’s undeniable that the way we produce meat needs to change.

It seems it’s still too early to say whether plant-based meat is a short-term fad or a long-term fix. But in the meantime, if you’re looking for a meat-like protein to add to your meal while keeping it vegan, Juicy Marbles steak is worth a try.

Mićković believes the product has endless possibilities. “I just adore it with bordelaise sauce, but most commonly, I like to cut it into strips and combine it with unusual flavors like chermoula, maple, fruit, or new spices,” he said. “The question we love to answer in the kitchen is, ‘Ok, hear me out, what if we…’”

Image Credit: Juicy Marbles


Scientists Are Using AI to Dream Up Artificial Enzymes

February 28, 2023 - 17:00

One of my favorite childhood summertime memories is being surrounded by fireflies. As the sun set, their shimmering glow would spark up the backyard like delicate fairy lights. The fact that living beings could produce light felt like magic.

But it’s not magic. It’s enzymes.

Enzymes are the catalysts of life. They drive every step of our metabolism, power photosynthesis in plants, propel viruses to replicate—and in certain organisms, trigger bioluminescence so they shine like diamonds.

Unlike manmade catalysts, which help speed up chemical reactions but often require high heat, pressure, or both, enzymes are incredibly gentle. Similar in concept to yeast for baking, enzymes work at life-sustaining temperatures. All you need to do is give them a substrate and working conditions—for example, flour and water—and they’ll perform their magic.

It’s partially why enzymes are incredibly valuable. From brewing beer to manufacturing medications and breaking down pollutants, enzymes are nature’s expert chemists.

What if we could outperform nature?

This week, a new study in Nature tapped into AI to engineer enzymes from scratch. Using deep learning, Dr. David Baker’s team at the University of Washington designed a new enzyme that mimics the firefly’s ability to spark light, but inside human cells in Petri dishes. Overall, the AI “hallucinated” over 7,500 promising enzymes, which were further experimentally tested and optimized. The resulting light was bright enough to see with the naked eye.

Compared to its natural counterpart, the new enzyme was highly effective, requiring just a little bit of substrate to light up the dark. It was also highly specific, meaning the enzyme preferred only one substrate. In other words, the strategy could design multiple enzymes, each never seen in nature, to simultaneously perform multiple jobs. For example, they could trigger multicolored bioluminescence like a disco ball for imaging different biochemical pathways inside cells. One day, engineered enzymes could also “double-tap” medicine and, say, diagnose a condition and test a treatment at the same time.

“Living organisms are remarkable chemists. Rather than relying on toxic compounds or extreme heat, they use enzymes to break down or build up whatever they need under gentle conditions. New enzymes could put renewable chemicals and biofuels within reach,” said Baker.

Proteins by Design

At their core, enzymes are just proteins. That’s great news for AI.

Back in 2021, the Baker lab developed an algorithm that accurately predicts protein structures based on the amino acid sequence alone. The team next nailed down functional sites in proteins using trRosetta, an AI architect that imagines and then homes in on hot spots that a drug, protein, or antibody can grab onto—paving the way for medications humans can’t dream up.

So why not use the same strategy to design enzymes and fundamentally rewire nature’s biochemistry?

Enzyme 2.0

The team focused on luciferase as their first target—the enzyme that makes fireflies sparkle.

It’s not just childhood nostalgia: luciferase is widely used in biological research. With the right partner substrate, luminescent photons shine through the dark without the need for an external light source, allowing scientists to directly peek inside a cell’s inner workings. So far, scientists have only identified a few types of these valuable enzymes, with many unsuitable for mammalian cells. This makes the enzyme a perfect candidate for AI-driven design, the team said.

They set out with several goals. One, the new light-emitting enzyme should be small and stable in higher temperatures. Two, it needed to play well with cells: when coded as DNA letters and delivered into living human cells, it could hijack the cell’s internal protein-making factory and fold into accurate 3D structures without causing stress or damage to its host. Three, the candidate enzyme had to be selective for its substrate to emit light.

Selecting the substrates was easy: the team focused on two chemicals already useful for imaging. Both are in a family dubbed “luciferin,” but they differ in their exact chemical structure.

Then they ran into problems. A critical factor in training an AI is tons of data. Most previous studies used open-source databases such as the Protein Data Bank to screen for possible protein scaffolds—the backbone that makes up a protein. Yet DTZ (diphenylterazine), their first luciferin of choice, had few entries. Even worse, small changes to luciferase sequences had unpredictable effects on their ability to emit light.

As a workaround, the team generated their own database of protein scaffolds. Their backbone of choice started from a surrogate protein dubbed NTF2 (nuclear transport factor 2). It was a wild bet: NTF2 has nothing to do with bioluminescence, but it contains multiple pockets whose size and structure looked feasible for DTZ to bind to—and potentially emit light.

The adoption strategy worked. With a method called “family-wide hallucination,” the team used deep learning to hallucinate over two thousand potential enzyme structures based on NTF2-like protein backbones. The algorithm then optimized the core regions of the binding pocket, while allowing creativity in more flexible regions of the protein.

In the end, the AI hallucinated over 1,600 protein scaffolds, each better suited for DTZ than the original NTF2 protein. Next, with the help of RosettaDesign—a suite of AI and other computational tools for protein design—the team further screened for active sites for DTZ while keeping the scaffold stable. Overall, over 7,600 designs were selected for screening. In a matchmaker’s dream (and a grad student’s nightmare), the designs were encoded into DNA sequences and inserted into bacteria to test their enzymatic strengths.

One winner reigned. Dubbed LuxSit (from the Latin for “let light exist”), it’s compact—smaller than any known luciferases—and incredibly stable, retaining full structure at 95 degrees Celsius (203 Fahrenheit). And it works: when given its substrate, DTZ, the testing apparatus glowed.

The Race for Designer Enzymes

With LuxSit in hand, the team next set out to optimize its ability. Focusing on its binding pocket, they generated a library of mutants in which each amino acid was mutated one at a time to see if these “letter” changes affected its performance.
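The single-point mutant library described above can be sketched in a few lines of Python. The sequence and its length here are hypothetical, purely for illustration; the real work screened an actual binding-pocket sequence:

```python
# Generate a single-point saturation mutagenesis library:
# every position in a (hypothetical) pocket sequence is swapped,
# one at a time, for each of the other 19 standard amino acids.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def single_point_mutants(seq):
    """Yield (position, original_aa, mutant_aa, mutated_sequence) tuples."""
    for i, original in enumerate(seq):
        for aa in AMINO_ACIDS:
            if aa != original:
                yield i, original, aa, seq[:i] + aa + seq[i + 1:]

pocket = "MKVLF"  # hypothetical 5-residue pocket, for illustration only
library = list(single_point_mutants(pocket))
print(len(library))  # 5 positions x 19 substitutions = 95 mutants
```

Each variant differs from the parent at exactly one position, which is what lets a screen attribute any change in activity to that single residue.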

Spoiler: they did. Screening for the most active enzyme, the team found LuxSit-i, which pumps out 100 times more photons per second over the same area compared to LuxSit. The new enzyme also triumphed over natural luciferases, lighting up cells 40 percent more than the naturally-occurring luciferase from the sea pansy—a species that glows on the luminescent beaches of the warm shores of Florida.

Compared to its natural counterparts, LuxSit-i also had an “exquisite” ability to target its substrate molecule, DTZ, with a 50-fold selectivity over another substrate. This means the enzyme plays well with other luciferases, allowing researchers to monitor multiple events inside cells simultaneously. In a proof of concept, the team demonstrated just that, tracking two critical cellular pathways involved in metabolism, cancer, and immune system function using LuxSit-i and another luciferase enzyme. Each enzyme grabbed onto its substrate, emitting a different color of light.

Overall, the study further illustrates the power of AI for altering existing biochemical processes—and potentially designing synthetic life. It’s not the first to hunt for enzymes with additional, or more efficient, abilities. Back in 2018, a team at Princeton engineered a new enzyme by experimentally mutating one “hotspot” amino acid at a time—a tedious, if rewarding, effort. Flash forward, and deep learning is, cough, catalyzing the entire design process.

“This breakthrough means that custom enzymes for almost any chemical reaction could, in principle, be designed,” said study author Dr. Andy Hsien-Wei Yeh.

Image Credit: Joshua Woroniecki from Pixabay


Robots Could Be Doing Almost Half of Our Household Chores Within a Decade

February 27, 2023 - 17:00

There are growing fears around the impact automation could have on jobs, but there’s been much less focus on how it could impact unpaid labor. New research suggests close to half of the time-consuming domestic work people do for free could be automated within a decade.

The prospect of “technological unemployment” has been a central part of the public discourse around AI and robotics ever since an influential 2013 study from the University of Oxford reported that around 47 percent of US employment was at risk of automation.

While there’s been considerable debate about the scale of the problem, it is now widely accepted that emerging technologies could dramatically reshape the world of work in ways unseen since the industrial revolution. More often than not, this is framed in a negative light, with the focus on the economic impact for displaced workers.

But the authors of a new study in PLOS ONE point out that much of the work humans do isn’t related to their employment and is instead dedicated to household chores or caring for relatives, with women carrying the bulk of this burden. And it turns out many of these tasks are just as amenable to automation as our day jobs, with the study predicting that 39 percent of the time currently spent on this kind of work could be automated within a decade.

“If it is true that robots are taking our jobs, then it appears that they are also capable of taking out the trash for us,” the authors write. “Considering that people currently spend almost similar amounts of time on unpaid work as they do on paid work, the social and economic implications of this future of unpaid work could be significant.”

The authors reached their conclusions by asking a panel of 65 AI experts from the UK and Japan to predict how much of the time spent on 17 domestic tasks—such as cooking, doing laundry, and car maintenance—would be automated in the next five to ten years, and how much it would cost users of those technologies.

These experts were pulled in roughly equal numbers from academia, corporate research and development, and business backgrounds and almost evenly split between men and women. To account for varying attitudes towards automation between different cultures, the team drew 29 of these experts from the UK and 36 from Japan.

After getting a smaller group to help them narrow down the tasks to consider, the authors then put their questions to the wider group. The experts who gave the highest and lowest answers were asked to give explanations, and then the entire group was allowed to use these explanations and statistics from the first round to revise their estimates.
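The iterative process described above is a classic Delphi-style forecast: collect estimates, have the outliers explain themselves, then let everyone revise. The sketch below illustrates that aggregation mechanism with entirely hypothetical task names and numbers (they are not the study’s data), and models “revision” as each expert moving partway toward the group mean:

```python
# Sketch of a two-round, Delphi-style aggregation like the one the study
# describes. All estimates below are hypothetical illustrations.

from statistics import mean, pstdev

# Round 1: each expert's estimate of the share of time (%) a task could
# be automated within ten years.
round_one = {
    "grocery shopping": [70, 55, 60, 45, 65],
    "physical childcare": [30, 15, 25, 10, 20],
}

def delphi_revise(estimates, weight=0.5):
    """Round 2: each expert moves partway toward the group mean after
    hearing the outliers' reasoning (a simple stand-in for real revision)."""
    m = mean(estimates)
    return [e + weight * (m - e) for e in estimates]

for task, r1 in round_one.items():
    r2 = delphi_revise(r1)
    # Revision toward the mean leaves the average unchanged but narrows
    # the spread of opinion between rounds.
    print(f"{task}: mean {mean(r1):.0f}%, "
          f"spread {pstdev(r1):.1f} -> {pstdev(r2):.1f}")
```

The point of the second round isn’t to change the headline average so much as to tighten the disagreement among experts once they have seen each other’s reasoning.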

Grocery shopping was seen as the most susceptible, with the experts predicting, on average, that 59 percent of the time spent on it would be automated within 10 years. In contrast, the hardest to automate was physical childcare at 21 percent. Care work was generally seen as more difficult to automate, with an average score of 29 percent across several tasks, while housework was seen to be much simpler with a score of 44 percent.

Interestingly, despite the researchers instructing the experts to focus purely on the technical feasibility of automating each task, in many cases their explanations suggested they had also considered more human factors. In particular, much of the reasoning for why care-related tasks were less amenable to technological solutions was a lack of societal acceptance.

The authors also broke down the responses based on the demographics of the experts to see how different cultural factors could impact the forecast. They found that male experts from the UK were much more optimistic than their female counterparts, which the authors say fits previous research showing men are typically more optimistic about technology.

However, the situation was reversed when it came to the Japanese experts, which the team suggests could be due to much greater gender disparity in who does housework in Japan. They point to surveys showing that only 52 percent of Japanese men aged 20 to 59 do any domestic work, compared to 88 percent in the UK.

The authors say that discrepancies due to cultural differences show the potential limitations of this kind of forecasting study, but also suggest that taking these factors into account more carefully could help boost its validity.

Regardless of how accurate the results are, though, the study shines a light on an important and much-overlooked aspect of automation. While it is likely to cause significant disruption to the world of work as we know it, it could also free us from much of the domestic drudgery that currently occupies our free time.

Image Credit: Photos Hobby / Unsplash

Category: Transhumanism

Imagination Makes Us Human. When Did Our Species First Acquire This Ability?

26 February 2023 - 21:11

You can easily picture yourself riding a bicycle across the sky even though that’s not something that can actually happen. You can envision yourself doing something you’ve never done before—like water skiing—and maybe even imagine a better way to do it than anyone else.

Imagination involves creating a mental image of something that is not present for your senses to detect, or even something that isn’t out there in reality somewhere. Imagination is one of the key abilities that make us human. But where did it come from?

I’m a neuroscientist who studies how children acquire imagination. I’m especially interested in the neurological mechanisms of imagination. Once we identify what brain structures and connections are necessary to mentally construct new objects and scenes, scientists like me can look back over the course of evolution to see when these brain areas emerged—and potentially gave birth to the first kinds of imagination.

From Bacteria to Mammals

After life emerged on Earth around 3.4 billion years ago, organisms gradually became more complex. Around 700 million years ago, neurons organized into simple neural nets that then evolved into the brain and spinal cord around 525 million years ago.

Eventually dinosaurs evolved around 240 million years ago, with mammals emerging a few million years later. While they shared the landscape, dinosaurs were very good at catching and eating small, furry mammals. Dinosaurs were cold-blooded, though, and, like modern cold-blooded reptiles, could only move and hunt effectively during the daytime when it was warm. To avoid predation by dinosaurs, mammals stumbled upon a solution: hide underground during the daytime.

Not much food, though, grows underground. To eat, mammals had to travel above the ground—but the safest time to forage was at night, when dinosaurs were less of a threat. Evolving to be warm-blooded meant mammals could move at night. That solution came with a trade-off, though: Mammals had to eat a lot more food than dinosaurs per unit of weight in order to maintain their high metabolism and to support their constant inner body temperature around 99 degrees Fahrenheit (37 degrees Celsius).

Our mammalian ancestors had to find 10 times more food during their short waking time, and they had to find it in the dark of night. How did they accomplish this task?

To optimize their foraging, mammals developed a new system to efficiently memorize places where they’d found food: linking the part of the brain that records sensory aspects of the landscape—how a place looks or smells—to the part of the brain that controls navigation. They encoded features of the landscape in the neocortex, the outermost layer of the brain. They encoded navigation in the entorhinal cortex. And the whole system was interconnected by the brain structure called the hippocampus. Humans still use this memory system for remembering objects and past events, such as your car and where you parked it.

Groups of neurons in the neocortex encode these memories of objects and past events. Remembering a thing or an episode reactivates the same neurons that initially encoded it. All mammals likely can recall and re-experience previously encoded objects and events by reactivating these groups of neurons. This neocortex-hippocampus-based memory system that evolved 200 million years ago became the first key step toward imagination.

The next building block is the capability to construct a “memory” that hasn’t really happened.

Involuntary Made-Up ‘Memories’

The simplest form of imagining new objects and scenes happens in dreams. These vivid, bizarre involuntary fantasies are associated in people with the rapid eye movement (REM) stage of sleep.

Scientists hypothesize that species whose rest includes periods of REM sleep also experience dreams. Marsupial and placental mammals do have REM sleep, but the egg-laying mammal the echidna does not, suggesting that this stage of the sleep cycle evolved after these evolutionary lines diverged 140 million years ago. In fact, recordings from specialized neurons in the brain called place cells have demonstrated that animals can “dream” of going places they’ve never visited before.

In humans, solutions found during dreaming can help solve problems. There are numerous examples of scientific and engineering solutions spontaneously visualized during sleep.

The neuroscientist Otto Loewi dreamed of an experiment that proved nerve impulses are transmitted chemically. He immediately went to his lab to perform the experiment—later receiving the Nobel Prize for this discovery.

Elias Howe, the inventor of the first sewing machine, claimed that the main innovation, placing the thread hole near the tip of the needle, came to him in a dream.

Dmitri Mendeleev described seeing in a dream “a table where all the elements fell into place as required. Awakening, I immediately wrote it down on a piece of paper.” And that was the periodic table.

These discoveries were enabled by the same mechanism of involuntary imagination first acquired by mammals 140 million years ago.

Imagining on Purpose

The difference between voluntary imagination and involuntary imagination is analogous to the difference between voluntary muscle control and muscle spasm. Voluntary muscle control allows people to deliberately combine muscle movements. Spasm occurs spontaneously and cannot be controlled.

Similarly, voluntary imagination allows people to deliberately combine thoughts. When asked to mentally combine two identical right triangles along their long edges, or hypotenuses, you envision a square. When asked to mentally cut a round pizza by two perpendicular lines, you visualize four identical slices.

This deliberate, responsive and reliable capacity to combine and recombine mental objects is called prefrontal synthesis. It relies on the ability of the prefrontal cortex located at the very front of the brain to control the rest of the neocortex.

When did our species acquire the ability of prefrontal synthesis? Every artifact dated before 70,000 years ago could have been made by a creator who lacked this ability. On the other hand, starting about that time there are various archeological artifacts unambiguously indicating its presence: composite figurative objects, such as the lion-man; bone needles with an eye; bows and arrows; musical instruments; constructed dwellings; adorned burials suggesting belief in an afterlife; and many more.

Multiple types of archaeological artifacts unambiguously associated with prefrontal synthesis appear simultaneously around 65,000 years ago in multiple geographical locations. This abrupt change in imagination has been characterized by historian Yuval Harari as the “cognitive revolution.” Notably, it approximately coincides with the largest Homo sapiens migration out of Africa.

Genetic analyses suggest that a few individuals acquired this prefrontal synthesis ability and then spread their genes far and wide by eliminating other contemporaneous males with the use of an imagination-enabled strategy and newly developed weapons.

So it’s been a journey of many millions of years of evolution for our species to become equipped with imagination. Most nonhuman mammals can involuntarily imagine what doesn’t exist or hasn’t happened during REM sleep; only humans can voluntarily conjure new objects and events in our minds using prefrontal synthesis.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Jr Korpa / Unsplash

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through February 25)

25 February 2023 - 17:00

I Made an AI Clone of Myself
Chloe Xiang | Motherboard
“To create my AI clone, Synthesia told me that we would have to clone my voice and body, and it would take a total of a little over two hours to do so. Before the shoot, I was given a schedule of ‘Voice Clone,’ ‘Prep [Hair and Makeup],’ and ‘Video Performance.’ No details beyond that. Entering the studio the day of, I had no idea what to expect, other than that I was like an actress responding to a call sheet, ready to do my best improv.”


California Company Sets Launch Date for World’s First 3D-Printed Rocket
Passant Rabie | Gizmodo
“On Wednesday, Relativity Space announced that it had secured its launch license from the Federal Aviation Administration and is ready to blast its Terran 1 rocket into space. …Terran 1 is a two-stage, 110-foot-tall (33 meters) rocket that’s 85% 3D printed, making it the ‘largest 3D printed object to exist and to attempt orbital flight,’ according to the company. Relativity Space is working towards its goal of making the rocket 95% 3D printed.”


Google’s Improved Quantum Processor Good Enough for Error Correction
John Timmer | Ars Technica
“…getting quantum error correction isn’t really the news—they’d managed to get it to work a couple of years ago. Instead, the signs of progress are a bit more subtle. In earlier generations of processors, qubits were error-prone enough that adding more of them to an error-correction scheme caused problems that were larger than the gain in corrections. In this new iteration, adding more qubits and getting the error rate to go down is possible.”


ChatGPT-Style Search Represents a 10x Cost Increase for Google, Microsoft
Ron Amadeo | Ars Technica
“A ChatGPT-style search engine would involve firing up a huge neural network modeled on the human brain every time you run a search, generating a bunch of text and probably also querying that big search index for factual information. …All that extra processing is going to cost a lot more money. After speaking to Alphabet Chairman John Hennessy (Alphabet is Google’s parent company) and several analysts, Reuters writes that ‘an exchange with AI known as a large language model likely costs 10 times more than a standard keyword search’ and that it could represent ‘several billion dollars of extra costs.'”


Ingenious Technique Could Make Moon Farming Possible
Kevin Hurler | Gizmodo
“The idea is that astronauts can extract nutrients in lunar regolith to create fertilizer for hydroponic farming. These nutrients could be pulled from soil using a processing plant and then dissolved into water, all on the Moon’s surface. The resulting nutrient-rich water can then be pumped into a greenhouse for hydroponic farming, a crucial part of maintaining a long-term human presence on the Moon.”


The US Copyright Office Says You Can’t Copyright Midjourney AI-Generated Images
Richard Lawler | The Verge
“A copyright registration granted to the Zarya of the Dawn comic book has been partially canceled, because it included ‘non-human authorship’ that hadn’t been taken into account. …To justify the decision, the Copyright Office cites previous cases where people weren’t able to copyright words or songs that listed ‘non-human spiritual beings’ or the Holy Spirit as the author—as well as the infamous incident where a selfie was taken by a monkey.”


Alphabet Layoffs Hit Trash-Sorting Robots
Paresh Dave | Wired
“Just over a year after graduating from Alphabet’s X moonshot lab, the team that trained over a hundred wheeled, one-armed robots to squeegee cafeteria tables, separate trash and recycling, and yes, open doors, is shutting down as part of budget cuts spreading across the Google parent, a spokeswoman confirmed. …Everyday Robots emerged from the rubble of at least eight robotics acquisitions by Google a decade ago. Google cofounders Larry Page and Sergey Brin expected machine learning would reshape robotics, and Page in particular wanted to develop a consumer-oriented robot, a former employee involved at the time says, speaking anonymously to discuss internal deliberations.”

Image Credit: Abhishek Tiwari / Unsplash

Category: Transhumanism