Security-Portal.cz is an internet portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It runs a number of interesting services and supports its community in interesting projects.

Categories

California Governor Newsom vetoes AI safety bill, arguing it’s ‘not the best approach’

Computerworld.com [Hacking News] - 30 September 2024 - 13:23

In a significant move, California Governor Gavin Newsom on Sunday vetoed a highly contested artificial intelligence (AI) safety bill, citing concerns that it could stifle innovation and impose excessive restrictions on AI technologies, even on their most basic functions.

Newsom’s decision to veto Senate Bill 1047, which would have required safety testing and imposed stringent standards on advanced AI models, came after tech giants like Google, Meta, and OpenAI raised concerns that the bill could hamper AI innovation in the state and possibly push companies to relocate.

“While well-intentioned, SB 1047 (The AI Bill) does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement.

“Instead,” he added, “the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Newsom emphasized that while the bill was well-intentioned, it failed to differentiate between high-risk AI systems and those with less impact on public safety.

The bill, authored by Democratic State Senator Scott Wiener, sought to mandate safety testing for the most advanced AI models, particularly those costing more than $100 million to develop or requiring significant computing power. It also proposed creating a state entity to oversee “Frontier Models” — highly advanced AI systems that could pose a greater risk due to their capabilities.

The bill also required developers to implement a “kill switch” to deactivate models that pose a threat, and to undergo third-party audits to verify their safety practices.

Proponents such as Senator Wiener argued that voluntary commitments from AI developers were insufficient and that enforceable regulations were necessary to protect the public from potential AI-related harm.

Senator Scott Wiener expressed disappointment, arguing that the lack of enforceable AI safety standards could put Californians at risk as AI systems continue to advance at a rapid pace.

“This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet,” Senator Wiener said in a statement.

Wiener had earlier argued that voluntary commitments from AI companies were insufficient to ensure public safety, and he called the veto a setback for efforts to hold powerful AI systems accountable.

“While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public,” Wiener added.

Earlier this month, the California State Assembly passed the bill, leaving it to Newsom to sign or veto.

Expanding the horizon

While vetoing the AI bill, Governor Newsom balanced the move by announcing a series of new initiatives aimed at protecting Californians from the risks posed by fast-developing generative AI (GenAI) technology.

The Governor has signed 17 bills related to generative AI technology, covering areas like AI-generated misinformation, deepfake prevention, AI watermarking, and protecting children and workers from harmful AI applications.

According to Newsom, this legislative package is the most comprehensive set of AI regulations in the country.

“We have a responsibility to protect Californians from potentially catastrophic risks of Gen AI deployment. We will thoughtfully — and swiftly — work toward a solution that is adaptable to this fast-moving technology and harnesses its potential to advance the public good,” Newsom said in the statement.

Among the new measures, Newsom has tasked California’s Office of Emergency Services (Cal OES) with expanding its assessment of the risks GenAI poses to the state’s critical infrastructure, including energy, water, and communications systems, with the aim of preventing mass-casualty events. In the coming months, Cal OES will conduct risk assessments in collaboration with AI companies that develop frontier models and with infrastructure providers across various sectors.

Additionally, Newsom has directed state agencies to engage with leading AI experts to develop “empirical, science-based trajectory analysis” for AI systems, with a focus on high-risk environments.

He also announced plans to work closely with labor unions, academic institutions, and private sector stakeholders to explore the use of Gen AI technology in workplaces, ensuring that AI tools can benefit workers while maintaining safety and fairness.

Newsom’s veto and subsequent announcements underscore the state’s complex position as both a leader in AI innovation and a regulator of potentially disruptive technologies. While tech industry leaders, including Microsoft, OpenAI, and Meta, opposed the bill, others, like Tesla CEO Elon Musk, supported it, emphasizing the need for more stringent AI safeguards. OpenAI’s chief strategy officer Jason Kwon said in a letter to Senator Wiener that the bill would “stifle innovation.”

The controversy surrounding AI regulation in California reflects broader national and global concerns about the impact of AI on society. As federal legislation on AI safety stalls in Congress, California’s actions are being closely watched by policymakers and industry leaders alike.

Despite vetoing SB 1047, Newsom signaled that further AI legislation could be on the horizon.

“A California-only approach may well be warranted — especially absent federal action by Congress,” Newsom said in the statement, leaving open the possibility of revisiting AI safety measures in future legislative sessions.

Category: Hacking & Security

Session Hijacking 2.0 — The Latest Way That Attackers are Bypassing MFA

The Hacker News - 30 September 2024 - 13:20
Attackers are increasingly turning to session hijacking to get around widespread MFA adoption. The data supports this: Microsoft detected 147,000 token replay attacks in 2023, a 111% increase year-over-year, and Google reports that attacks on session cookies now happen in the same order of magnitude as password-based attacks. But session hijacking isn’t a new technique – so
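
The excerpt does not include mitigations, but as a general illustration (not from the article), the first line of defense against session-cookie theft is hardening the cookie itself. A minimal sketch, assuming a Flask application:

```python
# Hypothetical sketch: hardening session cookies so a stolen or replayed
# token is harder to capture and abuse (standard HTTP cookie attributes).
from datetime import timedelta

from flask import Flask

app = Flask(__name__)
app.config.update(
    SESSION_COOKIE_SECURE=True,     # sent only over HTTPS, blocks plain-HTTP sniffing
    SESSION_COOKIE_HTTPONLY=True,   # invisible to JavaScript, blunts XSS cookie theft
    SESSION_COOKIE_SAMESITE="Lax",  # not attached to most cross-site requests
    PERMANENT_SESSION_LIFETIME=timedelta(minutes=30),  # short replay window
)
```

None of this stops infostealer malware that lifts cookies straight from the browser, which is why short session lifetimes and token binding matter as well.
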
Category: Hacking & Security

No more mandatory special characters and regular password changes. A US authority proposes new rules

Zive.cz - security - 30 September 2024 - 12:45
The American National Institute of Standards and Technology (NIST) has prepared a second, non-final draft of new security rules, which are mandatory for federal agencies and recommended for everyone else, including private companies. Ars Technica noticed that some nonsensical requirements are being dropped, which ...
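
As a rough illustration of the direction the draft is reported to take (checking length and breach status instead of composition rules and forced rotation), a hedged sketch; the breached-password set below is a stand-in for a real breach corpus:

```python
# Illustrative sketch of NIST-style password checks: verify length and
# known-breached status; do NOT demand special characters or expiry dates.
BREACHED = {"password123", "letmein", "qwerty"}  # stand-in for a real breach list

def password_acceptable(candidate: str) -> bool:
    if len(candidate) < 8:             # minimum length (longer is better)
        return False
    if candidate.lower() in BREACHED:  # reject known-compromised passwords
        return False
    return True                        # no composition rules, no scheduled rotation
```
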
Category: Hacking & Security

A Hacker's Era: Why Microsoft 365 Protection Reigns Supreme

The Hacker News - 30 September 2024 - 12:30
Imagine a sophisticated cyberattack cripples your organization’s most critical productivity and collaboration tool — the platform you rely on for daily operations. In the blink of an eye, hackers encrypt your emails, files, and crucial business data stored in Microsoft 365, holding it hostage using ransomware. Productivity grinds to a halt and your IT team races to assess the damage as the clock
Category: Hacking & Security

Checkr ditches GPT-4 for a smaller genAI model, streamlines background checks

Computerworld.com [Hacking News] - 30 September 2024 - 12:00

Checkr provides 1.5 million personnel background checks per month for more than 100,000 businesses, a process that requires generative AI (genAI) and machine learning tools to sift through massive amounts of unstructured data.

The automation engine produces a report about each potential job prospect based on background information that can come from a number of sources, and it categorizes criminal or other issues described in the report.

About 2% of Checkr’s unstructured data is considered “messy,” meaning the records can’t be easily processed with traditional machine learning automation software. So, like many organizations today, Checkr decided to try a genAI tool — in this case, OpenAI’s GPT-4 large language model (LLM).

GPT-4, however, only achieved an 88% accuracy rate on background checks, and on the messy data, that figure dropped to 82%. Those low percentages meant the records didn’t meet customer standards.

Checkr then added retrieval augmented generation (or RAG) to its LLM, which supplied more information to improve accuracy. While that worked on the majority of records (with a 96% accuracy rate), the numbers for the more difficult data dropped even further, to just 79%.

The other problem? Both the general purpose GPT-4 model and the one using RAG had slow response times: background checks took 15 and seven seconds, respectively.

So, Checkr’s machine learning team decided to go small and try out an open-source small language model (SLM). Vlad Bukhin, Checkr’s machine learning engineer, fine-tuned the SLM using data collected over the years to teach it what the company sought in employee background checks and verifications.

That move did the trick. The accuracy rate for the bulk of the data inched up to 97% — and for the messy data it jumped to 85%. Query response times also dropped to just half a second. Additionally, the cost to fine-tune an SLM based on Llama-3 with about 8 billion parameters was one-fifth that of using the roughly 1.8-trillion-parameter GPT-4 model.

To tweak its SLM, Checkr turned to Predibase, a company that offers a cloud platform through which Checkr feeds thousands of examples from past background checks. From there, the Predibase UI made fine-tuning the Llama-3 SLM as easy as clicking a few buttons. After a few hours of work, Bukhin had a custom model built.

Predibase operates a platform that enables companies to fine-tune SLMs and deploy them as a cloud service for themselves or others. It works with all types of SLMs, ranging in size from 300 million to 72 billion parameters.

SLMs have gained traction quickly and some industry experts even believe they’re already becoming mainstream enterprise technology. Designed to perform well for simpler tasks, SLMs are more accessible and easier to use for organizations with limited resources; they’re more natively secure, because they exist in a fully self-manageable environment; they can be fine-tuned for particular domains and data security; and they’re cheaper to run than LLMs.

Computerworld spoke with Bukhin and Predibase CEO Dev Rishi about the project, and the process for creating a custom SLM. The following are excerpts from that interview.

When you talk about categories of data used to perform background checks, and what you were trying to automate, what does that mean? Bukhin: “There are many different types of categorizations we do, but in this case we were trying to understand what civil or criminal charges were being described in reports. For example, ‘disorderly conduct.’”

What was the challenge in getting your data prepared for use by an LLM? Bukhin: “Obviously, LLMs have only been popular for the past couple of years. We’ve been annotating unstructured data long before LLMs. So, we didn’t need to do a lot of data cleaning for this project, though there could be in the future because we are generating lots of unstructured data that we haven’t cleaned yet, and now that may be possible.”

Why did your initial attempt with GPT-4 fail? You started using RAG on an OpenAI model. Why didn’t it work as well as you’d hoped? Bukhin: “We tried GPT-4 with and without RAG for this use case, and it worked decently well for the 98% of easy cases, but it struggled with the 2% of more complex cases. RAG would go through our current training [data] set and pick up 10 examples of similarly categorized queries, but these 2% [of complex cases, messy data] don’t appear in our training set. So the sample we were giving to the LLM wasn’t as effective.”

What did you feel failed? Bukhin: “RAG is useful for other use cases. In machine learning, you’re typically solving for 80% or 90% of the problem, and then the long tail you handle more carefully. In this case, where we’re classifying text with a supervised model, it was kind of the opposite. I was trying to handle the last 2% — the unknown part. Because of that, RAG isn’t as useful, because you’re bringing up known knowledge while dealing with the unknown 2%.”

Dev: “We see RAG be helpful for injecting fresh context into a given task. What Vlad is talking about is minority classes; things where you’re looking for the LLM to pick up on very subtle differences — in this case the classification data for background checks. In those cases, we find what’s more effective is teaching the model by example, which is what fine-tuning will do over a number of examples.”
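
To make the retrieval step concrete, here is a minimal sketch of “picking up 10 similar examples”; the embedding model and the example strings are assumptions for illustration, not Checkr’s actual pipeline:

```python
# Hypothetical sketch of the RAG step described above: embed past labeled
# examples, then fetch the top-10 most similar ones to include in the prompt.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

examples = ["disorderly conduct", "petty theft", "driving with suspended license"]
example_vecs = model.encode(examples, convert_to_tensor=True)

def top_k_examples(query: str, k: int = 10):
    query_vec = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_vec, example_vecs, top_k=k)[0]
    return [(examples[h["corpus_id"]], h["score"]) for h in hits]
```

The weakness Bukhin describes follows directly: if the 2% of messy cases have no close neighbors in the training set, the retrieved examples add little signal.
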

Can you explain how you’re hosting the LLM and the background records? Is this SaaS or are you running this in your own data center? Bukhin: “This is where it’s more useful to use a smaller model. I mentioned we’re only classifying 2% of the data, but because we have a fairly large data lake, that still is quite a few requests per second. Because our costs scale with usage, you have to think about the system setup differently. With RAG, you would need to give the model a lot of context and input tokens, which results in a very expensive, high-latency model. Whereas with fine-tuning, because the classification part is already fine-tuned, you just give it the input. The number of tokens you’re giving it, and that it’s churning out, is so small that it becomes much more efficient at scale.”

“Now I just have one instance that’s running and it’s not even using the full instance.”
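
A back-of-the-envelope illustration of the token economics Bukhin describes; all numbers below are hypothetical:

```python
# Illustrative arithmetic only: why a fine-tuned classifier with tiny prompts
# beats RAG, which stuffs retrieved examples into every request.
PRICE_PER_1K_TOKENS = 0.01       # hypothetical blended token price, USD

rag_tokens_per_call = 4000       # query + 10 retrieved examples + output
ft_tokens_per_call = 120         # bare query + a short class label

calls_per_month = 1_000_000
for name, toks in [("RAG", rag_tokens_per_call), ("fine-tuned", ft_tokens_per_call)]:
    cost = calls_per_month * toks / 1000 * PRICE_PER_1K_TOKENS
    print(f"{name}: ~${cost:,.0f}/month")  # the gap scales linearly with volume
```
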

What do you mean by “the 2% messy data,” and what do you see as the difference between RAG and fine-tuning? Dev: “The 2% refers to the most complex classification cases they’re working on.

“They have all this unstructured, complex, and messy data they have to process and classify to automate the million-plus background checks they do every month for customers. Two percent of those records can’t be processed very well with their traditional machine learning models. That’s why he brought in a language model.

“That’s where he first used GPT-4 and the RAG process to try to classify those records to automate background checks, but they didn’t get good accuracy, which means those background checks wouldn’t meet the needs of their customers.”

Vlad: “To give you an idea of scale, we process 1.5 million background checks per month. That results in one complex charge-annotation request every three seconds, and sometimes that goes up to several requests per second. That would be really tough to handle with a single-instance LLM, because requests would just queue, and with RAG on an LLM it would probably take several seconds to answer each one.

“In this case, because it’s a small language model that uses fewer GPUs, and the latency is lower [under 0.15 seconds], you can accomplish more on a smaller instance.”

Do you have multiple SLMs running multiple applications, or just one running them all? Vlad: “Thanks to the Predibase platform, you can launch several solutions onto one [SLM] GPU instance. Currently we just have the one, but there are several problems we’re trying to solve that we would eventually add. In Predibase terms, it’s called an Adapter. We would add another adapter to the same model for a different use case.

“So, for example, if you’ve deployed a small language model like Llama-3 and we have an adapter on it that responds to one type of request, we might have another adapter on that same instance because there’s still capacity, and that adapter can respond to a completely different type of request using the same base model.

“Same [SLM] instance, but a different parameterized set that’s responsible just for your solution.”

Dev: “This implementation we’ve open-sourced as well. So, for any technologist who’s interested in how it works, we have an open-source serving project called LoRAX. When you fine-tune a model… the way I think about it is, RAG just injects some additional context when you make a request of the LLM, which is really good for Q&A-style use cases, so it can get the freshest data. But it’s not good for specializing a model. That’s where fine-tuning comes in: you specialize the model by giving it sets of specific examples. There are a few different techniques people use in fine-tuning models.

“The most common technique is called LoRA, or low-rank adaptation. You customize a small percentage of the overall parameters of the model. So, for example, Llama-3 has 8 billion parameters. With LoRA, you’re usually fine-tuning maybe 1% of those parameters to make the entire model specialized for the task you want it to do. You can really shift the whole model toward that task.

“What organizations have traditionally had to do is put every fine-tuned model on its own GPU. If you had three different fine-tuned models – even if 99% of those models were the same – every single one would need to be on its own server. This gets very expensive very quickly.”
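
As a generic sketch of the LoRA technique Rishi describes, using the open-source Hugging Face peft library rather than Predibase’s internal tooling; the hyperparameters are illustrative assumptions:

```python
# Generic LoRA illustration with Hugging Face peft: freeze the base model
# and train only a small set of low-rank adapter weights on top of it.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor for the adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically ~1% or less of all weights
```

Because only the adapter weights differ between tasks, the 8-billion-parameter base can stay frozen and shared, which is what makes the multi-adapter serving described next possible.
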

“One of the things we did with Predibase is have a single Llama-3 instance with 8 billion parameters and bring multiple fine-tuned Adapters to it. We call this small percentage of customized model weights Adapters because they’re the small part of the overall model that has been adapted for a specific task.

“Vlad has a use case up now, let’s call it Blue, running on Llama-3 with 8 billion parameters that does the background classification. But if he had another use case, for example extracting key information from those checks, he could serve that Adapter on top of his existing deployment.

“This is essentially a way of building multiple use cases cost-effectively, using the same GPU and base model.”
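
LoRAX itself is open source (github.com/predibase/lorax). A hedged sketch of the multi-adapter pattern with its Python client; the endpoint and adapter IDs below are made up:

```python
# Hypothetical sketch of LoRAX-style multi-adapter serving: one base-model
# deployment answers requests routed to different fine-tuned adapters.
from lorax import Client  # pip install lorax-client

client = Client("http://127.0.0.1:8080")  # assumed local LoRAX deployment

# Same base Llama-3 instance, two different adapters ("use cases").
charge = client.generate("Classify: disorderly conduct",
                         adapter_id="acme/charge-classifier")  # made-up adapter
extract = client.generate("Extract the charge date: ...",
                          adapter_id="acme/field-extractor")   # made-up adapter
print(charge.generated_text, extract.generated_text)
```
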

How many GPUs is Checkr using to run its SLM? Dev: “Vlad’s running on a single A100 GPU today.

“What we see is that when using a small model version, like sub-8-billion-parameter models, you can run the entire model with multiple use cases on a single GPU, running on the Predibase cloud offering, which is a distributed cloud.”

What were the major differences between the LLM and the SLM? Bukhin: “I don’t know that I would have been able to run a production instance for this problem using GPT. These big models are very costly, and there’s always a tradeoff between cost and scale.

“At scale, when there are a lot of requests coming in, it’s just a little bit costly to run them over GPT. Using RAG, it was going to cost me about $7,000 per month with GPT, and $12,000 if we didn’t use RAG but just asked GPT-4 directly.

“With the SLM, it costs about $800 a month.”

What were the bigger hurdles in implementing the genAI technology? Bukhin: “I’d say there weren’t a lot of hurdles. The challenge was that, as Predibase and other new vendors were coming up, there were still a lot of documentation holes and SDK holes that needed to be fixed before you could just run it.

“It’s so new that metrics weren’t all showing up as they needed to, and the UI features weren’t as valuable. Basically, you had to do more testing on your own side after the model was built. You know, just debugging it. And, when it came to putting it into production, there were a few SDK errors we had to solve.

“Fine-tuning the model itself [on Predibase] was tremendously easy. Parameter tuning was easy, so we just needed to pick the right model.

“I found that not all models solve the problem with the same accuracy. We settled on Llama-3, but we’re constantly trying different models to see if we can get better performance and better convergence on our training set.”

Even with small, fine-tuned models, users report problems, such as errors and hallucinations. Did you experience those issues, and how did you address them? Bukhin: “Definitely. It hallucinates constantly. Luckily, when the problem is classification, you have 230 possible responses. Quite frequently, amazingly, it comes up with responses that are not in that set of 230 possible [trained] responses. That’s easy for me to check, disregard, and then redo.

“It’s simple programmatic logic. This isn’t part of the small language model. In this context, we’re solving a very narrow problem: here’s some text. Now, classify it.

“This isn’t the only thing happening to solve the entire problem. There’s a fallback mechanism… so, there are more models you try out, and if that’s not working you try deep learning, and then an LLM. There’s a lot of logic surrounding LLMs. There is logic that can help as guardrails. It’s never just the model. There’s programmatic logic around it.
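
A minimal sketch of the guardrail pattern Bukhin describes; the label set, retry count, and fallback are invented for illustration, only the pattern (validate against the closed set of classes, then fall back) comes from the interview:

```python
# Sketch of the programmatic guardrail described above: accept the model's
# answer only if it is one of the known classes; otherwise retry, then fall back.
VALID_LABELS = {"disorderly conduct", "petty theft", "dui"}  # stand-in for the 230 classes

def classify_with_guardrail(text: str, model_call, fallback, retries: int = 2) -> str:
    for _ in range(retries):
        label = model_call(text).strip().lower()
        if label in VALID_LABELS:      # hallucinated labels fail this check
            return label
    return fallback(text)              # e.g., an older ML model or human review
```
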

“The effort to clean most of the data is already complete, but we could enhance some of the cleaning with LLMs, because we’re generating lots of unstructured data that we haven’t cleaned yet.”

Category: Hacking & Security

Meta Fined €91 Million for Storing Millions of Facebook and Instagram Passwords in Plaintext

The Hacker News - 30 September 2024 - 08:12
The Irish Data Protection Commission (DPC) has fined Meta €91 million ($101.56 million) as part of a probe into a security lapse in March 2019, when the company disclosed that it had mistakenly stored users' passwords in plaintext in its systems. The investigation, launched by the DPC the next month, found that the social media giant violated four different articles under the European Union's
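
For contrast with the plaintext storage at issue, a minimal sketch of the standard practice (salted, slow hashing, shown here with the Python bcrypt library; this is illustrative, not a description of Meta's systems):

```python
# Minimal illustration of storing a password hash instead of plaintext.
import bcrypt

def store(password: str) -> bytes:
    return bcrypt.hashpw(password.encode(), bcrypt.gensalt())  # salted, slow hash

def verify(password: str, stored_hash: bytes) -> bool:
    return bcrypt.checkpw(password.encode(), stored_hash)
```
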
Category: Hacking & Security

Critical flaw in NVIDIA Container Toolkit allows full host takeover

Bleeping Computer - 29 September 2024 - 16:23
A critical vulnerability in NVIDIA Container Toolkit impacts all AI applications in a cloud or on-premise environment that rely on it to access GPU resources. [...]
Category: Hacking & Security

A Marshall speaker for CZK 253!!! An example of a sophisticated scam that might have fooled even you

Zive.cz - security - 29 September 2024 - 10:45
Fraudulent e-shops keep appearing and disappearing on the internet, and few of us have never come across one. It used to be no great problem for an averagely educated person to spot such a shop: it usually offered dubious goods, or, conversely, brand-name goods at unbelievably low prices. The common denominator was that ...
Category: Hacking & Security

Ireland fines Meta €91 million for storing passwords in plaintext

Bleeping Computer - 28 September 2024 - 16:16
The Data Protection Commission (DPC) in Ireland has fined Meta Platforms Ireland Limited (MPIL) €91 million ($100 million) for storing the passwords of hundreds of millions of users in plaintext. [...]
Category: Hacking & Security

Crypto Scam App Disguised as WalletConnect Steals $70K in Five-Month Campaign

The Hacker News - 28 September 2024 - 11:54
Cybersecurity researchers have discovered a malicious Android app on the Google Play Store that enabled the threat actors behind it to steal approximately $70,000 in cryptocurrency from victims over a period of nearly five months. The dodgy app, identified by Check Point, masqueraded as the legitimate WalletConnect open-source protocol to trick unsuspecting users into downloading it. "Fake
Category: Hacking & Security

U.S. Charges Three Iranian Nationals for Election Interference and Cybercrimes

The Hacker News - 28 September 2024 - 08:03
U.S. federal prosecutors on Friday unsealed criminal charges against three Iranian nationals who are allegedly employed with the Islamic Revolutionary Guard Corps (IRGC) for their targeting of current and former officials to steal sensitive data. The Department of Justice (DoJ) accused Masoud Jalili, 36, Seyyed Ali Aghamiri, 34, and Yasar (Yaser) Balaghi, 37, of participating in a conspiracy
Category: Hacking & Security

How much are companies willing to spend to get workers back to the office?

Computerworld.com [Hacking News] - 27 September 2024 - 21:48

With more and more companies wanting to bring employees back to the office, I pointed out last week the ill-kept secret that there’s a widespread aversion to open office floor plans — or activity-based workplaces, as they have often evolved into today — and that it partially explains why many employees want to continue remote and hybrid work. 

This is not rocket science. For many years, it has been the consensus in the research community that open office landscapes are bad for both the work environment and employee performance. (There’s really no need for research at all — just talk to workers. They hate it.)

To be fair, open office environments are not inherently bad. But it takes the right business, and the right type of people, for them to work. For example, I work in an industry where the ideal image is a teeming newsroom, where creative angles and news hooks are thrown back and forth, just as you see in the movies.

Even so, you don’t have to go back more than two or three decades to a time when most journalists, even in large newsrooms, had their own offices. That’s how Swedish offices used to look: people had their own rooms — not “cubicles,” but real rooms, with a door and a small Do Not Disturb lamp. There was a desk, pictures of the children (and maybe the dog), a plant, and a small radio. It was a place where you could feel at home, even at work.

Then real estate development took over and today only 19% of office workers in Stockholm have their own space. The largest proportion, 42%, have no place of their own at all. And, according to researchers, it is the real estate companies that have been driving the transition to open office landscapes. 

It’s easy to see why: an open floor plan is, of course, much more surface-efficient than one with walls and corridors; it is much easier to scale up or down based on the tenants’ needs; and you can house more and larger companies in attractive locations in the city rather than large office complexes in the suburbs.

It’s not just the real estate industry’s fault. A little over 10 years ago, “activity-based offices” — otherwise known as hot-desking — arrived. Workers have neither their own room nor their own desk. And here, the tech industry has taken the lead.

When Microsoft rebuilt an office in Akalla in 2012, execs themselves called it one of the first large activity-based offices in Sweden, and it helped spark a trend where even the traditional companies and organizations adopted the “cool” scene from startup environments and Silicon Valley companies. (Puffs! European stools!) The office quickly evolved from cool to corporate.

Researchers actually welcomed the shift, as it at least gave people an opportunity to find a quieter place if they were disturbed, or to avoid sitting next to colleagues they didn’t like. Then the COVID-19 pandemic hit, and we know what happened next. Many people discovered how nice it is to work in their own room, at their own desk, with that picture of the children, maybe the dog at your feet, a plant nearby, and some music. You didn’t need the Do Not Disturb light, and there were no chattering colleagues.

As a Stockholm Chamber of Commerce survey found: 46% say that permanent workplaces in the office have become more important, and 45% of younger people would come in more often if they had better opportunities for undisturbed work. (Whether it’s correlation or causation, I don’t think it’s a coincidence that the most important selling point for headphones these days is how good their noise canceling is. It makes public transportation bearable, certainly, and with headphones you create your own room — even at work.)

As a result of these recent trends, property owners and companies alike find themselves in a tricky, but self-inflicted, position. To say the least, property owners have begun to see the disadvantages of the open solutions they pushed: vacancies in downtown office buildings are skyrocketing as tenants have reduced office space after the transition to hybrid work.

Yes, companies see the chance to save money by reducing office space, especially if employees aren’t there all the time anyway. But at the same time, they want their workers to be in the office more. And the employees say, “Okay, but then I would like to have my own place, preferably my own room.”

Of course, that equation doesn’t add up. And this is where the whole “return to office” trend is brought to a head. If company culture, creativity and productivity are so critical that employees need to be forced back into the office, how far are companies willing to go?

How big does the office space need to be, if everyone is to be there basically at the same time — if half also need their own desk to be productive, perhaps even a room of their own? 

Property owners and landlords would rejoice, but how many companies want to take on that cost? Very few, I would think.

Perhaps that tells us just how important a forced return to offices really is.

This column is taken from CS Veckobrev, a personal newsletter with reading tips, link tips, and analyses sent directly from Computerworld Sweden Editor-in-Chief Marcus Jerräng’s desk. Want the newsletter in your inbox on Fridays? Sign up for a free subscription here.

Category: Hacking & Security

Iranian hackers charged for ‘hack-and-leak’ plot to influence election

Bleeping Computer - 27 September 2024 - 21:47
The U.S. Department of Justice unsealed an indictment charging three Iranian hackers with a "hack-and-leak" campaign that aimed to influence the 2024 U.S. presidential election. [...]
Category: Hacking & Security

U.S. charges Joker's Stash and Rescator money launderers

Bleeping Computer - 27 September 2024 - 20:00
The U.S. Department of Justice (DoJ) has announced charges against two Russian nationals for operating billion-dollar money laundering services for cybercriminals, including ransomware groups. [...]
Category: Hacking & Security

HP’s new remote support service can even resurrect unbootable PCs

Computerworld.com [Hacking News] - 27 September 2024 - 19:34

An unbootable PC is every remote worker’s nightmare. It usually means they need hands-on support that they’re not likely to find in their home office or neighborhood Starbucks.

Now there’s hope that even that catastrophe can be corrected remotely. At its Imagine event in Palo Alto, California on Tuesday, HP announced what it calls the industry’s first out-of-band diagnostics and remediation capability that will enable remote technicians to connect, diagnose, and fix problems, even if the PC won’t boot.

The service, launching Nov. 1, lets a technician, with permission from the user, connect to a virtual KVM (keyboard, video, mouse) under the BIOS/UEFI to run diagnostics and take remedial action. With the service, a tech could have corrected the CrowdStrike issue by replacing the flawed configuration file from the bad update, for example, and could even reimage the machine if necessary.

Marcos Razon, division president of lifecycle services and customer support at HP, said that the goal is to address 70%-80% of issues without requiring a stable operating system.

However, not all PCs will benefit, as the service relies on the Intel vPro chip more typically found in commercial PCs.

“Within the vPro chipset, you have a lightweight processor, a secondary processor that can access what in the past was called BIOS, but now it’s more UEFI,” Razon explained. “What this secondary processor allows us to do is to go under the BIOS before the booting process happens and take control of the machine.”

A security code must be accepted by the PC’s user before the technician can take control. “We don’t want anybody to access a PC without being able to secure that PC,” Razon said.

Constant virtual KVM

“The beauty of it is that we have a constant virtual KVM below the BIOS/UEFI,” he said.

The catch with existing remote-control programs is they need a PC that has successfully booted a stable operating system: “What happens is that if the PC has not booted completely, and the operating system is not running perfectly, you will not be able to take control of that PC,” he said.
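
HP has not published the interface, so the following is purely a hypothetical illustration of the user-consent step described above; every name in it is invented:

```python
# Purely hypothetical sketch of the consent gate described above: the user
# must accept a one-time security code before any out-of-band KVM session
# (running beneath the OS on the secondary processor) is allowed to start.
import secrets

def start_remote_session(user_accepts_code) -> bool:
    code = f"{secrets.randbelow(10**6):06d}"  # one-time code shown to the user
    if not user_accepts_code(code):           # user must explicitly accept
        return False
    # ... hand off to the (hypothetical) virtual-KVM channel here ...
    return True

# Example: an auto-accepting stub standing in for the real user prompt.
print(start_remote_session(lambda code: True))
```
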

Mahmoud Ramin, senior research analyst at Info-Tech Research Group, is impressed.

“Endpoint management tools usually fall short when a user faces serious problems with their hardware, such as boot failures and BIOS errors. Out-of-band technology can revolutionize remote endpoint management by bypassing operating systems and managing endpoints at the hardware level,” he said. “This innovation can help resolver groups seamlessly and immediately provide support to end users, reduce downtime by minimizing onsite visits, and enhance a shift-left strategy through increased automation. HP’s out-of-band remediation capabilities can position it as a leader in remote endpoint support.”

The new service will be offered as an add-on to an HP Essential, Premium or Premium+ Support package with the purchase of any new vPro-enabled HP PC, the company said in a release. It will be extended to older HP commercial PCs in the coming months. It will initially be available in North America and the European Union, with rollout to other regions following. Razon said that the cost will be about US$12 per machine, per year, and HP is also working on a version for AMD processors, which it expects to release in the first half of 2025.

Category: Hacking & Security
Syndicate content