10 Mar 2026
Ars Technica
Reentry of NASA satellite will exceed the agency's risk guidelines
"Due to late-stage design changes, the potential risk of uncontrolled reentry increased."
10 Mar 2026 11:01pm GMT
Slashdot
Intel Demos Chip To Compute With Encrypted Data
An anonymous reader quotes a report from IEEE Spectrum: Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer? There is a way to do computing on encrypted data without ever having it decrypted. It's called fully homomorphic encryption, or FHE. But there's a rather large catch. It can take thousands -- even tens of thousands -- of times longer to compute on today's CPUs and GPUs than simply working with the decrypted data. So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. Last month at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel demonstrated its answer, Heracles, which sped up FHE computing tasks by as much as 5,000-fold compared to a top-of-the-line Intel server CPU. Startups are racing to beat Intel and each other to commercialization. But Sanu Mathew, who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. "Heracles is the first hardware that works at scale," he says. The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel's most advanced, 3-nanometer FinFET technology. And it's flanked inside a liquid-cooled package by two 24-gigabyte high-bandwidth memory chips, a configuration usually seen only in GPUs for training AI. In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart, the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes.
To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side. On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn't something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.
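The round trip above can be sketched with a toy homomorphic scheme. Unpadded ("textbook") RSA happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a valid encryption of the product of the plaintexts, so a server can combine encrypted values without ever decrypting them. This is only an illustration of the homomorphic property, not the lattice-based scheme Heracles accelerates, and textbook RSA with these tiny parameters is in no way secure:

```python
# Toy demonstration of homomorphic computation. Unpadded ("textbook") RSA
# is multiplicatively homomorphic: Enc(a) * Enc(b) mod n = Enc(a * b).
# Illustration only -- NOT secure, and NOT the lattice-based FHE schemes
# (BGV/CKKS-style) that chips like Heracles actually accelerate.

# Tiny RSA key for demo purposes (real keys are 2048+ bits).
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

# The client encrypts two values and sends only the ciphertexts.
a, b = 7, 12
ca, cb = encrypt(a), encrypt(b)

# The server multiplies the ciphertexts without decrypting anything.
c_product = (ca * cb) % n

# Only the client, holding d, can recover the result.
assert decrypt(c_product) == (a * b) % n
print(decrypt(c_product))  # 84
```

The scaling arithmetic in the demo also checks out: 10^8 ballots × 15 ms is about 1.5 million seconds of CPU time, roughly 17.4 days, versus 10^8 × 14 µs ≈ 1,400 seconds, about 23 minutes, on Heracles.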
Read more of this story at Slashdot.
10 Mar 2026 11:00pm GMT
Ars Technica
FDA contradicts Trump admin, declines to approve generic drug for autism
In the end, the FDA only approved the drug for a rare genetic condition with clearer data.
10 Mar 2026 10:12pm GMT
Slashdot
Amazon Wins Court Order To Block Perplexity's AI Shopping Bots
Last November, Amazon sued Perplexity demanding that the AI search startup stop allowing its AI browser agent, Comet, to make purchases for users online. Today, a judge ruled in favor of the tech giant, granting it a temporary court injunction blocking the scraping of Amazon's website. According to court filings, the judge found strong evidence the tool accessed the retailer's systems "without authorization." CNBC reports: In a ruling dated Monday, U.S. District Judge Maxine Chesney wrote that Amazon has provided "strong evidence" that Perplexity's Comet browser accessed its website at the user's direction, but "without authorization" from the e-commerce giant. Chesney said Amazon submitted "essentially undisputed evidence" that it spent more than $5,000 to respond to the issue, including "numerous hours" where its employees worked to develop tools to block Comet from accessing its private customer tools and to prevent the tool from "future unauthorized access." "Given such evidence, the Court finds Amazon has shown a likelihood of success on the merits of its claim," Chesney wrote. Chesney's ruling includes a weeklong stay to allow Perplexity to appeal the order. Amazon wrote in its original complaint that Perplexity's agents posed security risks to customer data because they "can act within protected computer systems, including private customer accounts requiring a password." The company also said Perplexity's agents created challenges for the company's advertising business, because when AI systems generate ad traffic, the impressions have to be detected and filtered out before advertisers can be billed. "This requires modifications to Amazon's advertising systems, including developing new detection mechanisms to identify and exclude automated traffic," Amazon wrote in its complaint. "These system adaptations are necessary to maintain contractual obligations with advertisers who pay only for legitimate human impressions."
Read more of this story at Slashdot.
10 Mar 2026 10:00pm GMT
Silicon Valley Is Buzzing About This New Idea: AI Compute As Compensation
sziring shares a report from Business Insider: Silicon Valley has long competed for talent with ever-richer pay packages built around salary, bonus, and equity. Now, a fourth line item is creeping into the mix: AI inference. As generative AI tools become embedded in software development, the cost of running the underlying models -- known as inference -- is emerging as a productivity driver and a budget line that finance chiefs can't ignore. Software engineers and AI researchers inside tech companies have already been jousting for access to GPUs, with this AI compute capacity being carefully parceled out based on which projects are most important. Now, some tech job candidates have begun asking about what AI compute budget they will have access to if they decide to join. "I am increasingly asked during candidate interviews how much dedicated inference compute they will have to build with Codex," Thibault Sottiaux, engineering lead at OpenAI's Codex, the startup's AI coding service, wrote on X recently. He added that usage per user is growing much faster than overall user growth, a sign that AI compute is becoming even scarcer and more valuable. That scarcity is reshaping how engineers think about their work and pay. "The inference compute available to you is increasingly going to drive overall software productivity," said OpenAI President Greg Brockman. The report cites a recent compensation submission from a software engineer that listed "Copilot subscription" as part of the pay and benefits. "OpenAI and Anthropic should create recruitment sites where their clients can advertise roles, listing the token budget for the job alongside the salary range," said Peter Gostev, AI capability lead at Arena, a startup that measures the performance of models. Tomasz Tunguz of Theory Ventures predicts AI inference will be the fourth component of engineering compensation, alongside salary, bonus, and equity. "Will you be paid in tokens? 
In 2026, you likely will start to be," Tunguz said.
Read more of this story at Slashdot.
10 Mar 2026 9:00pm GMT
Ars Technica
AI can rewrite open source code—but can it rewrite the license, too?
Is it clean "reverse engineering" or just an LLM-filtered "derivative work"?
10 Mar 2026 7:36pm GMT
09 Mar 2026
OSnews
ArcaOS 5.1.2 released
While IBM's OS/2 technically did die, its development was picked up again much later, first through eComStation, and later, after money issues at its parent company Mensys, through ArcaOS. eComStation development stalled because of the money issues and has been dead for years; ArcaOS picked up where it left off and has been making steady progress since its first release in 2017. Regardless, the developers behind both projects develop OS/2 under license from IBM, but it's unclear just how much they can change or alter, and what the terms of the agreement are. Anyway, ArcaOS 5.1.2 has just been released, and it seems to be a rather minor release. It further refines ArcaOS' support for UEFI and GPT-based disks, the tentpole feature of ArcaOS 5.1, which allows the operating system to be installed on much more modern systems without having to fiddle with BIOS compatibility modes. Looking at the list of changes, there's the usual list of updated components from both Arca Noae and the wider OS/2 community. You'll find the latest versions of the Panorama graphics drivers, ACPI, USB, and NVMe drivers, improved localisation, newer versions of the VNC server and viewer, and much more. If you have an active Support & Maintenance subscription for ArcaOS 5.1, this update is free, and it's also available at discounted prices as an upgrade for earlier versions. A brand new copy of ArcaOS 5.1.x will set you back $139, which isn't cheap, but considering this price is probably a consequence of what must be some onerous licensing terms and other agreements with IBM, I doubt there's much Arca Noae can do about it.
09 Mar 2026 11:31pm GMT
“AI” translations are ruining Wikipedia
Oh boy. Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages after they discovered these AI translations added AI "hallucinations," or errors, to the resulting article. ↫ Emanuel Maiberg at 404 Media There seems to be this pervasive conviction among Silicon Valley techbro types, and many programmers and developers in general, that translation and localisation are nothing more than basic find/replace tasks that you can automate away. At first, we just needed to make corpora of two different languages kiss and smooch, and surely that would automate translation and localisation away if the corpora were large enough. When this didn't turn out to work very well, they figured that if we made the words in the corpora tumble down a few pachinko machines and then made them kiss and smooch, yes, then we'd surely have automated translation and localisation. Nothing could be further from the truth. As someone who has not only worked as a professional translator for over 15 years, but who also holds two university degrees in the subject, I keep reiterating that translation isn't just a dumb substitution task; it's a real craft, a real art, one you can have talent for, one you need to train for, and study for. You'd think anyone with sufficient knowledge of two languages can translate effectively between the two, but without a much deeper understanding of language in general and the languages involved in particular, as well as a deep understanding of the cultures in which the translation is going to be used, and a level of reading and text comprehension that goes well beyond that of most people, you're going to deliver shit translations. Trust me, I've seen them. I've been paid good money to correct, fix, and mangle something usable out of other people's translations. You wouldn't believe the shit I've seen.
Translation involves the kinds of intricacies, nuances, and context "AI" isn't just bad at, but simply cannot work with in any way, shape, or form. I've said it before, but it won't be long before people start getting seriously injured - or worse - because of the cost-cutting in the translation industry, and the effects that's going to have on, I don't know, the instruction manuals for complex tools, or the leaflet in your grandmother's medications. Because some dumbass bean counter kills the budget for proper, qualified, trained, and experienced translators, people are going to die.
09 Mar 2026 9:40pm GMT
“I don’t know what is Apple’s endgame for the Fn/Globe key, and I’m not sure Apple knows either”
Every modifier key starts simple and humble, with a specific task and a nice matching name. This never lasts. The tasks become larger and more convoluted, and the labels grow obsolete. Shift no longer shifts a carriage, Control doesn't send control codes, Alt isn't for alternate nerdy terminal functions. Fn is the newest popular modifier key, and it feels like we're speedrunning it through all the challenges without having learned any of the lessons. ↫ Marcin Wichary Grab a blanket, curl up on the couch with some coffee or tea, and enjoy.
09 Mar 2026 9:18pm GMT
30 Jan 2026
Planet Arch Linux
How to review an AUR package
On Friday, July 18th, 2025, the Arch Linux team was notified that three AUR packages had been uploaded that contained malware. A few maintainers including myself took care of deleting these packages, removing all traces of the malicious code, and protecting against future malicious uploads.
30 Jan 2026 12:00am GMT
19 Jan 2026
Planet Arch Linux
Personal infrastructure setup 2026
While starting this post I realized I have been maintaining personal infrastructure for over a decade! Most of the things I've self-hosted have been for personal use: an email server, a blog, an IRC server, image hosting, an RSS reader, and so on. All of these things have been a bit all over the place and never properly streamlined. Some have been in containers, some have just been flat files with an nginx service in front, and some have been a randomly installed Debian package from somewhere I've since forgotten.
19 Jan 2026 12:00am GMT
11 Jan 2026
Planet Arch Linux
Verify Arch Linux artifacts using VOA/OpenPGP
In the recent blog post on the work funded by Sovereign Tech Fund (STF), we provided an overview of the "File Hierarchy for the Verification of OS Artifacts" (VOA) and the voa project as its reference implementation. VOA is a generic framework for verifying any kind of distribution artifacts (i.e. files) using arbitrary signature verification technologies.
The voa CLI ⌨️
The voa project offers the voa(1) command line interface (CLI), which makes use of the voa(5) configuration file format for technology backends. It is recommended to read the respective man pages to get …
11 Jan 2026 12:00am GMT