01 Mar 2026

feedSlashdot

Anthropic's Claude Leaps to #2 on Apple's 'Top Apps' Chart After Pentagon Controversy

Anthropic's Claude AI assistant "jumped to the No. 2 slot on Apple's chart of top U.S. free apps late on Friday," reports CNBC: The rise in popularity suggests that Anthropic is benefiting from its presence in news headlines, stemming from its refusal to have its models used for mass domestic surveillance or for fully autonomous weapons... OpenAI's ChatGPT sat at No. 1 on the App Store rankings on Saturday, while Google's Gemini was at No. 3... On Jan. 30, [Claude] was ranked No. 131 in the U.S., and it bounced between the top 20 and the top 50 for much of February, according to data from analytics company Sensor Tower... [And Friday night, for her 85.3 million followers,] pop singer Katy Perry posted a screenshot of Anthropic's Pro subscription for consumers, with a heart superimposed over it.

On Friday, Anthropic posted: "We are deeply grateful to our users, and to the industry peers, policymakers, veterans, and members of the public who have voiced their support in recent days. Thank you."

Read more of this story at Slashdot.

01 Mar 2026 8:34am GMT

Silicon Valley's Ideas Mocked Over Penchant for Favoring Young Entrepreneurs with 'Agency'

In a 9,000-word exposé, a writer for Harper's visited San Francisco's young entrepreneurs in September to mockingly profile "tech's new generation and the end of thinking." There's Cluely founder Roy Lee. ("His grand contribution to the world was a piece of software that told people what to do.") And the Rationalist movement's Scott Alexander, who "would probably have a very easy time starting a suicide cult..."

Alexander's relationship with the AI industry is a strange one. "In theory, we think they're potentially destroying the world and are evil and we hate them," he told me. In practice, though, the entire industry is essentially an outgrowth of his blog's comment section... "Many of them were specifically thinking, I don't trust anybody else with superintelligence, so I'm going to create it and do it well." Somehow, a movement that believes AI is incredibly dangerous and needs to be pursued carefully ended up generating a breakneck artificial intelligence arms race.

There's a fascinating story about teenaged founder Eric Zhu (who only recently turned 18): Clients wanted to take calls during work hours, so he would speak to them from his school bathroom. "I convinced my counselor that I had prostate issues... I would buy hall passes from drug dealers to get out of class, to have business meetings." Soon he was taking Zoom calls with a U.S. senator to discuss tech regulation... Next, he built his own venture-capital fund, managing $20 million. At one point cops raided the bathroom looking for drug dealers while Eric was busy talking with an investor. Eventually, the school got sick of Eric's misuse of the facilities and kicked him out. He moved to San Francisco.

Eric made all of this sound incredibly easy. You hang out in some Discord servers, make a few connections with the right people; next thing you know, you're a millionaire... Eric didn't think there was anything particularly special about himself. Why did he, unlike any of his classmates, start a $20 million VC fund?
"I think I was just bored. Honestly, I was really bored." Did he think anyone could do what he did? "Yeah, I think anyone genuinely can." The article concludes Silicon Valley's investors are rewarding young people with "agency". Although "As far as I could tell, being a highly agentic individual had less to do with actually doing things and more to do with constantly chasing attention online." Like X.com user Donald Boat, who successfully baited Sam Altman into buying him a gaming PC in "a brutally simplified miniature of the entire VC economy." (After which "People were giving him stuff for no reason except that Altman had already done it, and they didn't want to be left out of the trend.") Shortly before I arrived at the Cheesecake Factory, [Donald Boat] texted to let me know that he'd been drinking all day, so when I met him I thought he was irretrievably wasted. In fact, it turned out, he was just like that all the time... He seemed to have a constant roster of projects on the go. He'd sent me occasional photos of his exploits. He went down to L.A. to see Oasis and ended up in a poker game with a group of weapons manufacturers. "I made a bunch of jokes about sending all their poker money to China," he said, "and they were not pleased...." "I don't use that computer and I think video games are a waste of time. I spent all the money I made from going viral on Oasis tickets." As far as he was concerned, the fact that tech people were tripping over themselves to take part in his stunt just confirmed his generally low impression of them. "They have too much money and nothing going on..." Ever since his big viral moment, he'd been suddenly inundated with messages from startup drones who'd decided that his clout might be useful to them. One had offered to fly him out to the French Riviera. The author's conclusion? 
"It did not seem like a good idea to me that some of the richest people in the world were no longer rewarding people for having any particular skills, but simply for having agency."


01 Mar 2026 5:34am GMT

Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic

Saturday afternoon Sam Altman announced he'd start answering questions on X.com about OpenAI's work with America's Department of War - and all the developments over the past few days. (After that department's negotiations with Anthropic had failed, it announced it would stop using Anthropic's technology and threatened to designate the company a "Supply-Chain Risk to National Security". It then reached a deal for OpenAI's technology - though Altman says it includes OpenAI's own similar prohibitions against using their products for domestic mass surveillance, and requires "human responsibility" for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it." Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..."

Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs. I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict.
China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that...

Question: Why the rush to sign the deal? Obviously the optics don't look great.

Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good. If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years...

Question: What was the core difference that made the Department of War accept OpenAI but not Anthropic?

Sam Altman: [...] We believe in a layered approach to safety--building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with.
We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I had to pick only one... I think Anthropic may have wanted more operational control than we did...

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI's head of National Security Partnerships (who at one point posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years). They stressed that with OpenAI's deal with the Department of War, "We control how we train the models and what types of requests the models refuse."

Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won't ask employees to support Department of War-related projects if they don't want to.

Question: How much is the deal worth?

Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact...

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?
Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI's position on LinkedIn:

Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware... Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and gives us the ability to iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.


01 Mar 2026 2:39am GMT

28 Feb 2026

feedOSnews

Run this random script in the terminal to block Apple’s macOS Tahoe update notification spam

Are you not at all interested in upgrading to macOS Tahoe, and getting annoyed at the relentless notification spam from Apple trying to trick you into upgrading? The secret? Using device management profiles, which let you enforce policies on Macs in your organization, even if that "organization" is one Mac on your desk. One of the available policies is the ability to block activities related to major macOS updates for up to 90 days at a time (the max the policy allows), which seems like exactly what I needed. Not being anywhere near an expert on device profiles, I went looking to see what I could find, and stumbled on the Stop Tahoe Update project. The eventual goals of this project are quite impressive, but what they've done so far is exactly what I needed: A configuration profile that blocks Tahoe update activities for 90 days. ↫ Rob Griffiths

All you need to do is clone a random GitHub repository, set all its scripts to executable, generate two random UUIDs, insert those UUIDs into one of the scripts in the GitHub project folder you just cloned, run said script, open System Settings and go to Privacy & Security > Profiles, install the profile the script created, click install in two different dialogs, and now you have blocked Apple's update notification spam! Well, for 90 days that is.

I honestly don't understand how normal people are supposed to use macOS. The amount of weird terminal commands you need just to change basic settings is bewildering. macOS definitely isn't ready for the desktop if they expect users to use the terminal for so many basic tasks. I'm glad I'm using Linux, where I don't have to deal with the terminal at all.
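For the curious, the profile the script generates is an ordinary MDM restrictions payload. Below is a minimal sketch of what such a profile looks like, based on Apple's documented `com.apple.applicationaccess` keys for deferring major OS updates; the UUIDs and identifiers here are placeholders, not the ones the Stop Tahoe Update script produces, and the project's actual profile may differ:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>PayloadContent</key>
    <array>
        <dict>
            <key>PayloadType</key>
            <string>com.apple.applicationaccess</string>
            <key>PayloadIdentifier</key>
            <string>local.example.deferupdates.restrictions</string>
            <!-- Placeholder UUID; the script generates a fresh one -->
            <key>PayloadUUID</key>
            <string>11111111-1111-1111-1111-111111111111</string>
            <key>PayloadVersion</key>
            <integer>1</integer>
            <!-- Hide/delay major macOS updates (e.g. Tahoe) -->
            <key>forceDelayedMajorSoftwareUpdates</key>
            <true/>
            <!-- Deferral window in days; 90 is the maximum Apple allows -->
            <key>enforcedSoftwareUpdateMajorOSDeferredInstallDelay</key>
            <integer>90</integer>
        </dict>
    </array>
    <key>PayloadType</key>
    <string>Configuration</string>
    <key>PayloadIdentifier</key>
    <string>local.example.deferupdates</string>
    <!-- Second placeholder UUID for the top-level payload -->
    <key>PayloadUUID</key>
    <string>22222222-2222-2222-2222-222222222222</string>
    <key>PayloadDisplayName</key>
    <string>Defer major macOS updates (90 days)</string>
    <key>PayloadVersion</key>
    <integer>1</integer>
</dict>
</plist>
```

Installing a profile like this through System Settings > Privacy & Security > Profiles is what the project's script automates; since the policy caps the deferral at 90 days, the profile has to be re-installed when the window runs out.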

28 Feb 2026 10:09pm GMT

The Windows 95 user interface: a case study in usability engineering

If this isn't catnip to the average OSNews reader, I don't know what is. Windows 95 is a comprehensive upgrade to the Windows 3.1 and Windows for Workgroups 3.11 products. Many changes have been made in almost every area of Windows, with the user interface being no exception. This paper discusses the design team, its goals and process then explains how usability engineering principles such as iterative design and problem tracking were applied to the project, using specific design problems and their solutions as examples. ↫ Kent Sullivan

This case study was written in 1996 by Kent Sullivan, who joined the Windows 95 user interface team in 1992. I consider the second half of the '90s as the heyday of user interface design, with Windows 9x, Apple's Platinum in Mac OS 8 and 9, and BeOS' Tracker/Deskbar as the absolute pinnacles of user interface design. Coincidentally, this also seems to mark the end of a more scientific, study-based approach to designing graphical user interfaces.

Reading through this particular case study for Windows 95 feels almost quaint. Where are the dozens of managers pushing for notification spam, upsells, and dark patterns to enable expensive data-hoarding services? Why are none of the people mentioned in the study talking about sneaky ways to secretly and silently convert your local account to an online account? Where are all the "AI" buttons? Why is there no chapter on how to trick people into enabling telemetry data? The user interfaces of the late '90s were the last ones designed by people who actually cared, by people who approached the whole process with the end user in mind, rooted in scientific data collected by simply watching people use their ideas. They were optimised for the user as best they could, instead of being optimised for the company's bottom line. It's been downhill ever since.

28 Feb 2026 9:10pm GMT

Bootc and OSTree: modernizing Linux system deployment

Bootc and OSTree represent a new way of thinking about Linux system deployment and management. Building on container and versioning concepts, they offer robust and modern solutions to meet the current needs of administrators and developers. ↫ Quentin Joly

Slowly, very slowly, I've been starting to warm up to the relatively new crop of immutable Linux distributions. As a heavy Fedora user, opting for Fedora's atomic distributions, which use bootc and OSTree, seems like the logical path to go down if I ever make the switch, and this article provides some approachable insights into, and examples of, how exactly it all works, and what benefits it might give you. It definitely goes beyond what I as a mere desktop user might encounter, but if you're managing a bunch of servers or VMs in a more professional setting, you might be interested, too. I'm still not convinced I need to switch to an immutable distribution, but I'd be lying if I said some of the benefits didn't appeal to me.

28 Feb 2026 8:54pm GMT

feedArs Technica

Trump moves to ban Anthropic from the US government

The Defense Department pressured Anthropic to drop restrictions on how its AI can be used by the military.

28 Feb 2026 8:00pm GMT

In puzzling outbreak, officials look to cold beer, gross ice, and ChatGPT

An AI chatbot convinced health investigators they had the right answer.

28 Feb 2026 6:17pm GMT

Google quantum-proofs HTTPS by squeezing 15kB of data into 700-byte space

Merkle Tree Certificate support is already in Chrome. Soon, it will be everywhere.

28 Feb 2026 1:26am GMT

30 Jan 2026

feedPlanet Arch Linux

How to review an AUR package

On Friday, July 18th, 2025, the Arch Linux team was notified that three AUR packages had been uploaded that contained malware. A few maintainers, including myself, took care of deleting these packages, removing all traces of the malicious code, and protecting against future malicious uploads.

30 Jan 2026 12:00am GMT

19 Jan 2026

feedPlanet Arch Linux

Personal infrastructure setup 2026

While starting this post I realized I have been maintaining personal infrastructure for over a decade! Most of the things I've self-hosted have been for personal use. An email server, a blog, an IRC server, image hosting, an RSS reader and so on. All of these things have been a bit all over the place and never properly streamlined. Some have been in containers, some have just been flat files with an nginx service in front, and some have been a randomly installed Debian package from somewhere I've since forgotten.

19 Jan 2026 12:00am GMT

11 Jan 2026

feedPlanet Arch Linux

Verify Arch Linux artifacts using VOA/OpenPGP

In the recent blog post on the work funded by Sovereign Tech Fund (STF), we provided an overview of the "File Hierarchy for the Verification of OS Artifacts" (VOA) and the voa project as its reference implementation. VOA is a generic framework for verifying any kind of distribution artifacts (i.e. files) using arbitrary signature verification technologies. The voa CLI ⌨️ The voa project offers the voa(1) command line interface (CLI) which makes use of the voa(5) configuration file format for technology backends. It is recommended to read the respective man pages to get …

11 Jan 2026 12:00am GMT