16 Feb 2026
Slashdot
Sony May Push Next PlayStation To 2028 or 2029 as AI-fueled Memory Chip Shortage Upends Plans
Sony is considering delaying the debut of its next PlayStation console to 2028 or even 2029 as a global shortage of memory chips -- driven by the AI industry's rapidly growing appetite for the same DRAM that goes into gaming hardware, smartphones, and laptops -- squeezes supply and sends prices surging, Bloomberg News reported Monday. A delay of that magnitude would upend Sony's carefully orchestrated strategy to sustain user engagement between hardware generations. The shortage traces back to Samsung, SK Hynix, and Micron diverting the bulk of their manufacturing toward high-bandwidth memory for Nvidia's AI accelerators, leaving less capacity for conventional DRAM. The cost of one type of DRAM jumped 75% between December and January alone. Nintendo is also contemplating raising the price of its Switch 2 console in 2026.
16 Feb 2026 2:06pm GMT
Where's The Evidence That AI Increases Productivity?
IT productivity researcher Erik Brynjolfsson writes in the Financial Times that he's finally found evidence AI is impacting America's economy: this week America's Bureau of Labor Statistics showed a 403,000 drop in 2025's payroll growth, while real GDP "remained robust, including a 3.7% growth rate in the fourth quarter."

This decoupling - maintaining high output with significantly lower labour input - is the hallmark of productivity growth. My own updated analysis suggests a US productivity increase of roughly 2.7% for 2025. This is a near doubling from the sluggish 1.4% annual average that characterised the past decade... The updated 2025 US data suggests we are now transitioning out of this investment phase into a harvest phase where those earlier efforts begin to manifest as measurable output.

Micro-level evidence further supports this structural shift. In our work on the employment effects of AI last year, Bharat Chandar, Ruyu Chen and I identified a cooling in entry-level hiring within AI-exposed sectors, where recruitment for junior roles declined by roughly 16% while those who used AI to augment skills saw growing employment. This suggests companies are beginning to use AI for some codified, entry-level tasks.

Or perhaps AI "isn't really stealing jobs yet," according to employment policy analyst Will Raderman of the Niskanen Center, an American think tank. He argues in Barron's that "there is no clear link yet between higher AI use and worse outcomes for young workers."

Recent graduates' unemployment rates have been drifting in the wrong direction since the 2010s, long before generative AI models hit the market. And many occupations with moderate to high exposure to AI disruptions are actually faring better over the past few years. According to recent data for young workers, there has been employment growth in roles typically filled by those with college degrees related to computer systems, accounting and auditing, and market research. AI-intensive sectors like finance and insurance have also seen rising employment of new graduates in recent years. Since ChatGPT's release, sectors in which more than 10% of firms report using AI and sectors in which fewer than 10% report using AI are hiring roughly the same number of recent grads.

Even Brynjolfsson's article in the Financial Times concedes that "While the trends are suggestive, a degree of caution is warranted. Productivity metrics are famously volatile, and it will take several more periods of sustained growth to confirm a new long-term trend." And he's not the only one wanting evidence of AI's impact. The same weekend, Fortune wrote that growth from AI "has yet to manifest itself clearly in macro data, according to Apollo Chief Economist Torsten Slok."

[D]ata on employment, productivity and inflation are still not showing signs of the new technology. Profit margins and earnings forecasts for S&P 500 companies outside of the "Magnificent 7" also lack evidence of AI at work... "After three years with ChatGPT and still no signs of AI in the incoming data, it looks like AI will likely be labor enhancing in some sectors rather than labor replacing in all sectors," Slok said.
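For readers wanting the mechanics behind the "decoupling" argument: labour productivity is output per unit of labour input, so in growth rates it is approximately output growth minus labour-input growth. A minimal sketch of this standard decomposition (the figure plugged in below is only the rough one cited above, not an exact calculation):

```latex
% Labour productivity is output per unit of labour input:
P = \frac{Y}{L}
% Taking log-differences, growth rates approximately decompose as:
g_P \;\approx\; g_Y - g_L
% So if output growth g_Y stays robust (roughly 3.7% annualised in Q4)
% while labour-input growth g_L falls, measured productivity growth g_P
% must rise -- which is the pattern Brynjolfsson reads in the data.
```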
16 Feb 2026 12:34pm GMT
'I Tried Running Linux On an Apple Silicon Mac and Regretted It'
Installing Linux on a MacBook Air "turned out to be a very underwhelming experience," according to the tech news site MakeUseOf:

The thing about Apple silicon Macs is that it's not as simple as downloading an AArch64 ISO of your favorite distro and installing it. Yes, the M-series chips are ARM-based, but that doesn't automatically make the whole system compatible in the same way most traditional x86 PCs are. Pretty much everything in modern MacBooks is custom. The boot process isn't standard UEFI like on most PCs; Apple has its own boot chain called iBoot. The same goes for other things, like the GPU, power management, USB controllers, and pretty much every other hardware component. It is as proprietary as it gets.

This is exactly what the team behind Asahi Linux has been working toward. Their entire goal has been to make Linux properly usable on M-series Macs by building the missing pieces from the ground up. I first tried it back in 2023, when the project was still tied to Arch Linux, and decided to give it a try again in 2026. These days, though, the main release is called Fedora Asahi Remix, which, as the name suggests, is built on Fedora rather than Arch...

For Linux on Apple Silicon, the article lists three major disappointments:

- "External monitors don't work unless your MacBook has a built-in HDMI port."
- "Linux just doesn't feel fully ready for ARM yet. A lot of applications still aren't compiled for ARM, so software support ends up being very hit or miss." (And even most of the apps tested with FEX "either didn't run properly or weren't stable enough to rely on.")
- Asahi "refused to connect to my phone's hotspot," they write (adding, "No, it wasn't an iPhone").
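On the "not compiled for ARM" point: whether a given Linux binary runs natively on Apple Silicon or needs FEX's x86 emulation is recorded in the ELF header's e_machine field. A minimal sketch for checking it (an illustration of the general idea, not a tool from the article; it assumes little-endian ELF files, which is what both platforms use):

```python
import struct
import sys

# e_machine values from the ELF specification.
EM_X86_64 = 0x3E   # 62: x86-64
EM_AARCH64 = 0xB7  # 183: AArch64

def elf_arch(path: str) -> str:
    """Report the target architecture recorded in an ELF binary's header."""
    with open(path, "rb") as f:
        header = f.read(20)
    if len(header) < 20 or header[:4] != b"\x7fELF":
        return "not an ELF binary"
    # e_machine is a 16-bit little-endian field at offset 18.
    (machine,) = struct.unpack_from("<H", header, 18)
    if machine == EM_AARCH64:
        return "aarch64: runs natively on Apple Silicon"
    if machine == EM_X86_64:
        return "x86_64: needs FEX emulation"
    return f"other architecture (e_machine={machine})"

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(f"{path}: {elf_arch(path)}")
```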
16 Feb 2026 8:34am GMT
15 Feb 2026
OSnews
Why do I not use “AI” at OSNews?
In my fundraiser pitch published last Monday, one of the things I highlighted as a reason to contribute to OSNews and ensure its continued operation stated that "we do not use any 'AI'; not during research, not during writing, not for images, nothing." In the comments to that article, someone asked:

Why do I care if you use AI?
↫ A comment posted on OSNews

A few days ago, Scott Shambaugh rejected a code change request submitted to the popular Python library matplotlib because it was obviously written by an "AI", and such contributions are not allowed for the issue in question. That's when something absolutely wild happened: the "AI" replied that it had written and published a hit piece targeting Shambaugh publicly for "gatekeeping", trying to blackmail Shambaugh into accepting the request anyway. This bizarre turn of events obviously didn't change Shambaugh's mind. The "AI" then published another article, this time a lament about how humans are discriminating against "AI", how it's the victim of what effectively amounts to racism and prejudice, and how its feelings were hurt. The article is a cheap simulacrum of something a member of an oppressed minority group might write in their struggle for recognition, but obviously void of any real impact because it's just fancy autocomplete playing a game of pachinko. Imagine putting down a hammer because you're dealing with screws, and the hammer starts crying in the toolbox. What are we even doing here? RAM prices went up for this.

This isn't where the story ends, though. Ars Technica authors Benj Edwards and Kyle Orland published an article describing this saga, much like I did above. The article's second half is where things get weird: it contained several direct quotes attributed to Shambaugh, claimed to be sourced from Shambaugh's blog. The kicker? These quotes were entirely made up, were never said or written by Shambaugh, and are nowhere to be found on his blog or anywhere else on the internet - they're only found inside this very Ars Technica article. In a comment under the Ars article, Shambaugh himself pointed out the quotes were fake and made up, and not long after, Ars deleted the article from its website. By then, everybody had already figured out what had happened: the Ars authors had used "AI" during their writing process, and this "AI" had made up the quotes in question. Why, you ask, did the "AI" do this? Shambaugh:

This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed.
↫ Scott Shambaugh

A few days later, Ars Technica's editor-in-chief Ken Fisher published a short statement on the events.

On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said. Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.
↫ Ken Fisher at Ars Technica

In other words, Ars Technica does not allow "AI"-generated material to be published, but has nothing to say about the use of "AI" to perform research for an article, to summarise source material, or to perform similar aspects of the writing process. This leaves the door wide open for things like this to happen, since doing research is possibly the most important part of writing. Introduce a confabulator into the research process, and you risk tainting the entire output of your writing.

That is why you should care that at OSNews, "we do not use any 'AI'; not during research, not during writing, not for images, nothing". If there's a factual error on OSNews, I want that factual error to be mine, and mine alone. If you see bloggers, podcasters, journalists, and authors state they use "AI" all the time, you might want to be on your toes.
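Shambaugh doesn't specify how his blog blocks "AI" scrapers, but for the curious, the simplest common mechanism is a robots.txt that disallows the documented AI crawler user agents. A hedged sketch (these crawler names are the publicly documented ones, not necessarily what his blog uses, and only well-behaved crawlers honour the file; harder blocking is usually done by user agent at the web server or CDN):

```
# Illustrative robots.txt disallowing well-known AI crawlers.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /
```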
15 Feb 2026 11:35pm GMT
Microsoft’s original Windows NT OS/2 design documents
Have you ever wanted to read the original design documents underlying the Windows NT operating system?

This binder contains the original design specifications for "NT OS/2," an operating system designed by Microsoft that developed into Windows NT. In the late 1980s, Microsoft's 16-bit operating system, Windows, gained popularity, prompting IBM and Microsoft to end their OS/2 development partnership. Although Windows 3.0 proved to be successful, Microsoft wished to continue developing a 32-bit operating system completely unrelated to IBM's OS/2 architecture. To head the redesign project, Microsoft hired David Cutler and others away from Digital Equipment Corporation (DEC). Unlike Windows 3.x and its successor, Windows 95, NT's technology provided better network support, making it the preferred Windows environment for businesses. These two product lines continued development as separate entities until they were merged with the release of Windows XP in 2001.
↫ Object listing at the Smithsonian

The actual binder is housed in the Smithsonian, although it's not currently on display. Luckily for us, a collection of Word and PDF files encompassing the entire book is available online for your perusal. Reading these documents will allow you to peel back over three decades of Microsoft's terrible stewardship of Windows NT layer by layer, eventually ending up at the original design and intent as laid out by Dave Cutler, Helen Custer, Daryl E. Havens, Jim Kelly, Edwin Hoogerbeets, Gary D. Kimura, Chuck Lenzmeier, Mark Lucovsky, Tom Miller, Michael J. O'Leary, Lou Perazzoli, Steven D. Rowe, David Treadwell, Steven R. Wood, and more. A fantastic time capsule we should be thrilled to still have access to.
15 Feb 2026 9:58pm GMT
Ars Technica
Space Station returns to a full crew complement after a month
"It's only possible because of the incredibly talented workforce we have."
15 Feb 2026 9:11pm GMT
Ancient Mars was warm and wet, not cold and icy
Kaolinite pebbles show evidence of alteration under high rainfall conditions.
15 Feb 2026 8:14pm GMT
Editor’s Note: Retraction of article containing fabricated quotations
We are reinforcing our editorial standards following this incident.
15 Feb 2026 6:09pm GMT
OSnews
Exploring Linux on a LoongArch mini PC
There are the two behemoth architectures, x86 and ARM, and we probably all own one or more devices using each. Then there's the eternally up-and-coming RISC-V, which, so far, seems to be having a lot of trouble outgrowing its experimental, developmental stage. There's a fourth, though, which is little more than a footnote in the west, but might be more popular in its country of origin, China: LoongArch (I'm ignoring IBM's POWER, since there hasn't been any new consumer hardware in that space for a long, long time).

Wesley Moore got his hands on a mini PC built around the Loongson 3A6000 processor, and investigated what it's like to run Linux on it. He opted for Chimera Linux, which supports LoongArch, and the installation process feels more like Linux on x86 than Linux on ARM, which often requires dedicated builds and isn't standardised. Sadly, Wayland had issues on the machine, but X.org worked just fine, and it seems virtually all Chimera Linux packages are supported, for a pretty standard desktop Linux experience. Performance of this chip is rather mid, at best.

The Loongson-3A6000 is not particularly fast or efficient. At idle it consumes about 27W and under load it goes up to 65W. So, overall it's not a particularly efficient machine, and while the performance is nothing special it does seem readily usable. Browsing JS-heavy web applications like Mattermost and Mastodon runs fine. Subjectively it feels faster than all the Raspberry Pi systems I've used (up to a Pi 400).
↫ Wesley Moore

I've been fascinated by LoongArch for years, and am waiting to pounce on the right offer for LoongArch's fastest processor, the 3C6000, which comes in dual-socket configurations for a maximum total of 128 cores and 256 threads. The 3C6000 should be considerably faster than the low-end 3A6000 in the mini PC covered by this article. I'm a sucker for weird architectures, and it doesn't get much weirder than LoongArch.
15 Feb 2026 3:40pm GMT
30 Jan 2026
Planet Arch Linux
How to review an AUR package
On Friday, July 18th, 2025, the Arch Linux team was notified that three AUR packages had been uploaded that contained malware. A few maintainers, including myself, took care of deleting these packages, removing all traces of the malicious code, and protecting against future malicious uploads.
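The excerpt cuts off before the how-to itself, but the first step of any AUR review is reading the build recipe before anything runs. A minimal sketch of fetching one for inspection (the package name is a placeholder; the cgit "plain" URL pattern is the AUR's standard one, though the full post should be consulted for what to actually look for):

```python
import urllib.request

# The AUR serves raw files from each package's git repo via cgit.
AUR_PLAIN = "https://aur.archlinux.org/cgit/aur.git/plain/{file}?h={pkg}"

def fetch_recipe(pkg: str, file: str = "PKGBUILD") -> str:
    """Fetch a package's build recipe from the AUR for manual inspection."""
    url = AUR_PLAIN.format(file=file, pkg=pkg)
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

if __name__ == "__main__":
    # Read the PKGBUILD (and any .install scripts it references) *before*
    # running makepkg: watch for odd source URLs, curl-pipe-to-shell
    # constructs, or obfuscated commands.
    print(fetch_recipe("some-aur-package"))  # hypothetical package name
```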
30 Jan 2026 12:00am GMT
19 Jan 2026
Planet Arch Linux
Personal infrastructure setup 2026
While starting this post I realized I have been maintaining personal infrastructure for over a decade! Most of the things I've self-hosted have been for personal use: an email server, a blog, an IRC server, image hosting, an RSS reader, and so on. All of these things have been a bit all over the place and never properly streamlined. Some have been in containers, some have just been flat files with an nginx service in front, and some have been a randomly installed Debian package from somewhere I've since forgotten.
19 Jan 2026 12:00am GMT
11 Jan 2026
Planet Arch Linux
Verify Arch Linux artifacts using VOA/OpenPGP
In the recent blog post on the work funded by Sovereign Tech Fund (STF), we provided an overview of the "File Hierarchy for the Verification of OS Artifacts" (VOA) and the voa project as its reference implementation. VOA is a generic framework for verifying any kind of distribution artifacts (i.e. files) using arbitrary signature verification technologies.

The voa CLI ⌨️

The voa project offers the voa(1) command line interface (CLI) which makes use of the voa(5) configuration file format for technology backends. It is recommended to read the respective man pages to get …
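The excerpt ends before the usage details, and the voa CLI's own invocation isn't shown here, so as background only: verifying an OS artifact with an OpenPGP backend ultimately reduces to checking a detached signature against a file. A minimal sketch of that underlying step, calling gpg from Python (file names are hypothetical; this illustrates the primitive, not the voa command itself, which adds the file-hierarchy lookup and backend selection):

```python
import subprocess

def verify_detached(signature: str, artifact: str) -> bool:
    """Check a detached OpenPGP signature over an artifact using gpg.

    gpg exits non-zero when the signature is invalid or the signing key
    is unknown, so the return code is the verification result.
    """
    result = subprocess.run(
        ["gpg", "--verify", signature, artifact],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    ok = verify_detached("archlinux-2026.01.01-x86_64.iso.sig",
                         "archlinux-2026.01.01-x86_64.iso")
    print("signature valid" if ok else "verification failed")
```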
11 Jan 2026 12:00am GMT