15 Dec 2025
Slashdot
Are Warnings of Superintelligence 'Inevitability' Masking a Grab for Power?
Superintelligence has become "a quasi-political forecast" with "very little to do with any scientific consensus, emerging instead from particular corridors of power." That's the warning from James O'Sullivan, a lecturer in digital humanities at University College Cork. In a refreshing 5,600-word essay in Noema magazine, he notes the suspicious coincidence that "The loudest prophets of superintelligence are those building the very systems they warn against..." "When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future." (For example, OpenAI CEO Sam Altman "seems determined to position OpenAI as humanity's champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.") "The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control... Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent..." Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that's where the future is being built and governments defer regulation because they don't want to handicap their domestic champions... We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures.
The fundamental question isn't whether AGI is coming, but who benefits from making us believe it is... We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory. Some key points: "The machines are coming for us, or so we're told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival..." "When we debate whether a future artificial general intelligence might eliminate humanity, we're not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk..." "Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is... " "Superintelligence discourse isn't spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power..." "Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability... reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction..." "The prophecy becomes self-fulfilling through material concentration - as resources flow towards AGI development, alternative approaches to AI starve..." The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods. 
[He lists data sovereignty movements "that treat data as a collective resource subject to collective consent," as well as organizations like Canada's First Nations Information Governance Centre and New Zealand's Te Mana Raraunga, plus "Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints."] "Such examples... demonstrate how AI can be organized without defaulting to the superintelligence paradigm that demands everyone else be sacrificed because a few tech bros can see the greater good that everyone else has missed..." "These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems..." He's ultimately warning us about "politics masked as predictions..." "The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence because the future of AI is a political field - it should be open to contestation." "It belongs not to those who warn most loudly of gods or monsters, but to publics that should have the moral right to democratically govern the technologies that shape their lives."
Read more of this story at Slashdot.
15 Dec 2025 8:34am GMT
SpaceX Alleges a Chinese-Deployed Satellite Risked Colliding with Starlink
"A SpaceX executive says a satellite deployed from a Chinese rocket risked colliding with a Starlink satellite," reports PC Magazine: On Friday, company VP for Starlink engineering, Michael Nicolls, tweeted about the incident and blamed a lack of coordination from the Chinese launch provider CAS Space. "When satellite operators do not share ephemeris for their satellites, dangerously close approaches can occur in space," he wrote, referring to the publication of predicted orbital positions for such satellites... [I]t looks like one of the satellites veered relatively close to a Starlink sat that's been in service for over two years. "As far as we know, no coordination or deconfliction with existing satellites operating in space was performed, resulting in a 200 meter (656 feet) close approach between one of the deployed satellites and STARLINK-6079 (56120) at 560 km altitude," Nicolls wrote... "Most of the risk of operating in space comes from the lack of coordination between satellite operators - this needs to change," he added. Chinese launch provider CAS Space told PCMag that "As a launch service provider, our responsibility ends once the satellites are deployed, meaning we do not have control over the satellites' maneuvers." And the article also cites astronomer/satellite tracking expert Jonathan McDowell, who had tweeted that CAS Space's response "seems reasonable." (In an email to PC Magazine, he'd said "Two days after launch is beyond the window usually used for predicting launch related risks.") But "The coordination that Nicolls cited is becoming more and more important," notes Space.com, since "Earth orbit is getting more and more crowded." In 2020, for example, fewer than 3,400 functional satellites were whizzing around our planet. Just five years later, that number has soared to about 13,000, and more spacecraft are going up all the time. Most of them belong to SpaceX.
The company currently operates nearly 9,300 Starlink satellites, more than 3,000 of which have launched this year alone. Starlink satellites avoid potential collisions autonomously, maneuvering themselves away from conjunctions predicted by available tracking data. And this sort of evasive action is quite common: Starlink spacecraft performed about 145,000 avoidance maneuvers in the first six months of 2025, which works out to around four maneuvers per satellite per month. That's an impressive record. But many other spacecraft aren't quite so capable, and even Starlink satellites can be blindsided by spacecraft whose operators don't share their trajectory data, as Nicolls noted. And even a single collision - between two satellites, or involving pieces of space junk, which are plentiful in Earth orbit as well - could spawn a huge cloud of debris, which could cause further collisions. Indeed, the nightmare scenario, known as the Kessler syndrome, is a debris cascade that makes it difficult or impossible to operate satellites in parts of the final frontier.
Read more of this story at Slashdot.
15 Dec 2025 5:24am GMT
Roomba Maker 'iRobot' Files for Bankruptcy After 35 Years
Roomba manufacturer iRobot filed for bankruptcy today, reports Bloomberg. After 35 years, iRobot reached a "restructuring support agreement that will hand control of the consumer robot maker to Shenzhen PICEA Robotics Co, its main supplier and lender, and Santrum Hong Kong Company." Under the restructuring, vacuum cleaner maker Shenzhen PICEA will receive the entire equity stake in the reorganised company... The plan will allow the debtor to remain as a going concern and continue to meet its commitments to employees and make timely payments in full to vendors and other creditors for amounts owed throughout the court-supervised process, according to an iRobot statement... The company warned of potential bankruptcy in December after years of declining earnings. iRobot says it's sold over 50 million robots, the article points out, but earnings "began to decline in 2021 due to supply chain headwinds and increased competition." "A hoped-for acquisition by Amazon.com in 2023 collapsed over regulatory concerns."
Read more of this story at Slashdot.
15 Dec 2025 3:24am GMT
13 Dec 2025
Ars Technica
Sharks and rays gain landmark protections as nations move to curb international trade
Gov'ts agree to ban or restrict international trade in shark meat, fins, and other products.
13 Dec 2025 12:00pm GMT
12 Dec 2025
OSnews
Haiku gets new Go port
There's a new Haiku monthly activity report, and this one's a true doozy. Let's start with the biggest news. The most notable development in November was the introduction of a port of the Go programming language, version 1.18. This is still a few years old (from 2022; the current is Go 1.25), but it's far newer than the previous Go port to Haiku (1.4 from 2014); and unlike the previous port which was never in the package repositories, this one is now already available there (for x86_64 at least) and can be installed via pkgman. ↫ Haiku activity report As the project notes, they're still a few versions behind, but at least it's a much more modern implementation than they had before. Now that it's in the repositories for Haiku, it might also attract more people to work on the port, potentially bringing even newer versions to the BeOS-inspired operating system. Welcome as it may be, this new Go port isn't the only big ticket item this month. Haiku can now gracefully recover from an app_server crash, something it used to be able to do, but which had been broken for a long time. The app_server is Haiku's display server and window manager, so the ability to restart it at runtime after a crash, and have it reconnect with still-running applications, is incredibly welcome. As far as I can tell, all modern operating systems can do this by now, so it's great to have this functionality restored in Haiku. Of course, aside from these two big improvements, there's the usual load of fixes and changes in applications, drivers, and other components of the operating system.
12 Dec 2025 11:51pm GMT
Rethinking sudo with object capabilities
Alpine Linux maintainer Ariadne Conill has published a very interesting blog post about the shortcomings of both sudo and doas, and offers a potential different way of achieving the same goals as those tools. Systems built around identity-based access control tend to rely on ambient authority: policy is centralized and errors in the policy configuration or bugs in the policy engine can allow attackers to make full use of that ambient authority. In the case of a SUID binary like doas or sudo, that means an attacker can obtain root access in the event of a bug or misconfiguration. What if there was a better way? Instead of thinking about privilege escalation as becoming root for a moment, what if it meant being handed a narrowly scoped capability, one with just enough authority to perform a specific action and nothing more? Enter the object-capability model. ↫ Ariadne Conill To bring this approach to life, they created a tool called capsudo. Instead of temporarily changing your identity, capsudo can grant far more fine-grained capabilities that match the exact task you're trying to accomplish. As an example, Conill details mounting and unmounting - with capsudo, you can not only grant a user the ability to mount and unmount any device, but also restrict the user to mounting or unmounting just one specific device. Another example given is how capsudo can be used to grant a service account access to only those resources it needs to perform its tasks. Of course, Conill explains all of this way better than I ever could, with actual example commands and more details. Conill happens to be the same person who created Wayback, illustrating that they have a tendency to look at problems in a unique and interesting way. I'm not smart enough to determine if this approach makes sense compared to sudo or doas, but the way it's described, it does feel like a superior, more secure solution.
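If the object-capability idea is new to you, here's a minimal toy sketch in Python of the general pattern Conill describes: instead of momentarily becoming root (ambient authority), the caller is handed an object whose methods permit exactly one action on exactly one resource. To be clear, the names here (Mounter, grant_mount) are hypothetical illustrations, not capsudo's actual interface, and no real mounting happens.

```python
class Mounter:
    """A capability scoped to a single block device.

    Holding this object is the authority: its only method acts on the
    one device it was created for, and nothing else.
    """

    def __init__(self, device):
        self._device = device

    def mount(self, target):
        # A real implementation would perform the privileged mount here;
        # this sketch just reports what it would do.
        return f"mounted {self._device} at {target}"


def grant_mount(policy, user, device):
    """Hand out a capability only if policy allows this exact pairing."""
    if device in policy.get(user, ()):
        return Mounter(device)
    raise PermissionError(f"{user} may not mount {device}")


# alice may mount one specific device - and receives authority for
# that device only, not root, not "all mounts".
policy = {"alice": {"/dev/sdb1"}}
cap = grant_mount(policy, "alice", "/dev/sdb1")
print(cap.mount("/mnt/usb"))
```

The key contrast with sudo-style tools: a bug in code that holds `cap` can at worst mount or unmount `/dev/sdb1`, whereas a bug in code running as root can do anything.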
12 Dec 2025 11:35pm GMT
One too many words on AT&T’s $2000 Korn shell and other Usenet topics
Unix has been enormously successful over the past 55 years. It started out as a small experiment to develop a time-sharing system (i.e., a multi-user operating system) at AT&T Bell Labs. The goal was to take a few core principles to their logical conclusion. The OS bundled many small tools that were easy to combine, as it was illustrated by a famous exchange between Donald Knuth and Douglas McIlroy in 1986. Today, Unix lives on mostly as a spiritual predecessor to Linux, Net/Free/OpenBSD, macOS, and arguably, ChromeOS and Android. Usenet tells us about the height of its early popularity. ↫ Gábor Nyéki There are so many amazing stories in this article, I honestly have no idea what to highlight. So first and foremost, I want you to read the whole thing yourself, as everyone's bound to have their own personal favourite section that resonates the most. My personal favourite story from the article - which is just an aside, to illustrate that even the asides are great - is that when Australia joined Usenet in 1983, new posts to Usenet were delivered to the country by airmail. On magnetic tape. Once per week. The overarching theme here is that the early days of UNIX, as documented on Usenet, were a fascinating wild west of implementations, hacks, and personalities, which, yes, clashed with each other, but also spread untold amounts of information, knowledge, and experience to every corner of the world. I hope Nyéki will write more of these articles.
12 Dec 2025 10:27pm GMT
Ars Technica
OpenAI built an AI coding agent and uses it to improve the agent itself
"The vast majority of Codex is built by Codex," OpenAI told us about its new AI coding agent.
12 Dec 2025 10:16pm GMT
Reminder: Donate to win swag in our annual Charity Drive sweepstakes
Help raise a charity haul that's already past $11,000 in just a couple of days.
12 Dec 2025 9:35pm GMT
11 Dec 2025
Planet Arch Linux
.NET packages may require manual intervention
The following packages may require manual intervention due to the upgrade from 9.0 to 10.0:
- aspnet-runtime
- aspnet-targeting-pack
- dotnet-runtime
- dotnet-sdk
- dotnet-source-built-artifacts
- dotnet-targeting-pack
pacman may display the following error for the affected packages:

failed to prepare transaction (could not satisfy dependencies)

If you are affected by this and require the 9.0 packages, the following commands will update e.g. aspnet-runtime to aspnet-runtime-9.0:

pacman -Syu aspnet-runtime-9.0
pacman -Rs aspnet-runtime
11 Dec 2025 12:00am GMT
24 Nov 2025
Planet Arch Linux
Misunderstanding that “Dependency” comic
Over the course of 2025, every single major cloud provider has failed. In June, Google Cloud had issues taking down Cloud Storage for many users. In late October, Amazon Web Services had a massive outage in their main hub, us-east-1, affecting many services as well as some people's beds. A little over a week later Microsoft Azure had a widespread outage that managed to significantly disrupt train service in the Netherlands, and probably also things that matter. Now last week, Cloudflare takes down large swaths of the internet in a way that causes non-tech people to learn Cloudflare exists. And every single time, people share that one XKCD comic.
24 Nov 2025 12:00am GMT
18 Nov 2025
Planet Arch Linux
Self-hosting DNS for no fun, but a little profit!
After Gandi was bought up and started charging extortion-level prices for their domains, I've been looking for an excuse to migrate registrars. Last week I decided to bite the bullet and move to Porkbun as I have another domain renewal coming up. However, after setting up an account and paying for the transfer of 4 domains, I realized their DNS services are provided by Cloudflare! I personally do not use Cloudflare, and stay far away from all of their products for various reasons.
18 Nov 2025 12:00am GMT