15 Dec 2025

feedLinuxiac

Fresh Launches as a New Terminal-First Text Editor

Fresh is a newly released open-source terminal text editor combining modern UI elements, mouse support, plugins, and strong performance.

15 Dec 2025 12:43pm GMT

feedHacker News

Optery (YC W22) Hiring CISO, Release Manager, Tech Lead (Node), Full Stack Eng

Comments

15 Dec 2025 12:00pm GMT

Largest U.S. Recycling Project to Extend Landfill Life for Virginia Residents

Comments

15 Dec 2025 11:58am GMT

feedLinuxiac

SparkyLinux 2025.12 Brings Kernel 6.17 and Updated Desktop ISOs

SparkyLinux 2025.12 updates the semi-rolling release with Debian Testing packages, Linux kernel 6.17, and refreshed desktop ISOs.

15 Dec 2025 11:37am GMT

feedHacker News

Avoid UUIDv4 Primary Keys

Comments

15 Dec 2025 10:08am GMT

feedSlashdot

Are Warnings of Superintelligence 'Inevitability' Masking a Grab for Power?

Superintelligence has become "a quasi-political forecast" with "very little to do with any scientific consensus, emerging instead from particular corridors of power." That's the warning from James O'Sullivan, a lecturer in digital humanities at University College Cork. In a refreshing 5,600-word essay in Noema magazine, he notes the suspicious coincidence that "the loudest prophets of superintelligence are those building the very systems they warn against," and that "when we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future." (For example, OpenAI CEO Sam Altman "seems determined to position OpenAI as humanity's champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.")

"The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control... Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent... Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that's where the future is being built and governments defer regulation because they don't want to handicap their domestic champions... We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures. The fundamental question isn't whether AGI is coming, but who benefits from making us believe it is... We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory."

Some key points:

- "The machines are coming for us, or so we're told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival..."
- "When we debate whether a future artificial general intelligence might eliminate humanity, we're not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk..."
- "Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is..."
- "Superintelligence discourse isn't spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power..."
- "Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability... reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction..."
- "The prophecy becomes self-fulfilling through material concentration - as resources flow towards AGI development, alternative approaches to AI starve..."

The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods. [He lists data sovereignty movements "that treat data as a collective resource subject to collective consent," as well as organizations like Canada's First Nations Information Governance Centre and New Zealand's Te Mana Raraunga, plus "Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints."]

"Such examples... demonstrate how AI can be organized without defaulting to the superintelligence paradigm that demands everyone else be sacrificed because a few tech bros can see the greater good that everyone else has missed..." "These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems..."

He's ultimately warning us about "politics masked as predictions." "The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence because the future of AI is a political field - it should be open to contestation. It belongs not to those who warn most loudly of gods or monsters, but to publics that should have the moral right to democratically govern the technologies that shape their lives."

Read more of this story at Slashdot.

15 Dec 2025 8:34am GMT

SpaceX Alleges a Chinese-Deployed Satellite Risked Colliding with Starlink

"A SpaceX executive says a satellite deployed from a Chinese rocket risked colliding with a Starlink satellite," reports PC Magazine: On Friday, company VP for Starlink engineering, Michael Nicolls, tweeted about the incident and blamed a lack of coordination from the Chinese launch provider CAS Space. "When satellite operators do not share ephemeris for their satellites, dangerously close approaches can occur in space," he wrote, referring to the publication of predicted orbital positions for such satellites... [I]t looks like one of the satellites veered relatively close to a Starlink sat that's been in service for over two years. "As far as we know, no coordination or deconfliction with existing satellites operating in space was performed, resulting in a 200 meter (656 feet) close approach between one of the deployed satellites and STARLINK-6079 (56120) at 560 km altitude," Nicolls wrote... "Most of the risk of operating in space comes from the lack of coordination between satellite operators - this needs to change," he added. Chinese launch provider CAS Space told PCMag that "As a launch service provider, our responsibility ends once the satellites are deployed, meaning we do not have control over the satellites' maneuvers." And the article also cites astronomer/satellite tracking expert Jonathan McDowell, who had tweeted that CAS Space's response "seems reasonable." (In an email to PC Magazine, he'd said "Two days after launch is beyond the window usually used for predicting launch related risks." But "The coordination that Nicolls cited is becoming more and more important," notes Space.com, since "Earth orbit is getting more and more crowded." In 2020, for example, fewer than 3,400 functional satellites were whizzing around our planet. Just five years later, that number has soared to about 13,000, and more spacecraft are going up all the time. Most of them belong to SpaceX. The company currently operates nearly 9,300 Starlink satellites, more than 3,000 of which have launched this year alone. Starlink satellites avoid potential collisions autonomously, maneuvering themselves away from conjunctions predicted by available tracking data. And this sort of evasive action is quite common: Starlink spacecraft performed about 145,000 avoidance maneuvers in the first six months of 2025, which works out to around four maneuvers per satellite per month. That's an impressive record. But many other spacecraft aren't quite so capable, and even Starlink satellites can be blindsided by spacecraft whose operators don't share their trajectory data, as Nicolls noted. And even a single collision - between two satellites, or involving pieces of space junk, which are plentiful in Earth orbit as well - could spawn a huge cloud of debris, which could cause further collisions. Indeed, the nightmare scenario, known as the Kessler syndrome, is a debris cascade that makes it difficult or impossible to operate satellites in parts of the final frontier.

Read more of this story at Slashdot.

15 Dec 2025 5:24am GMT

Roomba Maker 'iRobot' Files for Bankruptcy After 35 Years

Roomba manufacturer iRobot filed for bankruptcy today, reports Bloomberg. After 35 years, iRobot reached a "restructuring support agreement that will hand control of the consumer robot maker to Shenzhen PICEA Robotics Co, its main supplier and lender, and Santrum Hong Kong Company."

Under the restructuring, vacuum cleaner maker Shenzhen PICEA will receive the entire equity stake in the reorganised company... The plan will allow the debtor to remain as a going concern and continue to meet its commitments to employees and make timely payments in full to vendors and other creditors for amounts owed throughout the court-supervised process, according to an iRobot statement... The company warned of potential bankruptcy in December after years of declining earnings.

iRobot says it's sold over 50 million robots, the article points out, but earnings "began to decline since 2021 due to supply chain headwinds and increased competition. A hoped-for acquisition by Amazon.com in 2023 collapsed over regulatory concerns."

Read more of this story at Slashdot.

15 Dec 2025 3:24am GMT

14 Dec 2025

feedLinuxiac

Linuxiac Weekly Wrap-Up: Week 50 (Dec 8 – 14, 2025)

Catch up on the latest Linux news: Pop!_OS 24.04 LTS launches with COSMIC 1.0, Kali Linux 2025.4, Manjaro 25.1 Preview, Cinnamon 6.6, Plasma 6.5.4, Firefox 146, GNOME to reject AI-generated Shell extensions, and more.

14 Dec 2025 11:15pm GMT

13 Dec 2025

feedArs Technica

Sharks and rays gain landmark protections as nations move to curb international trade

Gov'ts agree to ban or restrict international trade in shark meat, fins, and other products.

13 Dec 2025 12:00pm GMT

12 Dec 2025

feedArs Technica

OpenAI built an AI coding agent and uses it to improve the agent itself

"The vast majority of Codex is built by Codex," OpenAI told us about its new AI coding agent.

12 Dec 2025 10:16pm GMT

Reminder: Donate to win swag in our annual Charity Drive sweepstakes

Help raise a charity haul that's already past $11,000 in just a couple of days.

12 Dec 2025 9:35pm GMT