04 Apr 2026
Slashdot
The Document Foundation Removes Dozens of Collabora Developers
Long-time GNOME/OpenOffice.org/LibreOffice contributor Michael Meeks is now general manager of Collabora Productivity. Earlier this month he complained when LibreOffice decided to bring back its LibreOffice Online project, which had been inactive since 2022, as reported by Neowin. After the original project - to which Collabora was a major contributor - went dormant, Collabora forked the code and created its own product, Collabora Online. But this week Meeks blogged about even more changes, writing that the Document Foundation (the nonprofit behind LibreOffice) "has decided to eject from membership all Collabora staff and partners. That includes over thirty people who have contributed faithfully to LibreOffice for many years." Meeks argues the ejections were "based on unproven legal concerns and guilt by association." Among those removed are seven of the top ten core committers of all time (excluding release engineers), all currently working for Collabora Productivity. The move caps TDF's loss of a large number of founders from membership over the last few years: Thorsten Behrens, Jan 'Kendy' Holesovsky, Rene Engelhard, Caolan McNamara, Michael Meeks, Cor Nouws and Italo Vignoli are no longer members. Of the remaining active founders, three of the last four are paid TDF staff (none of whom work on the core code). The blog It's FOSS calls it "LibreOffice Drama," confirming the removals and noting that recently adopted Community Bylaws require members to step down if they're affiliated with a company in an active legal dispute with the Foundation.
But The Document Foundation "also makes clear that a membership revocation is not a ban from contributing, with the project remaining open to anyone, and expects Collabora to keep contributing 'when the time comes.'" Collabora's Meeks adds in his blog post that there are "bold and ongoing plans to create an entirely new, cut-down, differentiated Collabora Office for users that is smoother, more user friendly, and less feature dense than our Classic product (which will continue to be supported for years for our partners). This gives a chance to innovate faster in a separate place on a smaller, more focused code-base with fewer build configurations, much less legacy, no Java, no database, web-based toolkit and more. We are excited to get executing on that. To make this process easier, and to put to bed complaints about having our distro branches in TDF gerrit [for code review], and to move to self-hosted FOSS tooling we are launching our own gerrit to host our existing branch of core... We will continue to make contributions to LibreOffice where that makes sense (if we are welcome to), but it clearly no longer makes much sense to continue investing heavily in building what remains of TDF's community and product for them - while being excluded from its governance. In this regard, we seem to be back where we were fifteen years ago."
Read more of this story at Slashdot.
04 Apr 2026 4:34pm GMT
'Cognitive Surrender' Leads AI Users To Abandon Logical Thinking, Research Finds
An anonymous reader quotes a report from Ars Technica: When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine. Recent research goes a long way to forming a new psychological framework for that second group, which regularly engages in "cognitive surrender" to AI's seemingly authoritative answers. That research also provides some experimental examination of when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives can affect that decision. Overall, across 1,372 participants and over 9,500 individual trials, the researchers found subjects were willing to accept faulty AI reasoning a whopping 73.2 percent of the time, while only overruling it 19.7 percent of the time. The researchers say this "demonstrate[s] that people readily incorporate AI-generated outputs into their decision-making processes, often with minimal friction or skepticism." In general, "fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation," they write. These kinds of effects weren't uniform across all test subjects, though. Those who scored highly on separate measures of so-called fluid IQ were less likely to rely on the AI for help and were more likely to overrule a faulty AI when it was consulted. Those predisposed to see AI as authoritative in a survey, on the other hand, were much more likely to be led astray by faulty AI-provided answers. 
Despite the results, though, the researchers point out that "cognitive surrender is not inherently irrational." While relying on an LLM that's wrong half the time (as in these experiments) has obvious downsides, a "statistically superior system" could plausibly give better-than-human results in domains such as "probabilistic settings, risk assessment, or extensive data," the researchers suggest. "As reliance increases, performance tracks AI quality," the researchers write, "rising when accurate and falling when faulty, illustrating the promises of superintelligence and exposing a structural vulnerability of cognitive surrender." In other words, letting an AI do your reasoning means your reasoning is only ever going to be as good as that AI system. As always, let the prompter beware.
Read more of this story at Slashdot.
04 Apr 2026 2:00pm GMT
Colorado's New Speed Camera System Makes Waze Nearly Useless
Colorado is rolling out an average-speed camera system that tracks vehicles across multiple points instead of catching them at a single camera, making it much harder for drivers to dodge tickets with apps like Waze and Radarbot. Motor1 reports: The state's new automated vehicle identification systems (AVIS) use several cameras to calculate your average speed between them, and if it is 10 miles per hour or more over the limit, you get a ticket. No longer will you be able to slow down as you approach a camera and speed back up after passing it, not that you should be speeding on public roads in the first place. Colorado began deploying this new camera system after legislators changed the law in 2023, allowing AVIS for law enforcement use. The systems, installed on various roads and highways throughout the state, first began issuing warnings, but police began issuing tickets late last year. The most recent section of road to fall under surveillance is a stretch of I-25 north of Denver, which brought the state's growing panopticon to our attention. It began issuing tickets on April 2. The Colorado Department of Transportation installed the cameras along a construction zone. The fine is $75 with no license points, and it is issued to the vehicle's owner, regardless of who is driving.
Read more of this story at Slashdot.
04 Apr 2026 11:00am GMT
03 Apr 2026
Ars Technica
Trump proposes steep cut to NASA budget as astronauts head for the Moon
Congress will likely reject the White House's NASA cuts, just as it did last year.
03 Apr 2026 11:19pm GMT
Ice Age dice show early Native Americans may have understood probability
Ice Age hunter-gatherers "were intentionally relying on random outcomes in repeatable, rule-based ways."
03 Apr 2026 10:55pm GMT
As Artemis II zooms to the Moon, everything seems to be going swimmingly
The cabin was colder on Thursday, but the crew has been able to adjust the temperature.
03 Apr 2026 10:20pm GMT
OSnews
Big-endian testing with QEMU
I assume I don't have to explain the difference between big-endian and little-endian systems to the average OSNews reader, and while most systems are either dual-endian or (most likely) little-endian, it's still good practice to make sure your code works on both. If you don't have a big-endian system, though, how do you do that?

When programming, it is still important to write code that runs correctly on systems with either byte order (see for example The byte order fallacy). But without access to a big-endian machine, how does one test it? QEMU provides a convenient solution. With its user mode emulation we can easily run a binary on an emulated big-endian system, and we can use GCC to cross-compile to that system. ↫ Hans Wennborg

If you want to make sure your code isn't arbitrarily restricted to little-endian, running a few tests this way is worth it.
03 Apr 2026 8:05pm GMT
Planet Arch Linux
800 Rust terminal projects in 3 years
I have discovered and shared ~800 open source Rust CLI projects over the past 3 years.
03 Apr 2026 12:00am GMT
01 Apr 2026
OSnews
How to turn anything into a router
I don't like to cover "current events" very much, but the American government just revealed a truly bewildering policy effectively banning import of new consumer router models. This is ridiculous for many reasons, but if this does indeed come to pass it may be beneficial to learn how to "homebrew" a router. Fortunately, you can make a router out of basically anything resembling a computer. ↫ Noah Bailey

I genuinely can't believe making your own router with Linux or BSD might become a much more widespread thing in the US. I'm not saying it's a bad thing - it'll teach some people something new - but it just feels so absurd.
01 Apr 2026 7:43pm GMT
30 Mar 2026
OSnews
Microsoft Copilot is now injecting ads into pull requests on GitHub
Why do so many people keep falling for the same trick over and over again? With an over $400 billion gap between the money invested in AI data centers and the actual revenue these products generate, Silicon Valley slowly returned to the tested and trusted playbook: advertising. Now, ads are starting to appear in pull requests generated by Copilot. According to Melbourne-based software developer Zach Manson, a team member used the AI to fix a simple typo in a pull request. Copilot did the job, but it also took the liberty of editing the PR's description to include this message: "⚡ Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast." ↫ David Uzondu at Neowin

It turns out that Microsoft has added ads to over 1.5 million Copilot pull requests on GitHub, and they're even appearing on GitLab, one of the GitHub alternatives. The reasoning is clear, too, of course: "AI" companies and investors have poured ungodly amounts of money into "AI" that is impossible to recover, even with paying customers. As such, the logical next step is ads, and many "AI" companies are already starting to add advertising to their pachinko machines. It was only a matter of time before Copilot would start inserting ads into the pull requests it ejaculates over all kinds of projects. This isn't the first time a once-free service has turned on its users, but it's definitely one of the quickest turnarounds I've ever seen. Usually it takes much longer before companies reach the stage of putting ads in their products to plug any financial bleeding, but with the amount of money poured into this useless black hole, it really shouldn't be surprising we're already there. I'm sure Copilot's competitors, like Claude, will soon follow suit. They're enshittifying Git, and developers are just letting it happen. No wonder worker exploitation is so rampant in Silicon Valley.
30 Mar 2026 9:14pm GMT
28 Mar 2026
Planet Arch Linux
Building a guitar trainer with embedded Rust
All I wanted was to learn how to play guitar, but ended up building a DIY kit for it.
28 Mar 2026 12:00am GMT
30 Jan 2026
Planet Arch Linux
How to review an AUR package
On Friday, July 18th, 2025, the Arch Linux team was notified that three AUR packages had been uploaded that contained malware. A few maintainers including myself took care of deleting these packages, removing all traces of the malicious code, and protecting against future malicious uploads.
30 Jan 2026 12:00am GMT