08 May 2026

feedSlashdot

Sam Altman Had a Bad Day In Court

An anonymous reader quotes a report from Business Insider: As the trial between Elon Musk and OpenAI ended its second week, the Tesla CEO started scoring points against Sam Altman. His witnesses landed three solid punches in testimony about how Altman runs OpenAI as CEO, raising concerns about his dedication to AI safety, the nonprofit's mission, and his honesty as a leader of the organization. [...] This week, Musk's legal team called a parade of witnesses who questioned whether Altman was acting in the interest of the nonprofit. On Thursday, that included a former OpenAI safety researcher, who described a slow erosion of the company's safety teams, which prompted her to leave the company. Witnesses also shared stories about the company launching products without the proper safety reviews -- or the knowledge of the board. Rosie Campbell, a former AI safety researcher at OpenAI, testified that the company became more product-focused during her time there and moved away from the long-term safety work that had initially drawn her in. She said both long-term AI safety teams were eventually eliminated, and that she supported Altman's reinstatement only because she feared OpenAI might otherwise collapse into Microsoft: "It was my understanding at the time that the best way for OpenAI to not disintegrate and fall about would be for Sam to return." Still, Campbell's testimony wasn't entirely favorable to Musk. She also said xAI, Musk's AI company, likely had a weaker approach to safety than OpenAI. Helen Toner, a former OpenAI board member, also testified about the board's concerns leading up to Altman's removal. She said the board was not primarily worried about ChatGPT's safety, but about Altman's leadership and investor relationships, saying, "The issues that we were concerned about in our decision to fire Sam were exacerbated by relationships with investors."
Toner also described concerns that Altman was misrepresenting what others had said, telling the court, "We were concerned that Sam was inserting words into other people's mouths in order to get people to do what he wanted." Meanwhile, Tasha McCauley, a former OpenAI board member, described a deep loss of trust in Altman and accused him of creating "chaos" and "crisis" inside the company. She said Altman fostered a "culture of lying and culture of deceit," including allegedly misleading others about whether GPT-4 Turbo needed internal safety review before launch. Musk's lawyers then called to the stand David Schizer, a Columbia Law professor and nonprofit-governance expert, who framed Altman's alleged behavior as a serious governance problem for an organization that was supposed to be mission-driven. Asked about claims that products were launched without full board awareness or safety review, he said, "The board and CEO need to be partnering, working together, to make sure the mission is being followed," adding that "if the CEO is withholding that information, it's a big problem." The day ended with the start of a Microsoft executive's deposition. Microsoft VP Michael Wetter said Azure had integrated OpenAI technology, that Microsoft saw strategic value in having AI developers build on Azure, and that a 2016 agreement allowed OpenAI to use Microsoft tools for free even though it could mean a loss of up to $15 million for Microsoft. Testimony ended early, with no court on Friday and the trial set to resume Monday. 
Recap:
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

Read more of this story at Slashdot.

08 May 2026 3:50am GMT

07 May 2026

feedSlashdot

IMF Warns New AI Models Risk 'Systemic' Shock To Finance

The IMF is warning that advanced AI-powered cyberattacks pose a serious threat to global financial stability. "IMF analysis suggests that extreme cyber-incident losses could trigger funding strains, raise solvency concerns, and disrupt broader markets," the lender warned in a new report. The report urged greater international cooperation and emphasized resilience, since breaches are "inevitable" -- particularly for emerging economies with weaker defenses. Agence France-Presse reports: The study's authors highlighted the risks posed by the highly interconnected nature of the global financial system, with advanced AI models able to "dramatically reduce" the time and cost of exploiting vulnerabilities. [...] The IMF warned that emerging and developing countries, "which often have more severe resource constraints, may be disproportionately exposed to attackers targeting regions with weaker defenses." The risks, the authors said, were systemic, cut across sectors and came with the threat of contagion, with the reliance on a small number of platforms and cloud providers likely to increase "the impact of any single exploited weakness." "Defenses will inevitably be breached, so resilience must also be a priority, specifically to limit how far incidents spread and ensure rapid recovery," the report said. IMF chief Kristalina Georgieva warned last month that the global financial system was not ready for the cybersecurity threats posed by AI. "We are very keen to see more attention to the guardrails that are necessary to protect financial stability in a world of AI," she told CBS News, seeking global collaboration on the issue.

Read more of this story at Slashdot.

07 May 2026 11:00pm GMT

feedOSnews

Fedora Project Leader says he doesn’t care about the reputational damage from Fedora embracing “AI”

On the Fedora forums, there's a long-running thread about a proposal for Fedora to build a variant of the distribution aimed specifically at "AI". The "problem" identified in the proposal is that setting up the various parts that a developer in the "AI" space needs is currently quite difficult on Fedora, and as such, a bunch of technical steps need to be taken to make this easier. Setting aside the "AI" part of the proposal and ensuing discussion, it's actually a very interesting read, going deep into the weeds about consequential questions like building an LTS kernel on Fedora, support for out-of-tree kernel modules, and a lot more. To spoil the ending: the proposal has already been approved unanimously by the Fedora Council, meaning the efforts laid out in the proposal will be undertaken. This means that, depending on progress, we'll see a Fedora "AI" Desktop or whatever it's going to be called somewhere in the timeframe from Fedora 45 to Fedora 47. As a Fedora user on all my machines, I'm obviously not too happy about this, since I'd much rather the scarce resources of a project like Fedora went towards things not as ethically bankrupt, environmentally destructive, and artistically deficient as "AI", but in the end it's a project owned and controlled by IBM, so it's not exactly unexpected. What really surprised me in this entire discussion is a post by Fedora Project Leader Jef Spaleta, responding to worries people in the thread were having about such a big "AI" undertaking under the Fedora branding causing serious reputational damage to Fedora as a whole. These concerns are clearly valid, as people really fucking hate "AI", doubly so in the open source community whose work especially "AI" coding tools are built on without any form of consent. As such, Fedora undertaking a big "AI" desktop project is bound to have a negative impact on Fedora's image. Just look at what aggressively pushing Copilot has done to Windows 11's already shit reputation.
Spaleta, however, just doesn't care. Literally.

As the Fedora Project Leader, I am absolutely not concerned about the reputational damage to this project that comes with setting up an entirely new output attractive to developers who want to make use of Ai tools. ↫ Jef Spaleta

I've been looking at this line on and off for a few days now, and I just can't wrap my head around how the leader of an open source project built on and relying on the free labour of thousands of contributors says he doesn't care about reputational damage to the project he's leading. Effective and capable open source contributors are not exactly a commodity, and a lot of the decisions they make about what projects to donate their time to are based on vibes and personal convictions - you can't really pay them to look the other way. Saying you don't care about reputational damage to your huge open source project seems rather shortsighted, but of course, I don't lead a huge open source project so what do I know? In the linked thread alone, one long-time Fedora contributor, Fernando Mancera, already decided to leave the project on the spot, and I have a sneaking suspicion he won't be the last. "AI" is a deeply tainted hype on many levels, and the more you try to chase this dragon, the more capable people you'll end up chasing away.

07 May 2026 10:11pm GMT

feedSlashdot

60% of MD5 Password Hashes Are Crackable In Under an Hour

In honor of World Password Day, Kaspersky researchers revisited their study on the crackability of real-world passwords and found that 60% of MD5-hashed passwords could be cracked in under an hour with a single Nvidia RTX 5090, and 48% could be cracked in under a minute. "The bottom line is that passwords protected only by fast hashing algorithms such as MD5 are no longer safe if attackers obtain them in a data breach," reports The Register. From the report: Much of the reason password hashes have become so easy to crack is password predictability. Per Kaspersky, its analysis of more than 200 million exposed passwords revealed common patterns that attackers can use to optimize cracking algorithms, significantly reducing the time needed to guess the character combinations that grant access to target accounts. In case you're wondering whether there's a trend to compare this to, Kaspersky ran a prior iteration of this study in 2024, and bad news: Passwords are actually a bit easier to crack in 2026 than they were a couple of years ago. Not by much, mind you -- only a few percent -- but it's still a move in the wrong direction. "Attackers owe this boost in speed to graphics processors, which grow more powerful every year," Kaspersky explained. "Unfortunately, passwords remain as weak as ever." "This World Password Day, the main message ought not to be to the users, who often have no choice but to use passwords anyway, but to the sites and providers that are requiring them to do so," said senior IEEE member and University of Nottingham cybersecurity professor Steven Furnell. His advice is that providers need to modernize their login systems and enforce stronger protections, because users are often stuck with whatever security options they're given.
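The report's core point, that fast hashes like MD5 offer little protection against offline cracking, is easy to demonstrate. Below is a minimal Python sketch (illustrative only; the password and salt are made up) comparing the per-hash cost of MD5 with scrypt, a deliberately slow, memory-hard key-derivation function also in the standard library:

```python
import hashlib
import time

# Hypothetical password and salt, for illustration only.
password = b"hunter2"
salt = b"16-byte-salt-xyz"

# MD5 is a fast hash: one digest costs well under a microsecond on a CPU,
# and GPUs compute billions per second, which is what makes brute force
# practical against leaked MD5 hashes.
fast = hashlib.md5(password).hexdigest()

# scrypt is deliberately slow and memory-hard. These cost parameters are
# modest; real deployments tune n/r/p (and thus time and memory) higher.
slow = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, maxmem=2**26)

# Rough per-hash timings on whatever machine runs this.
t0 = time.perf_counter()
for _ in range(10_000):
    hashlib.md5(password).digest()
md5_time = (time.perf_counter() - t0) / 10_000

t0 = time.perf_counter()
hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, maxmem=2**26)
scrypt_time = time.perf_counter() - t0

print(f"MD5:    ~{md5_time * 1e9:.0f} ns per hash")
print(f"scrypt: ~{scrypt_time * 1e3:.1f} ms per hash")
```

On typical hardware the scrypt call is several orders of magnitude slower than a single MD5 digest, which is the point: an attacker who steals slow-KDF hashes must pay that cost on every guess, while MD5 lets a single GPU test billions of candidates per second.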

Read more of this story at Slashdot.

07 May 2026 10:00pm GMT

feedArs Technica

DHS can’t create vast DNA database to track ICE critics, lawsuit says

Lawsuit accuses DHS of plugging DNA database into ICE surveillance machine.

07 May 2026 9:35pm GMT

Mozilla says 271 vulnerabilities found by Mythos have "almost no false positives"

The developer of Firefox says it has "completely bought in" on AI-assisted bug discovery.

07 May 2026 7:18pm GMT

feedOSnews

Redox gets partial window pixel updating, tmux, and more

Another month, another progress report, Redox, etc. etc., you know the drill by now. This past month Redox saw improved booting on real hardware by making sure the boot process continues even if certain drivers fail or become blocked. Thanks to some changes on the RISC-V side, running Redox on real RISC-V hardware has also improved. Furthermore, tmux has been ported to Redox, CPU time reporting has been improved, and Orbital, Redox's desktop environment, gained support for partial window pixel updating, which should increase UI performance. On top of that, there's a brand new web user interface to browse Redox packages (x86-64, i586, ARM64 (aarch64), and RISC-V (riscv64gc)), as well as the usual list of improvements to the kernel, drivers, relibc, and many more areas of the operating system.

07 May 2026 7:00pm GMT

Setting up a Sun Ray server on OpenIndiana Hipster 2025.10

Time for another Sun Ray blog post! I've had a few people email me asking for help setting up a Sun Ray server over the last few months, and despite my attempts to help them get it going there's been mixed results with running SRSS on OpenIndiana Hipster 2025.10. My Sun Ray server is still on an earlier OI snapshot, so I figured it was about time to try to actually follow the new guides myself. ↫ The Iris System

Ever since spiraling down the Sun rabbit hole late last year, I've tried a few times now to get the x86 version of OpenIndiana and Oracle Solaris working on any of my machines, exactly for the purposes of setting up a modern Sun Ray server. Sadly, none of my machines are compatible with any illumos distribution or Oracle Solaris, so I've been shit out of luck trying to get this side project off the ground. My Ultra 45 is sadly also not supported by any SPARC version of illumos or Oracle Solaris, so unless I buy even more hardware, my dream of a modern Sun Ray setup will have to wait. Of course, virtualisation is an option for many, and that's exactly what this particular guide is about: setting up OpenIndiana on a Proxmox virtual machine. I actually have a Proxmox machine up and running and could do this too, but I'm a sucker for running stuff like this on real hardware. Yes, that makes my life more complicated and difficult, and no, it's not more noble or real or hardcore - it's just a preference. Still, for normal people who pick up a Sun Ray or two on eBay for basically nothing, running OpenIndiana in a virtual machine is the smart, reasonable, and effective option.

07 May 2026 6:20pm GMT

feedArs Technica

Google unveils screenless Fitbit Air and Google Health app to replace Fitbit

The $100 Fitbit Air is available for preorder today.

07 May 2026 2:00pm GMT

18 Apr 2026

feedPlanet Arch Linux

Break the loop, move to Berlin

Break the pattern today or the loop will repeat tomorrow.

18 Apr 2026 12:00am GMT

11 Apr 2026

feedPlanet Arch Linux

Write less code, be more responsible

My thoughts on AI-assisted programming.

11 Apr 2026 12:00am GMT

03 Apr 2026

feedPlanet Arch Linux

800 Rust terminal projects in 3 years

I have discovered and shared ~800 open source Rust CLI projects over the past 3 years.

03 Apr 2026 12:00am GMT