08 May 2026
Ars Technica
Everyone’s a loser in Strait of Hormuz game that simulates global crisis
The game asks players to find the least worst options for a shipping chokepoint.
08 May 2026 11:15am GMT
Rocket Report: Alpha Block 2 coming this summer; Falcon sets booster landing mark
"The deciding factor was what we felt like was the team's impact to humanity."
08 May 2026 11:00am GMT
Slashdot
First Segment of the Fehmarnbelt Tunnel Is In Place
Longtime Slashdot reader Qbertino writes: The Fehmarnbelt tunnel is a European construction megaproject building a tunnel between Denmark and Germany, crossing the Fehmarnbelt in the Baltic Sea. The first segment of the tunnel has now successfully been placed in its designated spot, a next-level engineering feat achieved by the Danish construction company Sund & Baelt. The placement took 14 hours and used a massive pontoon ship built specifically for this project. The tunnel segments are 217 meters long, weigh more than 73,000 metric tons, and have to be placed within a tolerance of 3 mm. The tunnel will eventually consist of 89 of these segments, be 18 km long, and connect the Danish city of Rodby with the German island of Fehmarn through five individual tunnel tubes: two for cars, two for trains, and one rescue and maintenance tunnel. The crossing will shrink from a 45-minute ferry ride to seven minutes by train or 10 minutes by car, cutting the travel time between the German city of Hamburg and the Danish capital, Copenhagen, down to 2.5 hours. The project's planned completion is set for 2029. German news outlet Tagesschau has some details and a neat animation, while further details are available from the German tech news site Heise.
Read more of this story at Slashdot.
08 May 2026 11:00am GMT
The Canvas Hack Is a New Kind of Ransomware Debacle
Wired describes the recent Canvas breach as an unusually disruptive ransomware-style extortion incident because one attack on Instructure's learning platform temporarily paralyzed thousands of schools during finals and end-of-year assignments. The hackers using the "ShinyHunters" name claim more than 8,800 schools were affected, while Instructure says exposed data included names, email addresses, student ID numbers, and platform messages. From the report: Higher education has long been a target of ransomware gangs and data extortion attacks. But never before, perhaps, has a cyberattack against a single software platform so thoroughly disrupted the daily operations of thousands of schools across the United States. The widely used digital learning platform Canvas was put into "maintenance mode" on Thursday after its maker, the education tech giant Instructure, suffered a data breach and faced an extortion attempt by attackers using the recognizable moniker "ShinyHunters." Though the hackers have been advertising the breach and attempting to extract a ransom payment from Instructure since May 1, the situation took on additional immediacy for regular people across the US and beyond on Thursday because the Canvas downtime caused chaos at schools, including those in the midst of finals and end-of-year assignments. Universities like Harvard, Columbia, Rutgers, and Georgetown sent alerts to students about the situation in recent days; other institutions, including school districts in at least a dozen states, also appear to have been affected. In a list published by the hackers behind the attack on their ransom-focused dark web site, they claim the breach affected more than 8,800 schools. The exact scale and reach of the breach are currently unclear, though. And the fact that Canvas was down throughout Thursday afternoon and evening further complicated the picture.
In a running incident update log that began on May 1, Steve Proud, Instructure's chief information security officer, said that the company had "recently experienced a cybersecurity incident perpetrated by a criminal threat actor." He added on May 2 that "the information involved" for "users at affected institutions" included names, email addresses, student ID numbers, and messages exchanged by users on the platform. The situation was ultimately marked as "Resolved" on Wednesday, with Proud writing that "Canvas is fully operational, and we are not seeing any ongoing unauthorized activity." At midday on Thursday, though, the Instructure status page registered an "issue" where "some users are having difficulties logging into Student ePortfolios." Within a few hours, the company had added another status update: "Instructure has placed Canvas, Canvas Beta and Canvas Test in maintenance mode." Late Thursday evening, the company said that Canvas was available again "for most users." TechCrunch reported on Thursday that the hackers launched a secondary wave of attacks, defacing some schools' Canvas portals by injecting an HTML file to display their own message on the schools' Canvas login pages. According to The Harvard Crimson, attackers modified the Harvard Canvas login page to show a message that included a list of schools that the hackers claim were impacted by the breach. The message from attackers "urged schools included on the affected list to consult with a cyber advisory firm and contact the group privately to negotiate a settlement before the end of the day on May 12 -- or else risk their data being leaked," The Crimson reported. "It is unclear what information tied to Harvard affiliates was included in the alleged breach."
08 May 2026 7:00am GMT
Sam Altman Had a Bad Day In Court
An anonymous reader quotes a report from Business Insider: As the trial between Elon Musk and OpenAI ended its second week, the Tesla CEO started scoring points against Sam Altman. His witnesses landed three solid punches in testimony about how Altman runs OpenAI as CEO, raising concerns about his dedication to AI safety, the nonprofit's mission, and his honesty as a leader of the organization. [...] This week, Musk's legal team called a parade of witnesses who questioned whether Altman was acting in the interest of the nonprofit. On Thursday, that included a former OpenAI safety researcher, who described a slow erosion of the company's safety teams that prompted her to leave. Witnesses also shared stories about the company launching products without the proper safety reviews -- or the knowledge of the board. Rosie Campbell, a former AI safety researcher at OpenAI, testified that the company became more product-focused during her time there and moved away from the long-term safety work that had initially drawn her in. She said both long-term AI safety teams were eventually eliminated, and that she supported Altman's reinstatement only because she feared OpenAI might otherwise collapse into Microsoft: "It was my understanding at the time that the best way for OpenAI to not disintegrate and fall apart would be for Sam to return." Still, Campbell's testimony wasn't entirely favorable to Musk. She also said xAI, Musk's AI company, likely had a weaker approach to safety than OpenAI. Helen Toner, a former OpenAI board member, also testified about the board's concerns leading up to Altman's removal. She said the board was not primarily worried about ChatGPT's safety, but about Altman's leadership and investor relationships, saying, "The issues that we were concerned about in our decision to fire Sam were exacerbated by relationships with investors."
Toner also described concerns that Altman was misrepresenting what others had said, telling the court, "We were concerned that Sam was inserting words into other people's mouths in order to get people to do what he wanted." Meanwhile, Tasha McCauley, a former OpenAI board member, described a deep loss of trust in Altman and accused him of creating "chaos" and "crisis" inside the company. She said Altman fostered a "culture of lying and culture of deceit," including allegedly misleading others about whether GPT-4 Turbo needed internal safety review before launch. Musk's lawyers then called to the stand David Schizer, a Columbia Law professor and nonprofit-governance expert, who framed Altman's alleged behavior as a serious governance problem for an organization that was supposed to be mission-driven. Asked about claims that products were launched without full board awareness or safety review, he said, "The board and CEO need to be partnering, working together, to make sure the mission is being followed," adding that "if the CEO is withholding that information, it's a big problem." The day ended with the start of a Microsoft executive's deposition. Microsoft VP Michael Wetter said Azure had integrated OpenAI technology, that Microsoft saw strategic value in having AI developers build on Azure, and that a 2016 agreement allowed OpenAI to use Microsoft tools for free even though it could mean a loss of up to $15 million for Microsoft. Testimony ended early, with no court on Friday and the trial set to resume Monday. 
Recap:
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
08 May 2026 3:50am GMT
07 May 2026
OSnews
Fedora Project Leader says he doesn’t care about the reputational damage from Fedora embracing “AI”
On the Fedora forums, there's a long-running thread about a proposal for Fedora to build a variant of the distribution aimed specifically at "AI". The "problem" identified in the proposal is that setting up the various parts that a developer in the "AI" space needs is currently quite difficult on Fedora, and as such, a bunch of technical steps need to be taken to make this easier. Setting aside the "AI" of the proposal and ensuing discussion, it's actually a very interesting read, going deep into the weeds about consequential questions like building an LTS kernel on Fedora, support for out-of-tree kernel modules, and a lot more. To spoil the ending: the proposal has already been approved unanimously by the Fedora Council, meaning the efforts laid out in the proposal will be undertaken. This means that, depending on progress, we'll see a Fedora "AI" Desktop or whatever it's going to be called somewhere in the timeframe from Fedora 45 to Fedora 47. As a Fedora user on all my machines, I'm obviously not too happy about this, since I'd much rather the scarce resources of a project like Fedora go towards things not as ethically bankrupt, environmentally destructive, and artistically deficient as "AI", but in the end it's a project owned and controlled by IBM, so it's not exactly unexpected. What really surprised me in this entire discussion is a post by Fedora Project Leader Jef Spaleta, responding to worries people in the thread were having about such a big "AI" undertaking under the Fedora branding causing serious reputational damage to Fedora as a whole. These concerns are clearly valid, as people really fucking hate "AI", doubly so in the open source community, whose work "AI" coding tools in particular are built on without any form of consent. As such, Fedora undertaking a big "AI" desktop project is bound to have a negative impact on Fedora's image. Just look at what aggressively pushing Copilot has done to Windows 11's already shit reputation.
Spaleta, however, just doesn't care. Literally.

"As the Fedora Project Leader, I am absolutely not concerned about the reputational damage to this project that comes with setting up an entirely new output attractive to developers who want to make use of Ai tools." ↫ Jef Spaleta

I've been looking at this line on and off for a few days now, and I just can't wrap my head around how the leader of an open source project built on the free labour of thousands of contributors can say he doesn't care about reputational damage to the project he's leading. Effective and capable open source contributors are not exactly a commodity, and a lot of the decisions they make about which projects to donate their time to are based on vibes and personal convictions - you can't really pay them to look the other way. Saying you don't care about reputational damage to your huge open source project seems rather shortsighted, but of course, I don't lead a huge open source project, so what do I know? In the linked thread alone, one long-time Fedora contributor, Fernando Mancera, has already decided to leave the project on the spot, and I have a sneaking suspicion he won't be the last. "AI" is a deeply tainted hype on many levels, and the more you try to chase this dragon, the more capable people you'll end up chasing away.
07 May 2026 10:11pm GMT
Ars Technica
DHS can’t create vast DNA database to track ICE critics, lawsuit says
Lawsuit accuses DHS of plugging DNA database into ICE surveillance machine.
07 May 2026 9:35pm GMT
OSnews
Redox gets partial window pixel updating, tmux, and more
Another month, another progress report, Redox, etc. etc., you know the drill by now. This past month, Redox saw improved booting on real hardware by making sure the boot process continues even if certain drivers fail or become blocked. Thanks to some changes on the RISC-V side, running Redox on real RISC-V hardware has also improved. Furthermore, tmux has been ported to Redox, CPU time reporting has been improved, and Orbital, Redox's desktop environment, gained support for partial window pixel updating, which should improve UI performance. On top of that, there's a brand new web user interface to browse Redox packages (x86-64, i586, ARM64 (aarch64), and RISC-V (riscv64gc)), as well as the usual list of improvements to the kernel, drivers, relibc, and many other areas of the operating system.
07 May 2026 7:00pm GMT
Setting up a Sun Ray server on OpenIndiana Hipster 2025.10
"Time for another Sun Ray blog post! I've had a few people email me asking for help setting up a Sun Ray server over the last few months, and despite my attempts to help them get it going, there have been mixed results with running SRSS on OpenIndiana Hipster 2025.10. My Sun Ray server is still on an earlier OI snapshot, so I figured it was about time to try to actually follow the new guides myself." ↫ The Iris System

Ever since I spiraled down the Sun rabbit hole late last year, I've tried a few times to get the x86 version of OpenIndiana and Oracle Solaris working on any of my machines, exactly for the purpose of setting up a modern Sun Ray server. Sadly, none of my machines are compatible with any illumos distribution or Oracle Solaris, so I've been shit out of luck trying to get this side project off the ground. My Ultra 45 is also not supported by any SPARC version of illumos or Oracle Solaris, so unless I buy even more hardware, my dream of a modern Sun Ray setup will have to wait. Of course, virtualisation is an option for many, and that's exactly what this particular guide is about: setting up OpenIndiana in a Proxmox virtual machine. I actually have a Proxmox machine up and running and could do this too, but I'm a sucker for running stuff like this on real hardware. Yes, that makes my life more complicated and difficult, and no, it's not more noble or real or hardcore - it's just a preference. Still, for normal people who pick up a Sun Ray or two on eBay for basically nothing, running OpenIndiana in a virtual machine is the smart, reasonable, and effective option.
07 May 2026 6:20pm GMT
18 Apr 2026
Planet Arch Linux
Break the loop, move to Berlin
Break the pattern today or the loop will repeat tomorrow.
18 Apr 2026 12:00am GMT
11 Apr 2026
Planet Arch Linux
Write less code, be more responsible
My thoughts on AI-assisted programming.
11 Apr 2026 12:00am GMT
03 Apr 2026
Planet Arch Linux
800 Rust terminal projects in 3 years
I have discovered and shared ~800 open source Rust CLI projects over the past 3 years.
03 Apr 2026 12:00am GMT