13 May 2026
Slashdot
Sam Altman Testifies That Elon Musk Wanted Control of OpenAI
OpenAI CEO Sam Altman took the stand Tuesday in Elon Musk's trial against the company, testifying that Musk repeatedly sought control of OpenAI before leaving in 2018. Altman said he opposed putting AI "under the control of any one person," while Musk's lawyer used a pointed cross-examination to attack Altman's trustworthiness. An anonymous reader shares updates from the testimony via the New York Times: Before Elon Musk left OpenAI in a power struggle in 2018, he wanted to merge the nonprofit artificial intelligence lab with Tesla, his electric car company. Mr. Musk and other OpenAI co-founders met several times to discuss the merger. OpenAI's chief executive, Sam Altman, was even offered a seat on Tesla's board of directors, according to a court document. But folding OpenAI into Tesla would have eliminated the lab's nonprofit status, and that, Mr. Altman said on the witness stand on Tuesday, was something he wanted to avoid. [...] "I believed that A.I. should not be under the control of any one person," Mr. Altman said. [...] Mr. Altman testified about his feud with Mr. Musk. He said he had become worried that Mr. Musk, who provided the early investment money for OpenAI, wanted to take control of the lab. He described what he called a "particularly harrowing moment" when his OpenAI co-founders asked Mr. Musk what would happen to his control of a potential for-profit when he died. Mr. Altman said Mr. Musk had replied that the control would pass to his children. "I was not comfortable with that," Mr. Altman said. When Mr. Musk lost a power struggle for control of the lab, he left, forcing Mr. Altman to find another big financial backer in Microsoft. But Mr. Altman ran into trouble in 2023 when OpenAI's board fired him because, as several of its members have testified in the trial, it didn't trust him. Steven Molo, Mr. Musk's lead lawyer, homed in on Mr. Altman's trustworthiness during an aggressive cross-examination. "Are you completely trustworthy?" Mr. 
Molo asked. "I believe so," Mr. Altman answered. After questioning Mr. Altman's trustworthiness for nearly 20 minutes, Mr. Molo turned to Mr. Altman's relationship with Mr. Musk. Mr. Altman said that after he met Mr. Musk in the mid-2010s, Mr. Musk had occasionally expressed concern about the dangers of A.I. But Mr. Musk spent far more time saying he was worried that companies like Google would get ahead in A.I. development, Mr. Altman said. (Mr. Musk testified in the trial that he had wanted to create OpenAI to prevent Google from controlling the technology.) Mr. Altman, the lawyer intimated, took advantage of Mr. Musk's concerns and was never sincere about his own A.I. fears. "Are you a person who just tells people things they want to hear whether those things are true or not?" Mr. Molo asked. The lawyer also questioned whether Mr. Altman, who became a billionaire through years of tech investments, was self-dealing through OpenAI. Mr. Molo showed a list of Mr. Altman's personal investments across a number of companies that stand to benefit from their association with OpenAI. They included Helion Energy, a start-up that has deals with Microsoft and OpenAI, and Cerebras, a chip maker in business with OpenAI. Mr. Molo asked if Mr. Altman, who serves on OpenAI's board in addition to being its chief executive, would ever fire himself. "I have no plans to do that," Mr. Altman said. OpenAI's odd journey from nonprofit lab to what it is today -- a well-funded, for-profit company that is still connected to a nonprofit called the OpenAI Foundation with an endowment that could be worth more than $130 billion -- provided grist for Mr. Molo's questions about Mr. Altman's motivations. He implied that Mr. Altman could have continued to build OpenAI as a pure nonprofit. But the only way to build such a valuable charity was to raise billions through a for-profit venture, Mr. Altman responded. Still, the giant sums being raised appeared to upset Mr. Musk.
In late 2022, according to court documents, Mr. Musk sent a text to Mr. Altman complaining that Microsoft was preparing to invest $10 billion in OpenAI. "This is a bait and switch," Mr. Musk said at the time. But Mr. Altman, under questioning from his own lawyers, said: "Every step of the way, I have done my best to maximize the value of the nonprofit. I would point out that there are not a lot of historical examples of a nonprofit at this scale." Before Altman took the stand, OpenAI board chair Bret Taylor continued his testimony that began on Monday. He said Elon Musk's 2024 bid to buy the company's assets appeared to conflict with his lawsuit and was rejected because the board did not believe OpenAI's mission should be controlled by one person. "We did not feel like it was appropriate for one person to control our mission," he said. Recap:
- Microsoft CEO Satya Nadella Testifies In OpenAI Trial (Day Nine)
- Sam Altman Had a Bad Day In Court (Day Eight)
- Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
- Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
- OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
- Musk Concludes Testimony At OpenAI Trial (Day Four)
- Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
- Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
- Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Read more of this story at Slashdot.
13 May 2026 3:30am GMT
12 May 2026
Slashdot
South Korea Floats 'Citizen Dividend' Using AI Profits
South Korea's presidential policy chief is calling for a "citizen dividend" that would return some AI-driven profits and tax revenue to the public. The Straits Times reports: Presidential policy chief Kim Yong-beom said in a Facebook post that a portion of the profits and tax revenue derived from the artificial intelligence boom "should be structurally returned to all citizens." That is because, Mr Kim argued, the economic gains from AI are based at least partly on industrial infrastructure built by the country over five decades. Mr Kim's comments come after tens of thousands of people gathered outside Samsung's main chip hub in April to demand employees get a greater share of AI profits. The company's labour union wants 15 per cent of operating profit handed to chip-division employees. The union has threatened an 18-day strike starting May 21. Workers have pointed to rising payouts at SK Hynix, which in 2025 agreed to allocate 10 per cent of its annual operating profit to a performance bonus pool, as evidence they deserve more pay. "Excess profits in the AI era are, by nature, concentrated," Mr Kim wrote. Memory companies, core engineers and asset holders are highly likely to receive substantial benefits, while much of the middle class may experience only indirect effects.
12 May 2026 11:00pm GMT
Instructure Pays Canvas Hackers To Delete Students' Stolen Data
Instructure, the company behind the widely used Canvas learning platform, says it reached an agreement with the hackers who stole 3.5 terabytes of student and university data. The company says it received "digital confirmation" that the information was destroyed and that affected schools and students would not be extorted. The BBC reports: Paying cyber criminals goes against the advice of law enforcement agencies around the world, as it can fuel further attacks and offers no guarantee the data has been deleted. In previous cases, criminals have accepted ransom payments but lied about destroying stolen data, instead keeping it for resale. For example, when the notorious LockBit ransomware group was hacked by the National Crime Agency, police found stolen data had not been deleted even after payments had been made. Instructure said in a statement on its website that protecting students' and education staff data was its primary motivation. "While there is never complete certainty when dealing with cyber criminals, we believe it was important to take every step within our control to give customers additional peace of mind, to the extent possible," the company said. Instructure did not set out the terms of the agreement but said that it meant that:
- the data was returned to the company
- it received "digital confirmation of data destruction"
- it had been informed that no Instructure customers would be extorted as a result of the incident
- the agreement covers all affected customers, with no need for individuals to engage with the hackers
12 May 2026 10:00pm GMT
Ars Technica
The newest AI boom pitch: Host a mini data center at your home
The plan aims to speed up AI compute deployment while compensating residents.
12 May 2026 9:59pm GMT
FDA chief resigns after Trump admin forced approval of fruity e-cigs
Makary reportedly spent his year bucking Trump admin and making industry enemies.
12 May 2026 9:26pm GMT
OSnews
The anti-minimalist backlash is the bigger story behind Oxygen’s revival
A few weeks ago, we talked about a project within KDE to revive two of their classic themes, Oxygen and Air, and polish them up to make them usable on the current versions of KDE. The developers and designers working on this project say they've been utterly surprised by just how popular this news has proven to be, and Filip Fila published a blog post with some thoughts on this unexpected popularity. Why are people yearning so strongly for user interfaces from the past? That's the real story underneath the retro-yearning. It isn't simply a story of people wanting their childhood from the 2000s back. It's that a lot of 'the new' we've been offering doesn't satisfy. It doesn't have personality. It doesn't feel warm. It doesn't feel like it was made with the idea of being anything more than a clean product that gets the job done. The escapism towards the past is a symptom. A symptom of unmet needs, not mere sentimentality. ↫ Filip Fila Fila uses modern architecture as an example, and I think it's an apt one. While monumental modern architecture can easily be beautiful and striking, it's the mundane buildings all around us that just don't seem to elicit any positive emotions, no sense of belonging or safety. As Fila also notes, the decades-long swing to minimalism in both architecture and UI design isn't merely because of a preference among designers, but also because minimalism is a hell of a lot cheaper to produce. A building with very little ornamentation and basic, straight lines is much easier, and thus cheaper, to design, construct, and maintain. The same applies to graphical user interface design. There are some signs that the pendulum is starting to swing back towards more instead of less, in all aspects of design. More and more people are loudly demanding that buildings adopt more classical elements, and as we can all attest to here on OSNews, the longing for aspects of UI design from the '90s and early 2000s to make a return is strong.
And not just among us deep in the weeds, either; I've lost count of the number of times I've seen normal people utterly confounded by modern UI design. Anyway, bring back beveled edges.
12 May 2026 8:42pm GMT
Google gives early peek at Android laptops: Googlebooks
The news that Google is working to move Chrome OS to the Android technology stack, and that it wants to start putting Android on laptops, is not exactly news, as the company has been talking about it for years. At an Android event today, the company finally unveiled the culmination of all this work: Googlebooks. We're bringing together the best of Android, which comes with powerful apps on Google Play and a modern OS that's designed for Intelligence, and ChromeOS, which comes with the world's most popular browser. The result is Googlebook: a new category of laptops built with Gemini's helpfulness at its core, designed to work seamlessly with the devices in your life and powered by premium hardware. We're sharing a sneak peek into the Googlebook experience today and will have a lot more to share later this year. ↫ Alex Kuscher at The Keyword, a Google blog apparently The approach here seems very similar to Chromebooks, with Googlebooks being designed and built by various OEMs, but instead of Chrome OS they run Android in desktop mode. Of course, "AI" has been creamed all over these things, to the point where not even the venerable mouse cursor is safe: if you wiggle your cursor, it will turn into "Magic Pointer", which will highlight various "AI" actions as you hover over stuff on your screen. Google also showed off an "AI"-based feature to create widgets, as well as the ability to access files on your phone right from a Googlebook. That's about all we know as far as functionality and features go. They're supposed to go on sale later this year, with models coming from Acer, ASUS, Dell, HP, and Lenovo.
12 May 2026 8:01pm GMT
Ars Technica
Twin brothers wipe 96 gov't databases minutes after being fired
A case study in why credentials are revoked before firings.
12 May 2026 7:12pm GMT
11 May 2026
OSnews
OpenBSD and slopcode: raindrop to a torrent?
Every single software product is dealing with the question about what to do with "AI"-generated code, but the question is particularly difficult to answer for open source operating systems like Linux distributions and the various BSDs, which often consist of a wide variety of software packages from hundreds to thousands of different developers. On top of that, they also have to ask the "AI" question for every layer of their offering, from the base install, to the official repositories, to community-run ones. As users, we, too, are asking these same questions, wondering just how much "AI" taint we're willing to spread across our computers. I understand the difficult position Linux distributions are in with regard to "AI". I mean, when even the Linux kernel itself is tainted by "AI", a no-"AI" policy is basically an empty gesture for them at this point. Personally, I find a policy of "we don't do 'AI' in our work, but we don't have control over the thousands of components we consist of" to be an entirely reasonable, if deeply unsatisfying, position to take. What else are they going to do? You can't really be a Linux distribution without, you know, the Linux kernel, which is, as I've already said, utterly tainted by "AI" at this point. Still, in the back of my mind, I always had a trump card: if all else fails, we'll always have OpenBSD. Its project leader Theo de Raadt is deeply principled, every OpenBSD user and contributor I know hates "AI" deeply, and the project routinely sticks to their principles even when it's difficult or inconvenient. Yes, this makes OpenBSD not the most ideal desktop operating system, but I'd rather use that than something that embraces the multitude of ethical, environmental, quality, and legal concerns regarding "AI" code completely. Imagine my surprise, then, to discover that OpenBSD already contains slopcode in its base installation, with the project's leaders and developers remaining oddly silent about it. 
My friend and OSNews regular Morgan posted this on Fedi a few days ago: Nearly six weeks later, and the question of whether "AI" generated code in tmux - not tool-assisted bug finding, not refactoring, actual LLM-generated slop with questionable license(1) - that was consequently merged into OpenBSD base, is considered acceptable by the lead devs, remains unanswered. Despite Theo de Raadt's concrete stance against any code of questionable license origin polluting the project - and the tmux merge was indeed questionable - it seems this is being swept under the rug. This makes me extremely uncomfortable; it's like seeing a fox in the henhouse but the farmers are all looking the other way and no one can convince them to admit they can see it and root it out. I really don't know what to do being just a user; I feel like even if I tried to chime in on the mailing list I would just be ignored like the others trying to raise the alarm. I hope, as they do, that this is being discussed internally, away from the public list, and that a positive outcome is near. Maybe they are waiting for the 7.9 release before setting anything in stone. Or maybe the "AI" disease has infected one of the last pure operating system projects we have left and there's no going back. ↫ Morgan on Fedi I obviously share Morgan's concerns, and like him, I'm also afraid that opening the door to a few drops of slop in base will quickly grow into a torrent of slop as time goes by. Yes, it's just a patch to tmux, but it's in base, and the "base" of a BSD is almost a sacred concept, and entirely the last place where you want to see code that raises ethical, environmental, quality, and legal concerns. For all we know, this patch of slop or the next one contains a bunch of GPL code because it just so happens that's where the ball tumbling down the developer's pachinko machine ended up. GPL code that would then be in the base of a BSD. 
I echo the call for the OpenBSD project to address this problem, and to set clear boundaries and guidelines regarding "AI" code, so users and developers alike know what level of quality and integrity we can expect from OpenBSD and its base installation going forward.
11 May 2026 11:02pm GMT
Planet Arch Linux
Ratty: A terminal emulator with inline 3D graphics
Just trying to answer one simple question: What if the terminal was 3D?
11 May 2026 12:00am GMT
18 Apr 2026
Planet Arch Linux
Break the loop, move to Berlin
Break the pattern today or the loop will repeat tomorrow.
18 Apr 2026 12:00am GMT
11 Apr 2026
Planet Arch Linux
Write less code, be more responsible
My thoughts on AI-assisted programming.
11 Apr 2026 12:00am GMT