05 Apr 2026
Slashdot
Claude Code Leak Reveals a 'Stealth' Mode for GenAI Code Contributions - and a 'Frustration Words' Regex
That leak of Claude Code's source code revealed "all kinds of juicy details," writes PC World. The more than 500,000 lines of code included:
- An 'undercover mode' for Claude that allows it to make 'stealth' contributions to public code bases
- An 'always-on' agent for Claude Code
- A Tamagotchi-style 'Buddy' for Claude
"But one of the stranger bits discovered in the leak is that Claude Code is actively watching our chat messages for words and phrases - including f-bombs and other curses - that serve as signs of user frustration." Specifically, Claude Code includes a file called "userPromptKeywords.ts" with a simple pattern-matching tool called a regex, which sweeps every message submitted to Claude for certain text matches. In this particular case, the regex pattern is watching for "wtf," "wth," "omfg," "dumbass," "horrible," "awful," "piece of - -" (insert your favorite four-letter word for that one), "f - you," "screw this," "this sucks," and several other colorful metaphors... While the Claude Code leak revealed the existence of the "frustration words" regex, it doesn't give any indication of why Claude Code is scouring messages for these words or what it's doing with them.
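The mechanism described is plain pattern matching. As a minimal illustrative sketch - not the leaked code, which hasn't been published in full - here is what a matcher over the quoted phrases might look like; the pattern, constant, and helper name are all assumptions:

```typescript
// Illustrative sketch only: a regex built from the phrases quoted in the
// article, plus a helper that tests each incoming user message against it.
// The real userPromptKeywords.ts almost certainly differs.
const FRUSTRATION_PATTERN: RegExp =
  /\b(wtf|wth|omfg|dumbass|horrible|awful|screw this|this sucks)\b/i;

// Returns true if the message contains any of the "frustration words".
function containsFrustration(message: string): boolean {
  return FRUSTRATION_PATTERN.test(message);
}
```

The `i` flag makes the match case-insensitive, and `\b` word boundaries keep the pattern from firing inside unrelated words.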
Read more of this story at Slashdot.
05 Apr 2026 11:41pm GMT
Hundreds of Theatres Show Apocalyptic-Yet-Optimistic New Movie, 'The AI Doc'
Hundreds of theatres are now showing a new documentary called The AI Doc: Or How I Became An Apocaloptimist. Variety calls it "playful and heady," edited "with a spirit of ADHD alertness." The New York Times suggests it "tries to cover so much that it ends up being more confusing than clarifying, but parts are fascinating." But the Los Angeles Times calls it an "aggravating soup of information and opinion that wants to move at the speed of machine thought." So while co-director Daniel Roher asks whether he should bring a child into a world with AI, "Perhaps more urgently, should Roher have made an AI doc that treats us like children?"
First, he parades all the safety doomers, seeming to believe their warnings that an unfeeling superintelligence is upon us and we can't trust it. Then, sufficiently disturbed, he hauls in the AI cheerleaders, a suspiciously positive gang who can envision only medical miracles and grindless lives in which we're all full-time artists. Only then, after this simplistic setup where platitudes reign, do we get the section in which the subject is treated like the brave (and grave) new world it is: geopolitically fraught, economically tenuous and a playground for billionaires. Why couldn't the complexity have been the dialogue from the beginning, instead of the play-dumb cartoon "The AI Doc" feels like for so long? Maybe Roher believes this is what our increasingly gullible, truth-challenged citizenry needs from an explanatory doc: a flashy, kindhearted reminder that we're the change we need to be.
Read more reactions here and here. Mashable warns the documentary's director "will ultimately craft a journey that feels like a panic attack in real time. In the end, you may not feel better about mankind's chances against the rise of AI. But you'll likely feel less helpless in the future before us all."
They also point out that the film "shares some ways its audience can more actively be a part of the conversation, and provides a link to the film's website for engagement," where 6,948 people have now signed up for its newsletter. ("Demand a seat at the table," urges its signup button, under a warning that "Government and AI companies are designing our future without us. We need to reclaim our voice in shaping the future of AI...")
Read more of this story at Slashdot.
05 Apr 2026 10:39pm GMT
Will 'AI-Assisted' Journalists Bring Errors and Retractions?
Meet the "journalist" who "uploads press releases or analyst notes into AI tools and prompts them to spit out articles that he can edit and publish quickly," according to the Wall Street Journal. "AI-assisted stories accounted for nearly 20% of Fortune's web traffic in the second half of 2025." And most were written by 42-year-old Nick Lichtenberg, who has now written over 600 AI-assisted stories, producing "more stories in six months than any of his colleagues at Fortune delivered in a year." One Wednesday in February, he cranked out seven. "I'm a bit of a freak," Lichtenberg said... A story by Lichtenberg sometimes starts with a prompt entered into Perplexity or Google's NotebookLM, asking it to write something based on a headline he comes up with. He moves the AI tools' initial drafts into a content-management system and edits the stories before publishing them for Fortune's readers... A piece from earlier that morning about Josh D'Amaro being named Disney CEO took 10 minutes to get online, he said... Like other journalists, Lichtenberg vets his stories. He refers back to the original documents to confirm the information he's reporting is correct. He reaches out to companies for comment. But he admits his process isn't as thorough as that of magazine fact-checkers. While Lichtenberg started out saying his stories were co-authored with "Fortune Intelligence", he now typically signs his own name, according to the article, "because he feels the work is mostly his own." (Though his stories "sometimes" disclose generative AI was used as a research tool...) The article asks whether he could be "a bellwether for where much of the media business is headed..." "Much of the content people now consume online is generated by artificial intelligence, with some 9% of newly published newspaper articles either partially or fully AI-generated, according to a 2025 study led by the University of Maryland.
The number of AI-generated articles on the web surpassed human-written ones in late 2024, according to research and marketing agency Graphite." Some executives have made full-throated declarations about the threat posed by AI. New York Times publisher A.G. Sulzberger said AI "is almost certainly going to usher in an unprecedented torrent of crap," referencing deepfakes as an example. The NewsGuild of New York, the union representing Fortune employees and journalists at other media outlets, said people are what make journalism so powerful. "You simply can't replicate lived experiences, human judgment and expertise," said president Susan DeCarava. For Chris Quinn, the editor of local publications Cleveland.com and the Plain Dealer, AI tools have helped tame other torrents facing the industry. AI has allowed the outlets to cover counties in Ohio that otherwise might go ignored by scraping information from local websites and sending "tips" to reporters, he said. It has also edited stories and written first drafts so the newsrooms' journalists can focus on the calls, research and reporting needed for their stories.... Newsrooms from the New York Times to The Wall Street Journal are deploying AI in various ways to help reporters and editors work more efficiently.... Not all newsrooms disclose their use of AI, and in some cases have rolled out new tools that resulted in errors or PR gaffes. An October study from the European Broadcasting Union and the BBC, which relied on professional journalists to evaluate the news integrity of more than 3,000 AI responses, found that almost half of all AI responses had at least one significant issue. Last week the New York Times even issued a correction when a freelance book reviewer using an AI tool unknowingly included "language and details similar to those in a review of the same book published in The Guardian."
But it was actually "the second time in a few days that the Times was called out for potential AI plagiarism," according to the American journalist writing The Handbasket newsletter. We must stem the idea being pushed by tech companies and their billionaire funders who've sunk too much into their products to admit defeat that the infiltration of AI into journalism is inevitable; because from my perch as an independent journalist, it simply is not... Some AI-loving journalists appear to believe that if they're clear enough with the AI program they're using, it will truly understand what they're seeking and not just do what it's made to do: steal shit... If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work. You're not doing the world a favor by gifting it your human/AI hybrid. Journalism will not miss you if you leave... But meanwhile, USA Today recently tried hiring for a new position: AI-Assisted reporter. (The lucky reporter will "support the launch and scaling of AI-assisted local journalism in a major U.S. metro," working with tools including Copilot and Perplexity, pioneering possible future expansions and "AI-enabled newsroom operations that support and augment human-led journalism.") And Google is already sponsoring a "publishing innovation award"...
Read more of this story at Slashdot.
05 Apr 2026 9:22pm GMT
OSnews
Adobe secretly modifies your hosts file for the stupidest reason
If you're using Windows or macOS and have Adobe Creative Cloud installed, you may want to take a peek at your hosts file. It turns out Adobe adds a bunch of entries to the hosts file, for a very stupid reason. They're using this to detect whether you have Creative Cloud already installed when you visit their website. When you visit https://www.adobe.com/home, they load this image using JavaScript: https://detect-ccd.creativecloud.adobe.com/cc.png If the DNS entry in your hosts file is present, your browser will therefore connect to their server, so they know you have Creative Cloud installed; otherwise the load fails, which they detect. They used to just hit http://localhost:<various ports>/cc.png which connected to your Creative Cloud app directly, but then Chrome started blocking Local Network Access, so they had to do this hosts file hack instead. ↫ thenickdude at Reddit At what point does a commercial software suite become malware?
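If you want to check for yourself, the hack boils down to hosts-file lines of roughly this shape. The exact entries Adobe writes aren't quoted in the post, so this is a hypothetical example; look in /etc/hosts on macOS or C:\Windows\System32\drivers\etc\hosts on Windows:

```
# Hypothetical example of the kind of entry described - the real lines Adobe
# adds may differ. Pointing the detection hostname at loopback makes the
# browser's request for cc.png land on the locally running Creative Cloud app.
127.0.0.1    detect-ccd.creativecloud.adobe.com
```

With an entry like this present, the image load succeeds and the page's JavaScript concludes Creative Cloud is installed; without it, the request goes nowhere useful and fails.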
05 Apr 2026 1:59pm GMT
Ars Technica
CBP facility codes sure seem to have leaked via online flashcards
Quizlet flashcards seem to include sensitive information about gate security at CBP locations.
05 Apr 2026 11:07am GMT
Artemis II is going so well that we're left to talk about frozen urine
"I think the fixation on the toilet is kind of human nature."
05 Apr 2026 12:12am GMT
04 Apr 2026
Ars Technica
Tech companies are trying to neuter Colorado’s landmark right-to-repair law
A state bill is a glimpse of how corporations are limiting people's ability to make their own fixes and upgrades.
04 Apr 2026 8:36pm GMT
OSnews
TinyOS: ultra-lightweight RTOS for IoT devices
An ultra-lightweight real-time operating system for resource-constrained IoT and embedded devices. Kernel footprint under 10 KB, 2 KB minimum RAM, preemptive priority-based scheduling. ↫ TinyOS GitHub page Written in C, open source, and supports ARM and RISC-V.
04 Apr 2026 7:32am GMT
Redox gets new CPU scheduler
Another major improvement in Redox: a brand new scheduler which improves performance under load considerably. We have replaced the legacy Round Robin scheduler with a Deficit Weighted Round Robin scheduler. Due to this, we finally have a way of assigning different priorities to our Process contexts. When running under light load, you may not notice any difference, but under heavy load the new scheduler outperforms the old one (e.g. ~150 FPS gain in the pixelcannon 3D Redox demo, and ~1.5x gain in operations/sec for CPU-bound tasks and a similar improvement in responsiveness too (measured through schedrs)). ↫ Akshit Gaur Work is far from over in this area, as they're now moving on to "replacing the static queue logic with the dynamic lag-calculations of full EEVDF".
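Deficit Weighted Round Robin is a simple idea borrowed from packet scheduling: each context carries a weight, and every round it accrues a time credit (its "deficit") proportional to that weight, then runs only while the credit covers a full slice. A minimal sketch of the idea follows - illustrative only, with invented names and numbers, not Redox's actual Rust implementation:

```typescript
// Toy Deficit Weighted Round Robin: each context earns QUANTUM * weight
// ticks of credit per round and runs one slice per `cost` ticks of credit.
interface Context {
  name: string;
  weight: number;  // higher weight -> larger share of CPU time
  deficit: number; // accumulated, unspent time credit in ticks
}

const QUANTUM = 10; // base credit granted per round, scaled by weight

// One scheduling round: returns the order of slices executed.
function scheduleRound(contexts: Context[], cost: number): string[] {
  const ran: string[] = [];
  for (const ctx of contexts) {
    ctx.deficit += QUANTUM * ctx.weight;
    while (ctx.deficit >= cost) { // run while credit covers one slice
      ctx.deficit -= cost;
      ran.push(ctx.name);
    }
  }
  return ran;
}
```

With weights 2 and 1 and a slice cost equal to the base quantum, the heavier context gets two slices per round to the lighter one's single slice - which is how the scheduler ends up expressing per-context priorities. Leftover credit carries over, so contexts with expensive slices aren't starved, just delayed.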
04 Apr 2026 7:28am GMT
03 Apr 2026
Planet Arch Linux
800 Rust terminal projects in 3 years
I have discovered and shared ~800 open source Rust CLI projects over the past 3 years.
03 Apr 2026 12:00am GMT
28 Mar 2026
Planet Arch Linux
Building a guitar trainer with embedded Rust
All I wanted was to learn how to play guitar, but ended up building a DIY kit for it.
28 Mar 2026 12:00am GMT
30 Jan 2026
Planet Arch Linux
How to review an AUR package
On Friday, July 18th, 2025, the Arch Linux team was notified that three AUR packages had been uploaded that contained malware. A few maintainers including myself took care of deleting these packages, removing all traces of the malicious code, and protecting against future malicious uploads.
30 Jan 2026 12:00am GMT