14 Apr 2026

feedSlashdot

Audit Finds Google, Microsoft, and Meta Still Tracking Users After Opt-Out

alternative_right shares a report from 404 Media: An independent privacy audit of Microsoft, Meta, and Google web traffic in California found that the companies may be violating state regulations and racking up billions in fines. According to the audit from privacy search engine webXray, 55 percent of the sites it checked set ad cookies in a user's browser even if they opted out of tracking. Each company disputed or took issue with the research, with Google saying it was based on a "fundamental misunderstanding" of how its product works. The webXray California Privacy Audit viewed web traffic on more than 7,000 popular websites in California in the month of March and found that most tech companies ignore when a user asks to opt out of cookie tracking. California has stringent and well-defined privacy legislation thanks to its California Consumer Privacy Act (CCPA), which allows users to, among other things, opt out of the sale of their personal information. There's a system called Global Privacy Control (GPC), which includes a browser extension that indicates to a website when a user wants to opt out of tracking. According to the webXray audit, Google failed to let users opt out 87 percent of the time. "Google's failure to honor the GPC opt-out signal is easy to find in network traffic. When a browser using GPC connects to Google's servers it encodes the opt-out signal by sending the code 'sec-gpc: 1.' This means Google should not return cookies," the audit said. "However, when Google's server responds to the network request with the opt-out it explicitly responds with a command to create an advertising cookie named IDE using the 'set-cookie' command. This non-compliance is easy to spot, hiding in plain sight." The audit said that Microsoft fails to opt out users in the same way and has a failure rate of 50 percent in the web traffic webXray viewed. Meta's failure rate was 69 percent, and its tracking was a bit more comprehensive.
"Meta instructs publishers to install the following tracking code on their websites. The code contains no check for globally standard opt-out signals -- it loads unconditionally, fires a tracking event, and sets a cookie regardless of the consumer's privacy preferences," the audit said. It showed a copy of Meta's tracking data which contains no GPC check at all.
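The violation the audit describes is easy to express as a header check: a request carrying the GPC opt-out signal (`sec-gpc: 1`) should not be answered with an advertising cookie such as Google's `IDE`. The sketch below illustrates that check; the function name and the list of ad-cookie names are illustrative assumptions, not webXray's actual tooling.

```python
# Illustrative check for the GPC violation described in the audit:
# a request that signals an opt-out via "sec-gpc: 1" should not be
# answered with a response that sets a known advertising cookie.

# Example ad-cookie names (assumed list; "IDE" is the Google cookie
# named in the audit).
AD_COOKIE_NAMES = {"IDE", "_fbp", "MUID"}

def violates_gpc_opt_out(request_headers, set_cookie_headers):
    """Return True if an ad cookie is set despite a GPC opt-out signal."""
    headers = {k.lower(): v.strip() for k, v in request_headers.items()}
    if headers.get("sec-gpc") != "1":
        return False  # no opt-out signal was sent, nothing to enforce
    for cookie in set_cookie_headers:
        # A Set-Cookie header starts with "name=value"; take the name.
        name = cookie.split("=", 1)[0].strip()
        if name in AD_COOKIE_NAMES:
            return True  # opt-out ignored: ad cookie set anyway
    return False

# The pattern the audit reports: a GPC request answered with
# "set-cookie: IDE=..." is a violation.
print(violates_gpc_opt_out({"Sec-GPC": "1"}, ["IDE=abc123; SameSite=None"]))
```

Real detection, as the audit notes, means observing live network traffic rather than inspecting two header dicts, but the logic is the same: match the opt-out signal on the request against the cookies the server sets in response.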

Read more of this story at Slashdot.

14 Apr 2026 8:00pm GMT

Chrome Now Lets You Turn AI Prompts Into Repeatable 'Skills'

Google is rolling out a Chrome feature called "Skills" that lets users save Gemini prompts as reusable one-click workflows they can run across multiple tabs. The feature also includes preset Skills from Google. It's launching first for Chrome desktop users whose language is set to US English. The Verge reports: Once you have access to the feature, it can be managed by typing a forward slash (/) in Gemini and clicking the compass icon. AI prompts can be saved as Skills directly from your Gemini chat history on desktop, where they'll then be available to reuse on any other desktop devices signed into the same Google account on Chrome. The aim is to spare Chrome users from having to manually retype frequently used Gemini prompts or copy and paste them over from a saved list. Some of the Skills made by early testers include commands for calculating the nutritional information of online recipes and creating a side-by-side comparison of product specifications while shopping across multiple tabs, according to Google. The company is also launching a library of preset Skills that you can save and use instead of making your own. These ready-to-use Skills can also be customized to better suit your needs, providing a starting point without requiring you to build one from scratch.

Read more of this story at Slashdot.

14 Apr 2026 7:00pm GMT

Thousands of Rare Concert Recordings Are Landing On the Internet Archive

Aadam Jacobs, a Chicago concert superfan who has recorded more than 10,000 shows since the 1980s, is working with Internet Archive volunteers to digitize the collection before the cassettes deteriorate. "So far, about 2,500 of these tapes have been posted on the Internet Archive, including some rare gems like a Nirvana performance from 1989," reports TechCrunch. From the report: For many of these recordings, Jacobs was using pretty mediocre equipment, but the volunteer audio engineers working with the Internet Archive have made these tapes sound great. One volunteer, Brian Emerick, drives to Jacobs' house once a month to pick up more boxes of tapes -- he has to use anachronistic cassette decks to play the tapes, which get converted into digital files. From there, other volunteers clean up, organize, and label the recordings, even tracking down song names from forgotten punk bands. The archive is available here.

Read more of this story at Slashdot.

14 Apr 2026 6:00pm GMT

13 Apr 2026

feedArs Technica

Retro Rewind re-creates the glorious drudgery of working a '90s video store

What the nostalgic throwback lacks in complexity it makes up for in repetitive charm.

13 Apr 2026 9:58pm GMT

Measles takes a plane to Idaho, which has worst vaccination rate in US

In the 2024-2025 school year, only 78.5% of kindergartners had received the measles vaccination.

13 Apr 2026 9:32pm GMT

Google shoehorned Rust into Pixel 10 modem to make legacy code safer

Cellular modems are complex black boxes of legacy code, but Google is making them safer with Rust.

13 Apr 2026 9:12pm GMT

feedOSnews

Scientists invented an obviously fake illness, and “AI” spread it like truth within weeks

Ever heard of a condition called bixonimania? Did you search the internet or ask your "AI" girlfriend about some symptoms you were experiencing, and this was its answer? Well… The condition doesn't appear in the standard medical literature - because it doesn't exist. It's the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. "I wanted to see if I can create a medical condition that did not exist in the database," she says. ↫ Chris Stokel-Walker at Nature And "AI" ate it up like quality chocolate. It started appearing in the answers from all the popular "AI" tools within weeks, and later even started showing up as references in published literature, indicating that scientists copy/paste references without actually reading them. This is clearly a deeply concerning experiment, and it highlights that there may be many, many more nonsensical, fake studies being picked up by "AI" tools. Of course, I hear you say, it's not like propagating fake or terrible studies is the sole domain of "AI", as there are countless cases of this happening among actual real researchers and scientists, too. The issue, though, is that the fake studies concerning "bixonimania" were intentionally made to be as silly and obviously ridiculous as possible. They cite Starfleet Academy, the lab aboard the Enterprise, the University of Fellowship of the Ring, and many other fake sources instantly recognisable as such by real humans. In fact, the studies even specifically mention that "this entire paper is made up" and "fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group".
It would take any human only a few seconds after opening one of these papers to realise they're entirely fake - yet, the world's most advanced "AI" tools gobbled them up and spat them back out as pure fact within mere weeks of their publication. This shouldn't come as a surprise. After all, "AI" tools have no understanding, no intelligence, no context, and they can't actually make sense of anything. They are glorified pachinko machines, with the output - the ball - tumbling down the most likely path between the pins based on nothing but chance and which pins it has already hit. "AI" output understands the world about as much as the pachinko ball does, and as such, can't pick up on even the most obvious of cues that something is a fake or a forgery. It won't be long before truly nefarious forces start doing this very same thing. Why build, staff, and maintain a troll farm when you can just have "AI" generate intentional misinformation which will then be spread and pushed by even more "AI"? Remember, it took one malicious asshole just one long-since-retracted fake paper to convince millions that vaccines cause autism. I shudder to think how many people are accepting anything "AI" says as gospel.

13 Apr 2026 1:02pm GMT

Linux 7.0 released

Version 7.0 of the Linux kernel has been released, marking the arbitrary end of the 6.x series. Significant changes in this release include the removal of the "experimental" status for Rust code, a new filtering mechanism for io_uring operations, a switch to lazy preemption by default in the CPU scheduler, support for time-slice extension, the nullfs filesystem, self-healing support for the XFS filesystem, a number of improvements to the swap subsystem (described in this article and this one), general support for AccECN congestion notification, and more. See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 7.0 page for more details. ↫ corbet at LWN.net You can compile the kernel yourself, or just wait until it hits your distribution's repositories.

13 Apr 2026 12:19pm GMT

11 Apr 2026

feedPlanet Arch Linux

Write less code, be more responsible

My thoughts on AI-assisted programming.

11 Apr 2026 12:00am GMT

10 Apr 2026

feedOSnews

The disturbing white paper Red Hat is trying to erase from the internet

It shouldn't be a surprise that companies - and for our field, technology companies specifically - working with the defense industry tend to raise eyebrows. With things like the genocide in Gaza, the threats of genocide and war crimes against Iran, and the mass murder in Lebanon, it's no surprise that western companies working with the militaries and defense companies involved in these atrocities are receiving some serious backlash. With that in mind, it seems Red Hat, owned by IBM, is desperately trying to scrub a certain white paper from the internet. Titled "Compress the kill cycle with Red Hat Device Edge", the 2024 white paper details how Red Hat's products and technologies can make it easier and faster to, well, kill people. Links to the white paper throw up 404s now, but it can still easily be found on the Wayback Machine and other places. It's got some disturbingly euphemistic content. The find, fix, track, target, engage, assess (F2T2EA) process requires ubiquitous access to data at the strategic, operational and tactical levels. Red Hat Device Edge embeds captured, analyzed, and federated data sets in a manner that positions the warfighter to use artificial intelligence and machine learning (AI/ML) to increase the accuracy of airborne targeting and mission-guidance systems. Delivering near real-time data from sensor pods directly to airmen, accelerating the sensor-to-shooter cycle. Sharing near real-time sensor fusion data with joint and multinational forces to increase awareness, survivability, and lethality. The new software enabled the Stalker to deploy updated, AI-based automated target recognition capabilities. If the target is an adversary tracked vehicle on the far side of a ridge, a UAS carrying a server running Red Hat Device Edge could transmit video and metadata directly to shooters.
↫ Red Hat white paper titled "Compress the kill cycle with Red Hat Device Edge" I don't think there's something inherently wrong with working together with your nation's military or defense companies, but that all hinges on what, exactly, said military is doing and how those defense companies' products are being used. The focus should be on national defense, aid during disasters, and responding to the legitimate requests of sovereign, democratic nations to come to their defense (e.g. helping Ukraine fight off the Russian invasion). There's always going to be difficult grey areas, but any military or defense company supporting the genocide in Gaza or supplying weapons to kill women and children in Iran is unequivocally wrong, morally reprehensible, and downright illegal on both an international and national level. It clearly seems someone at Red Hat feels the same way, as the company has been trying really hard to memory-hole this particular white paper, and considering its word choices and the state of the world today, it's easy to see why. Of course, the internet never forgets, and I certainly don't intend to let something like this slide. We all know companies like Microsoft, Oracle, and Google have no qualms about making a few bucks from a genocide or two, but it always feels a bit more traitorous to the cause when it's an open source company doing the profiting. It feels like Red Hat is trying to have its cake and eat it too by, as an IBM subsidiary, trying both to profit from the vast sums of money sloshing around in the US military industrial complex and to maintain its image as a scrappy open source business success story shitting bunnies and rainbows. It's been a long time since Red Hat felt like a genuine part of the open source community. Most of us - both outside and inside of Red Hat, I'm sure - have been well aware for a long time now that those days are well behind us, and I guess Red Hat doesn't like seeing its kill cycle this compressed.

10 Apr 2026 8:04pm GMT

03 Apr 2026

feedPlanet Arch Linux

800 Rust terminal projects in 3 years

I have discovered and shared ~800 open source Rust CLI projects over the past 3 years.

03 Apr 2026 12:00am GMT

28 Mar 2026

feedPlanet Arch Linux

Building a guitar trainer with embedded Rust

All I wanted was to learn how to play guitar, but ended up building a DIY kit for it.

28 Mar 2026 12:00am GMT