30 Mar 2026

feedSlashdot

OkCupid Settles FTC Case On Alleged Misuse of Its Users' Personal Data

OkCupid and parent company Match Group settled an FTC case dating back to 2014 over allegations that the dating app shared users' photos and other personal data with a third party without proper disclosure or opt-out rights. Engadget reports: According to the FTC, OkCupid's privacy policy at the time noted that the company wouldn't share a user's personal information with others, except in some cases including "service providers, business partners, other entities within its family of businesses." However, the lawsuit accused OkCupid of sharing three million photos of its users with Clarifai, which the FTC claims is an "unrelated third party" that didn't fall under the allowed entities. On top of that, the lawsuit alleged that OkCupid didn't inform its users of this data sharing, nor give them a chance to opt out. Moving forward, the settlement would "permanently prohibit" Match Group, which owns OkCupid, and Humor Rainbow, which operates OkCupid, from misrepresenting what kind of personal information they collect, the purpose for collecting the data and any consumer choices to prevent data collection. Even after the 2014 incident, OkCupid was found to have security flaws that could've exposed user account info, but they were quickly patched in 2020.

Read more of this story at Slashdot.

30 Mar 2026 8:00pm GMT

Life With AI Causing Human Brain 'Fry'

fjo3 shares a report from France 24: Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments by hard-core AI adopters. Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon "AI brain fry," a state of mental exhaustion stemming "from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits." The rise of AI agents that tend to computer tasks on demand has put users in the position of managing smart, fast digital workers rather than having to grind through jobs themselves. "It's a brand-new kind of cognitive load," said Ben Wigler, co-founder of the start-up LoveMind AI. "You have to really babysit these models." [...] "There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours," Wigler said. [Adam Mackintosh, a programmer for a Canadian company] recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application. "At the end, I felt like I couldn't code anymore," he recalled. "I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day." BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI. However, "That self-care piece is not really an American workplace value," Wigler said. "So, I am very skeptical as to whether or not it's going to be healthy or even high quality in the long term." Notably, the report says everyone interviewed for the article "expressed overall positive views of AI despite the downsides." In fact, a recent BCG study actually found a decline in burnout rates when AI took over repetitive work tasks.

Read more of this story at Slashdot.

30 Mar 2026 7:00pm GMT

Judge Allows BitTorrent Seeding Claims Against Meta, Despite Lawyers 'Lame Excuses'

An anonymous reader quotes a report from TorrentFreak: In an effort to gather material for its LLM training, Meta used BitTorrent to download pirated books from Anna's Archive and other shadow libraries. According to several authors, Meta facilitated the infringement of others by "seeding" these torrents. This week, the court granted the authors permission to add these claims to their complaint, despite openly scolding their counsel for "lame excuses" and "Meta bashing." [...] The judge acknowledged that the contributory infringement claim could and should have been added back in November 2024, when the authors amended their complaint to include the distribution claim. After all, both claims arise from the same factual allegations about Meta's torrenting activity. "The lawyers for the named plaintiffs have no excuse for neglecting to add a contributory infringement claim based on these allegations back in November 2024," Judge Chhabria wrote. The lawyers of the book authors claimed that the delay was the result of newly produced evidence that had "crystallized" their understanding of Meta's uploading activity. However, that did not impress the judge. He called it a "lame excuse" and "a bunch of doubletalk," noting that if the missing discovery truly prevented the contributory claim from being added in November 2024, the same logic would have prevented the distribution claim from being added at that time as well. "Rather than blaming Meta for producing discovery late, the plaintiffs' lawyers should have been candid with the Court, explaining that they missed an issue in a case of first impression..," the order reads. Judge Chhabria went further, noting that the authors' law firm, Boies Schiller, showed "an ongoing pattern" of distracting from its own mistakes by attacking Meta. He pointed specifically to the dispute over when Meta disclosed its fair use defense to the distribution claim, which we covered here recently, characterizing it as a false distraction. 
"The lawyers for the plaintiffs seem so intent on bashing Meta that they are unable to exercise proper judgment about how to represent the interests of their clients and the proposed class members," the order reads. Despite the criticism, Chhabria granted the motion. [...] For now, the case moves forward with a fourth amended complaint, three new loan-out companies added as named plaintiffs, and a growing list of BitTorrent-related claims for Judge Chhabria to resolve.

Read more of this story at Slashdot.

30 Mar 2026 6:00pm GMT

29 Mar 2026

feedArs Technica

Pints meet prop bets: Polymarket’s “Situation Room” pop-up bar in DC

Why did a leading prediction market feel the need for an in-person bar in DC?

29 Mar 2026 11:35am GMT

Polygraphs have major flaws. Are there better options?

Research proceeds on alternatives, but some doubt whether true lie detection is possible.

29 Mar 2026 11:01am GMT

28 Mar 2026

feedArs Technica

Explanation for why we don't see two-foot-long dragonflies anymore fails

Breathing capacity could have compensated for lower atmospheric oxygen.

28 Mar 2026 12:30pm GMT

feedPlanet Arch Linux

Building a guitar trainer with embedded Rust

All I wanted was to learn how to play guitar, but I ended up building a DIY kit for it.

28 Mar 2026 12:00am GMT

27 Mar 2026

feedOSnews

Running a Plan 9 network on OpenBSD

This guide describes how you can install a Plan 9 network on an OpenBSD machine (it will probably work on any unix machine though). The authentication service (called "authsrv" on Plan 9) is provided by a unix version: authsrv9. The file service is provided by a program called "u9fs". It comes with Plan 9. Both run from inetd. The (diskless) cpu server is provided by running qemu, booted from only a floppy (so without local storage). Finally, the terminal is provided by the program drawterm. The nice thing about this approach is that you can use all your familiar unix tools to get started with Plan 9 (e.g. you can edit the Plan 9 files with your favorite unix editor). I'm assuming you have read at least something about Plan 9, for example the introduction paper Plan 9 from Bell Labs. ↫ Mechiel Lukkien If you're running OpenBSD, you're already doing something better than everyone else, and if you want to ascend to the next level, this is a great place to start. Of course, the final level, where you leave your earthly roots behind and become a being of pure enlightened energy, is running Plan 9 on real hardware as the universe intended, but let's not put the cart before the horse. One day, all of humanity will just be an endless collection of interconnected cosmic Plan 9 servers, more plentiful than the stars in the known universe.
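The guide has both authsrv9 and u9fs launched by inetd rather than running as standalone daemons. As a rough illustration of what that wiring looks like on OpenBSD, here is a hypothetical inetd.conf fragment; the install paths, user, and command-line flags are assumptions for illustration, not taken from the guide, so consult the authsrv9 and u9fs documentation for the real invocations:

```shell
# /etc/inetd.conf (sketch) -- service name, paths, and flags are
# assumptions; 9fs (port 564) is the standard Plan 9 file service port.
# u9fs serves Plan 9 files from a unix directory tree, one process per
# connection (nowait):
9fs   stream tcp nowait root /usr/local/bin/u9fs    u9fs
# authsrv9 answers Plan 9 authentication requests on the auth port (567):
567   stream tcp nowait root /usr/local/bin/authsrv9 authsrv9
```

After editing the file, inetd picks up the change on restart (e.g. `rcctl restart inetd`), and drawterm or the qemu-hosted cpu server can then reach both services over TCP.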

27 Mar 2026 7:40pm GMT

Will “AI” chatbots be the tobacco of the future?

Towards the end of 2024, Dennis Biesma decided to check out ChatGPT. The Amsterdam-based IT consultant had just ended a contract early. "I had some time, so I thought: let's have a look at this new technology everyone is talking about," he says. "Very quickly, I became fascinated." Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling "a little isolated". He smoked a bit of cannabis some evenings to "chill", but had done so for years with no ill effects. He had never experienced a mental illness. Yet within months of downloading ChatGPT, Biesma had sunk €100,000 (about £83,000) into a business startup based on a delusion, been hospitalised three times and tried to kill himself. ↫ Anna Moore at The Guardian These stories are absolutely heart-wrenching, and it doesn't just happen to people with a history of mental illness or other things you might associate with priming someone for "falling for" an "AI" chatbot. Just a few years in, and it's already clear that these tools pose a real danger to a group of people of indeterminate size, and proper research into the causes is absolutely warranted and needed. On top of that, if there's any evidence of wrongdoing from the companies behind these chatbots - intentionally making them more addictive, luring people in, ignoring established dangers, covering up addiction cases, etc. - lawsuits and regulation are definitely in order. Only yesterday, Facebook and Google lost a landmark trial in the US, with the court ruling that the companies intentionally made social media as addictive as possible, destroying a person's life in the process. Countless similar lawsuits are underway all over the world, and I have a feeling that in a few years to decades, we'll look at unregulated, rampant social media the same way we look at tobacco now.
Perhaps "AI" chatbots will join their ranks, too.

27 Mar 2026 7:30pm GMT

Microsoft removes trust for drivers signed with the cross-signed driver program

Today, we're excited to announce a significant step forward in our ongoing commitment to Windows security and system reliability: the removal of trust for all kernel drivers signed by the deprecated cross-signed root program. This update will help protect our customers by ensuring that only kernel drivers that have passed the Windows Hardware Compatibility Program (WHCP) and been signed can be loaded by default. To raise the bar for platform security, Microsoft will maintain an explicit allow list of reputable drivers signed by the cross-signed program. The allow list ensures a secure and compatible experience for a limited number of widely used, and reputable cross-signed drivers. This new kernel trust policy applies to systems running Windows 11 24H2, Windows 11 25H2, Windows 11 26H1, and Windows Server 2025 in the April 2026 Windows update. All future versions of Windows 11 and Windows Server will enforce the new kernel trust policy. ↫ Peter Waxman at the Windows IT Pro Blog The cross-signed root program dates back to the early 2000s and was discontinued in 2021, so I think it's fair to no longer automatically assume such possibly old and outdated drivers are still to be trusted.

27 Mar 2026 7:18pm GMT

30 Jan 2026

feedPlanet Arch Linux

How to review an AUR package

On Friday, July 18th, 2025, the Arch Linux team was notified that three AUR packages had been uploaded that contained malware. A few maintainers including myself took care of deleting these packages, removing all traces of the malicious code, and protecting against future malicious uploads.

30 Jan 2026 12:00am GMT

19 Jan 2026

feedPlanet Arch Linux

Personal infrastructure setup 2026

While starting this post I realized I have been maintaining personal infrastructure for over a decade! Most of the things I've self-hosted have been for personal use. An email server, a blog, an IRC server, image hosting, an RSS reader and so on. All of these things have been a bit all over the place and never properly streamlined. Some have been in containers, some have just been flat files with an nginx service in front, and some have been a randomly installed Debian package from somewhere I've since forgotten.

19 Jan 2026 12:00am GMT