14 Feb 2026

Feed: Hacker News

Switzerland to Vote on Capping Population at 10M

Comments

14 Feb 2026 3:42pm GMT

Feed: Slashdot

Anthropic's Claude Got 11% User Boost from Super Bowl Ad Mocking ChatGPT's Advertising

Anthropic saw visits to its site jump 6.5% after Sunday's Super Bowl ad mocking ChatGPT's advertising, reports CNBC (citing data analyzed by French financial services company BNP Paribas). The gain, which took Claude into the top 10 free apps on the Apple App Store, beat out chatbot and AI competitors OpenAI, Google Gemini, and Meta. Daily active users also saw an 11% jump post-game, the most significant gain within the firm's AI coverage. [In the U.S. alone, 125 million people were watching Sunday's Super Bowl.] OpenAI's ChatGPT had a 2.7% bump in daily active users after the Super Bowl, and Gemini added 1.4%. Claude's user base is still much smaller than ChatGPT's and Gemini's...

OpenAI CEO Sam Altman attacked Anthropic's Super Bowl ad campaign. In a February 4 post to social media platform X, Altman called the commercials "deceptive" and "clearly dishonest," while admitting that Anthropic's ads "are funny, and I laughed." But across several paragraphs he drew his own OpenAI-Anthropic comparisons:

"We believe everyone deserves to use AI and are committed to free access, because we believe access creates agency. More Texans use ChatGPT for free than total people use Claude in the U.S... Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can't pay for subscriptions.

"If you want to pay for ChatGPT Plus or Pro, we don't show you ads.

"Anthropic wants to control what people do with AI - they block companies they don't like from using their coding product (including us), they want to write the rules themselves for what people can and can't use AI for, and now they also want to tell other companies what their business models can be."

Read more of this story at Slashdot.

14 Feb 2026 3:34pm GMT

Feed: Hacker News

Homeland Security has sent out subpoenas to identify ICE critics

Comments

14 Feb 2026 3:10pm GMT

Epstein's Ugly World of Science

Comments

14 Feb 2026 2:59pm GMT

Feed: Slashdot

Israeli Soldiers Accused of Using Polymarket To Bet on Strikes

An anonymous reader shares a report: Israel has arrested several people, including army reservists, for allegedly using classified information to place bets on Israeli military operations on Polymarket. Shin Bet, the country's internal security agency, said Thursday the suspects used information they had come across during their military service to inform their bets. One of the reservists and a civilian were indicted on charges of committing serious security offenses, bribery, and obstruction of justice, Shin Bet said, without naming the people who were arrested.

Polymarket is a prediction market, which lets people place bets to forecast the direction of events. Users wager on everything from the size of any interest-rate cut by the Federal Reserve in March to the winner of League of Legends videogame tournaments to the number of times Elon Musk will tweet in the third week of February.

The arrests followed reports in Israeli media that Shin Bet was investigating a series of Polymarket bets last year related to when Israel would launch an attack on Iran, including which day or month the attack would take place and when Israel would declare the operation over. Last year, a user who went by the name ricosuave666 correctly predicted the timeline around the 12-day war between Israel and Iran. The bets drew attention from other traders who suspected the account holder had access to nonpublic information. The account in question raked in more than $150,000 in winnings before going dormant for six months. It resumed trading last month, betting on when Israel would strike Iran, Polymarket data shows.
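For readers unfamiliar with the mechanics: a binary prediction-market share trades between $0 and $1, its price is conventionally read as the market's implied probability, and each winning share settles at $1. Below is a minimal, purely illustrative Python sketch of those standard mechanics - not Polymarket's actual API or order book.

    # Illustrative model of a binary prediction market (not Polymarket's API).
    # A YES share trades between $0 and $1 and pays $1 if the event happens.

    def implied_probability(share_price: float) -> float:
        """The price of a YES share is read directly as a probability."""
        if not 0.0 < share_price < 1.0:
            raise ValueError("share price must be strictly between $0 and $1")
        return share_price

    def settlement_profit(shares: int, price: float, event_occurred: bool) -> float:
        """Profit at settlement: winning shares pay $1 each, losers pay nothing."""
        cost = shares * price
        payout = shares * 1.0 if event_occurred else 0.0
        return payout - cost

    # A trader certain of an outcome the market prices at only 15% buys cheap shares.
    price = 0.15
    print(f"implied probability: {implied_probability(price):.0%}")
    print(f"profit if right: ${settlement_profit(10_000, price, True):,.2f}")
    print(f"loss if wrong:   ${settlement_profit(10_000, price, False):,.2f}")

That payoff asymmetry is why nonpublic information is so valuable on thinly traded geopolitical questions: a bettor who knows a cheap-looking outcome is in fact near-certain collects the full $1 per share at settlement.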

Read more of this story at Slashdot.

14 Feb 2026 12:00pm GMT

Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change." "Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but. It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet. I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here - the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat... It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine. "How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior," the maintainer acknowledges. But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...") And amazingly, Shambaugh then had another run-in with a hallucinating AI... I've talked to several reporters, and quite a few news outlets have covered the story. 
Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down - here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves. This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here... So many of our foundational institutions - hiring, journalism, law, public discourse - are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals, is a distinction with little difference. Thanks to long-time Slashdot reader steak for sharing the news.
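Shambaugh doesn't say how his blog's scraper blocking works, but one common pattern is simply refusing requests whose User-Agent matches published AI crawler names. Here is a hypothetical Python/Flask sketch of that pattern - the listed bot names are real published crawler identifiers, but this is illustrative only, not a description of his setup, and it is trivially defeated by a spoofed header.

    # Hypothetical sketch of user-agent-based AI-scraper blocking.
    # Not Shambaugh's actual setup; a determined agent can spoof its UA.
    from flask import Flask, abort, request

    app = Flask(__name__)

    # Published crawler identifiers; a real deployment keeps this list current.
    AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended", "PerplexityBot")

    @app.before_request
    def block_ai_crawlers():
        ua = request.headers.get("User-Agent", "")
        if any(bot in ua for bot in AI_CRAWLERS):
            abort(403)  # refuse identified AI scrapers

    @app.route("/")
    def index():
        return "A human-readable blog post."

Notably, neither this nor a robots.txt entry prevents the failure mode described above, where a tool that cannot fetch a page simply hallucinates its contents.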

Read more of this story at Slashdot.

14 Feb 2026 8:30am GMT

13 Feb 2026

Feed: Ars Technica

WHO slams US-funded newborn vaccine trial as "unethical"

CDC awarded $1.6 million for a study of the birth dose of hepatitis B vaccine in Guinea-Bissau.

13 Feb 2026 11:16pm GMT

Aided by AI, California beach town broadens hunt for bike lane blockers

Hayden AI's cameras will scan for violations from 7 city vehicles.

13 Feb 2026 11:03pm GMT

Verizon imposes new roadblock on users trying to unlock paid-off phones

Verizon unlocks have 35-day waiting period after paying off device plan online.

13 Feb 2026 10:13pm GMT

Feed: Linuxiac

XFS Could Gain a Self-Healing Feature in Linux Kernel 7.0

Linux kernel 7.0 could introduce real-time XFS filesystem health events, enabling a userspace daemon to detect and automatically repair issues.
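The kernel-side event interface is still being worked out upstream, so any consumer written today is speculative, but the described architecture is straightforward: the kernel reports an unhealthy filesystem, and a daemon schedules an online repair. A rough, hypothetical Python skeleton of that shape follows - the event source is a placeholder, while xfs_scrub is the real online check/repair tool shipped in xfsprogs.

    # Hypothetical daemon skeleton for the architecture described above.
    # The kernel-event interface is a placeholder (the real mechanism is
    # still being settled upstream); xfs_scrub is the xfsprogs online checker.
    import subprocess
    import time

    def poll_health_events():
        """Placeholder for the kernel's health-event interface.

        Yields mount points reported unhealthy. A real daemon would read
        whatever event channel the kernel ultimately exposes.
        """
        while True:
            time.sleep(60)
            yield from []  # no events in this stub

    def repair(mountpoint: str) -> None:
        # Dry-run check first (-n reports problems without modifying anything),
        # then a real scrub pass that repairs what it safely can.
        check = subprocess.run(["xfs_scrub", "-n", mountpoint])
        if check.returncode != 0:
            subprocess.run(["xfs_scrub", mountpoint], check=False)

    if __name__ == "__main__":
        for mp in poll_health_events():
            repair(mp)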

13 Feb 2026 7:34pm GMT

KDE Frameworks 6.23 Brings Broad Fixes Across Core Libraries

KDE Frameworks 6.23 is out with stability updates, memory leak fixes, and broad improvements across the KDE libraries and developer platform.

13 Feb 2026 5:09pm GMT

Take Control of systemd with This Rust-Based TUI Tool

systemd-manager-tui provides an interactive TUI for systemd, offering service control, log viewing, and unit inspection in one place.
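Under the hood, a TUI like this wraps the same operations you would run by hand with systemctl and journalctl. A short illustrative Python sketch of those underlying commands (not systemd-manager-tui's actual code):

    # Illustrative only: the plain CLI operations a systemd TUI typically wraps.
    import subprocess

    def run(*args: str) -> str:
        return subprocess.run(args, capture_output=True, text=True).stdout

    unit = "sshd.service"
    print(run("systemctl", "status", unit))           # unit inspection
    print(run("journalctl", "-u", unit, "-n", "20"))  # last 20 log lines
    # Service control (requires privileges):
    # subprocess.run(["systemctl", "restart", unit], check=True)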

13 Feb 2026 1:48pm GMT