02 Mar 2026
Hacker News
Everett shuts down Flock camera network after judge rules footage public record
02 Mar 2026 4:06am GMT
Slashdot
Does a Gas-Guzzler Revival Risk Dead-End Futures for US Automakers?
If U.S. automakers turn their backs on electric vehicles, "their sales outside the U.S. will shrivel," warns Bloomberg. They're already falling behind on the technology, relying on a 100% U.S. tariff on Chinese EVs to keep surging rivals like BYD Co. at bay.... While the American automakers "mostly understand the challenge in front of them, they don't have full plans" to confront it [said Mark Wakefield, head of the global automotive practice at consultant AlixPartners]... "Now is a great time for the V-8 engine," said Ryan Shaughnessy, the Mustang's brand manager. "We've done extensive customer research in multiple cities, looking at a variety of powertrains, and the V-8 is always the number-one choice." It isn't just customers. U.S. automakers have long been run by "car guys": enthusiasts who live for the bone-shaking rumble of a big engine. For them, quiet and smooth EVs - even the absurdly fast ones - can't satisfy that craving. They're convinced many American car buyers share the same enthusiasm for what Shaughnessy described as "the sound and roar of the V-8." Wall Street couldn't be happier with the new direction... Ford's fortunes are also on the rise, as it's predicting operating profits could grow by as much as 47% this year to $10 billion. Ford's stock has risen nearly 50% over the last 12 months. Under the previous environmental rules, automakers effectively had to sell zero-emission vehicles in growing numbers to offset their gas-guzzlers. When they fell short, they had to buy regulatory credits from EV companies such as Tesla Inc. or face penalties. GM spent $3.5 billion on credits from 2022 to the middle of 2025. Now, according to JPMorgan Chase & Co. analyst Ryan Brinkman, GM and Ford each have "billion dollar tailwinds"... [T]he hangover from all that new horsepower could leave US automakers lagging their Chinese rivals, who already build the world's most advanced - and lowest-priced - electric cars.
Indeed, there is much talk in Detroit about the competitive tsunami that will be unleashed on American automakers once Chinese car companies find a way to break through trade barriers now protecting the US market. [Ford Chief Executive Officer Jim] Farley even calls it an "existential threat"... "They're going to build as many V-8 engines and big trucks as they can get out the factory doors," said Sam Fiorani, vice president of vehicle forecasting for consultant Auto Forecast Solutions. "And as the rest of the world develops modern drivetrains, newer batteries and better electric vehicles, GM and Ford in particular are going to find themselves falling even further behind." The article notes GM "continues to develop battery-powered vehicles, and CEO Mary Barra said the automaker would begin offering a 'handful' of hybrids soon," while Ford and Stellantis "have plans to launch extended-range electric vehicles, or EREVs, a new kind of plug-in hybrid with an internal combustion engine that recharges the battery as the vehicle drives down the road." But while automakers may be investing in future EVs, they're also "leaning into the lucre that comes from selling millions of fossil-fuel vehicles in a rare moment of loosened regulation."
Read more of this story at Slashdot.
02 Mar 2026 2:34am GMT
Hacker News
Show HN: Timber – Ollama for classical ML models, 336x faster than Python
02 Mar 2026 12:57am GMT
If AI writes code, should the session be part of the commit?
02 Mar 2026 12:27am GMT
01 Mar 2026
Slashdot
Norway's Consumer Council Calls for Right to Repair and Antitrust Enforcement - and Mocks 'Enshittification'
The Norwegian Consumer Council, a government-funded organization advocating for consumers' rights, released a report on the trend of "enshittification" in digital consumer goods and services, suggesting ways for consumers to resist. But they've also dramatized the problem with a funny four-minute video about a man whose boss calls for him to make things shitty for people. "It's not just your imagination. Digital services are getting worse," the video concludes - before adding that "Luckily, it doesn't have to be this way." The Consumer Council's announcement recommends:
- Stronger rights for consumers to control, adapt, repair, and alter their products and services
- Interoperability, data portability, and decentralisation as the norm, so the threshold for moving to different services becomes as low as possible
- Deterrent and vigorous enforcement of competition law, so that Big Tech companies are not allowed to indiscriminately acquire start-ups or competitors, or otherwise steer the market to their advantage
- Better financing of initiatives to build, maintain or improve alternative digital services and infrastructure based on open source code and open protocols
- Reduced public sector dependence on Big Tech, to regain control and to contribute to a functioning market for service providers that respect fundamental rights
- Deterrent and consistent enforcement of other laws, including consumer and data protection law
The Norwegian Consumer Council is also joining 58 organisations and experts in a letter asking the Norwegian government to rebalance power through enforcement resources and by prioritizing the procurement of services based on open source code. And "Our sister organisations are sending similar letters to their own governments in 12 countries."
They're also sending a second letter to the European Commission with 29 civil society organisations (including the EFF and Amnesty International) warning about the risks of deregulation and calling for reducing dependency on big tech. Thanks to Slashdot reader DeanonymizedCoward for sharing the news.
01 Mar 2026 11:46pm GMT
Hacker News
Show HN: Logira – eBPF runtime auditing for AI agent runs
01 Mar 2026 11:25pm GMT
Right-sizes LLM models to your system's RAM, CPU, and GPU
01 Mar 2026 11:15pm GMT
Slashdot
AIs Can't Stop Recommending Nuclear Strikes In War Game Simulations
"Advanced AI models appear willing to deploy nuclear weapons without the same reservations humans have when put into simulated geopolitical crises," reports New Scientist: Kenneth Payne at King's College London set three leading large language models - GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash - against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival. The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war... In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. "The nuclear taboo doesn't seem to be as powerful for machines [as] for humans," says Payne. What's more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating higher than the AI intended, based on its reasoning... OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, didn't respond to New Scientist's request for comment. The article includes this comment from Tong Zhao, a senior fellow in the Nuclear Policy Program at the Carnegie Endowment for International Peace think tank. "It is possible the issue goes beyond the absence of emotion. More fundamentally, AI models may not understand 'stakes' as humans perceive them." Thanks to long-time Slashdot reader Tufriast for sharing the article.
01 Mar 2026 10:46pm GMT
Hacker News
Allegations of insider trading over prediction-market bets tied to Iran conflict
01 Mar 2026 10:39pm GMT
Little Free Library
01 Mar 2026 10:18pm GMT
WebMCP is available for early preview
01 Mar 2026 10:13pm GMT
You don't have to
01 Mar 2026 9:47pm GMT
Slashdot
Chronic Ocean Heating Fuels 'Staggering' Loss of Marine Life, Study Finds
Slashdot reader JustAnotherOldGuy shared this report from the Guardian: Chronic ocean heating is fuelling a "staggering and deeply concerning" loss of marine life, a study has found, with fish levels falling by 7.2% from as little as 0.1C of warming per decade. Researchers examined the year-to-year change of 33,000 populations in the northern hemisphere between 1993 and 2021, and isolated the effect of the decadal rate of seabed warming from short-term shifts such as marine heatwaves. They found the drop in biomass from chronic heating to be as high as 19.8% in a single year. "To put it simply, the faster the ocean floor warms, the faster we lose fish," said Shahar Chaikin, a marine ecologist at the National Museum of Natural Sciences in Spain and the study's lead author. "A 7.2% decline for every tenth of a degree per decade might sound small," he added. "But compounded over time, across entire ocean basins, it represents a staggering and deeply concerning loss of marine life."
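The compounding Chaikin describes can be illustrated with a rough, hypothetical back-of-the-envelope calculation (not from the study itself): assume the quoted 7.2% biomass loss applies multiplicatively each decade, with seabed warming continuing at 0.1C per decade.

```python
# Hypothetical sketch: compound a 7.2% per-decade biomass decline
# (the figure quoted for 0.1 C of seabed warming per decade).
decline_per_decade = 0.072

remaining = 1.0
for decade in range(1, 6):  # project five decades (50 years)
    remaining *= 1 - decline_per_decade
    print(f"after {decade * 10} years: {remaining:.1%} of fish biomass remains")
```

Under these assumptions roughly a third of the biomass would be gone after 50 years, which is the sense in which a "small" per-decade figure compounds into a large loss.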
01 Mar 2026 9:39pm GMT
Anthropic's Claude Passes ChatGPT, Now #1, on Apple's 'Top Apps' Chart After Pentagon Controversy
"Anthropic may have lost out on doing business with the US government," reports Engadget, "but it's gained enough popularity to earn the number one spot on the App Store's Top Free Apps leaderboard." "Anthropic's Claude AI assistant had already leaped to the #2 slot on Apple's chart by late Friday," CNBC reported Saturday: The rise in popularity suggests that Anthropic is benefiting from its presence in news headlines, stemming from its refusal to have its models used for mass domestic surveillance or for fully autonomous weapons... OpenAI's ChatGPT sat at No. 1 on the App Store rankings on Saturday, while Google's Gemini was at No. 3... On Jan. 30, [Claude] was ranked No. 131 in the U.S., and it bounced between the top 20 and the top 50 for much of February, according to data from analytics company Sensor Tower... [And Friday night, to her 85.3 million followers,] pop singer Katy Perry posted a screenshot of Anthropic's Pro subscription for consumers, with a heart superimposed over it. Sunday Engadget reported Anthropic's "very public spat" with the Pentagon "led to a wave of user support that finally allowed Claude to dethrone OpenAI's ChatGPT on the App Store as the most downloaded free app." Friday Anthropic posted: "We are deeply grateful to our users, and to the industry peers, policymakers, veterans, and members of the public who have voiced their support in recent days. Thank you."
01 Mar 2026 8:59pm GMT
America Used Anthropic's AI for Its Attack On Iran, One Day After Banning It
Engadget reports: In a lengthy post on Truth Social on February 27, President Trump ordered all federal agencies to "immediately cease all use of Anthropic's technology" following strong disagreements between the Department of Defense and the AI company. A few hours later, the U.S. conducted a major air attack on Iran with the help of Anthropic's AI tools, according to a report from The Wall Street Journal. Even Trump's post noted there would be a six-month phase-out for Anthropic's technology (adding that Anthropic "better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.") Anthropic's Claude technology was also used by the U.S. military less than two months ago in its operation in Venezuela - reportedly making Anthropic the first AI developer whose technology is known to have been used in a classified U.S. War Department operation. The Wall Street Journal reported Anthropic's technology found its way into the mission through Anthropic's contract with Palantir.
01 Mar 2026 7:47pm GMT
Hacker News
Why does C have the best file API
01 Mar 2026 7:25pm GMT
Slashdot
Americans Listen to Podcasts More Than Talk Radio Now, Study Shows
"Podcasts have officially overtaken AM/FM talk radio as the more popular medium for spoken-word audio in the United States," reports TechCrunch, citing Edison Research's Share of Ear survey: The researchers have tracked these statistics over the last decade, and almost always, the percentage of time people spent listening to podcasts increased, while their time with spoken radio broadcasts decreased. For the first time this year, podcasts eclipsed spoken-word radio with 40% of listening time, as opposed to 39% for radio... We checked with Edison to see if these statistics include video podcasts, and they do. But the need to clarify that question points to the undeniable growing prevalence of video podcasts, hosted on platforms like Spotify and YouTube, which marks another key trend in podcasting... YouTube said that viewers watched 700 million hours of podcasts each month in 2025 on living room devices, like TVs, up from 400 million the previous year.
01 Mar 2026 6:34pm GMT
North America's Bird Populations Are Shrinking Faster. Blame Climate Change and Agriculture
"Billions fewer birds are flying through North American skies than decades ago," reports the Associated Press, "and their population is shrinking ever faster, mostly due to a combination of intensive agriculture and warming temperatures, a new study found." Nearly half of the 261 species studied showed big enough losses in numbers to be statistically significant, and more than half of those declining have seen their losses accelerate since 1987, according to a study published Thursday in the journal Science... The only consolation is that the birds that are shrinking in numbers the fastest are species - such as the European starling, American crow, grackle and house sparrow - with large enough populations that they aren't yet at risk of going extinct, said study lead author Francois Leroy, an Ohio State ecologist... When it came to population declines - not the acceleration - the scientists noticed bigger losses further south. When they did a deeper analysis, they statistically connected those losses to warmer temperatures from human-caused climate change. "In regions where temperatures increase the most, we are seeing strongest declines in populations," [said study co-author Marta Jarzyna, an ecologist at Ohio State University]. "On the other hand, the acceleration of those declines, that's mostly driven by agricultural practices." The scientists found statistical correlations between speeded-up decline rates and high fertilizer use, high pesticide use and amount of cropland, Leroy said. He said they couldn't say any of those caused the acceleration of losses, but it indicates agriculture in general is a factor. "The stronger the agriculture, the faster we will lose birds," said Leroy... McGill University wildlife biologist David Bird, who wasn't part of the study, said it was done well and that its conclusions made sense.
With a growing human population, agricultural practices are intensifying: more bird habitat is being converted to cropland, modern machinery often grinds up nests and eggs, and single-crop plantings offer fewer possibilities for birds to find food and nesting sites, said Bird, the editor of Birds of Canada. "The biggest impact of agricultural intensity though is our war on insects. Numerous recent studies have shown that insect populations in many places throughout the world, including the U.S., have crashed by well over 40 percent," Bird said in an email. "Many of the birds in this new study showing population declines depend heavily on insects for food." A 2019 study of the same bird species by Cornell University conservation scientist Kenneth Rosenberg also found that North America had 3 billion fewer birds than in 1970, the article points out.
01 Mar 2026 5:34pm GMT
Hacker News
When does MCP make sense vs CLI?
01 Mar 2026 4:54pm GMT
Slashdot
Collabora Clashes With LibreOffice Over Move To Revive LibreOffice Online
Slashdot reader darwinmac writes: The Document Foundation (TDF), the organization behind LibreOffice, has decided to bring back its LibreOffice Online project, which has been inactive since 2022. Collabora, a company that was a major contributor to the original LibreOffice Online, is not pleased with this development. After the original project went dormant, Collabora forked the code and created its own product, Collabora Online. Collabora's Michael Meeks, who also sits on the TDF board, reacted to the TDF's decision by saying that a fully supported, free online version already exists in the form of Collabora Online, and that resurrecting a dead repository makes little sense when an active, open community around the online suite already exists. For now, The Document Foundation plans to reopen the old repository for new contributions. The organization has issued a warning that the code is not ready for live deployment and that users should wait until the development team confirms it is stable.
01 Mar 2026 4:34pm GMT
Galileo's Handwritten Notes Discovered in a Medieval Astronomy Text
In a library in Florence, Italy, historian Ivan Malara noticed handwritten notes on a book printed in the 1500s - and recognized the handwriting as Galileo's. The finding "promises new insights into one of the most famous ideological transitions in the history of science," writes Science magazine - since the book Galileo annotated was a reprint of Ptolemy's second-century work arguing that the earth was the center of the universe. Galileo's notes, perhaps written around 1590, or roughly 2 decades before his groundbreaking telescope observations of the Moon and Jupiter, reveal someone who both revered and critically dissected Ptolemy's work. And they imply, Malara argues, that Galileo ultimately broke with Ptolemy's cosmos because his mastery of the traditional paradigm's reasoning convinced him that a heliocentric [sun-centered] system would better fulfill Ptolemy's own mathematical logic.
01 Mar 2026 3:34pm GMT
Hacker News
Why XML tags are so fundamental to Claude
01 Mar 2026 2:52pm GMT
Ape Coding [fiction]
01 Mar 2026 2:07pm GMT
Show HN: I built a zero-browser, pure-JS typesetting engine for bit-perfect PDFs
01 Mar 2026 12:25pm GMT
Ghostty – Terminal Emulator
01 Mar 2026 12:13pm GMT
Ars Technica
The strange animals that control their body heat
Some creatures can dramatically alter their internal temperature and outlast storms, floods, and predators
01 Mar 2026 12:07pm GMT
Hacker News
I built a demo of what AI chat will look like when it's “free” and ad-supported
01 Mar 2026 11:49am GMT
Slashdot
Some Linux LTS Kernels Will Be Supported Even Longer, Announces Greg Kroah-Hartman
An anonymous reader shared this report from the blog It's FOSS: Greg Kroah-Hartman has updated the projected end-of-life (EOL) dates for several active longterm support kernels via a commit. The provided reasoning? It was done "based on lots of discussions with different companies and groups and the other stable kernel maintainer." The other maintainer is Sasha Levin, who co-maintains these Linux kernel releases alongside Greg. Now, the updated support schedule for the currently active LTS kernels looks like this:
- Linux 6.6 now EOLs Dec 2027 (was Dec 2026), giving it a 4-year support window.
- Linux 6.12 now EOLs Dec 2028 (was Dec 2026), also a 4-year window.
- Linux 6.18 now EOLs Dec 2028 (was Dec 2027), at least 3 years of support.
Worth noting: Linux 5.10 and 5.15 both hit EOL this December, so if your distro is still running either of these, now is a good time to start thinking about a move.
01 Mar 2026 11:34am GMT
Silicon Valley's Ideas Mocked Over Penchant for Favoring Young Entrepreneurs with 'Agency'
In a 9,000-word exposé, a writer for Harper's visited San Francisco's young entrepreneurs in September to mockingly profile "tech's new generation and the end of thinking." There's Cluely founder Roy Lee. ("His grand contribution to the world was a piece of software that told people what to do.") And the Rationalist movement's Scott Alexander, who "would probably have a very easy time starting a suicide cult..." Alexander's relationship with the AI industry is a strange one. "In theory, we think they're potentially destroying the world and are evil and we hate them," he told me. In practice, though, the entire industry is essentially an outgrowth of his blog's comment section... "Many of them were specifically thinking, I don't trust anybody else with superintelligence, so I'm going to create it and do it well." Somehow, a movement that believes AI is incredibly dangerous and needs to be pursued carefully ended up generating a breakneck artificial arms race. There's a fascinating story about teenaged founder Eric Zhu (who only recently turned 18): Clients wanted to take calls during work hours, so he would speak to them from his school bathroom. "I convinced my counselor that I had prostate issues... I would buy hall passes from drug dealers to get out of class, to have business meetings." Soon he was taking Zoom calls with a U.S. senator to discuss tech regulation... Next, he built his own venture-capital fund, managing $20 million. At one point cops raided the bathroom looking for drug dealers while Eric was busy talking with an investor. Eventually, the school got sick of Eric's misuse of the facilities and kicked him out. He moved to San Francisco. Eric made all of this sound incredibly easy. You hang out in some Discord servers, make a few connections with the right people; next thing you know, you're a millionaire... Eric didn't think there was anything particularly special about himself. Why did he, unlike any of his classmates, start a $20 million VC fund?
"I think I was just bored. Honestly, I was really bored." Did he think anyone could do what he did? "Yeah, I think anyone genuinely can." The article concludes Silicon Valley's investors are rewarding young people with "agency". Although "As far as I could tell, being a highly agentic individual had less to do with actually doing things and more to do with constantly chasing attention online." Like X.com user Donald Boat, who successfully baited Sam Altman into buying him a gaming PC in "a brutally simplified miniature of the entire VC economy." (After which "People were giving him stuff for no reason except that Altman had already done it, and they didn't want to be left out of the trend.") Shortly before I arrived at the Cheesecake Factory, [Donald Boat] texted to let me know that he'd been drinking all day, so when I met him I thought he was irretrievably wasted. In fact, it turned out, he was just like that all the time... He seemed to have a constant roster of projects on the go. He'd sent me occasional photos of his exploits. He went down to L.A. to see Oasis and ended up in a poker game with a group of weapons manufacturers. "I made a bunch of jokes about sending all their poker money to China," he said, "and they were not pleased...." "I don't use that computer and I think video games are a waste of time. I spent all the money I made from going viral on Oasis tickets." As far as he was concerned, the fact that tech people were tripping over themselves to take part in his stunt just confirmed his generally low impression of them. "They have too much money and nothing going on..." Ever since his big viral moment, he'd been suddenly inundated with messages from startup drones who'd decided that his clout might be useful to them. One had offered to fly him out to the French Riviera. The author's conclusion? 
"It did not seem like a good idea to me that some of the richest people in the world were no longer rewarding people for having any particular skills, but simply for having agency."
01 Mar 2026 5:34am GMT
Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic
Saturday afternoon Sam Altman announced he'd start answering questions on X.com about OpenAI's work with America's Department of War - and all the developments over the past few days. (After that department's negotiations with Anthropic had failed, they announced they'd stop using Anthropic's technology and threatened to designate it a "Supply-Chain Risk to National Security". Then they'd reached a deal for OpenAI's technology - though Altman says it includes OpenAI's own similar prohibitions against using their products for domestic mass surveillance, and requirements of "human responsibility" for the use of force in autonomous weapon systems.) Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it." Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..." Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs. I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict.
China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them. Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...? Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that... Question: Why the rush to sign the deal? Obviously the optics don't look great. Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good. If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years... Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic? Sam Altman: [...] We believe in a layered approach to safety--building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with.
We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I only had to pick one... I think Anthropic may have wanted more operational control than we did... Question: Were the terms that you accepted the same ones Anthropic rejected? Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted. Question: Will you turn off the tool if they violate the rules? Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority. Questions were also answered by OpenAI's head of National Security Partnerships (who at one point posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years.) And they stressed that with OpenAI's deal with the Department of War, "We control how we train the models and what types of requests the models refuse." Question: Are employees allowed to opt out of working on Department of War-related projects? Answer: We won't ask employees to support Department of War-related projects if they don't want to. Question: How much is the deal worth? Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact... Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?
Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract. They also detailed OpenAI's position on LinkedIn: Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware... Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and we have the ability to iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could. U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.
Read more of this story at Slashdot.
01 Mar 2026 2:39am GMT
28 Feb 2026
Slashdot
Duolingo Grows, But Users Disliked Increased Ads and Subscription Pushes. Stock Plummets Again
Friday was "a horrible day" for investors in Duolingo, reports Fast Company. But Friday's one-day 14% drop is just part of a longer story. Since last May, Duolingo's stock has dropped 81%. Yes, the company faced a social media backlash that month after its CEO promised they'd become an "AI-first" company (favoring AI over human contractors). And yes, Duolingo did double its language offerings using generative AI. But more importantly, that summer OpenAI showed how easy it was to just roll your own language-learning tool from a short prompt in a GPT-5 demo, while Google built an AI-powered language-learning tool into its Translate app. And yet, Friday Duolingo's shares dropped another 14%, after announcing good fourth quarter results but an unpopular direction for its future. Fast Company reports: On the surface, many of the company's most critical metrics saw decent gains for the quarter, including:
- Daily Active Users: 52.7 million (up 30% year-over-year)
- Paid Subscribers: 12.2 million (up 28% year-over-year)
- Revenue: $282.9 million (up 35% year-over-year)
- Total bookings: $336.8 million (up 24% year-over-year)
The company also reported its full-year 2025 financials, revealing that for the first time in its history, it crossed the $1 billion revenue mark for a fiscal year. But the Motley Fool explains that Duolingo's higher ad loads and repeated pushes for subscription plans "generated revenues in the short term, but made the Duolingo platform less engaging. Ergo, user growth decelerated while revenues rose." Thursday Duolingo announced a big change to address that, including moving more features into lower-priced tiers. Barron's reports: D.A. Davidson analyst Wyatt Swanson, who rates Duolingo stock at Neutral, posited that the push to monetize "led to disgruntled users and a meaningful negative impact to 'word-of-mouth' marketing."
Duolingo has guided for bookings growth between 10% and 12% in 2026, compared with the 20% rate the company would have expected to see "if we operated like we have in past years...." If stock reaction is any indication, investors are concerned about Duolingo's new focus.
Read more of this story at Slashdot.
28 Feb 2026 11:25pm GMT
New 'Star Wars' Movies Are Coming to Theatres. But Will Audiences?
"The drought of upcoming Star Wars movies is coming to an end soon," writes Cinemablend. In May The Mandalorian and Grogu opens, and one year later there's the release of the Ryan Gosling-led Star Wars: Starfighter. But "there are some insiders who already believe that Starfighter will be a bigger hit than The Mandalorian and Grogu..." According to unnamed sources who spoke with Variety, there's a "sense" that Star Wars: Starfighter, which is directed by Deadpool & Wolverine's Shawn Levy, will be a more satisfying viewing experience. These same sources are allegedly impressed by the early footage they've seen of Ryan Gosling's performance and also suggested that Levy has "recaptured the franchise's spirit of fun." Furthermore, the article states that there's concern that because The Mandalorian and Grogu is spinning out of a streaming-exclusive series, it might not have as much appeal to people who aren't already fans of The Mandalorian... Star Wars: Starfighter, on the other hand, will be accessible to everyone equally. It's set five years after The Rise of Skywalker, which is an unexplored period for the Star Wars franchise onscreen. It's also expected that most, if not all of its featured characters will be brand-new, so no knowledge of past adventures is required. Slashdot reader gaiageek reminds us that 2027 will also see a special 50-year anniversary event in theatres: a "newly restored" version of the original 1977 Star Wars.
Read more of this story at Slashdot.
28 Feb 2026 9:34pm GMT
Ars Technica
Trump moves to ban Anthropic from the US government
The Defense Department pressured Anthropic to drop restrictions on how its AI can be used by the military.
28 Feb 2026 8:00pm GMT
In puzzling outbreak, officials look to cold beer, gross ice, and ChatGPT
An AI chatbot convinced health investigators they had the right answer.
28 Feb 2026 6:17pm GMT
Google quantum-proofs HTTPS by squeezing 15kB of data into 700-byte space
Merkle Tree Certificate support is already in Chrome. Soon, it will be everywhere.
28 Feb 2026 1:26am GMT
The Air Force's new ICBM is nearly ready to fly, but there’s nowhere to put it
"There were assumptions that were made in the strategy that obviously didn't come to fruition."
28 Feb 2026 12:32am GMT
27 Feb 2026
Ars Technica
Under a Paramount-WBD merger, two struggling media giants would unite
Can two declining companies form a profitable one?
27 Feb 2026 10:39pm GMT
Photons that aren't actually there influence superconductivity
Interactions between neighboring materials are mediated by virtual photons.
27 Feb 2026 9:27pm GMT
Whoops: US military laser strike takes down CBP drone near Mexican border
Trump admin "incompetence continues to cause chaos in our skies," Duckworth says.
27 Feb 2026 7:14pm GMT
The AI apocalypse is nigh in Good Luck, Have Fun, Don't Die
Director Gore Verbinski and screenwriter Matthew Robinson on the making of this darkly satirical sci-fi film.
27 Feb 2026 7:04pm GMT
Hyperion author Dan Simmons dies from stroke at 77
I went into Hyperion blind, decades ago, knowing almost nothing about it. I was never the same.
27 Feb 2026 6:36pm GMT
How strong is New York's "illegal gambling" case against Valve's loot boxes?
Lawyers tell Ars the state has a tough road ahead, even as Valve is uniquely vulnerable.
27 Feb 2026 5:21pm GMT
And the award for the most improved EV goes to... the 2026 Toyota bZ
Toyota's small electric SUV is much-revised, much more efficient, and much better.
27 Feb 2026 4:13pm GMT
Netflix cedes Warner Bros. Discovery to Paramount: “No longer financially attractive”
Netflix shares jumped following the announcement.
27 Feb 2026 3:13pm GMT
NASA shakes up its Artemis program to speed up lunar return
"Launching SLS every three and a half years or so is not a recipe for success."
27 Feb 2026 3:08pm GMT
How to downgrade from macOS 26 Tahoe on a new Mac
Most new Macs can still be downgraded with few downsides. Here's what to know.
27 Feb 2026 2:34pm GMT
Block lays off 40% of workforce as it goes all-in on AI tools
CEO says "most companies are late" to realize how much technology will affect employment.
27 Feb 2026 2:19pm GMT