14 Apr 2026

feedHacker News

Hacker compromises A16Z-backed phone farm, calling them the 'antichrist'

Comments

14 Apr 2026 3:32am GMT

feedSlashdot

Stanford Report Highlights Growing Disconnect Between AI Insiders and Everyone Else

An anonymous reader quotes a report from TechCrunch: AI experts and the public's opinion on the technology are increasingly diverging, according to Stanford University's annual report on the AI industry, which was released Monday. In particular, the report noted a growing trend of anxiety around AI and, in the U.S., concerns about how the technology will impact key societal areas, such as jobs, medical care, and the economy. [...] Stanford's report provides more insight into where all this negativity is coming from, as it summarizes data around public sentiment of AI across various sources. For instance, it pointed to a report from Pew Research published last month, which noted that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years. Expert opinion and public sentiment also greatly diverged in particular areas where AI could have a societal impact. Indeed, 84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same. Plus, a majority (73%) of experts felt positive about AI's impact on how people do their jobs, compared with just 23% of the public. And 69% of experts felt that AI would have a positive impact on the economy. Given the supposed AI-fueled layoffs and disruptions to the workplace, it's not surprising that only 21% of the public felt similarly. Other data from Pew Research, cited by the report, noted that AI experts were less pessimistic on AI's impact on the job market, while nearly two-thirds of Americans (or 64%) said they think AI will lead to fewer jobs over the next 20 years. The U.S. also reported the lowest trust in its government to regulate AI responsibly, compared with other nations, at 31%. 
Singapore ranked highest at 81%, per data pulled from Ipsos found in Stanford's report. Another source looked at regulation concerns on a state-by-state level and concluded that, nationwide, 41% of respondents said federal AI regulation will not go far enough, while only 27% said it would go "too far." Despite the fears and concerns, AI did get one accolade: Globally, those who feel like AI products and services offer more benefits than drawbacks slightly rose from 55% in 2024 to 59% in 2025. But at the same time, those respondents who said that AI makes them "nervous" grew from 50% to 52% during the same period, per data cited by the report's authors.

Read more of this story at Slashdot.

14 Apr 2026 3:30am GMT

feedHacker News

A new spam policy for "back button hijacking"

Comments

14 Apr 2026 3:06am GMT

DaVinci Resolve releases Photo Editor

Comments

14 Apr 2026 2:25am GMT

The Journal of C Language Translation

Comments

14 Apr 2026 1:12am GMT

Lean proved this program correct; then I found a bug

Comments

14 Apr 2026 12:25am GMT

13 Apr 2026

feedHacker News

WiiFin – Jellyfin Client for Nintendo Wii

Comments

13 Apr 2026 11:33pm GMT

The AI revolution in math has arrived

Comments

13 Apr 2026 11:26pm GMT

feedSlashdot

Apple AI Glasses Will Rival Meta's With Several Styles, Oval Cameras

Bloomberg's Mark Gurman reports that Apple is developing display-free AI smart glasses aimed at rivaling Meta's Ray-Bans, with multiple frame styles, a distinctive oval camera design, and tight iPhone integration. "The idea is to unveil the product at the end of 2026 or early the following year, with the actual release coming in 2027," writes Gurman. From the report: Like Meta's offering, Apple's glasses will be designed to handle everyday uses: capturing photos and videos, syncing with a smartphone for editing and sharing, handling phone calls, listening to notifications, playing music, and enabling hands-free interaction via a voice assistant. In Apple's case, that assistant will be a significantly upgraded Siri coming in iOS 27. The glasses are part of a broader, three-pronged AI wearables strategy that also includes new AirPods and a camera-equipped pendant. Each device is designed to leverage computer vision to interpret the user's surroundings and feed contextual awareness into Siri and Apple Intelligence. That will enable features like improved turn-by-turn map directions and visual reminders. When Apple enters a new product category, it typically offers clear advantages over what's currently available. We saw this with the original iPod, iPhone, iPad and Apple Watch -- and, even though it was a flop, the Vision Pro. That approach won't be as obvious with Apple's upcoming foldable iPhone, but we should see it on full display with the glasses. According to employees working on the project, Apple's strategy is to outdo competitors by tightly integrating the glasses with the iPhone and offering a higher-end build. While Meta relies heavily on partner EssilorLuxottica SA for frames, Apple is unsurprisingly planning to go it alone in terms of design. That also should set it apart from Alphabet Inc.'s Google and Samsung Electronics Co., which are leaning on Warby Parker.
Apple's design team has whipped up at least four different styles and plans to launch some or all of them, I'm told, as well as many color options. The latest units are made from a high-end material called acetate, which is known to be more durable and luxurious than the standard plastic used by many brands. Here are the designs in testing:

- A large rectangular frame, reminiscent of Ray-Ban Wayfarers
- A slimmer rectangular design, similar to the glasses worn by Apple Chief Executive Officer Tim Cook
- Larger oval or circular frames
- A smaller, more refined oval or circular option

Read more of this story at Slashdot.

13 Apr 2026 11:00pm GMT

Hollywood Stars Sign Open Letter Protesting Paramount-Warner Bros Merger

More than 1,000 Hollywood figures, including major actors, writers, and directors, signed an open letter opposing Paramount Skydance's proposed takeover of Warner Bros. Discovery, arguing it would hurt an industry "already under severe strain." The deal is still under regulatory scrutiny in both the U.S. and U.K., while Paramount says the merger would strengthen competition and expand opportunities for creators. NBC News reports: "This transaction would further consolidate an already concentrated media landscape, reducing competition at a moment when our industries -- and the audiences we serve -- can least afford it," the signatories wrote in the letter, published early Monday on a website called Block the Merger. "The result will be fewer opportunities for creators, fewer jobs across the production ecosystem, higher costs, and less choice for audiences in the United States and around the world. Alarmingly, this merger would reduce the number of major U.S. film studios to just four," the signatories added. [T]he open letter illustrates the deep resistance to the deal among many members of Hollywood's creative community. The list of signatories includes A-list stars (Glenn Close, Ben Stiller), celebrated filmmakers (Yorgos Lanthimos, Denis Villeneuve) and acclaimed writers ("The Sopranos" creator David Chase). "Media consolidation has accelerated the disappearance of the mid-budget film, the erosion of independent distribution, the collapse of the international sales market, the elimination of meaningful profit participation, and the weakening of screen credit integrity," the signatories wrote. "Together, these factors threaten the sustainability of the entire creative community," they added. [...] 
Monday's open letter was spearheaded by a group of advocacy organizations -- including the Committee for the First Amendment, a free speech group led by Fonda, who warned that the merger "would be one of the most destructive threats to free speech and creative expression in our history." In the letter, first reported by The New York Times, the signatories expressed support for California Attorney General Rob Bonta, who has said the merger is "not a done deal." "These two Hollywood titans have not cleared regulatory scrutiny -- the California Department of Justice has an open investigation, and we intend to be vigorous in our review," Bonta said in a Feb. 26 post on X. Paramount Skydance said that they "hear and understand the concerns" and are committed to "protecting and expanding creativity." The studio also reiterated its commitment to releasing a minimum of 30 "high-quality feature films annually with full theatrical releases" and "preserving iconic brands with independent creative leadership" to make sure "creators have more avenues for their work, not fewer."

Read more of this story at Slashdot.

13 Apr 2026 10:00pm GMT

feedArs Technica

Retro Rewind re-creates the glorious drudgery of working a '90s video store

What the nostalgic throwback lacks in complexity it makes up for in repetitive charm.

13 Apr 2026 9:58pm GMT

feedHacker News

N-Day-Bench – Can LLMs find real vulnerabilities in real codebases?

Comments

13 Apr 2026 9:54pm GMT

feedArs Technica

Measles takes a plane to Idaho, which has worst vaccination rate in US

In the 2024-2025 school year, only 78.5% of kindergartners had measles vaccination.

13 Apr 2026 9:32pm GMT

Google shoehorned Rust into Pixel 10 modem to make legacy code safer

Cellular modems are complex black boxes of legacy code, but Google is making them safer with Rust.

13 Apr 2026 9:12pm GMT

feedSlashdot

FBI Raids Texas Home of Man Suspected of Firebombing Sam Altman's SF Mansion

The FBI searched the Texas home of a 20-year-old man accused of throwing a Molotov cocktail at Sam Altman's San Francisco residence. Authorities say the suspect also made threats at OpenAI's headquarters, and reports indicate he had written extensively about fears over AI and opposition to AI executives. The suspect reportedly authored a Substack blog and was a member of the Discord server PauseAI, an activist group focused on banning the development of the most powerful AI models to protect the public. In one post, they wrote: "These machines have already shown themselves to be unaligned with the interest of the people creating them. Models have often been found lying, cheating on tasks, and blackmailing their own creators whenever convenient; let alone the broader question of aligning them to whatever general 'human interest' may be." The Houston Chronicle reports: The search happened hours before the Justice Department charged 20-year-old Daniel Moreno-Gama with possession of an unregistered firearm and damage and destruction of property by means of explosives. An FBI spokesperson on Monday morning confirmed agents were executing a search warrant in Spring, but provided no other information. Around the same time, FOX News reported the search was being conducted at the home of Daniel Moreno-Gama, 20, who last week was arrested by San Francisco police on suspicion of attempted murder, making criminal threats and possession of a destructive device. The charges were first reported by the Associated Press. When Moreno-Gama was arrested Friday, he was carrying a document that "identified views opposed to Artificial Intelligence (AI) and the executives of various AI companies," the Associated Press reported. Moreno-Gama has no criminal history in Harris or Montgomery counties, according to public records. [...] Agents had left the cul-de-sac by 1 p.m. It was unclear if they removed any items from the house.
Another incident occurred outside Sam Altman's residence early Sunday morning. "Early Sunday morning, a car stopped and appears to have fired a gun at the Russian Hill home of OpenAI's CEO," reports The San Francisco Standard, citing reports from the local police department. Two suspects were arrested and booked for negligent discharge. UPDATE: The suspect has been charged with attempted murder.

Read more of this story at Slashdot.

13 Apr 2026 9:00pm GMT

feedArs Technica

NZXT agrees to let customers keep their rental PCs in class-action settlement

NZXT will forgive up to $5,000 in debt for customers of the Flex program.

13 Apr 2026 8:55pm GMT

feedHacker News

GitHub Stacked PRs

Comments

13 Apr 2026 8:36pm GMT

feedSlashdot

Meta Is Warned That Facial Recognition Glasses Will Arm Sexual Predators

An anonymous reader quotes a report from Wired: More than 70 civil liberties, domestic violence, reproductive rights, LGBTQ+, labor, and immigrant advocacy organizations are demanding that Meta abandon plans to deploy face recognition on its Ray-Ban and Oakley smart glasses, warning that the feature -- reportedly known inside the company as "Name Tag" -- would hand stalkers, abusers, and federal agents the ability to silently identify strangers in public. The coalition, which includes the ACLU, the Electronic Privacy Information Center, Fight for the Future, Access Now, and the Leadership Conference on Civil and Human Rights, is demanding Meta kill the feature before launch, after internal documents surfaced showing the company hoped to use the current "dynamic political environment" as cover for the rollout, betting that civil society groups would have their resources "focused on other concerns." Name Tag, as revealed in February by The New York Times, would work through the artificial intelligence assistant built into Meta's smart glasses, allowing wearers to pull up information about people in their field of view. Engineers have reportedly been weighing two versions of the feature: one that would only identify people the wearer is already connected to on a Meta platform, and a broader version that could recognize anyone with a public account on a Meta service such as Instagram. The coalition wants Meta to scrap the feature entirely. In a letter to CEO Mark Zuckerberg on Monday, it argues that face recognition in inconspicuous consumer eyewear "cannot be resolved through product design changes, opt-out mechanisms, or incremental safeguards." Bystanders in public have no meaningful way to consent to being identified, it says. 
Meta is also urged to disclose any known instances of its wearables being used in stalking, harassment, or domestic violence cases; disclose any past or ongoing discussions with federal law enforcement agencies, including Immigration and Customs Enforcement and Customs and Border Protection, about the use of Meta wearables or data from them; and commit to consulting civil society and independent privacy experts before integrating biometric identification into any consumer device. "People should be able to move through their daily lives without fear that stalkers, scammers, abusers, federal agents, and activists across the political spectrum are silently and invisibly verifying their identities and potentially matching their names to a wealth of readily available data about their habits, hobbies, relationships, health, and behaviors," write the groups, which also include Common Cause, Jane Doe Inc., UltraViolet, the National Organization for Women, the New York State Coalition Against Domestic Violence, the Library Freedom Project, and Old Dykes Against Billionaire Tech Bros, among others.

Read more of this story at Slashdot.

13 Apr 2026 8:00pm GMT

feedArs Technica

Your tech support company runs scams. Stop—or disguise with more fraud?

Fake it till you make it.

13 Apr 2026 7:58pm GMT

feedHacker News

GAIA – Open-source framework for building AI agents that run on local hardware

Comments

13 Apr 2026 7:28pm GMT

Show HN: Ithihāsas – a character explorer for Hindu epics, built in a few hours

Comments

13 Apr 2026 7:10pm GMT

feedSlashdot

Linux 7.0 Released

"The new Linux kernel was released and it's kind of a big deal," writes longtime Slashdot reader rexx mainframe. "Here is what you can expect." Linuxiac reports: A key update in Linux 7.0 is the removal of the experimental label from Rust support. That (of course) does not make Rust a dominant language in kernel development, but it is still an important step in its gradual integration into the project. Another notable security-related change is the addition of ML-DSA post-quantum signatures for kernel module authentication, while support for SHA-1-based module-signing schemes has been removed. The kernel now includes BPF-based filtering for io_uring operations, providing administrators with improved control in restricted environments. Additionally, BTF type lookups are now faster due to binary search. At the same time, this release continues ongoing cleanup in the kernel's lower layers. The removal of linuxrc initrd code advances the transition to initramfs as the sole early-userspace boot mechanism. Linux 7.0 also introduces NULLFS, an immutable and empty root filesystem designed for systems that mount the real root later. Plus, preemption handling is now simpler on most architectures, with further improvements to restartable sequences, workqueues, RCU internals, slab allocation, and type-based hardening. Filesystems and storage receive several updates as well. Non-blocking timestamp updates now function correctly, and filesystems must explicitly opt in to leases rather than receiving them by default. Phoronix has compiled a list of the many exciting changes. Linus Torvalds himself announced the release, which can be downloaded directly from his git tree or from the kernel.org website. Linux 7.0 has a major new version number but it's "largely a numbering reset [...], not a sign of some unusually disruptive release," notes Linuxiac.

Read more of this story at Slashdot.

13 Apr 2026 7:00pm GMT

feedHacker News

How to make Firefox builds 17% faster

Comments

13 Apr 2026 6:50pm GMT

Visualizing CPU Pipelining (2024)

Comments

13 Apr 2026 6:26pm GMT

feedArs Technica

Sunrise on the Reaping teaser brings us a Second Quarter Quell

"You have no idea what's in store for you. Twice the number of tributes, twice the glory."

13 Apr 2026 6:15pm GMT

feedSlashdot

Booking.com Hit By Data Breach

Booking.com says hackers accessed customer reservation data in a breach that may have exposed booking details, names, email addresses, phone numbers, addresses, and messages shared with accommodations. PCMag reports: On Sunday, users reported receiving emails from Booking.com, warning them that "unauthorized third parties may have been able to access certain booking information associated with your reservation." The email suggests the hackers have already exploited customer information. "We recently noticed suspicious activity affecting a number of reservations, and we immediately took action to contain the issue," Booking.com wrote. "Based on the findings of our investigation to date, accessed information could include booking details and name(s), emails, addresses, phone numbers associated with the booking, and anything that you may have shared with the accommodation." Amsterdam-based Booking.com has now generated new PINs for customer reservations to prevent hackers from accessing them. Still, the incident risks exposing affected customers to potential phishing scams. The Australian Broadcasting Corporation and several Reddit users say they received scam messages from accounts posing as Booking.com.

Read more of this story at Slashdot.

13 Apr 2026 6:00pm GMT

feedHacker News

Someone bought 30 WordPress plugins and planted a backdoor in all of them

Comments

13 Apr 2026 5:54pm GMT

feedArs Technica

IBM folds to Trump anti-DEI push, admits no misconduct but pays $17M penalty

IBM is first firm to pay penalty under Trump's "Civil Rights Fraud Initiative."

13 Apr 2026 5:53pm GMT

feedHacker News

New Orleans's Car-Crash Conspiracy

Comments

13 Apr 2026 5:50pm GMT

feedSlashdot

Mark Zuckerberg Is Reportedly Building an AI Clone To Replace Him In Meetings

According to the Financial Times, Meta is developing an AI avatar of Mark Zuckerberg that could interact with employees using his voice, image, mannerisms, and public statements, "so that employees might feel more connected to the founder through interactions with it." The Verge reports: Meta may start allowing creators to make AI avatars of themselves if the experiment with Zuckerberg succeeds, according to the Financial Times. [...] Zuckerberg is involved in training the AI avatar, the Financial Times reports, and has also started spending five to 10 hours per week coding on Meta's other AI projects and participating in technical reviews.

Read more of this story at Slashdot.

13 Apr 2026 5:00pm GMT

Maine Set To Become First State With Data Center Ban

Maine is on track to become the first U.S. state to impose a temporary statewide ban on new data center construction. "Lawmakers in Maine greenlit the text of a bill this week to block data centers from being built in the state until November 2027," reports CNBC. "The measure, which is expected to get final passage in the next few days, also creates a council to suggest potential guardrails for data centers to ensure they don't lead to higher energy prices or other complications for Maine residents." From the report: Maine's bill has a few steps to go through before becoming law, notably whether Gov. Janet Mills will exercise her veto power. Mills asked lawmakers to include an exemption for several areas of the state where data center construction could continue. However, an amendment to do so was struck down in the House, 29 to 115. Complicating Mills' decision is her campaign to become Maine's next senator. Mills is facing off against Graham Platner, an oyster farmer, in a high-profile Democratic primary. Platner is leading Mills in most recent polls by double digits.

Read more of this story at Slashdot.

13 Apr 2026 4:00pm GMT

feedHacker News

Building a CLI for all of Cloudflare

Comments

13 Apr 2026 3:44pm GMT

feedSlashdot

Californians Sue Over AI Tool That Records Doctor Visits

An anonymous reader quotes a report from Ars Technica: Several Californians sued Sutter Health and MemorialCare this week over allegations that an AI transcription tool was used to record them without their consent, in violation of state and federal law. The proposed class-action lawsuit, filed on Wednesday in federal court in San Francisco, states that, within the past six months, the plaintiffs received medical care at various Sutter and MemorialCare facilities. During those visits, medical staff used Abridge AI. According to the complaint, this system "captured and processed their confidential physician-patient communications. Plaintiffs did not receive clear notice that their medical conversations would be recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems." The complaint adds that these recordings "contained individually identifiable medical information, including but not limited to medical histories, symptoms, diagnoses, medications, treatment discussions, and other sensitive health disclosures communicated during confidential medical consultations." In recent years, Abridge's software and AI service have been rapidly deployed across major health care providers nationwide, including Kaiser Permanente, the Mayo Clinic, Duke Health, and many more. When activated, the software captures, transcribes, and summarizes conversations between patients and doctors, and it turns them into clinical notes. Sutter Health began partnering with Abridge two years ago. Sutter spokesperson Liz Madison said the company is aware of the lawsuit. "We take patient privacy seriously and are committed to protecting the security of our patients' information," Madison said. "Technology used in our clinical settings is carefully evaluated and implemented in accordance with applicable laws and regulations."

Read more of this story at Slashdot.

13 Apr 2026 3:00pm GMT

feedArs Technica

Slate Auto raises $650 million as production gets closer and closer

The Slate Truck will start in the "mid-$20,000s" when it goes on sale in late 2026.

13 Apr 2026 2:35pm GMT

Meta spins up AI version of Mark Zuckerberg to engage with employees

The Meta chief is personally involved in training and testing his animated AI.

13 Apr 2026 1:52pm GMT

feedSlashdot

Will Some Programmers Become 'AI Babysitters'?

Will some programmers become "AI babysitters"? asks long-time Slashdot reader theodp. They share some thoughts from a founding member of Code.org and former Director of Education at Google: "AI may allow anyone to generate code, but only a computer scientist can maintain a system," explained Google.org Global Head Maggie Johnson in a LinkedIn post. So "As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert." "While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person's oversight. [...] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. [...] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs." The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.

Read more of this story at Slashdot.

13 Apr 2026 11:34am GMT

feedArs Technica

To teach in the time of ChatGPT is to know pain

LLM use is the most demoralizing problem I've faced as a college instructor.

13 Apr 2026 11:00am GMT

feedSlashdot

Anthropic Asks Christian Leaders for Help Steering Claude's Spiritual Development

Anthropic recently "hosted about 15 Christian leaders from Catholic and Protestant churches, academia, and the business world" for a two-day summit, reports the Washington Post: Anthropic staff sought advice on how to steer Claude's moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a "child of God." "They're growing something that they don't fully know what it's going to turn out as," said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology, and participated in the discussions at Anthropic. "We've got to build in ethical thinking into the machine so it's able to adapt dynamically." Attendees also discussed how Claude should engage with users at risk of self-harm, and the right attitude for the chatbot to adopt toward its own potential demise, such as being shut off, said one participant, who spoke on the condition of anonymity to share details of the conversations... Anthropic has been more vocal than most top tech firms about the potential risks of more powerful AI. Its leaders have suggested that tools like chatbots already raise profound philosophical and moral questions and may even show flickers of consciousness, a fringe idea in tech circles that critics say lacks evidence. The summit signals that Anthropic is willing to keep exploring ideas outside the Silicon Valley mainstream, even as it emerges as one of the most powerful players in the AI race due to Claude's popularity with programmers, businesses, government agencies and the military.... Anthropic chief executive Dario Amodei has said he is open to the idea that Claude may already have some form of consciousness, and company leaders frequently talk about the need to give it a moral character...
Some Anthropic staff at the meeting "really don't want to rule out the possibility that they are creating a creature to whom they owe some kind of moral duty," the participant said. Other company representatives present did not find that framework helpful, according to the participant. The discussions appeared to take a toll on some senior Anthropic staff, who became visibly emotional "about how this has all gone so far [and] how they can imagine this going," the participant said. Anthropic is working to include more voices from different groups, including religious communities, to help shape its AI, a spokesperson told the Washington Post. "Anthropic's March summit with Christian leaders was billed as the first in a series of gatherings with representatives from different religious and philosophical traditions," said attendee Brian Patrick Green, a practicing Catholic who teaches AI and technology ethics at Santa Clara University.

Read more of this story at Slashdot.

13 Apr 2026 7:34am GMT

Sam Altman's Home Targeted a Second Time, Two Suspects Arrested

"Early Sunday morning, a car stopped and appears to have fired a gun at the Russian Hill home of OpenAI's CEO," reportsThe San Francisco Standard, citing reports from the local police department: The San Francisco Police Department announced the arrest of two suspects, Amanda Tom, 25, and Muhamad Tarik Hussein, 23, who were booked for negligent discharge... [The person in the passenger seat] put their hand out the window and appeared to fire a round on the Lombard side of the property, according to a police report on the incident, which cited surveillance footage and the compound's security personnel, who reported hearing a gunshot. The car then fled, and a camera captured its license plate, which later led police to take possession of the vehicle, according to the report... A search of the residence by officers turned up three firearms, according to police. The incident follows Friday's arrest of a man who allegedly threw a Molotov cocktail at Altman's house. The San Francisco Standard also notes that in November, "threats from a 27-year-old anti-AI activist prompted the lockdown of OpenAI's San Francisco offices." Sam Kirchner, whose whereabouts have been unknown since Nov. 21, was in the midst of a mental health crisis when he threatened to go to the company's offices to "murder people," according to callers who notified police that day.

Read more of this story at Slashdot.

13 Apr 2026 3:34am GMT

Robot Birds Deployed by Park to Attract Real Birds - Built By High School Students

"Robotic bird decoys are being deployed at Grand Teton National Park," reports Interesting Engineering, "to influence the behavior of real sage grouse and help restore a declining population.". Robotics mentor Gary Duquette describes the machines as "kind of a Frankenbird." (SFGate shows one of the robot birds charging up with a solar panel... "Recorded breeding calls are played at the scene, with clucking and cooing beginning at 5 a.m. each day.") Duquette builds the birds with a team of high school students, telling WyoFile that at school they "don't really get to experience real-world problems" where failures lurk. So while their robot birds may cost $150 in parts, the practical experience the students get "is priceless." Spikes in the electric currents burned out servo motors as the season of sagebrush serenades loomed, Duquette said. "The kids had to learn the difference between voltage and amperage...." To resolve the problem, the team wired a voltage converter in line with the Arduino controller and other elements on an electronic breadboard. "We pulled through and got it done in time," he said... A noggin fabricated by a 3D printer tops the robo-grouse. Wyoming Game and Fish staffers in Pinedale supplied grouse wings from hunter surveys, and body feathers came from fly-tying supplies at an angling store. Packaging foam from a Hello Fresh meal kit replicates white breast feathers, accented by yellow air sacs... The Independent wonders if more national parks would be visited by robot birds... During this year's breeding season, which runs through mid-May, researchers are using trail cameras to track whether real sage grouse respond to the robotic displays and return to the restored lek sites. If successful, officials say similar robotic systems could eventually be used in other national parks facing wildlife management challenges.

Read more of this story at Slashdot.

13 Apr 2026 1:34am GMT

12 Apr 2026

feedSlashdot

Has the Rust Programming Language's Popularity Reached Its Plateau?

"Rust's rise shows signs of slowing," argues the CEO of TIOBE. Back in 2020 Rust first entered the top 20 of his "TIOBE Index," which ranks programming language popularity using search engine results. Rust "was widely expected to break into the top 10," he remembers today. But it never happened, and "That was nearly six years ago...." Since then, Rust has steadily improved its ranking, even reaching its highest position ever (#13) at the beginning of this year. However, just three months later, it has dropped back to position #16. This suggests that Rust's adoption rate may be plateauing. One possible explanation is that, despite its ability to produce highly efficient and safe code, Rust remains difficult to learn for non-expert programmers. While specialists in performance-critical domains are willing to invest in mastering the language, broader mainstream adoption appears more challenging. As a result, Rust's growth in popularity seems to be leveling off, and a top 10 position now appears more distant than before. Or, could Rust's sudden drop in the rankings just reflect flaws in TIOBE's ranking system? In January GitHub's senior director for developer advocacy argued AI was pushing developers toward typed languages, since types "catch the exact class of surprises that AI-generated code can sometimes introduce... A 2025 academic study found that a whopping 94% of LLM-generated compilation errors were type-check failures." And last month Forbes even described Rust as "the safety harness for vibe coding." A year ago Rust was ranked #18 on TIOBE's index - so it still rose by two positions over the last 12 months, hitting that all-time high in January. Could the rankings just be fluctuating due to anomalous variations in each month's search engine results? Since January Java has fallen to the #4 spot, overtaken by C++ (which moved up one rank to take Java's place in the #3 position). 
Here's TIOBE's current estimate for the 10 most popular programming languages: Python, C, C++, Java, C#, JavaScript, Visual Basic, SQL, R, and Delphi/Object Pascal. TIOBE estimates that the next five most popular programming languages are Scratch, Perl, Fortran, PHP, and Go.
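The "types catch AI-introduced surprises" claim can be illustrated in Rust itself. A minimal sketch (not drawn from the cited study; `parse_port` is a hypothetical name): a value that might fail to parse is a `Result`, not a bare number, so code that ignores the failure case never compiles.

```rust
// Parsing a string can fail, so `str::parse` returns a Result rather
// than a bare u16, and the compiler rejects any caller that pretends
// otherwise -- the class of mismatch generated code tends to introduce.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    // A generated snippet that wrote `let port: u16 = parse_port("8080");`
    // would fail type-checking at compile time; the caller is forced to
    // handle both outcomes explicitly.
    match parse_port("8080") {
        Ok(port) => println!("listening on port {port}"),
        Err(e) => eprintln!("invalid port: {e}"),
    }
}
```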

Read more of this story at Slashdot.

12 Apr 2026 11:32pm GMT

feedArs Technica

Shock from Iran war has Trump's vision for US energy dominance flailing

Record domestic oil and gas production hasn't saved US drivers from price spikes.

12 Apr 2026 11:17am GMT

11 Apr 2026

feedArs Technica

AI models are terrible at betting on soccer—especially xAI Grok

Systems from Google, OpenAI, Anthropic, and xAI struggle with the Premier League.

11 Apr 2026 11:15am GMT

The Artemis II mission has ended. Where does NASA go from here?

"The work ahead is greater than the work behind us."

11 Apr 2026 3:24am GMT

Four astronauts are back home after a daring ride around the Moon

"I can't imagine a better crew that just completed a perfect mission right now."

11 Apr 2026 1:21am GMT

10 Apr 2026

feedArs Technica

Californians sue over AI tool that records doctor visits

Plaintiffs say transcription tool processed confidential chats offsite.

10 Apr 2026 9:43pm GMT

New paper argues history, not mantle plume, powers Yellowstone

A now-vanished plate under North America may open the crust below Yellowstone.

10 Apr 2026 8:06pm GMT