14 Apr 2026
Slashdot
Air Force Pushed Out UFO Investigator
J. Allen Hynek started as an Air Force consultant brought in to help explain away early UFO reports, but over time he grew frustrated with what he saw as the government's effort to minimize unexplained cases rather than seriously investigate them. Longtime Slashdot reader schwit1 shares an article from Popular Mechanics, in collaboration with Biography.com, that argues Hynek's shift from skeptic to advocate helped shape modern ufology, and that the Air Force's attempts to control the narrative may have deepened the public distrust and conspiracy thinking that followed. From the report: Do you think the U.S. government is hiding, and possibly reverse-engineering, extraterrestrial technology? Think again. Or better yet, don't think about it at all. Nothing to see here. That's the underlying message of a report released in 2024 by the Department of Defense. The 63-page "Report on the Historical Record of U.S. Government Involvement with Unidentified Anomalous Phenomena (UAP)" concludes that the DoD's All-Domain Anomaly Resolution Office (AARO) "found no evidence that any [U.S. Government] investigation, academic-sponsored research, or official review panel has confirmed that any sighting of a UAP represented extraterrestrial technology." The AARO, as The Guardian summarizes, is "a government office established in 2022 to detect and, as necessary, mitigate threats including 'anomalous, unidentified space, airborne, submerged and transmedium objects.'" This report came on the heels of, and in contradiction to, what was arguably the most high-profile hearing on UAPs -- formerly known as unidentified flying objects, or UFOs -- in decades: the August 2023 testimony of "whistleblower" Dave Grusch. [...] The 2024 AARO report stated that during the time Hynek was working with Project Blue Book [the U.S. Air Force's best-known UFO investigation program], "about 75 percent of Americans trusted the [US government] 'to do the right thing almost always or most of the time.'" But, the report noted, since 2007, that number has never risen above 30 percent. "This lack of trust probably has contributed to the belief held by some subset of the U.S. population that the USG has not been truthful regarding knowledge of extraterrestrial craft." Ultimately, the Air Force's efforts to stifle Hynek -- pressuring him to offer the public standard responses to questions he wasn't even allowed to ask -- appear to have backfired. Ironically, the Air Force's attempts to quiet suspicions only fueled them, leading to more conspiracy theories and distrust. People came to believe that the government was hiding the truth, contrary to Hynek's actual revelation: that, in reality, the people at the top may not care much about finding the answers after all.
Read more of this story at Slashdot.
14 Apr 2026 11:00am GMT
WeatherBug Data Says October 8 Is the Real Perfect Date
BrianFagioli shares a report from NERDS.xyz: For years pop culture has treated April 25 as the "perfect date," thanks to the famous Miss Congeniality line about needing only a light jacket. But new analysis from WeatherBug suggests that idea does not actually hold up when you look at the numbers. After reviewing U.S. weather data from 2018 through today, the company concluded that October 8 delivers the most reliable combination of comfortable temperatures and low rainfall nationwide. According to the analysis, the average conditions on that day land around 66F with just 0.0573 inches of precipitation. The study used population weighted weather data drawn from roughly 20 million daily WeatherBug users across the United States. When the company compared all days of the year, April 25 ranked only 80th, averaging about 60F and roughly 0.1297 inches of rain. The broader dataset also shows July dominating the hottest days of the year while January owns the coldest, with January 20 averaging just 33F nationally. While no single date guarantees perfect weather everywhere in a country as large as the U.S., the numbers suggest early October may quietly offer one of the most reliable windows for comfortable outdoor conditions.
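The methodology described above -- a population-weighted national average for each calendar day, scored for comfort -- can be sketched roughly as follows. Everything here is invented for illustration: the regions, user shares, readings, and the "66F plus a rain penalty" scoring rule are assumptions, not WeatherBug's actual data or formula.

```python
# Sketch of a population-weighted "best date" ranking. All numbers are
# made up for illustration; only the weighting/ranking logic matters.

def weighted_avg(values, weights):
    """Average of values, weighted by the matching weights."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# region -> (share of users, {date: (avg_temp_F, precip_inches)})
regions = {
    "northeast": (0.3, {"10-08": (62, 0.05), "04-25": (55, 0.15)}),
    "south":     (0.4, {"10-08": (72, 0.06), "04-25": (68, 0.12)}),
    "west":      (0.3, {"10-08": (64, 0.06), "04-25": (58, 0.11)}),
}

def national(date):
    """Population-weighted national temperature and precipitation for a date."""
    temps  = [obs[date][0] for _, obs in regions.values()]
    rains  = [obs[date][1] for _, obs in regions.values()]
    shares = [share for share, _ in regions.values()]
    return weighted_avg(temps, shares), weighted_avg(rains, shares)

def score(date):
    """Lower is better: distance from an 'ideal' 66F plus a rain penalty.
    The 66F target and the 100x penalty factor are arbitrary assumptions."""
    temp, rain = national(date)
    return abs(temp - 66) + 100 * rain

best = min(["10-08", "04-25"], key=score)
print(best)  # with these invented numbers, October 8 wins
```

With real data this would run over all 366 calendar days and far finer regional buckets, but the weighting and ranking logic stays the same.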
Read more of this story at Slashdot.
14 Apr 2026 7:00am GMT
Stanford Report Highlights Growing Disconnect Between AI Insiders and Everyone Else
An anonymous reader quotes a report from TechCrunch: AI experts' and the public's opinions on the technology are increasingly diverging, according to Stanford University's annual report on the AI industry, which was released Monday. In particular, the report noted a growing trend of anxiety around AI and, in the U.S., concerns about how the technology will impact key societal areas, such as jobs, medical care, and the economy. [...] Stanford's report provides more insight into where all this negativity is coming from, as it summarizes data around public sentiment toward AI across various sources. For instance, it pointed to a report from Pew Research published last month, which noted that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years. Expert opinion and public sentiment also greatly diverged in particular areas where AI could have a societal impact. Indeed, 84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same. Plus, a majority (73%) of experts felt positive about AI's impact on how people do their jobs, compared with just 23% of the public. And 69% of experts felt that AI would have a positive impact on the economy. Given the supposed AI-fueled layoffs and disruptions to the workplace, it's not surprising that only 21% of the public felt similarly. Other data from Pew Research, cited by the report, noted that AI experts were less pessimistic about AI's impact on the job market, while nearly two-thirds of Americans (or 64%) said they think AI will lead to fewer jobs over the next 20 years. The U.S. also reported the lowest trust in its government to regulate AI responsibly, compared with other nations, at 31%.
Singapore ranked highest at 81%, per data pulled from Ipsos found in Stanford's report. Another source looked at regulation concerns on a state-by-state level and concluded that, nationwide, 41% of respondents said federal AI regulation will not go far enough, while only 27% said it would go "too far." Despite the fears and concerns, AI did get one accolade: Globally, those who feel like AI products and services offer more benefits than drawbacks slightly rose from 55% in 2024 to 59% in 2025. But at the same time, those respondents who said that AI makes them "nervous" grew from 50% to 52% during the same period, per data cited by the report's authors.
Read more of this story at Slashdot.
14 Apr 2026 3:30am GMT
13 Apr 2026
Ars Technica
Retro Rewind re-creates the glorious drudgery of working a '90s video store
What the nostalgic throwback lacks in complexity it makes up for in repetitive charm.
13 Apr 2026 9:58pm GMT
Measles takes a plane to Idaho, which has worst vaccination rate in US
In the 2024-2025 school year, only 78.5% of kindergartners had received the measles vaccination.
13 Apr 2026 9:32pm GMT
Google shoehorned Rust into Pixel 10 modem to make legacy code safer
Cellular modems are complex black boxes of legacy code, but Google is making them safer with Rust.
13 Apr 2026 9:12pm GMT
OSnews
Scientists invented an obviously fake illness, and “AI” spread it like truth within weeks
Ever heard of a condition called bixonimania? Did you search the internet or ask your "AI" girlfriend about some symptoms you were experiencing, and this was its answer? Well… The condition doesn't appear in the standard medical literature - because it doesn't exist. It's the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. "I wanted to see if I can create a medical condition that did not exist in the database," she says. ↫ Chris Stokel-Walker at Nature And "AI" ate it up like quality chocolate. It started appearing in the answers from all the popular "AI" tools within weeks, and later even started showing up as references in published literature, indicating that scientists copy/paste references without actually reading them. This is clearly a deeply concerning experiment, and highlights that there may be many, many more nonsensical, fake studies being picked up by "AI" tools. Of course, I hear you say, it's not like propagating fake or terrible studies is the sole domain of "AI", as there are countless cases of this happening among actual real researchers and scientists, too. The issue, though, is that the fake studies concerning "bixonimania" were intentionally made to be as silly and obviously ridiculous as possible. It references Starfleet Academy, the lab aboard the Enterprise, the University of Fellowship of the Ring, and many other fake references instantly recognisable as such by real humans. In fact, the studies even specifically mention that "this entire paper is made up" and "fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group".
It would take any human only a few seconds after opening one of these papers to realise they're entirely fake - yet, the world's most advanced "AI" tools gobbled them up and spit them back out as pure fact within mere weeks of their publication. This shouldn't come as a surprise. After all, "AI" tools have no understanding, no intelligence, no context, and they can't actually make sense of anything. They are glorified pachinko machines with the output - the ball - tumbling down the most likely path between the pins based on nothing but chance and which pins it has already hit. "AI" output understands the world about as much as the pachinko ball does, and as such, can't pick up on even the most obvious of cues that something is a fake or a forgery. It won't be long before truly nefarious forces start doing this very same thing. Why build, staff, and maintain a troll farm when you can just have "AI" generate intentional misinformation which will then be spread and pushed by even more "AI"? Remember, it took one malicious asshole just one long since retracted fake paper to convince millions that vaccines cause autism. I shudder to think how many people are accepting anything "AI" says as gospel.
13 Apr 2026 1:02pm GMT
Linux 7.0 released
Version 7.0 of the Linux kernel has been released, marking the arbitrary end of the 6.x series. Significant changes in this release include the removal of the "experimental" status for Rust code, a new filtering mechanism for io_uring operations, a switch to lazy preemption by default in the CPU scheduler, support for time-slice extension, the nullfs filesystem, self-healing support for the XFS filesystem, a number of improvements to the swap subsystem (described in this article and this one), general support for AccECN congestion notification, and more. See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 7.0 page for more details. ↫ corbet at LWN.net You can compile the kernel yourself, or just wait until it hits your distribution's repositories.
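For those compiling it themselves, the routine hasn't changed with the version bump. A rough sketch, assuming the usual kernel.org tarball layout carries over to the new 7.x series (the fetch and build steps are left commented out, so running the snippet only prints the derived download URL):

```shell
# Derive the tarball URL for a given release. The v7.x directory layout is
# an assumption based on how kernel.org has organized past major series.
VERSION=7.0
TARBALL="linux-${VERSION}.tar.xz"
URL="https://cdn.kernel.org/pub/linux/kernel/v7.x/${TARBALL}"
echo "$URL"

# The actual fetch/configure/build steps, commented out so the snippet is inert:
# wget "$URL"
# tar -xf "$TARBALL"
# cd "linux-${VERSION}"
# cp "/boot/config-$(uname -r)" .config  # start from your distro's config, if present
# make olddefconfig                      # accept defaults for options new in 7.0
# make -j"$(nproc)"                      # then: sudo make modules_install install
```

Starting from the distro config via olddefconfig is just one reasonable route; make defconfig works too if no existing configuration is at hand.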
13 Apr 2026 12:19pm GMT
11 Apr 2026
Planet Arch Linux
Write less code, be more responsible
My thoughts on AI-assisted programming.
11 Apr 2026 12:00am GMT
10 Apr 2026
OSnews
The disturbing white paper Red Hat is trying to erase from the internet
It shouldn't be a surprise that companies - and for our field, technology companies specifically - working with the defense industry tend to raise eyebrows. With things like the genocide in Gaza, the threats of genocide and war crimes against Iran, and the mass murder in Lebanon, it's no surprise that western companies working with the militaries and defense companies involved in these atrocities are receiving some serious backlash. With that in mind, it seems Red Hat, owned by IBM, is desperately trying to scrub a certain white paper from the internet. Titled "Compress the kill cycle with Red Hat Device Edge", the 2024 white paper details how Red Hat's products and technologies can make it easier and faster to, well, kill people. Links to the white paper throw up 404s now, but it can still easily be found on the Wayback Machine and other places. It's got some disturbingly euphemistic content. The find, fix, track, target, engage, assess (F2T2EA) process requires ubiquitous access to data at the strategic, operational and tactical levels. Red Hat Device Edge embeds captured, analyzed, and federated data sets in a manner that positions the warfighter to use artificial intelligence and machine learning (AI/ML) to increase the accuracy of airborne targeting and mission-guidance systems. Delivering near real-time data from sensor pods directly to airmen, accelerating the sensor-to-shooter cycle. Sharing near real-time sensor fusion data with joint and multinational forces to increase awareness, survivability, and lethality. The new software enabled the Stalker to deploy updated, AI-based automated target recognition capabilities. If the target is an adversary tracked vehicle on the far side of a ridge, a UAS carrying a server running Red Hat Device Edge could transmit video and metadata directly to shooters.
↫ Red Hat white paper titled "Compress the kill cycle with Red Hat Device Edge" I don't think there's anything inherently wrong with working together with your nation's military or defense companies, but that all hinges on what, exactly, said military is doing and how those defense companies' products are being used. The focus should be on national defense, aid during disasters, and responding to the legitimate requests of sovereign, democratic nations to come to their defense (e.g. helping Ukraine fight off the Russian invasion). There are always going to be difficult grey areas, but any military or defense company supporting the genocide in Gaza or supplying weapons to kill women and children in Iran is unequivocally wrong, morally reprehensible, and downright illegal on both an international and national level. It clearly seems someone at Red Hat feels the same way, as the company has been trying really hard to memory-hole this particular white paper, and considering its word choices and the state of the world today, it's easy to see why. Of course, the internet never forgets, and I certainly don't intend to let something like this slide. We all know companies like Microsoft, Oracle, and Google have no qualms about making a few bucks from a genocide or two, but it always feels a bit more traitorous to the cause when it's an open source company doing the profiting. It feels like Red Hat is trying to have its cake and eat it too, by, as an IBM subsidiary, trying to both profit from the vast sums of money sloshing around in the US military industrial complex and maintain its image as a scrappy open source business success story shitting bunnies and rainbows. It's been a long time since Red Hat felt like a genuine part of the open source community. Most of us - both outside and inside of Red Hat, I'm sure - have been well aware for a long time now that those days are well behind us, and I guess Red Hat doesn't like seeing its kill cycle this compressed.
10 Apr 2026 8:04pm GMT
03 Apr 2026
Planet Arch Linux
800 Rust terminal projects in 3 years
I have discovered and shared ~800 open source Rust CLI projects over the past 3 years.
03 Apr 2026 12:00am GMT
28 Mar 2026
Planet Arch Linux
Building a guitar trainer with embedded Rust
All I wanted was to learn how to play guitar, but ended up building a DIY kit for it.
28 Mar 2026 12:00am GMT