12 Mar 2026
TalkAndroid
“It’s Official: The Monster of Florence Becomes Netflix’s No. 1 Global Sensation in Just 48 Hours”
Barely two days after its launch on Netflix, "The Monster of Florence" has already clawed its way to…
12 Mar 2026 4:00pm GMT
If You Loved Stranger Things, You Need to Watch This ’80s Classic That Inspired It
Stranger Things is over, but there's no need to panic! Why not take a trip down memory lane and dive into…
12 Mar 2026 7:30am GMT
Android Auto 16.0 is here: Long-awaited media redesign finally reaches all drivers
After months of quiet behind-the-scenes testing, Android Auto 16.0 is finally rolling out to the masses, and what…
12 Mar 2026 7:00am GMT
11 Mar 2026
Android Developers Blog
Level Up: Test Sidekick and prepare for upcoming program milestones
Last September, we shared our vision for the future of Google Play Games grounded in a core belief: the best way to drive your game's success is to deliver a world-class player experience. We launched the Google Play Games Level Up program to recognize and reward great gaming experiences, while providing you with a powerful toolkit and new promotional opportunities to grow your games.
The momentum since our announcement has been incredibly positive, with more than 600 million gamers now using Play Games Services every month. Developers are also finding success, with one-third of all game installs on the Play Store now coming from editorially-driven organic discovery. In fact, in 2025, Level Up features have driven over 2.5 billion incremental acquisitions for featured games, in addition to an average uplift of 25% in installs during the featuring windows.
Today, we're inviting you to start testing Play Games Sidekick to keep your players in the action, sharing new Play Console updates to optimize your reach, and helping you prepare for our upcoming program milestones.
- Pre-reg device breakdowns: To aid launch decisions, you can now analyze the device distribution of your pre-registered audience by key device attributes including Android version, RAM and SoC. This enables you to optimize game performance, minimum specs, and marketing spend for the players already waiting for your game.
- Real-time feedback: With Level Up+, our tier for high-performing games, qualifying titles can unlock promotional content featuring and tools like deep-links and audience targeting. While submissions must meet Play's quality guidelines, you no longer have to wait 24 hours to learn about issues. You can now get immediate feedback on quality whenever possible.
- Integrate Play Games Sidekick to offer a quick and easy entry point to access rewards, offers, and achievements through an in-game overlay.
- Implement achievements with Play Games Services to support authentication with the modern Gamer Profile and keep players engaged across the lifespan of your game.
- Implement cloud save to enable progress sync across devices.
Last week, we announced that we're working on an expanded Level Up program that builds on our successful foundation to further improve gaming experiences. The update will introduce new requirements that will unlock additional benefits like lower service fees. Engaging with the program now ensures your work is strategically aligned with these future updates. We'll share more details in the coming months.
In the meantime, the path to your first program milestone begins today. By prioritizing these user experience guidelines now, you're investing in the long-term value of your game and ensuring it's built to thrive for every player. Head over to Play Console to start testing Sidekick and take the next step in your Level Up journey.
11 Mar 2026 8:02pm GMT
Expanding our stage for PC and paid titles
Posted by Aurash Mahbod, VP and GM, Games on Google Play
Google Play is proud to be the home of over 200,000 games, many of which defined the mobile-first era. But as cross-platform becomes the standard for players, we are evolving our ecosystem to match the scale of your ambitions. In recent years, we focused on elevating Android gaming quality while significantly deepening our support for native PC titles.
We know that maximizing your game's reach across different platforms is complex. The Level Up program serves as your strategic roadmap, helping you prioritize optimizations that drive great experiences on Android. Building on this foundation, we're doubling down on our investment to make Play the most accessible home for every category of play. We're adding new tools for paid games and making the PC game journey from discovery to purchase seamless. Keep reading to learn more about how we're creating a bigger stage for your games.
Scale your discovery across mobile and PC platforms
Building a bigger stage starts with making your games easier to find, and easier to buy, no matter which device your players prefer. We're expanding your reach by bringing cross-platform discovery directly to the mobile storefront.
- With the new PC section in the Games tab, your PC titles gain high-visibility placement among our most active mobile players.
- The PC badge ensures your cross-platform investment is recognized. This creates more opportunities to acquire players on mobile and transition them seamlessly to your high-fidelity PC experience.
- With 'buy once play anywhere' pricing, we're making it easier to sell your games across different devices. If you choose to opt in your mobile game for Google Play Games on PC, you can now offer a single price that covers both mobile and PC versions. We're rolling out this feature in EAP with select games, including Brotato: Premium.
- For PC-only games, players can now complete the full purchase journey on Google Play Games on PC with the same trusted security and privacy standards they expect from Google Play.
Lower the purchase barrier with Game Trials
To help you convert high-intent buyers with less friction, we're introducing Game Trials, a feature that enables players to experience your game for a limited time before making a purchase on mobile. Accessible directly from your game's store listing, Game Trials provides a fast track for players to start exploring your world with a single tap. Game Trials is now in testing with select titles, and we'll roll it out to more titles soon.
- To ensure this is low-maintenance for you, Game Trials is added directly into your Android App Bundle. This enables you to offer a high-quality trial without the burden of a separate codebase or a demo version of your app.
- Play ensures trials are secure and seamless. Game Trials are limited to once per user, and your game is protected while the trial is active. When the trial ends, players can purchase your game and keep their progress.
- We're also working on tools that will give you more control, such as specifying a custom time limit or an in-game event to conclude the trial.
Diversify your revenue with a dedicated player community on Play Pass
Play Pass is another way to diversify revenue and grow your player audience. It has been a strong launchpad for indie hits such as Isle of Arrows, Slay the Spire, and Dead Cells. With Play Pass, you can reach highly dedicated players seeking a more curated gaming experience, free of ads and in-app purchases. To help you deepen engagement, paid titles on Play Pass can now opt in to Google Play Games on PC - making it easy for players to find and play your games on a larger screen. Later this year, you can nominate your game through a streamlined opt-in process directly in Play Console.
Drive long-term sales with Wishlists and Discounts
Wishlists and Discounts are among the most effective ways to capture player intent and drive long-term sales. To support players at every stage of their purchase journey, we're integrating them directly into Play. Players can save titles to their wishlist and manage them from library settings. To keep your game top-of-mind, players will receive automated notifications for your latest discounts - starting with mobile and expanding soon to PC games.
How leading studios are finding a new path to success on Play
We're thrilled to welcome Sledding Game, 9 Kings, Potion Craft, Moonlight Peaks, and Low Budget Repairs to Play [1]. It marks an exciting expansion of our catalog and a step forward in our mission to build a bigger gaming ecosystem for all developers. This growth is fueled by our developer community, whose feedback continues to shape our roadmap and help us better support your success.
That mission brings us to GDC and the Independent Games Festival (IGF) Awards [2], where the next generation of games awaits! This year, we're inviting you to come along for the ride as we go backstage to chat with the finalists and winners, sharing the moments of triumph and the creative stories behind their development. Not joining us at GDC? You can take the next step in your journey to launch your game on Google Play today.
1. Sledding Game, 9 Kings, Potion Craft, and Moonlight Peaks are coming to Google Play in 2026. Low Budget Repairs is scheduled for release in 2027. [Back]
2. Independent Games Festival (IGF) Awards is hosted by Game Developers Conference (GDC) and requires a valid GDC pass for entry. [Back]
11 Mar 2026 8:02pm GMT
10 Mar 2026
Android Developers Blog
Boosting Android Performance: Introducing AutoFDO for the Kernel
We are the Android LLVM toolchain team. One of our top priorities is to improve Android performance through optimization techniques in the LLVM ecosystem. We are constantly searching for ways to make Android faster, smoother, and more efficient. While much of our optimization work happens in userspace, the kernel remains the heart of the system. Today, we're excited to share how we are bringing Automatic Feedback-Directed Optimization (AutoFDO) to the Android kernel to deliver significant performance wins for users.
What is AutoFDO?
During a standard software build, the compiler makes thousands of small decisions, such as whether to inline a function and which branch of a conditional is likely to be taken, based on static code hints. While these heuristics are useful, they don't always accurately predict code execution during real-world phone usage.
AutoFDO changes this by using real-world execution patterns to guide the compiler. These patterns represent the most common instruction execution paths the code takes during actual use, captured by recording the CPU's branching history. While this data can be collected from fleet devices, for the kernel we synthesize it in a lab environment using representative workloads, such as running the top 100 most popular apps. We use a sampling profiler to capture this data, identifying which parts of the code are 'hot' (frequently used) and which are 'cold'. When we rebuild the kernel with these profiles, the compiler can make much smarter optimization decisions tailored to actual Android workloads.
To understand the impact of this optimization, consider these key facts:
- On Android, the kernel accounts for about 40% of CPU time.
- We are already using AutoFDO to optimize native executables and libraries in the userspace, achieving about 4% cold app launch improvement and a 1% boot time reduction.
Real-World Performance Wins
We have seen impressive improvements across key Android metrics by leveraging profiles from controlled lab environments. These profiles were collected using app crawling and launching, and measured on Pixel devices across the 6.1, 6.6, and 6.12 kernels.
The most noticeable improvements are listed below. Details on the AutoFDO profiles for these kernel versions can be found in the respective Android kernel repositories for android16-6.12 and android15-6.6 kernels.
How It Works: The Pipeline
Our deployment strategy involves a sophisticated pipeline to ensure profiles stay relevant and performance remains stable.
Step 1: Profile Collection
While we rely on our internal test fleet to profile userspace binaries, we shifted to a controlled lab environment for the Generic Kernel Image (GKI). Decoupling profiling from the device release cycle allows for flexible, immediate updates independent of deployed kernel versions. Crucially, tests confirm that this lab-based data delivers performance gains comparable to those from real-world fleets.
- Tools & Environment: We flash test devices with the latest kernel image and use simpleperf to capture instruction execution streams. This process relies on hardware capabilities to record branching history, specifically utilizing ARM Embedded Trace Extension (ETE) and ARM Trace Buffer Extension (TRBE) on Pixel devices.
- Workloads: We construct a representative workload using the top 100 most popular apps from the Android App Compatibility Test Suite (C-Suite). To capture the most accurate data, we focus on:
  - App Launching: Optimizing for the most visible user delays
  - AI-Driven App Crawling: Simulating contiguous, evolving user interactions
  - System-Wide Monitoring: Capturing not only foreground app activities, but also critical background workloads and inter-process communications
- Validation: This synthesized workload shows an 85% similarity to execution patterns collected from our internal fleet.
- Targeted Data: By repeating these tests sufficiently, we capture high-fidelity execution patterns that accurately represent real-world user interaction with the most popular applications. Furthermore, this extensible framework allows us to seamlessly integrate additional workloads and benchmarks to broaden our coverage.
Step 2: Profile Processing
We post-process the raw trace data to ensure it is clean, effective, and ready for the compiler.
- Aggregation: We consolidate data from multiple test runs and devices into a single system view.
- Conversion: We convert raw traces into the AutoFDO profile format, filtering out unwanted symbols as needed.
- Profile Trimming: We trim profiles to remove data for "cold" functions, allowing them to use standard optimization. This prevents regressions in rarely used code and avoids unnecessary increases in binary size.
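The trimming step can be sketched as dropping functions whose sample count falls below a cutoff, so that cold code falls back to the compiler's standard heuristics; the threshold value and the profile contents below are illustrative assumptions.

```python
def trim_profile(profile: dict[str, int], min_samples: int = 100) -> dict[str, int]:
    """Keep only 'hot' functions; 'cold' ones revert to standard optimization."""
    return {fn: n for fn, n in profile.items() if n >= min_samples}

# Hypothetical raw profile: one function is clearly cold.
raw = {"schedule": 90_000, "tcp_sendmsg": 4_200, "rarely_used_ioctl": 3}
print(trim_profile(raw))  # rarely_used_ioctl is dropped
```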
Step 3: Profile Testing
Before deployment, profiles undergo rigorous verification to ensure they deliver consistent performance wins without stability risks.
- Profile & Binary Analysis: We strictly compare the new profile's content (including hot functions, sample counts, and profile size) against previous versions. We also use the profile to build a new kernel image, analyzing binaries to ensure that changes to the text section are consistent with expectations.
- Performance Verification: We run targeted benchmarks on the new kernel image. This confirms that it maintains the performance improvements established by previous baselines.
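A minimal version of the profile comparison in the analysis step might look like the following, flagging large swings in the total sample count or in the set of profiled functions for manual review; the tolerance values are illustrative assumptions.

```python
def profiles_consistent(old: dict[str, int], new: dict[str, int],
                        max_total_change: float = 0.2,
                        max_hot_churn: float = 0.3) -> bool:
    """Flag suspicious jumps in total samples or in the hot-function set."""
    old_total, new_total = sum(old.values()), sum(new.values())
    if abs(new_total - old_total) / old_total > max_total_change:
        return False
    # Jaccard-style churn: fraction of functions present in only one profile.
    hot_old, hot_new = set(old), set(new)
    churn = len(hot_old ^ hot_new) / len(hot_old | hot_new)
    return churn <= max_hot_churn

old = {"schedule": 100, "tcp_sendmsg": 100}
new = {"schedule": 110, "tcp_sendmsg": 95}
print(profiles_consistent(old, new))  # True: small, expected variation
```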
Continuous Updates
Code naturally "drifts" over time, so a static profile would eventually lose its effectiveness. To maintain peak performance, we run the pipeline continuously to drive regular updates:
- Regular Refresh: We refresh profiles in Android kernel LTS branches ahead of each GKI release, ensuring every build includes the latest profile data.
- Future Expansion: We are currently delivering these updates to the android16-6.12 and android15-6.6 branches and will expand support to newer GKI versions, such as the upcoming android17-6.18.
Ensuring Stability
To further guarantee consistent behavior, we apply a "conservative by default" strategy. Functions not captured in our high-fidelity profiles are optimized using standard compiler methods. This ensures that the "cold" or rarely executed parts of the kernel behave exactly as they would in a standard build, preventing performance regressions or unexpected behaviors in corner cases.
Looking Ahead
We are currently deploying AutoFDO across the android16-6.12 and android15-6.6 branches. Beyond this initial rollout, we see several promising avenues to further enhance the technology:
- Expanded Reach: We look forward to deploying AutoFDO profiles to newer GKI kernel versions and additional build targets beyond the current aarch64 support.
- GKI Module Optimization: Currently, our optimization is focused on the main kernel binary (vmlinux). Expanding AutoFDO to GKI modules could bring performance benefits to a larger portion of the kernel subsystem.
- Vendor Module Support: We are also interested in supporting AutoFDO for vendor modules built using the Driver Development Kit (DDK). With support already available in our build system (Kleaf) and profiling tools (simpleperf), vendors can apply these same optimization techniques to their specific hardware drivers.
- Broader Profile Coverage: There is potential to collect profiles from a wider range of Critical User Journeys (CUJs) and optimize for them.
By bringing AutoFDO to the Android kernel, we're ensuring that the very foundation of the OS is optimized for the way you use your device every day.
10 Mar 2026 11:00pm GMT
18 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub
I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.
So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.
Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.
Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org
But I digress. So today, there was the installation of this new OCTOI hub setup. It had been prepared for several weeks in advance, and the hub contains two circuit boards designed specifically for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several stages of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as a grandmaster clock for the TDM network.
The equipment deployed in this installation currently contains:
- a rather beefy Supermicro 2U server with EPYC 7113P CPU and 4x PCIe slots, two of which are populated with Digium TE820 cards, resulting in a total of 16 E1 ports
- an icE1usb with RS422 interface board connected via 100m of RS422 to an Ericsson GPS03 receiver. There are two layers of over-voltage protection on the RS422 (each with gas discharge tubes and TVS) and two stages of over-voltage protection in the coaxial cable between antenna and GPS receiver.
- a Livingston Portmaster 3 RAS server
- a Cisco AS5400 RAS server
For more details, see this wiki page and this ticket.
Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.
In any case, this new setup certainly has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter, the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.
Acknowledgements
I'd like to thank everyone helping this effort, specifically:
- Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
- noris.net for sponsoring the co-location
- sysmocom for sponsoring the EPYC server hardware
18 Sep 2022 10:00pm GMT
08 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front
Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.
This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.
My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.
The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about
In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.
So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.
08 Sep 2022 10:00pm GMT
Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables
If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.
In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.
In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.
So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.
Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.
bursty bit clock changes until link is up
The first problem is that downstream port transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4
As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.
What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.
As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.
This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock, irrespective of whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.
But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.
clock drift between master and slave cards
Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.
When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(
Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.
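To put 12.5 ppb into perspective, a quick back-of-the-envelope calculation (assuming the standard E1 line rate of 2.048 Mbit/s) shows how quickly that drift accumulates to a full bit period:

```python
E1_BIT_RATE = 2_048_000       # E1 line rate in bits per second
DRIFT = 12.5e-9               # measured drift: 12.5 parts per billion

bit_period = 1 / E1_BIT_RATE  # ~488 ns per bit

# The slave clock accumulates DRIFT seconds of offset per second,
# so it slips by one whole bit period roughly every:
seconds_per_bit_slip = bit_period / DRIFT
print(f"one bit of slip every {seconds_per_bit_slip:.1f} s")
```

At that rate a slave span slips a full bit roughly every 39 seconds, which is more than enough to cause the underruns/overruns and phase jumps described above.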
The work-around
If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.
In this setup, your slave card[s] will have perfect bit clock sync/lock.
It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).
08 Sep 2022 10:00pm GMT