26 Jan 2026
Planet Maemo
Igalia Multimedia contributions in 2025
Now that 2025 is over, it's time to look back and feel proud of the path we've walked. Last year was a really exciting one for the Igalia Multimedia team in terms of contributions to GStreamer and WebKit.
With more than 459 contributions over the year, we've been one of the top contributors to the GStreamer project, in areas such as Vulkan Video, GstValidate, VA, GStreamer Editing Services, WebRTC and H.266 support.
In Vulkan Video we've worked on the VP9 video decoder, and cooperated with other contributors to push the AV1 decoder as well. There's now an H.264 base class for video encoding that is designed to support general hardware-accelerated processing.
GStreamer Editing Services, the framework for building video editing applications, has gained time remapping support, which makes it possible to include fast/slow-motion effects in videos. Video transformations (scaling, cropping, rounded corners, etc.) are now hardware-accelerated thanks to the addition of new Skia-based GStreamer elements and integration with OpenGL. Buffer pool tuning and pipeline improvements have helped to optimize memory usage and performance, enabling the editing of 4K video at 60 frames per second. Much of this work to improve and ensure quality in GStreamer Editing Services has also brought improvements to the GstValidate testing framework, which will be useful for other parts of GStreamer.
Regarding H.266 (VVC), full playback support (with decoders such as vvdec and avdec_h266, demuxers and muxers for Matroska, MP4 and TS, and parsers for the vvc1 and vvi1 formats) is now available in GStreamer 1.26 thanks to Igalia's work. This allows user applications such as the WebKitGTK web browser to leverage the hardware-accelerated decoding provided by VAAPI to play H.266 video using GStreamer.
Igalia has also been one of the top contributors to GStreamer Rust, with 43 contributions. Most of the commits there have been related to Vulkan Video.
In addition to GStreamer, the team also has a strong presence in WebKit, where we leverage our GStreamer knowledge to implement many multimedia-related features of the web engine. We made 323 multimedia contributions to WebKit last year. Nearly one third of them were related to generic multimedia playback, and the rest were in areas such as WebRTC, MediaStream, MSE, WebAudio, a new Quirks system to provide adaptations for specific hardware multimedia platforms at runtime, WebCodecs and MediaRecorder.
We're happy about what we've achieved along the year and look forward to maintaining this success and bringing even more exciting features and contributions in 2026.
26 Jan 2026 9:34am GMT
TalkAndroid
HANNspree unveils Lumo, a colour paper-like Android tablet that can do video calls
A colour paper-like display with full Android flexibility
26 Jan 2026 9:17am GMT
Why this legal battle over search results could change the internet forever
Hold onto your bookmarks: a courtroom spar between Google and SerpApi is about to redraw the invisible boundaries…
26 Jan 2026 7:30am GMT
25 Jan 2026
TalkAndroid
This high-end smartphone’s hidden price is leaving buyers totally speechless
Is it magic? Is it an accounting error? No, it's the Google Pixel 8 at a stunningly low…
25 Jan 2026 4:30pm GMT
21 Jan 2026
Android Developers Blog
Ready to review some changes but not others? Try using Play Console’s new Save for later feature

Posted by Georgia Doyle, Senior UX Writer and Content Designer, and Kanu Tibrewal, Software Engineer
We've launched a new Save for later feature on Google Play Console's Publishing overview to give you more control over when you send changes for review.
In the past, changes to your app were bundled together before being sent for review. This presented challenges if you needed to reprioritize changes, or if the changes were no longer relevant, for example when updates to your test tracks were grouped with marketing changes that needed to be rescheduled. This lack of flexibility meant that if some changes were ready for review but not others, you could end up delaying urgent fixes, or publishing changes that you weren't quite ready to make.
Now, you have the ability to hold back the changes you're not ready to have reviewed.
How it works
In the 'Changes not yet sent for review' section of the Publishing overview page, select 'Save for later' on the groups of changes that you don't want to include in your next review. You can view and edit the list of saved changes, and return them to the Publishing overview if you change your mind. Once the review has started, your saved changes will be added back to 'Changes not yet sent for review'.
- If issues are isolated to an individual track, we'll show you an error beside that change, so you know what to save for later in order to proceed to review with your other changes.
- If you have issues that affect your whole app, for example, App content issues, Save for later will be unavailable and you will need to fix them before you can send any changes for review.
Greater flexibility in your workflows
Our goal for Save for later is to give you greater flexibility over your release schedule. With this feature you can manage what changes you send for review, and address issues affecting individual tracks without holding up ready-to-release changes, so you can iterate faster and minimize the impact of rejections on your release timeline.
So, what's next?
We're excited to see how Save for later helps you to streamline your release process and bring your app innovations to users even faster.
21 Jan 2026 5:00pm GMT
15 Jan 2026
Android Developers Blog
LLM flexibility, Agent Mode improvements, and new agentic experiences in Android Studio Otter 3 Feature Drop
Posted by Sandhya Mohan, Senior Product Manager and Trevor Johns, Developer Relations Engineer
- Bring Your Own Model: You can now use any LLM to power the AI functionality in Android Studio.
- Agent Mode Enhancements: You can now more easily have Agent Mode interact with your app on devices, review and accept suggested changes, and manage multiple conversation threads.
- Run user journey tests using natural language: with Journeys in Android Studio.
- Enable Agent Mode to connect to more tools: including the ability to connect to remote servers via MCP.
- Build, iterate and test your UI: with UI agentic experiences in Android Studio.
- Build deep links using natural language: with the new app links assistant.
- Debug R8 optimized code: with Automatic Logcat retracing.
- Simplify Android library modules: with the Fused library plugin.
Here's a deep dive into what's new:
Bring Your Own Model (BYOM)
Every developer has a unique workflow when using AI, and different companies have different policies on AI model usage. With this release, Android Studio now brings you more flexibility by allowing you to choose the LLM that powers the AI functionality in Android Studio, giving you more control over performance, privacy, and cost.
Use a remote model
You can now integrate remote models, such as OpenAI's GPT, Anthropic's Claude, or a similar model, directly into Android Studio. This allows you to leverage your preferred model provider without changing your IDE. To get started, configure a remote model provider in Settings by adding your API endpoint and key. Once configured, you can select your custom model directly from the picker in the AI chat window.
Use a local model
Use your Gemini API key
While Android Studio includes access to a default Gemini model with generous quotas at no cost, some developers need more. By adding your Gemini API key, Android Studio can directly access all the latest Gemini models available from Google.
For example, this allows you to use the most recent Gemini 3 Pro and Gemini 3 Flash models (among others) with expanded context windows and quota. This is especially useful for developers who are using Agent Mode for extended coding sessions, where this additional processing power can provide higher fidelity responses.
Agent Mode enhancements
Run your app and interact with it on devices
Agent Mode can now deploy an application to the connected device, inspect what is currently shown on the screen, take screenshots, check Logcat for errors, and interact with the running application. This lets the agent help you with changes or fixes that involve re-running the application, checking for errors, and verifying that a particular update was made successfully (for example, by taking and reviewing screenshots).
Find and review changes using the changes drawer
Manage multiple conversation threads
Journeys for Android Studio
Support for remote MCP servers
Supercharge your UI development with Agent Mode
Create new UI from a design mock
Match your UI with a target image
Iterate on your UI with natural language
Find and fix UI quality issues
Beyond iterating on your UI, Gemini also helps streamline your development environment.
To accelerate your setup, you can:
- Generate Compose Previews: This feature is now enhanced by Agent Mode to provide more accurate results. When working in a file that has Composable functions but no @Preview annotations, you can right-click on the Composable and select Gemini > Generate [Composable name] Preview. The agent now analyzes your Composable more thoroughly to generate the necessary boilerplate with correct parameters, helping to ensure that a successfully rendered preview is added (an illustrative sketch follows after this list).
- Fix Preview rendering errors: When a Compose Preview fails to render, Gemini can now analyze the error message and your code to find the root cause and apply a fix.
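As a rough illustration of the boilerplate involved, a generated preview typically ends up looking something like the following sketch (hypothetical Composable and names, not the agent's literal output):

import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.tooling.preview.Preview

// Hypothetical Composable with parameters; names are illustrative only.
@Composable
fun ProfileCard(name: String, isOnline: Boolean) {
    Text(text = if (isOnline) "$name (online)" else name)
}

// The kind of boilerplate a generated preview adds: a parameterless
// @Preview function that invokes the Composable with sample arguments.
@Preview(showBackground = true)
@Composable
fun ProfileCardPreview() {
    ProfileCard(name = "Ada Lovelace", isOnline = true)
}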
App Links Assistant
The App Links Assistant now integrates with Agent Mode to automate the creation of deep link logic, simplifying one of the most time-consuming steps of implementation. Instead of manually writing code to parse incoming intents and navigate users to the correct screen, you can now let Gemini generate the necessary code and tests. Gemini presents a diff view of the suggested code changes for your review and approval, streamlining the process of handling deep links and ensuring users are seamlessly directed to the right content in your app.
To get started, open the App Links Assistant through the tools menu, then choose Create Applink. In the second step, Add logic to handle the intent, select Generate code with AI assistance. If a sample URL is available, enter it, and then click Insert Code.
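For context, the intent-handling logic that this automates usually looks roughly like the following sketch (activity, screen and URL names are hypothetical, and the code Gemini actually generates will depend on your app's navigation setup):

import android.content.Intent
import android.net.Uri
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

// Hypothetical activity declared as the intent filter target for the app link.
class DeepLinkActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        handleDeepLink(intent)
    }

    private fun handleDeepLink(intent: Intent) {
        // An app link arrives as ACTION_VIEW with the matched URL in intent.data.
        val uri: Uri? = intent.data
        if (intent.action == Intent.ACTION_VIEW && uri != null) {
            when (uri.pathSegments.firstOrNull()) {
                // e.g. https://example.com/product/42 -> product detail screen
                "product" -> openProduct(uri.lastPathSegment)
                else -> openHome()
            }
        } else {
            openHome()
        }
    }

    private fun openProduct(productId: String?) { /* navigate to the product screen */ }
    private fun openHome() { /* navigate to the home screen */ }
}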
Automatic Logcat Retracing
Debugging R8-optimized code just became seamless. Previously, when R8 was enabled (minifyEnabled = true in your build.gradle.kts file), it would obfuscate stack traces, changing class names, methods, and line numbers. To find the source of a crash, developers had to manually use the R8 retrace command line tool.
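For reference, R8 is usually switched on in the release build type roughly as follows (a standard Android Gradle sketch, shown only to illustrate the flag that produces the obfuscated stack traces; in the Kotlin DSL the property is spelled isMinifyEnabled):

// build.gradle.kts (module level): enabling R8 for release builds
android {
    buildTypes {
        release {
            // R8 shrinks and obfuscates the code, so stack traces become obfuscated too.
            isMinifyEnabled = true
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
}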
Starting with Android Studio Otter 3 Feature Drop with AGP versions 8.12 and above, this extra step is no longer necessary. Logcat now automatically detects and retraces R8-processed stack traces, so you can see the original, human-readable stack trace directly in the IDE. This provides a much-improved debugging experience with no extra work required.
Fused Library Plugin: Publish multiple Android libraries as one
Get started
Ready to dive in and accelerate your development? Download Android Studio Otter 3 Feature Drop and start exploring these powerful new features today!
As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!
15 Jan 2026 5:18pm GMT
08 Jan 2026
Android Developers Blog
Ultrahuman launches features 15% faster with Gemini in Android Studio
Posted by Amrit Sanjeev, Developer Relations Engineer and Trevor Johns, Developer Relations Engineer
Ultrahuman is a consumer health-tech startup that provides daily well-being insights to users based on biometric data from the company's wearables, like the RING Air and the M1 Live Continuous Glucose Monitor (CGM). The Ultrahuman team leaned on Gemini in Android Studio's contextually aware tools to streamline and accelerate their development process.
Ultrahuman's app is maintained by a lean team of just eight developers. They prioritize building features that their users love, while also carrying a backlog of bugs and performance improvements that take a lot of time. The team needed to scale up their output of feature improvements and work through those performance issues without increasing headcount. One of their biggest opportunities was reducing the time and effort spent on that backlog: every hour saved on maintenance could be reinvested into working on features for their users.
Solving technical hurdles and boosting performance with Gemini
The team integrated Gemini in Android Studio to see if its AI-enhanced tools could improve their workflow by handling many Android tasks. First, the team turned to the Gemini chat inside Android Studio. The goal was to prototype a GATT Server implementation for their application's Bluetooth Low Energy (BLE) connectivity.
As Ultrahuman's Android Development Lead, Arka, noted, "Gemini helped us reach a working prototype in under an hour, something that would have otherwise taken us several hours." The BLE implementation provided by Gemini worked perfectly for syncing large amounts of health sensor data while the app ran in the background, improving the data syncing process and saving battery life on both the user's Android phone and Ultrahuman's paired wearable device.
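For illustration only, a bare-bones GATT server on Android is set up roughly like the following sketch, using the standard android.bluetooth APIs (UUIDs and names are hypothetical; this is not Ultrahuman's actual code):

import android.bluetooth.BluetoothDevice
import android.bluetooth.BluetoothGatt
import android.bluetooth.BluetoothGattCharacteristic
import android.bluetooth.BluetoothGattServer
import android.bluetooth.BluetoothGattServerCallback
import android.bluetooth.BluetoothGattService
import android.bluetooth.BluetoothManager
import android.content.Context
import java.util.UUID

// Hypothetical UUIDs for a custom sensor service and characteristic.
private val SERVICE_UUID: UUID = UUID.fromString("0000aaaa-0000-1000-8000-00805f9b34fb")
private val CHAR_UUID: UUID = UUID.fromString("0000bbbb-0000-1000-8000-00805f9b34fb")

// Requires the BLUETOOTH_CONNECT runtime permission on Android 12+.
class SensorGattServer(private val context: Context) {
    private var server: BluetoothGattServer? = null

    private val callback = object : BluetoothGattServerCallback() {
        override fun onCharacteristicReadRequest(
            device: BluetoothDevice, requestId: Int, offset: Int,
            characteristic: BluetoothGattCharacteristic
        ) {
            // Answer read requests with the current sensor payload (stubbed here).
            val payload = byteArrayOf(0x00)
            server?.sendResponse(device, requestId, BluetoothGatt.GATT_SUCCESS, offset, payload)
        }
    }

    fun start() {
        val manager = context.getSystemService(Context.BLUETOOTH_SERVICE) as BluetoothManager
        server = manager.openGattServer(context, callback).also { gatt ->
            // Expose one primary service with a readable, notifiable characteristic.
            val service = BluetoothGattService(SERVICE_UUID, BluetoothGattService.SERVICE_TYPE_PRIMARY)
            val characteristic = BluetoothGattCharacteristic(
                CHAR_UUID,
                BluetoothGattCharacteristic.PROPERTY_READ or BluetoothGattCharacteristic.PROPERTY_NOTIFY,
                BluetoothGattCharacteristic.PERMISSION_READ
            )
            service.addCharacteristic(characteristic)
            gatt.addService(service)
        }
    }

    fun stop() {
        server?.close()
        server = null
    }
}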
Beyond this core challenge, Gemini also proved invaluable for finding algorithmic optimizations in a custom open-source library, pointing to helpful documentation, assisting with code commenting, and analyzing crash logs. The Ultrahuman team also used code completion to help them breeze through writing otherwise repetitive code, Jetpack Compose Preview Generation to enable rapid iteration during UI design, and Agent Mode for managing complex, project-wide changes, such as rendering a new stacked bar graph that mapped to backend data models and UI models.
Transforming productivity and accelerating feature delivery
These improvements have saved the team dozens of hours each week. This reclaimed time is being used to deliver new features to Ultrahuman's beta users 10-15% faster. For example, the team built a new in-app AI assistant for users, powered by Gemini 2.5 Flash. The UI design, architecture, and parts of the user experience for this new feature were initially suggested by Gemini in Android Studio, showcasing a full-circle AI-assisted development process.
Accelerate your Android development with Gemini
Gemini's expert Android advice, closely integrated throughout Android Studio, helps Android developers spend less time digging through documentation and writing boilerplate code, freeing up more time to innovate.
Learn how Gemini in Android Studio can help your team resolve complex issues, streamline workflows, and ship new features faster.
08 Jan 2026 10:00pm GMT
05 Dec 2025
Planet Maemo
Meow: Process log text files as if you could make cat speak
Some years ago I had mentioned some command line tools I used to analyze and find useful information on GStreamer logs. I've been using them consistently along all these years, but some weeks ago I thought about unifying them in a single tool that could provide more flexibility in the mid term, and also as an excuse to unrust my Rust knowledge a bit. That's how I wrote Meow, a tool to make cat speak (that is, to provide meaningful information).
The idea is that you can cat a file through meow and apply the filters, like this:
cat /tmp/log.txt | meow appsinknewsample n:V0 n:video ht: \
ft:-0:00:21.466607596 's:#([A-Za-z][A-Za-z]*/)*#'
which means "select those lines that contain appsinknewsample (with case insensitive matching), but don't contain V0 nor video (that is, by exclusion, only that contain audio, probably because we've analyzed both and realized that we should focus on audio for our specific problem), highlight the different thread ids, only show those lines with timestamp lower than 21.46 sec, and change strings like Source/WebCore/platform/graphics/gstreamer/mse/AppendPipeline.cpp to become just AppendPipeline.cpp", to get an output as shown in this terminal screenshot:

Cool, isn't it? After all, I'm convinced that the answer to any GStreamer bug is always hidden in the logs (or will be, as soon as I add "just a couple of log lines more, bro").
05 Dec 2025 11:16am GMT
15 Oct 2025
Planet Maemo
Dzzee 1.9.0 for N800/N810/N900/N9/Leste
15 Oct 2025 11:31am GMT
18 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub
I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.
So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.
Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were, in their early days as an ISP, the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.
Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org
But I digress. So today, there was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed specifically for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~ 100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as a grandmaster clock for the TDM network.
The equipment deployed in this installation currently contains:
- a rather beefy Supermicro 2U server with EPYC 7113P CPU and 4x PCIe, two of which are populated with Digium TE820 cards, resulting in a total of 16 E1 ports
- an icE1usb with RS422 interface board connected via 100m RS422 to an Ericsson GPS03 receiver. There are two layers of over-voltage protection on the RS422 (each with gas discharge tubes and TVS) and two stages of over-voltage protection in the coaxial cable between antenna and GPS receiver.
- a Livingston Portmaster3 RAS server
- a Cisco AS5400 RAS server
For more details, see this wiki page and this ticket
Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.
In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.
Acknowledgements
I'd like to thank everyone helping this effort, specifically:
- Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
- noris.net for sponsoring the co-location
- sysmocom for sponsoring the EPYC server hardware
18 Sep 2022 10:00pm GMT
08 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front
Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.
This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.
My primary interest is in Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations and started to search for the related data sheets.
The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS as the main entry page and sub-pages covering the individual modules/cards.
In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.
So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.
08 Sep 2022 10:00pm GMT
Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables
If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.
In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.
In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.
So this Digium timing cable is needed in applications where you have more PRI lines than fit on one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable is supposed to ensure that the clock received on one port from the public operator is used as the transmit bit-clock on all of the other ports, no matter on which card.
Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.
bursty bit clock changes until link is up
The first problem is that the downstream ports' transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4
As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.
What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.
As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.
This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock, irrespective of whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.
But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.
clock drift between master and slave cards
Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.
When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(
Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.
The work-around
If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.
In this setup, your slave card[s] will have perfect bit clock sync/lock.
It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).
08 Sep 2022 10:00pm GMT