20 Nov 2025
TalkAndroid
Boba Story Lid Recipes – 2025
Look no further for all the latest Boba Story Lid Recipes. They are all right here!
20 Nov 2025 4:07pm GMT
Dice Dreams Free Rolls – Updated Daily
Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.
20 Nov 2025 4:07pm GMT
Huge Anker, Soundcore & Eufy Deals Slash Prices by Up to 53%
If you're looking to upgrade your charging setup, refresh your audio gear, or add some smart home convenience…
20 Nov 2025 3:40pm GMT
19 Nov 2025
Android Developers Blog
Jetpack Navigation 3 is stable
Posted by Don Turner - Developer Relations Engineer
Jetpack Navigation 3 version 1.0 is stable 🎉. Go ahead and use it in your production apps today. JetBrains are already using it in their KotlinConf app.
Navigation 3 is a new navigation library built from the ground up to embrace Jetpack Compose state. It gives you full control over your back stack, helps you retain navigation state, and allows you to easily create adaptive layouts (like list-detail). There's even a cross-platform version from JetBrains.
Why a new library?
The original Jetpack Navigation library (referred to here as Nav2) was designed seven years ago and, while it has served its original goals well and been improved iteratively, the way apps are built has fundamentally changed.
Reactive programming with a declarative UI is now the norm. Nav3 embraces this approach. For example, NavDisplay (the Nav3 UI component that displays your screens) simply observes a list of keys (each one representing a screen) backed by Compose state and updates its UI when that list changes.
Figure 1. NavDisplay observes changes to a list backed by Compose state.
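To make this concrete, here's a minimal sketch of the pattern. It follows the documented NavKey/rememberNavBackStack/NavDisplay APIs; HomeScreen and DetailScreen are placeholder composables:
import androidx.compose.runtime.Composable
import androidx.navigation3.runtime.NavKey
import androidx.navigation3.runtime.entry
import androidx.navigation3.runtime.entryProvider
import androidx.navigation3.runtime.rememberNavBackStack
import androidx.navigation3.ui.NavDisplay
import kotlinx.serialization.Serializable

@Serializable data object Home : NavKey
@Serializable data class Detail(val id: String) : NavKey

@Composable
fun App() {
    // The back stack is plain Compose state: a list of keys that you own.
    val backStack = rememberNavBackStack(Home)
    NavDisplay(
        backStack = backStack,
        onBack = { backStack.removeLastOrNull() },
        entryProvider = entryProvider {
            // Each entry maps a key to the screen that renders it.
            entry<Home> { HomeScreen(onItemClick = { id -> backStack.add(Detail(id)) }) }
            entry<Detail> { key -> DetailScreen(key.id) }
        }
    )
}
Pushing a key onto the list navigates forward; removing one navigates back. NavDisplay simply reacts to the change.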
Nav2 can also make it difficult to have a single source of truth for your navigation state because it has its own internal state. With Nav3, you supply your own state, which gives you complete control.
Lastly, you asked for more flexibility and customizability. Rather than having a single, monolithic API, Nav3 provides smaller, decoupled APIs (or "building blocks") that can be combined together to create complex functionality. Nav3 itself uses these building blocks to provide sensible defaults for well-defined navigation use cases.
This approach allows you to:
- Customize screen animations at both a global and individual level
- Display multiple panes at the same time, and create flexible layouts using the Scenes API
- Easily replace Nav3 components with your own implementations if you want custom behavior
Read more about its design and features in the launch blog.
Migrating from Navigation 2
If you're already using Nav2, specifically Navigation Compose, you should consider migrating to Nav3. To assist you with this, there is a migration guide. The key steps are:
- Add the Navigation 3 dependencies.
- Update your navigation routes to implement NavKey. Your routes don't have to implement this interface to use Nav3, but if they do, you can take advantage of Nav3's rememberNavBackStack function to create a persistent back stack.
- Create classes to hold and modify your navigation state - this is where your back stacks are held (a minimal sketch follows this list).
- Replace NavController with these classes.
- Move your destinations from NavHost's NavGraph into an entryProvider.
- Replace NavHost with NavDisplay.
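Here's what such a state holder could look like - a hedged sketch only, since Nav3 deliberately doesn't prescribe this class, and the names are illustrative:
import androidx.compose.runtime.mutableStateListOf
import androidx.compose.runtime.snapshots.SnapshotStateList
import androidx.navigation3.runtime.NavKey

// Illustrative only: you own this state; Nav3 just observes the list.
class Navigator(start: NavKey) {
    // Backed by Compose state so NavDisplay recomposes when it changes.
    val backStack: SnapshotStateList<NavKey> = mutableStateListOf(start)

    fun navigateTo(key: NavKey) { backStack.add(key) }
    fun goBack() { backStack.removeLastOrNull() }
}
Replacing NavController then means calling navigateTo and goBack from your UI instead of navigate and popBackStack.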
Experimenting with AI agent migration
You may want to experiment with using an AI agent to read the migration guide and perform the steps on your project. To try this with Gemini in Android Studio's Agent Mode:
- Save this markdown version of the guide into your project.
- Paste this prompt to the agent (but don't hit enter): "Migrate this project to Navigation 3 using ".
- Type @migration-guide.md - this will supply the guide as context to the agent.
As always, make sure you carefully review the changes made by the AI agent - it can make mistakes!
We'd love to hear how you or your agent performed, please send your feedback here.
Tasty navigation recipes for common scenarios
For common but nuanced use cases, we have a recipes repository. This shows how to combine the Nav3 APIs in a particular way, allowing you to choose or modify the recipe to your particular needs. If a recipe turns out to be popular, we'll consider "graduating" the non-nuanced parts of it into the core Nav3 library or add-on libraries.
Figure 2. Useful code recipes can graduate into a library.
There are currently 19 recipes, including for:
- Passing navigation arguments to ViewModels (including using Koin)
- Returning results from screens by events and by shared state
We're currently working on a deeplinks recipe, plus a Koin integration, and have plenty of others planned. An engineer from JetBrains has also published a Compose Multiplatform version of the recipes.
If you have a common use case that you'd like to see a recipe for, please file a recipe request.
Summary
To get started with Nav3, check out the docs and the recipes. Plus, keep an eye out for a whole week of technical content including:
- A deep dive video on the API covering modularization, animations and adaptive layouts.
- A live Ask Me Anything (AMA) with the engineers who built Nav3.
Nav3 Spotlight Week starts Dec 1st 2025.
As always, if you find any issues, please file them here.
19 Nov 2025 8:02pm GMT
Stronger threat detection, simpler integration: Protect your growth with the Play Integrity API
Posted by Dom Elliott - Group Product Manager, Google Play and Eric Lynch - Senior Product Manager, Android Security
In the mobile ecosystem, abuse can threaten your revenue, growth, and user trust. To help developers thrive, Google Play offers a resilient threat detection service, the Play Integrity API. The Play Integrity API helps you verify that interactions and server requests are genuine: coming from your unmodified app on a certified Android device, installed by Google Play.
The impact is significant: apps using Play integrity features see 80% lower unauthorized usage on average compared to other apps. Today, leaders across diverse categories, including Uber, TikTok, Stripe, Kabam, Wooga, Radar.com, Zimperium, Paytm, and Remini, use it to help safeguard their businesses.
We're continuing to improve the Play Integrity API, making it easier to integrate, more resilient against sophisticated attacks, and better at recovering users who don't meet integrity standards or encounter errors with new Play in-app remediation prompts.
Detect threats to your business
The Play Integrity API offers verdicts designed to detect specific threats that impact your bottom line during critical interactions; a sketch of how a server might check these verdicts follows the list.
- Unauthorized access: The accountDetails verdict helps you determine whether the user installed or paid for your app or game on Google Play.
- Code tampering: The appIntegrity verdict helps you determine whether you're interacting with your unmodified binary that Google Play recognizes.
- Risky devices and emulated environments: The deviceIntegrity verdict helps you determine whether your app is running on a genuine Play Protect certified Android device or a genuine instance of Google Play Games for PC.
- Unpatched devices: For devices running Android 13 and higher, the MEETS_STRONG_INTEGRITY response in the deviceIntegrity verdict helps you determine if a device has applied recent security updates. You can also opt in to deviceAttributes to include the attested Android SDK version in the response.
- Risky access by other apps: The appAccessRiskVerdict helps you determine whether apps are running that could be used to capture the screen, display overlays, or control the device (for example, by misusing the accessibility permission). This verdict automatically excludes apps that serve genuine accessibility purposes.
- Known malware: The playProtectVerdict helps you determine whether Google Play Protect is turned on and whether it has found risky or dangerous apps installed on the device.
- Hyperactivity: The recentDeviceActivity level helps you determine whether a device has made an anomalously high volume of integrity token requests recently, which could indicate automated traffic and a possible attack.
- Repeat abuse and reused devices: deviceRecall (beta) helps you determine whether you're interacting with a device that you've previously flagged, even if your app was reinstalled or the device was reset. With device recall, you can customize the repeat actions you want to track.
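As a rough illustration, here's a hedged Kotlin sketch of how a backend might inspect a decoded verdict payload. The field names follow the published verdict format, but verify the exact shape against the documentation before relying on it:
import kotlinx.serialization.json.*

// Hedged sketch: evaluates a decoded integrity verdict payload.
// Field names follow the documented verdict format; verify against the docs.
fun verdictLooksTrustworthy(payload: JsonObject): Boolean {
    val deviceVerdicts = payload["deviceIntegrity"]?.jsonObject
        ?.get("deviceRecognitionVerdict")?.jsonArray
        ?.map { it.jsonPrimitive.content } ?: emptyList()
    val appVerdict = payload["appIntegrity"]?.jsonObject
        ?.get("appRecognitionVerdict")?.jsonPrimitive?.content
    val licensingVerdict = payload["accountDetails"]?.jsonObject
        ?.get("appLicensingVerdict")?.jsonPrimitive?.content
    return "MEETS_DEVICE_INTEGRITY" in deviceVerdicts &&
        appVerdict == "PLAY_RECOGNIZED" &&
        licensingVerdict == "LICENSED"
}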
The API can be used across Android form factors including phones, tablets, foldables, Android Auto, Android TV, Android XR, ChromeOS, Wear OS, and on Google Play Games for PC.
Make the most of Play Integrity API
Apps and games have found success with the Play Integrity API by following the security considerations and taking a phased approach to their anti-abuse strategy.
Step 1: Decide what you want to protect. Identify which actions and server requests in your apps and games are important to verify and protect. For example, you could perform integrity checks when a user is launching the app, signing in, joining a multiplayer game, generating AI content, or transferring money.
Step 2: Collect integrity verdict responses: Perform integrity checks at important moments to start collecting verdict data, without enforcement initially. That way you can analyze the responses for your install base and see how they correlate with your existing abuse signals and historical abuse data.
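In the app, collecting a verdict is a single asynchronous call. Here's a hedged sketch of the classic request flow; sendTokenToServer is a placeholder for your own networking code, and the nonce should be generated server-side:
import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

// Hedged sketch of the classic request flow. sendTokenToServer is a
// placeholder; your server decrypts the token and evaluates the verdict.
fun collectIntegrityVerdict(context: Context, nonce: String) {
    val integrityManager = IntegrityManagerFactory.create(context)
    integrityManager
        .requestIntegrityToken(
            IntegrityTokenRequest.builder()
                .setNonce(nonce) // server-generated, single-use
                .build()
        )
        .addOnSuccessListener { response -> sendTokenToServer(response.token()) }
        .addOnFailureListener { e ->
            // Log the error code; some errors are remediable (see the
            // GET_INTEGRITY dialog below).
        }
}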
NEW: Let Play recover users with issues automatically
Deciding how to respond to different integrity signals can be complex: you need to handle various integrity responses and API error codes (like network issues or outdated Play services). We're simplifying this with new Play in-app remediation prompts. You can show a Google Play prompt to your users to automatically fix a wide range of issues directly within your app. This reduces integration complexity, ensures a consistent user interface, and helps get more users back to a good state.
GET_INTEGRITY automatically detects the issue (in this example, a network error) and resolves it.
You can trigger the GET_INTEGRITY dialog, available in Play Integrity API library version 1.5.0+, after a range of issues to automatically guide the user through the necessary fixes, including:
- Unauthorized access: GET_INTEGRITY guides the user back to a Play licensed response in accountDetails.
- Code tampering: GET_INTEGRITY guides the user back to a Play recognized response in appIntegrity.
- Device integrity issues: GET_INTEGRITY guides the user on how to get back to the MEETS_DEVICE_INTEGRITY state in deviceIntegrity.
- Remediable error codes: GET_INTEGRITY resolves remediable API errors, such as prompting the user to fix network connectivity or update Google Play Services.
We also offer specialized dialogs including GET_STRONG_INTEGRITY (which works like GET_INTEGRITY while also getting the user back to the MEETS_STRONG_INTEGRITY state with no known malware issues in the playProtectVerdict), GET_LICENSED (which gets the user back to a Play licensed and Play recognized state), and CLOSE_UNKNOWN_ACCESS_RISK and CLOSE_ALL_ACCESS_RISK (which prompt the user to close potentially risky apps).
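As a rough illustration of triggering the dialog from the classic API (assuming library 1.5.0+; verify the exact showDialog signature and IntegrityDialogTypeCode constants against the docs):
import android.app.Activity
import com.google.android.play.core.integrity.IntegrityTokenResponse
import com.google.android.play.core.integrity.model.IntegrityDialogTypeCode

// Hedged sketch: after an unsatisfactory verdict, ask Play to remediate
// in-app, then re-request a token and re-check before granting access.
fun showRemediationDialog(activity: Activity, tokenResponse: IntegrityTokenResponse) {
    tokenResponse
        .showDialog(activity, IntegrityDialogTypeCode.GET_INTEGRITY)
        .addOnSuccessListener { dialogCode ->
            // Request a fresh verdict here; don't trust the dialog result alone.
        }
}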
Choose modern integrity solutions
In addition to Play Integrity API, Google offers several other features to consider as part of your overall anti-abuse strategy. Both Play Integrity API and Play's automatic protection offer user experience and developer benefits for safeguarding app distribution. We encourage existing apps to migrate to these modern integrity solutions instead of using the legacy Play licensing library.
Automatic protection: Prevent unauthorized access with Google Play's automatic protection and ensure users continue getting your official app updates. Turn it on and Google Play will automatically add an installer check to your app's code, with no developer integration work required. If your protected app is redistributed or shared through another channel, then the user will be prompted to get your app from Google Play. Eligible Play developers also have access to Play's advanced anti-tamper protection, which uses obfuscation and runtime checks to make it harder and costlier for attackers to modify and redistribute protected apps.
Safeguard your business today
With a strong foundation in hardware-backed security and new automated remediation dialogs simplifying integration, the Play Integrity API is an essential tool for protecting your growth.
Get started with the Play Integrity API documentation.
19 Nov 2025 6:11pm GMT
Deeper Performance Considerations

Posted by Ben Weiss - Senior Developer Relations Engineer, Breana Tate - Developer Relations Engineer, Jossi Wolf - Software Engineer on Compose

Compose yourselves and let us guide you through more background on performance.
Welcome to day 3 of Performance Spotlight Week. Today we're continuing to share details and guidance on important areas of app performance. We're covering Profile Guided Optimization, Jetpack Compose performance improvements and considerations on working behind the scenes. Let's dive right in.
Profile Guided Optimization
Baseline Profiles and Startup Profiles are foundational to improve an Android app's startup and runtime performance. They are part of a group of performance optimizations called Profile Guided Optimization.
When an app is packaged, the d8 dexer takes classes and methods and populates your app's classes.dex files. When a user opens the app, these dex files are loaded, one after the other until the app can start. By providing a Startup Profile you let d8 know which classes and methods to pack in the first classes.dex files. This structure allows the app to load fewer files, which in turn improves startup speed.
Baseline Profiles effectively move the Just in Time (JIT) compilation steps away from user devices and onto developer machines. The generated Ahead Of Time (AOT) compiled code has proven to reduce startup time and rendering issues alike.
Trello and Baseline Profiles
We asked engineers on the Trello app how Baseline Profiles affected their app's performance. After applying Baseline Profiles to their main user journey, Trello saw a significant 25% reduction in app startup time.
Trello was able to improve their app's startup time by 25% by using Baseline Profiles.
Baseline Profiles at Meta
Also, engineers at Meta recently published an article on how they are accelerating their Android apps with Baseline Profiles.
Across Meta's apps the teams have seen various critical metrics improve by up to 40% after applying Baseline Profiles.
Technical improvements like these help you improve user satisfaction and business success as well. Sharing results like these with your product owners, CTOs and decision makers can help you make the case for investing in your app's performance.
Get started with Baseline Profiles
To generate either a Baseline or a Startup Profile, you write a macrobenchmark test that exercises the app. During the test, profile data is collected, which will be used during app compilation. The tests are written using the new UiAutomator API, which we'll cover tomorrow.
Writing a benchmark like this is straightforward and you can see the full sample on GitHub.
// Assumes a BaselineProfileRule declared in the test class, e.g.:
// @get:Rule val rule = BaselineProfileRule()
@Test
fun profileGenerator() {
    rule.collect(
        packageName = TARGET_PACKAGE,
        maxIterations = 15,
        stableIterations = 3,
        includeInStartupProfile = true
    ) {
        uiAutomator {
            startApp(TARGET_PACKAGE)
        }
    }
}
Considerations
Start by writing a macrobenchmark test that generates both a Baseline Profile and a Startup Profile for the path most traveled by your users. This means the main entry point your users take into your app, which is usually what they see after logging in. Then continue to write more test cases to capture a more complete picture, but only for Baseline Profiles; a Startup Profile should only cover startup. You do not need to cover everything with a Baseline Profile. Stick to the most used paths and measure performance in the field. More on that in tomorrow's post.
Get started with Profile Guided Optimization
To learn how Baseline Profiles work under the hood, watch this video from the Android Developers Summit:
And check out the Android Build Time episode on Profile Guided Optimization for another in-depth look:
We also have extensive guidance on Baseline Profiles and Startup Profiles available for further reading.
Jetpack Compose performance improvements
The UI framework for Android has seen the performance investment of the engineering team pay off. From version 1.9 of Jetpack Compose, scroll jank has dropped to 0.2% during an internal long-scrolling benchmark test.
These improvements were made possible because of several features packed into the most recent releases.
Customizable cache window
By default, lazy layouts only compose one item ahead of time in the direction of scrolling, and items that scroll off screen are discarded. You can now customize the number of items to retain, expressed as a fraction of the viewport or as a dp size. This lets your app perform more work upfront and, with pausable composition enabled, use the time in between frames more efficiently.
To start using customizable cache windows, instantiate a LazyLayoutCacheWindow and pass it to your lazy list or lazy grid. Measure your app's performance using different cache window sizes, for example 50% of the viewport. The optimal value will depend on your content's structure and item size.
val dpCacheWindow = LazyLayoutCacheWindow(ahead = 150.dp, behind = 100.dp)
val state = rememberLazyListState(cacheWindow = dpCacheWindow)
LazyColumn(state = state) {
    // column contents
}
Pausable composition
This feature allows compositions to be paused, and their work split up over several frames. The APIs landed in 1.9, and pausable composition is used by default for lazy layout prefetch in 1.10. You should see the most benefit with complex items that have longer composition times.
More Compose performance optimizations
In the versions 1.9 and 1.10 of Compose the team also made several optimizations that are a bit less obvious.
Several APIs that use coroutines under the hood have been improved. For example, when using Draggable and Clickable, developers should see faster reaction times and reduced allocation counts.
Optimizations in layout rectangle tracking have improved performance of Modifiers like onVisibilityChanged() and onLayoutRectChanged(). This speeds up the layout phase, even when not explicitly using these APIs.
Another performance improvement is using cached values when observing positions via onPlaced().
Prefetch text in the background
Starting with version 1.9, Compose can prefetch text on a background thread. This pre-warms caches to enable faster text layout, which matters for rendering performance. During layout, text is passed into the Android framework, where a word cache is populated; by default this runs on the UI thread. Offloading prefetching and populating the word cache onto a background thread can speed up layout, especially for longer texts. To prefetch on a background thread, pass a custom executor to any composable that uses BasicText under the hood by providing LocalBackgroundTextMeasurementExecutor through a CompositionLocalProvider, like so:
val backgroundTextMeasurementExecutor = Executors.newSingleThreadExecutor()
CompositionLocalProvider(
    LocalBackgroundTextMeasurementExecutor provides backgroundTextMeasurementExecutor
) {
    BasicText("Some text that should be measured on a background thread!")
}
Depending on the text, this can provide a performance boost to your text rendering. To make sure that it improves your app's rendering performance, benchmark and compare the results.
Background work performance considerations
Background Work is an essential part of many apps. You may be using libraries like WorkManager or JobScheduler to perform tasks like:
- Periodically uploading analytics events
- Syncing data between a backend service and a database
- Processing media (e.g. resizing or compressing images)
A key challenge while executing these tasks is balancing performance and power efficiency. WorkManager allows you to achieve this balance. It's designed to be power-efficient, and allows work to be deferred to an optimal execution window influenced by a number of factors, including constraints you specify or constraints imposed by the system.
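For instance, a deferrable sync can opt into constraints so the system picks a power-friendly window. A minimal sketch, where SyncWorker stands in for your own Worker subclass:
import android.content.Context
import androidx.work.*
import java.util.concurrent.TimeUnit

// Hedged sketch: a periodic sync deferred to a power-friendly window.
// SyncWorker is a placeholder for your own CoroutineWorker/Worker subclass.
fun schedulePeriodicSync(context: Context) {
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.UNMETERED) // wait for Wi-Fi
        .setRequiresBatteryNotLow(true)                // skip when battery is low
        .build()

    val request = PeriodicWorkRequestBuilder<SyncWorker>(6, TimeUnit.HOURS)
        .setConstraints(constraints)
        .build()

    WorkManager.getInstance(context).enqueueUniquePeriodicWork(
        "sync",                            // unique name: re-enqueueing is safe
        ExistingPeriodicWorkPolicy.KEEP,
        request
    )
}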
WorkManager is not a one-size-fits-all solution, though. Android also has a number of power-optimized APIs that are designed specifically with certain common Core User Journeys (CUJs) in mind.
Reference the Background Work landing page for a list of just a few of these, including updating a widget and getting location in the background.
Local Debugging tools for Background Work: Common Scenarios
To debug Background Work and understand why a task may have been delayed or failed, you need visibility into how the system has scheduled your tasks.
To help with this, WorkManager has several related tools to help you debug locally and optimize performance (some of these work for JobScheduler as well)! Here are some common scenarios you might encounter when using WorkManager, and an explanation of tools you can use to debug them.
Debugging why scheduled work is not executing
Scheduled work being delayed or not executing at all can be due to a number of factors, including specified constraints not being met or constraints having been imposed by the system.
The first step in investigating why scheduled work is not running is to confirm the work was successfully scheduled. After confirming the scheduling status, determine whether there are any unmet constraints or preconditions preventing the work from executing.
There are several tools for debugging this scenario.
Background Task Inspector
The Background Task Inspector is a powerful tool integrated directly into Android Studio. It provides a visual representation of all WorkManager tasks and their associated states (Running, Enqueued, Failed, Succeeded).
To debug why scheduled work is not executing with the Background Task Inspector, consult the listed Work status(es). An 'Enqueued' status indicates your Work was scheduled, but is still waiting to run.
Benefits: Aside from providing an easy way to view all tasks, this tool is especially useful if you have chained work. The Background Task Inspector offers a graph view that can visualize whether a previous task's failure impacted the execution of a following task.
Background Task Inspector list view
Background Task Inspector graph view
adb shell dumpsys jobscheduler
This command returns a list of all active JobScheduler jobs (which includes WorkManager Workers) along with specified constraints, and system-imposed constraints. It also returns job history.
Use this if you want a different way to view your scheduled work and associated constraints. For WorkManager versions earlier than WorkManager 2.10.0, adb shell dumpsys jobscheduler will return a list of Workers with this name:
[package name]/androidx.work.impl.background.systemjob.SystemJobService
If your app has multiple workers, updating to WorkManager 2.10.0 will allow you to see Worker names and easily distinguish between workers:
#WorkerName#@[package name]/androidx.work.impl.background.systemjob.SystemJobService
Benefits: This command is useful for understanding if there were any system-imposed constraints, which you cannot determine with the Background Task Inspector. For example, this will return your app's standby bucket, which can affect the window in which scheduled work completes.
Enable Debug logging
You can enable custom logging to see verbose WorkManager logs, which are prefixed with WM-.
Benefits: This gives you visibility into when work is scheduled, when constraints are fulfilled, and other lifecycle events. You can consult these logs while developing your app.
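A minimal sketch of opting in; note that providing a custom Configuration requires disabling the default WorkManager initializer in your manifest, per the WorkManager docs (MyApplication is a placeholder):
import android.app.Application
import android.util.Log
import androidx.work.Configuration

// Hedged sketch: verbose WorkManager logging via a custom Configuration.
class MyApplication : Application(), Configuration.Provider {
    override val workManagerConfiguration: Configuration
        get() = Configuration.Builder()
            .setMinimumLoggingLevel(Log.DEBUG)
            .build()
}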
WorkInfo.StopReason
If you notice unpredictable performance with a specific worker, you can programmatically observe the reason your worker was stopped on the previous run attempt with WorkInfo.getStopReason.
It's a good practice to configure your app to observe WorkInfo using getWorkInfoByIdFlow to identify if your work is being affected by background restrictions, constraints, frequent timeouts, or even stopped by the user.
Benefits: You can use WorkInfo.StopReason to collect field data about your workers' performance.
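A hedged sketch of that observation, assuming WorkManager 2.9+ (for stopReason) and a coroutine scope to collect in:
import android.util.Log
import androidx.work.WorkInfo
import androidx.work.WorkManager
import java.util.UUID
import kotlinx.coroutines.flow.filterNotNull

// Hedged sketch: logs why the previous run attempt of a worker was stopped.
suspend fun logStopReasons(workManager: WorkManager, workId: UUID) {
    workManager.getWorkInfoByIdFlow(workId)
        .filterNotNull()
        .collect { info ->
            if (info.stopReason != WorkInfo.STOP_REASON_NOT_STOPPED) {
                Log.d("WorkDebug", "worker stopped, stopReason=${info.stopReason}")
            }
        }
}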
Debugging WorkManager-attributed high wake lock duration flagged by Android vitals
Android vitals features an excessive partial wake locks metric, which highlights wake locks contributing to battery drain. You may be surprised to know that WorkManager acquires wake locks to execute tasks; if those wake locks exceed the threshold set by Google Play, it can impact your app's visibility. How can you debug why so much wake lock duration is attributed to your work? You can use the following tools.
Android vitals dashboard
First confirm in the Android vitals excessive wake lock dashboard that the high wake lock duration is from WorkManager and not an alarm or other wake lock. You can use the Identify wake locks created by other APIs documentation to understand which wake locks are held due to WorkManager.
Perfetto
Perfetto is a tool for analyzing system traces. When using it for debugging WorkManager specifically, you can view the "Device State" section to see when your work started, how long it ran, and how it contributes to power consumption.
Under "Device State: Jobs" track, you can see any workers that have been executed and their associated wake locks.
Device State section in Perfetto, showing CleanupWorker and BlurWorker execution.
Resources
Consult the Debug WorkManager page for an overview of the available debugging methods for other scenarios you might encounter.
And to try some of these methods hands on and learn more about debugging WorkManager, check out the Advanced WorkManager and Testing codelab.
Next steps
Today we moved beyond code shrinking and explored how the Android Runtime and Jetpack Compose actually render your app. Whether it's pre-compiling critical paths with Baseline Profiles or smoothing out scrolling with the new Compose 1.9 and 1.10 features, these tools focus on how your app feels. And we dove deep into best practices for debugging background work.
Ask Android
On Friday we're hosting a live AMA on performance. Ask your questions now using #AskAndroid and get them answered by the experts.
The challenge
We challenged you on Monday to enable R8. Today, we are asking you to generate one Baseline Profile for your app.
With Android Studio Otter, the Baseline Profile Generator module wizard makes this easier than ever. Pick your most critical user journey, even if it's just your app startup and login, and generate a profile.
Once you have it, run a Macrobenchmark to compare CompilationMode.None vs. CompilationMode.Partial.
Share your startup time improvements on social media using #optimizationEnabled.
Tune in tomorrow
You have shrunk your app with R8 and optimized your runtime with Profile Guided Optimization. But how do you prove these wins to your stakeholders? And how do you catch regressions before they hit production?
Join us tomorrow for Day 4: The Performance Leveling Guide, where we will map out exactly how to measure your success, from field data in Play Vitals to deep local tracing with Perfetto.
19 Nov 2025 5:00pm GMT
15 Oct 2025
Planet Maemo
Dzzee 1.9.0 for N800/N810/N900/N9/Leste
15 Oct 2025 11:31am GMT
05 Jun 2025
Planet Maemo
Mobile blogging, the past and the future
This blog has been running more or less continuously since the mid-nineties. The site has existed in multiple forms, and with different ways to publish. But what's common is that at almost all points there was a mechanism to publish while on the move.
Psion, documents over FTP
In the early 2000s we were into adventure motorcycling. To be able to share our adventures, we implemented a way to publish blogs while on the go. The device that enabled this was the Psion Series 5, a handheld computer that was very much a device ahead of its time.

The Psion had a reasonably sized keyboard, a good native word processing app, and battery life good for weeks of usage. Writing while underway was easy. The Psion could use a mobile phone as a modem over an infrared connection, and with that we could upload the documents to a server over FTP.
Server-side, a cron job would grab the new documents, converting them to HTML and adding them to our CMS.
In the early days of GPRS, getting this to work while roaming was quite tricky. But the system served us well for years.
If we wanted to include photos to the stories, we'd have to find an Internet cafe.
- To the Alps is a post from these times. Lots more in the motorcycling category
SMS and MMS
For an even more mobile setup, I implemented an SMS-based blogging system. We had an old phone connected to a computer back in the office, and I could write to my blog by simply sending a text. These would automatically end up as a new paragraph in the latest post. If I started the text with NEWPOST, an empty blog post would be created with the rest of that message's text as the title.
- In the Caucasus is a good example of a post from this era
As I got into neogeography, I could also send a NEWPOSITION message. This would update my position on the map, connecting weather metadata to the posts.
As camera phones became available, we wanted to do pictures too. For the Death Monkey rally where we rode minimotorcycles from Helsinki to Gibraltar, we implemented an MMS-based system. With that the entries could include both text and pictures. But for that you needed a gateway, which was really only realistic for an event with sponsors.
- Mystery of the Missing Monkey is typical. Some more in Internet Archive
Photos over email
A much easier setup than MMS was to slightly come back to the old Psion setup, but instead of word documents, sending email with picture attachments. This was something that the new breed of (pre-iPhone) smartphones were capable of. And by now the roaming question was mostly sorted.
And so my blog included a new "moblog" section. This is where I could share my daily activities as poor-quality pictures. Sort of how people would use Instagram a few years later.

- Internet Archive has some of my old moblogs but nowadays, I post similar stuff on Pixelfed
Pause
Then there was sort of a long pause in mobile blogging advancements. Modern smartphones, data roaming, and WiFi hotspots had become ubiquitous.
In the meanwhile the blog also got migrated to a Jekyll-based system hosted on AWS. That means the old Midgard-based integrations were off the table.
And I traveled off-the-grid rarely enough that it didn't make sense to develop a system.
But now that we're sailing offshore, that has changed. Time for new systems and new ideas. Or maybe just a rehash of the old ones?
Starlink, Internet from Outer Space
Most cruising boats, ours included, now run the Starlink satellite broadband system. This enables full Internet, even in the middle of an ocean, even video calls! With this, we can use normal blogging tools. The usual one for us is GitJournal, which makes it easy to write Jekyll-style Markdown posts and push them to GitHub.
However, Starlink is a complicated, energy-hungry, and fragile system on an offshore boat. The policies might change at any time, preventing our way of using it; the dishy itself, or the way we power it, may also fail.
But despite what you'd think, even on a nerdy boat like ours, loss of Internet connectivity is not an emergency. And this is where the old-style mobile blogging mechanisms come handy.
- Any of the 2025 Atlantic crossing posts is a good example of this setup in action
Inreach, texting with the cloud
Our backup system to Starlink is the Garmin Inreach. This is a tiny battery-powered device that connects to the Iridium satellite constellation. It allows tracking as well as basic text messaging.
When we head offshore we always enable tracking on the Inreach. This allows both our blog and our friends ashore to follow our progress.
I also made a simple integration where text updates sent to Garmin MapShare get fetched and published on our blog. Right now this is just plain text-based entries, but one could easily implement a command system similar to what I had over SMS back in the day.
One benefit of the Inreach is that we can also take it with us when we go on land adventures. And it'd even enable rudimentary communications if we found ourselves in a liferaft.
- There are various InReach integration hacks that could be used for more sophisticated data transfer
Sailmail and email over HF radio
The other potential backup for Starlink failures would be to go seriously old-school. It is possible to get email access via a SSB radio and a Pactor (or Vara) modem.
Our boat is already equipped with an isolated aft stay that can be used as an antenna. And with the popularity of Starlink, many cruisers are offloading their old HF radios.
Licensing-wise this system could be used either as a marine HF radio (requiring a Long Range Certificate), or amateur radio. So that part is something I need to work on. Thankfully post-COVID, radio amateur license exams can be done online.
With this setup we could send and receive text-based email. The Airmail application used for this can even do some automatic templating for position reports. We'd then need a mailbox that can receive these mails, and some automation to fetch and publish.
- Sailmail and No Foreign Land support structured data via email to update position. Their formats could be useful inspiration
05 Jun 2025 12:00am GMT
16 Oct 2024
Planet Maemo
Adding buffering hysteresis to the WebKit GStreamer video player
The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.
The player private can have 3 buffering modes:
- On-disk buffering: This is the typical mode on desktop systems, but is frequently disabled on purpose on embedded devices to avoid wearing out their flash storage memories. All the video content is downloaded to disk, and the buffering percentage refers to the total size of the video. A GstDownloader element is present in the pipeline in this case. Buffering level monitoring is done by polling the pipeline every second, using the fillTimerFired() method.
- In-memory buffering: This is the typical mode on embedded systems and on desktop systems in case of streamed (live) content. The video is downloaded progressively and only the part of it ahead of the current playback time is buffered. A GstQueue2 element is present in the pipeline in this case. Buffering level monitoring is done by listening to GST_MESSAGE_BUFFERING bus messages and using the buffering level stored on them. This is the case that motivates the refactoring described in this blog post, what we actually wanted to correct in Broadcom platforms, and what motivated the addition of hysteresis working on all the platforms.
- Local files: Files, MediaStream sources and other special origins of video don't do buffering at all (no GstDownloadBuffering nor GstQueue2 element is present on the pipeline). They work like the on-disk buffering mode in the sense that fillTimerFired() is used, but the reported level is relative, much like in the streaming case. In the initial version of the refactoring I was unaware of this third case, and only realized about it when tests triggered the assert that I added to ensure that the on-disk buffering method was working in GST_BUFFERING_DOWNLOAD mode.
The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.

So, one of the first things I tried to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add that to the levels reported by GstQueue2. There's also a GstMultiqueue in the pipeline that can hold a significant amount of buffers, so I also asked it for its level. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it.
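The smoothing idea itself is simple; here's a minimal sketch, in Kotlin for brevity (the actual implementation lives in WebKit's C++):
// Minimal sketch of the smoothing idea; window size is illustrative.
class MovingAverage(private val window: Int = 5) {
    private val samples = ArrayDeque<Int>()
    fun add(level: Int): Int {
        samples.addLast(level)
        if (samples.size > window) samples.removeFirst()
        return samples.sum() / samples.size
    }
}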
All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were considered part of MediaPlayerPrivateGStreamer before. They still had to be somehow linked to the player private but only accessible by the platform specific code of the quirks. A special HashMap attribute stores those quirks attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stored the concrete subclass to the caller did the trick.
Even with all those changes, undesirable swings in the buffering level kept happening, and when doing a careful analysis of the causes I noticed that the monitoring of the buffering level was being done from different places (in different moments) and sometimes the level was regarded as "enough" and the moment right after, as "insufficient". This was because the buffering level threshold was one single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve. So, a logical level change to "full" would only happen when the level goes above the high watermark, and a logical level change to "low" when it goes under the low watermark level.
For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().
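The watermark logic boils down to a few lines; again a Kotlin sketch for brevity, with illustrative 20/80 thresholds (the real code is WebKit C++):
// Minimal sketch of watermark hysteresis; thresholds are illustrative.
class BufferingHysteresis(private val low: Int = 20, private val high: Int = 80) {
    private var buffering = true
    // Returns true while the player should remain in the "buffering" state.
    fun update(levelPercent: Int): Boolean {
        if (buffering && levelPercent >= high) buffering = false
        else if (!buffering && levelPercent <= low) buffering = true
        return buffering
    }
}
Because the state only flips at the watermarks, small oscillations of the raw level around a single threshold no longer cause logical level changes.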
So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, so now WebKit GStreamer has buffering code that is much more robust than before. The instabilities observed in Broadcom devices were gone and I could, at last, close Issue 1309.
16 Oct 2024 6:12am GMT
18 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub
I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.
So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.
Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.
Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org
But I digress. So today, there was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed entirely for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the equipment is ~100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb whose internal GPS-DO serves as a grandmaster clock for the TDM network.
The equipment deployed in this installation currently contains:
- a rather beefy Supermicro 2U server with EPYC 7113P CPU and 4x PCIe slots, two of which are populated with Digium TE820 cards, resulting in a total of 16 E1 ports
- an icE1usb with RS422 interface board connected via 100m of RS422 to an Ericsson GPS03 receiver. There are two layers of over-voltage protection on the RS422 (each with gas discharge tubes and TVS) and two stages of over-voltage protection in the coaxial cable between antenna and GPS receiver.
- a Livingston Portmaster3 RAS server
- a Cisco AS5400 RAS server
For more details, see this wiki page and this ticket
Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.
In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.
Acknowledgements
I'd like to thank anyone helping this effort, specifically:
- Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
- noris.net for sponsoring the co-location
- sysmocom for sponsoring the EPYC server hardware
18 Sep 2022 10:00pm GMT
08 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front
Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.
This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.
My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations and started to search for the related data sheets.
The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about
In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.
So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.
08 Sep 2022 10:00pm GMT
Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables
If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.
In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.
In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.
So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.
Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.
bursty bit clock changes until link is up
The first problem is that the downstream port transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4
As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.
What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.
As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.
This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock irrespective of whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.
But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.
clock drift between master and slave cards
Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.
When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(
Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.
The work-around
If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.
In this setup, your slave card[s] will have perfect bit clock sync/lock.
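For DAHDI users, the idea maps onto the timing parameter in /etc/dahdi/system.conf. The snippet below is an illustration of the approach rather than a drop-in config; span numbers, framing and coding depend on your installation:
# Illustration only - adjust span numbers/framing/coding to your setup.
# Master card, span 1: recover clock from the upstream provider (timing=1).
span=1,1,0,ccs,hdb3
# Master card, span 2: transmit using the recovered clock (timing=0),
# connected by the external loopback cable to the slave card's span 3.
span=2,0,0,ccs,hdb3
# Slave card, span 3: take timing from the loopback cable (timing=1).
span=3,1,0,ccs,hdb3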
It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do but, in reality, don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).
08 Sep 2022 10:00pm GMT