18 Dec 2025
TalkAndroid
The best sci-fi show of 2025 is getting a jaw-dropping season 2
The summer of 2025 turned the tables on TV tradition, with massive blockbusters making waves under the sun-and…
18 Dec 2025 4:27pm GMT
Switch 2 Owners, Take Note: Handy Accessories With Last-Minute Christmas Savings
Switch on to these last-minute savings
18 Dec 2025 3:48pm GMT
Boba Story Lid Recipes – 2025
Look no further for all the latest Boba Story Lid Recipes. They are all right here!
18 Dec 2025 2:05pm GMT
17 Dec 2025
Android Developers Blog
Brighten Your Real-Time Camera Feeds with Low Light Boost
Posted by Donovan McMurray, Developer Relations Engineer
Today, we're diving into Low Light Boost (LLB), a powerful feature designed to brighten real-time camera streams. Unlike Night Mode, which requires a hold-still capture duration, Low Light Boost works instantaneously on your live preview and video recordings. LLB automatically adjusts how much brightening is needed based on available light, so it's optimized for every environment.
With a recent update, LLB lets Instagram users line up the perfect shot, while Instagram's existing Night Mode implementation still produces the same high-quality low-light photos its users have been enjoying for over a year.
Why Real-time Brightness Matters
While Night Mode aims to improve final image quality, Low Light Boost is intended for usability and interactivity in dark environments. It's also worth noting that, even though they work together very well, you can use LLB and Night Mode independently; as some of the use cases below show, LLB has value on its own even when Night Mode photos aren't needed. Here is how LLB improves the user experience:
-
Better Framing & Capture: In dimly lit scenes, a standard camera preview can be pitch black. LLB brightens the viewfinder, allowing users to actually see what they are framing before they hit the shutter button. For this experience, you can use Night Mode for the best quality low-light photo result, or you can let LLB give the user a "what you see is what you get" photo result.
-
Reliable Scanning: QR codes are ubiquitous, but scanning them in a dark restaurant or parking garage is often frustrating. With a significantly brighter camera feed, scanning algorithms can reliably detect and decode QR codes even in very dim environments.
-
Enhanced Interactions: For apps involving live video interactions (like AI assistants or video calls), LLB increases the amount of perceivable information, ensuring the computer vision models have enough data to work with.
The Difference in Instagram
It's easy to imagine the difference this makes in the user experience. If users aren't able to see what they're capturing, then there's a higher chance they'll abandon the capture.
Choosing Your Implementation
There are two ways to implement Low Light Boost to provide the best experience across the widest range of devices:
-
Low Light Boost AE Mode: This is a hardware-layer auto-exposure mode. It offers the highest quality and performance because it fine-tunes the Image Signal Processor (ISP) pipeline directly. Always check for this first.
-
Google Low Light Boost: If the device doesn't support the AE mode, you can fall back to this software-based solution provided by Google Play services. It applies post-processing to the camera stream to brighten it. As an all-software solution, it is available on a much wider range of devices, helping you bring LLB to more users.
Low Light Boost AE Mode (Hardware)
Mechanism:
This mode is supported on devices running Android 15 and newer and requires the OEM to have implemented support in the HAL (currently available on Pixel 10 devices). It integrates directly with the camera's Image Signal Processor (ISP). If you set CaptureRequest.CONTROL_AE_MODE to CameraMetadata.CONTROL_AE_MODE_ON_LOW_LIGHT_BOOST_BRIGHTNESS_PRIORITY, the camera system takes control.
Behavior:
The HAL/ISP analyzes the scene and adjusts sensor and processing parameters, often including increasing exposure time, to brighten the image. This can yield frames with a significantly improved signal-to-noise ratio (SNR) because the extended exposure time, rather than an increase in digital sensor gain (ISO), allows the sensor to capture more light information.
Advantage:
Potentially better image quality and power efficiency as it leverages dedicated hardware pathways.
Trade off:
May result in a lower frame rate in very dark conditions as the sensor needs more time to capture light. The frame rate can drop to as low as 10 FPS in very low light conditions.
Google Low Light Boost (Software via Google Play Services)
Mechanism:
This solution, distributed as an optional module via Google Play services, applies post-processing to the camera stream. It uses a sophisticated real-time image enhancement technology called HDRNet.
Google HDRNet:
This deep learning model analyzes the image at a lower resolution to predict a compact set of parameters (a bilateral grid). This grid then guides the efficient, spatially-varying enhancement of the full-resolution image on the GPU. The model is trained to brighten and improve image quality in low-light conditions, with a focus on face visibility.
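To make the idea of grid-guided, spatially varying enhancement more concrete, here is a deliberately simplified Kotlin sketch. It is not the HDRNet implementation (the real model predicts a bilateral grid of affine transforms and runs on the GPU); it only illustrates how a low-resolution grid, upsampled per pixel, can guide enhancement of a full-resolution image. All names are illustrative.

// Simplified illustration only, not HDRNet: apply a low-resolution gain grid
// to a full-resolution luminance image (values in [0, 1]) using bilinear
// interpolation. Assumes width, height, gridW, gridH are all >= 2.
fun applyGainGrid(
    image: FloatArray, width: Int, height: Int,
    grid: FloatArray, gridW: Int, gridH: Int
): FloatArray {
    val out = FloatArray(image.size)
    for (y in 0 until height) {
        for (x in 0 until width) {
            // Map the pixel into grid coordinates.
            val gx = x.toFloat() / (width - 1) * (gridW - 1)
            val gy = y.toFloat() / (height - 1) * (gridH - 1)
            val x0 = gx.toInt(); val x1 = minOf(x0 + 1, gridW - 1)
            val y0 = gy.toInt(); val y1 = minOf(y0 + 1, gridH - 1)
            val fx = gx - x0; val fy = gy - y0
            // Bilinearly interpolate the four surrounding gain values.
            val top = grid[y0 * gridW + x0] * (1 - fx) + grid[y0 * gridW + x1] * fx
            val bottom = grid[y1 * gridW + x0] * (1 - fx) + grid[y1 * gridW + x1] * fx
            val gain = top * (1 - fy) + bottom * fy
            out[y * width + x] = (image[y * width + x] * gain).coerceIn(0f, 1f)
        }
    }
    return out
}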
Process Orchestration:
The HDRNet model and its accompanying logic are orchestrated by the Low Light Boost processor. This includes:
-
Scene Analysis:
A custom calculator that estimates the true scene brightness using camera metadata (sensor sensitivity, exposure time, etc.) and image content. This analysis determines the boost level.
-
HDRNet Processing:
Applies the HDRNet model to brighten the frame. The model used is tuned for low-light scenes and optimized for real-time performance.
-
Blending:
The original and HDRNet-processed frames are blended. The amount of blending applied is dynamically controlled by the scene brightness calculator, ensuring a smooth transition between boosted and unboosted states (a simple sketch of this idea follows below).
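As a rough sketch of that blending step (assuming a straightforward per-pixel linear blend, which is our simplification rather than the documented blend function), the boost strength from the scene brightness calculator could drive the mix like this:

// Illustrative only: blend original and processed values, weighted by the
// scene-brightness-derived boost strength in [0, 1]. strength = 0 keeps the
// original frame, strength = 1 uses the fully boosted one.
fun blendPixel(original: Float, boosted: Float, strength: Float): Float {
    val s = strength.coerceIn(0f, 1f)
    return original * (1f - s) + boosted * s
}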
Advantage:
Works on a broader range of devices (currently supports Samsung S22 Ultra, S23 Ultra, S24 Ultra, S25 Ultra, and Pixel 6 through Pixel 9) without requiring specific HAL support. Maintains the camera's frame rate as it's a post-processing effect.
Trade-off:
As a post-processing method, the quality is limited by the information present in the frames delivered by the sensor. It cannot recover details lost due to extreme darkness at the sensor level.
By offering both hardware and software pathways, Low Light Boost provides a scalable solution to enhance low-light camera performance across the Android ecosystem. Developers should prioritize the AE mode where available and use the Google Low Light Boost as a robust fallback.
Implementing Low Light Boost in Your App
Now let's look at how to implement both LLB offerings. You can implement the following whether you use CameraX or Camera2 in your app. For the best results, we recommend implementing both Step 1 and Step 2.
Step 1: Low Light Boost AE Mode
Available on select devices running Android 15 and higher, LLB AE Mode functions as a specific Auto-Exposure (AE) mode.
1. Check for Availability
First, check if the camera device supports LLB AE Mode.
val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)
val isLlbSupported = cameraInfo.isLowLightBoostSupported
2. Enable the Mode
If supported, you can enable LLB AE Mode using CameraX's CameraControl object.
// After setting up your camera, use the CameraControl object to enable LLB AE Mode.
camera = cameraProvider.bindToLifecycle(...)
if (isLlbSupported) {
    try {
        // The .await() extension suspends the coroutine until the
        // ListenableFuture completes. If the operation fails, it throws
        // an exception which we catch below.
        camera?.cameraControl?.enableLowLightBoostAsync(true)?.await()
    } catch (e: IllegalStateException) {
        Log.e(TAG, "Failed to enable low light boost: not available on this device or with the current camera configuration", e)
    } catch (e: CameraControl.OperationCanceledException) {
        Log.e(TAG, "Failed to enable low light boost: camera is closed or value has changed", e)
    }
}
3. Monitor the State
Just because you requested the mode doesn't mean it's currently "boosting." The system only activates the boost when the scene is actually dark. You can set up an Observer to update your UI (like showing a moon icon) or convert to a Flow using the extension function asFlow().
if (isLlbSupported) {
    camera?.cameraInfo?.lowLightBoostState?.asFlow()?.collectLatest { state ->
        // Update UI accordingly
        updateMoonIcon(state == LowLightBoostState.ACTIVE)
    }
}
You can read the full guide on Low Light Boost AE Mode here.
Step 2: Google Low Light Boost
For devices that don't support the hardware AE mode, Google Low Light Boost acts as a powerful fallback. It uses a LowLightBoostSession to intercept and brighten the stream.
1. Add Dependencies
This feature is delivered via Google Play services.
implementation("com.google.android.gms:play-services-camera-low-light-boost:16.0.1-beta06") // Add coroutines-play-services to simplify Task APIs implementation("org.jetbrains.kotlinx:kotlinx-coroutines-play-services:1.10.2")
2. Initialize the Client
Before starting your camera, use the LowLightBoostClient to ensure the module is installed and the device is supported.
val llbClient = LowLightBoost.getClient(context)

// Check support and install if necessary
val isSupported = llbClient.isCameraSupported(cameraId).await()
val isInstalled = llbClient.isModuleInstalled().await()

if (isSupported && !isInstalled) {
    // Trigger installation
    llbClient.installModule(installCallback).await()
}
3. Create a LLB Session
Google LLB processes each frame, so you give your display Surface to the LowLightBoostSession, and it gives you back a camera-facing Surface; frames the camera writes to that Surface are brightened and delivered to your display Surface. For Camera2 apps, you can add the resulting Surface with CaptureRequest.Builder.addTarget(). For CameraX, this processing pipeline aligns best with the CameraEffect class, where you can apply the effect with a SurfaceProcessor and provide it back to your Preview with a SurfaceProvider, as seen in this code.
// With a SurfaceOutput from SurfaceProcessor.onSurfaceOutput() and a
// SurfaceRequest from Preview.SurfaceProvider.onSurfaceRequested(),
// create a LLB Session.
suspend fun createLlbSession(surfaceRequest: SurfaceRequest, outputSurfaceForLlb: Surface) {
    // 1. Create the LLB Session configuration
    val options = LowLightBoostOptions(
        outputSurfaceForLlb,
        cameraId,
        surfaceRequest.resolution.width,
        surfaceRequest.resolution.height,
        true // Start enabled
    )

    // 2. Create the session.
    val llbSession = llbClient.createSession(options, callback).await()

    // 3. Get the surface to use.
    val llbInputSurface = llbSession.getCameraSurface()

    // 4. Provide the surface to the CameraX Preview UseCase.
    surfaceRequest.provideSurface(llbInputSurface, executor, resultListener)

    // 5. Set the scene detector callback to monitor how much boost is being applied.
    val onSceneBrightnessChanged = object : SceneDetectorCallback {
        override fun onSceneBrightnessChanged(
            session: LowLightBoostSession,
            boostStrength: Float
        ) {
            // Monitor the boostStrength from 0 (no boosting) to 1 (maximum boosting)
        }
    }
    llbSession.setSceneDetectorCallback(onSceneBrightnessChanged, null)
}
4. Pass in the Metadata
For the algorithm to work, it needs to analyze the camera's auto-exposure state. You must pass capture results back to the LLB session. In CameraX, this can be done by extending your Preview.Builder with Camera2Interop.Extender.setSessionCaptureCallback().
Camera2Interop.Extender(previewBuilder).setSessionCaptureCallback(
    object : CameraCaptureSession.CaptureCallback() {
        override fun onCaptureCompleted(
            session: CameraCaptureSession,
            request: CaptureRequest,
            result: TotalCaptureResult
        ) {
            super.onCaptureCompleted(session, request, result)
            llbSession?.processCaptureResult(result)
        }
    }
)
Detailed implementation steps for the client and session can be found in the Google Low Light Boost guide.
Next Steps
By implementing these two options, you ensure that your users can see clearly, scan reliably, and interact effectively, regardless of the lighting conditions.
To see these features in action within a complete, production-ready codebase, check out the Jetpack Camera App on GitHub. It implements both LLB AE Mode and Google LLB, giving you a reference for your own integration.
17 Dec 2025 5:00pm GMT
Build smarter apps with Gemini 3 Flash
Posted by Thomas Ezan, Senior Developer Relations Engineer
Gemini 3 optimized for low-latency
Gemini 3 is our most intelligent model family to date. With the launch of Gemini 3 Flash, we are making that intelligence more accessible for low-latency and cost-effective use cases. While Gemini 3 Pro is designed for complex reasoning, Gemini 3 Flash is engineered to be significantly faster and more cost-effective for your production apps.
Seamless integration with Firebase AI Logic
Just like the Pro model, Gemini 3 Flash is available in preview directly through the Firebase AI Logic SDK. This means you can integrate it into your Android app without needing to do any complex server-side setup. Here is how to add it to your Kotlin code:
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
.generativeModel(
modelName = "gemini-3-flash-preview")
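From there, generating content is a single suspend call. Here is a minimal sketch of how that might look in a ViewModel; the class, prompt, and log tag are illustrative rather than taken from the original post:

// Minimal sketch (illustrative names): call the Flash model from a coroutine
// scope and read back the generated text.
class SummaryViewModel : ViewModel() {
    private val model = Firebase.ai(backend = GenerativeBackend.googleAI())
        .generativeModel(modelName = "gemini-3-flash-preview")

    fun summarize(article: String) {
        viewModelScope.launch {
            // generateContent() is a suspend function in the Firebase AI Logic SDK.
            val response = model.generateContent("Summarize in two sentences: $article")
            Log.d("SummaryViewModel", response.text ?: "No text returned")
        }
    }
}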
Scale with Confidence
In addition, Firebase enables you to keep your growth secure and manageable with:
AI Monitoring
The Firebase AI monitoring dashboard gives you visibility into latency, success rates, and costs, allowing you to slice data by model name to see exactly how the model performs.
Server Prompt Templates
You can use server prompt templates to store your prompt and schema securely on Firebase servers instead of hardcoding them in your app binary. This capability ensures your sensitive prompts remain secure, prevents unauthorized prompt extraction, and allows for faster iteration without requiring app updates.

---
model: 'gemini-3-flash-preview'
input:
  schema:
    topic:
      type: 'string'
      minLength: 2
      maxLength: 40
    length:
      type: 'number'
      minimum: 1
      maximum: 200
    language:
      type: 'string'
---

{{role "system"}}
You're a storyteller that tells nice and joyful stories with happy endings.

{{role "user"}}
Create a story about {{topic}} with the length of {{length}} words in the {{language}} language.
Prompt template defined on the Firebase Console
val generativeModel = Firebase.ai.templateGenerativeModel()

val response = generativeModel.generateContent(
    "storyteller-v10",
    mapOf(
        "topic" to topic,
        "length" to length,
        "language" to language
    )
)
_output.value = response.text
Code snippet to access the prompt template
Gemini 3 Flash for AI development assistance in Android Studio
Gemini 3 Flash is also available for AI assistance in Android Studio. While Gemini 3 Pro Preview is our best model for coding and agentic experiences, Gemini 3 Flash is engineered for speed, and is great for common development tasks and questions. The new model is rolling out to developers using Gemini in Android Studio at no cost (as the default model) starting today. For higher usage rate limits and longer sessions with Agent Mode, you can use an AI Studio API key to leverage the full capabilities of either Gemini 3 Flash or Gemini 3 Pro. We're also rolling out Gemini 3 model family access with higher usage rate limits to developers who have Gemini Code Assist Standard or Enterprise licenses. Your IT administrator will need to enable access to preview models through the Google Cloud console.
Get Started Today
You can start experimenting with Gemini 3 Flash via Firebase AI Logic today. Learn more about it in the Android and Firebase documentation. Try out any of the new Gemini 3 models in Android Studio for development assistance, and let us know what you think! As always you can follow us across LinkedIn, Blog, YouTube, and X.
17 Dec 2025 4:13pm GMT
15 Dec 2025
Android Developers Blog
18% Faster Compiles, 0% Compromises

Posted by Santiago Aboy Solanes - Software Engineer, Vladimír Marko - Software Engineer
The Android Runtime (ART) team has reduced compile time by 18% without compromising the quality of the compiled code and without any peak memory regressions. This improvement was part of our 2025 initiative to improve compile time without sacrificing memory usage or the quality of the compiled code.
Optimizing compile-time speed is crucial for ART. For example, when just-in-time (JIT) compiling, it directly impacts the efficiency of applications and overall device performance. Faster compilations reduce the time before the optimizations kick in, leading to a smoother and more responsive user experience. Furthermore, for both JIT and ahead-of-time (AOT) compilation, improvements in compile-time speed translate to reduced resource consumption during the compilation process, benefiting battery life and device thermals, especially on lower-end devices.
Some of these compile-time speed improvements launched in the June 2025 Android release, and the rest will be available in the end-of-year release of Android. Furthermore, all Android users on versions 12 and above are eligible to receive these improvements through mainline updates.
Optimizing the optimizing compiler
Optimizing a compiler is always a game of trade-offs. You can't just get speed for free; you have to give something up. We set a very clear and challenging goal for ourselves: make the compiler faster, but do it without introducing memory regressions and, crucially, without degrading the quality of the code it produces. If the compiler is faster but the apps run slower, we've failed.
The one resource we were willing to spend was our own development time to dig deep, investigate, and find clever solutions that met these strict criteria. Let's take a closer look at how we find areas to improve, as well as how we find the right solutions to the various problems.
Finding worthwhile possible optimizations
Before you can begin to optimize a metric, you have to be able to measure it. Otherwise, you can't ever be sure whether you improved it or not. Luckily for us, compile-time speed is fairly consistent as long as you take some precautions, like using the same device for measuring before and after a change and making sure you don't thermal throttle the device. On top of that, we also have deterministic measurements like compiler statistics that help us understand what's going on under the hood.
Since the resource we were sacrificing for these improvements was our development time, we wanted to be able to iterate as fast as we could. This meant that we grabbed a handful of representative apps (a mix of first-party apps, third-party apps, and the Android operating system itself) to prototype solutions. Later, we verified that the final implementation was worth it with both manual and automated testing in a widespread manner.
With that set of hand-picked APKs we would trigger a manual compile locally, get a profile of the compilation, and use pprof to visualize where we were spending our time.
Example of a profile's flame graph in pprof
The pprof tool is very powerful and allows us to slice, filter, and sort the data to see, for example, which compiler phases or methods are taking most of the time. We will not go into detail about pprof itself; just know that the bigger the bar, the more of the compilation time it took.
One of these views is the "bottom up" one, where you can see which methods are taking most of the time. In the image below we can see a method called Kill, accounting for over 1% of the compile time. Some of the other top methods will also be discussed later in the blog post.
Bottom up view of a profile
In our optimizing compiler, there's a phase called Global Value Numbering (GVN). You don't have to worry about what it does as a whole; the relevant part is that it has a method called `Kill` that deletes some nodes according to a filter. This is time-consuming, as it has to iterate through all the nodes and check them one by one. We noticed that in some cases we know in advance that the check will be false, no matter which nodes are alive at that point. In those cases, we can skip iterating altogether, bringing `Kill` from 1.023% of compile time down to ~0.3% and improving GVN's runtime by ~15%.
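The shape of that fix is a cheap precondition check in front of the expensive loop. Here is a hedged Kotlin sketch of the pattern (ART's GVN is written in C++, and the names below are invented for illustration):

// Illustrative only: if a cheap precondition proves the per-node check can
// never succeed, skip walking the node list entirely.
data class Node(val writesMemory: Boolean)

fun killMemoryWritingNodes(nodes: MutableList<Node>, blockHasMemoryWrites: Boolean) {
    // Cheap precondition computed from information we already have:
    // nothing in this block writes memory, so no node can match the filter.
    if (!blockHasMemoryWrites) return

    // Otherwise, fall back to the (comparatively expensive) per-node scan.
    nodes.removeAll { it.writesMemory }
}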
Implementing worthwhile optimizations
We covered how to measure and how to detect where the time is being spent, but this is only the beginning. The next step is how to optimize the time being spent compiling.
Usually, in a case like the `Kill` one above, we would look at how we iterate through the nodes and make it faster by, for example, doing things in parallel or improving the algorithm itself. In fact, that's what we tried at first, and only when we couldn't find anything to improve did we have a "Wait a minute…" moment and realize that the solution was to (in some cases) not iterate at all! When doing these kinds of optimizations, it is easy to miss the forest for the trees.
In other cases, we used a handful of different techniques including:
-
using heuristics to decide whether an optimization will fail to produce worthwhile results and therefore can be skipped
-
using extra data structures to cache computed data
-
changing the current data structures to get a speed boost
-
lazily computing results to avoid cycles in some cases
-
using the right abstraction - unnecessary features can slow down the code
-
avoiding chasing a frequently used pointer through many loads
How do we know if the optimizations are worth pursuing?
That's the neat part, you don't. After detecting that an area is consuming a lot of compile time and devoting development time to trying to improve it, sometimes you just can't find a solution. Maybe there's nothing to be done, it would take too long to implement, it would regress another metric significantly, it would increase code base complexity, etc. For every successful optimization that you can see in this blog post, know that there are countless others that just didn't come to fruition.
If you are in a similar situation, try to estimate how much you are going to improve the metric by doing as little work as you can. This means, in order:
-
Estimating with metrics you have already collected, or just a gut feeling
-
Estimating with a quick and dirty prototype
-
Implementing a solution
Don't forget to consider estimating the drawbacks of your solution. For example, if you are going to rely on extra data structures, how much memory are you willing to use?
Diving deeper
Without further ado, let's look at some of the changes we implemented.
We implemented a change to optimize a method called FindReferenceInfoOf. This method was doing a linear search of a vector to find an entry. We updated that data structure to be indexed by the instruction's id so that FindReferenceInfoOf would be O(1) instead of O(n). We also pre-allocated the vector to avoid resizing. We slightly increased memory, as we had to add an extra field counting how many entries we inserted into the vector, but it was a small sacrifice to make since peak memory didn't increase. This sped up our LoadStoreAnalysis phase by 34-66%, which in turn gives a ~0.5-1.8% compile-time improvement.
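In spirit, the change looks something like the following Kotlin sketch (the real code is C++ inside ART's LoadStoreAnalysis; the identifiers here are illustrative): replace a linear scan with a pre-sized array indexed by instruction id.

// Illustrative sketch: turning a linear search into an indexed lookup.
class ReferenceInfo(val instructionId: Int)

class ReferenceInfoTable(maxInstructionId: Int) {
    // Pre-allocated so the backing array never needs to resize during analysis.
    private val byId = arrayOfNulls<ReferenceInfo>(maxInstructionId + 1)
    // The extra bookkeeping field (the small memory cost mentioned above).
    private var count = 0

    fun insert(info: ReferenceInfo) {
        if (byId[info.instructionId] == null) count++
        byId[info.instructionId] = info
    }

    // O(1) lookup instead of scanning a vector for a matching id.
    fun findReferenceInfoOf(instructionId: Int): ReferenceInfo? = byId[instructionId]

    fun size(): Int = count
}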
We have a custom implementation of HashSet that we use in several places. Creating this data structure was taking a considerable amount of time, and we found out why. Many years ago, this data structure was used in only a few places, all with very big HashSets, and it was tweaked and optimized for that. Nowadays, however, it is mostly used in the opposite way: with only a few entries and a short lifespan. This meant that we were wasting cycles creating a huge HashSet only to use it for a few entries before discarding it. With this change, we improved compile time by ~1.3-2%. As an added bonus, memory usage decreased by ~0.5-1% since we were no longer allocating such big data structures.
We improved ~0.5-1% of compile time by passing data structures by reference to the lambda to avoid copying them around. This was something that was missed in the original review and sat in our codebase for years. It was thanks to taking a look at the profiles in pprof that we noticed that these methods were creating and destroying a lot of data structures, which led us to investigate and optimize them.
We sped up the phase that writes the compiled output by caching computed values, which translated to a ~1.3-2.8% total compile-time improvement. Sadly, the extra bookkeeping was too much, and our automated testing alerted us to the memory regression. Later, we took a second look at the same code and implemented a new version which not only took care of the memory regression but also improved compile time by a further ~0.5-1.8%! For this second change we had to refactor and reimagine how the phase should work, in order to get rid of one of the two data structures.
We have a phase in our optimizing compiler which inlines function calls in order to get better performance. To choose which methods to inline we use both heuristics before we do any computation, and final checks after doing work but right before we finalize the inlining. If any of those detect that the inlining is not worth it (for example, too many new instructions would be added), then we don't inline the method call.
We moved two checks from the "final checks" category to the "heuristics" category to estimate whether an inlining will succeed before we do any time-expensive computation. Since this is an estimate it is not perfect, but we verified that our new heuristics cover 99.9% of what was inlined before without affecting performance. One of these new heuristics was about the needed DEX registers (~0.2-1.3% improvement), and the other one about the number of instructions (~2% improvement).
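The pattern is to estimate cheaply before doing the expensive work, keeping the exact check only as a safety net. A rough Kotlin sketch follows (ART's inliner is C++; the thresholds and names here are made up for illustration):

// Illustrative sketch of moving a check from "final checks" to "heuristics".
data class CalleeEstimate(val estimatedInstructions: Int, val estimatedDexRegisters: Int)

// Hypothetical thresholds, not ART's real values.
const val MAX_INLINED_INSTRUCTIONS = 32
const val MAX_DEX_REGISTERS = 16

fun worthBuildingCalleeGraph(callee: CalleeEstimate): Boolean {
    // Cheap estimates run before the expensive work of building and
    // optimizing the callee's graph. They only need to be conservative
    // enough to cover ~99.9% of what would have been inlined anyway.
    return callee.estimatedInstructions <= MAX_INLINED_INSTRUCTIONS &&
        callee.estimatedDexRegisters <= MAX_DEX_REGISTERS
}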
We have a custom implementation of a BitVector that we use in several places. We replaced the resizable BitVector class with a simpler BitVectorView for certain fixed-size bit vectors. This eliminates some indirections and run-time range checks and speeds up the construction of the bit vector objects.
Furthermore, the BitVectorView class was templatized on the underlying storage type (instead of always using uint32_t as the old BitVector did). This allows some operations, for example Union(), to process twice as many bits at a time on 64-bit platforms. The samples of the affected functions were reduced by more than 1% in total when compiling the Android OS. This was done across several changes [1, 2, 3, 4, 5, 6].
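To see why the wider storage type helps, here is a small Kotlin analogy of a fixed-size bit vector whose union operation works one 64-bit word at a time (ART's BitVectorView is a C++ class template; this sketch is only an analogy):

// Analogy only: a fixed-size bit vector backed by 64-bit words. A
// uint32_t-backed vector would need twice as many iterations in unionWith()
// for the same number of bits.
class FixedBitVector(private val numBits: Int) {
    private val words = LongArray((numBits + 63) / 64)

    fun set(bit: Int) {
        require(bit in 0 until numBits)
        words[bit ushr 6] = words[bit ushr 6] or (1L shl (bit and 63))
    }

    fun get(bit: Int): Boolean =
        ((words[bit ushr 6] shr (bit and 63)) and 1L) == 1L

    // Combines 64 bits per loop iteration.
    fun unionWith(other: FixedBitVector) {
        require(other.numBits == numBits)
        for (i in words.indices) {
            words[i] = words[i] or other.words[i]
        }
    }
}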
If we talked in detail about all the optimizations we would be here all day! If you are interested in some more optimizations, take a look at some other changes we implemented:
-
Add bookkeeping to improve compilation times by ~0.6-1.6%.
-
Lazily compute data to avoid cycles, if possible.
-
Refactor our code to skip precomputing work when it will not be used.
-
Avoid some dependent load chains when the allocator can be readily obtained from other places.
-
Another case of adding a check to avoid unnecessary work.
-
Avoid frequent branching on register type (core/FP) in register allocator.
-
Make sure some arrays are initialized at compile time. Don't rely on clang to do it.
-
Clean up some loops. Use range loops that clang can optimize better because it does not need to reload the internal pointers of the container due to loop side effects. Avoid calling the virtual function `HInstruction::GetInputRecords()` in the loop via the inlined `InputAt(.)` for each input.
-
Avoid Accept() functions for the visitor pattern by exploiting a compiler optimization.
Conclusion
Our dedication to improving ART's compile-time speed has yielded significant improvements, making Android more fluid and efficient while also contributing to better battery life and device thermals. By diligently identifying and implementing optimizations, we've demonstrated that substantial compile-time gains are possible without compromising memory usage or code quality.
Our journey involved profiling with tools like pprof, a willingness to iterate, and sometimes even abandon less fruitful avenues. The collective efforts of the ART team have not only reduced compile time by a noteworthy percentage, but have also laid the groundwork for future advancements.
All of these improvements are available in the 2025 end-of-year Android update, and for Android 12 and above through mainline updates. We hope this deep dive into our optimization process provides valuable insights into the complexities and rewards of compiler engineering!
15 Dec 2025 5:00pm GMT
05 Dec 2025
Planet Maemo
Meow: Process log text files as if you could make cat speak
Some years ago I mentioned some command line tools I used to analyze and find useful information in GStreamer logs. I've been using them consistently throughout all these years, but some weeks ago I thought about unifying them in a single tool that could provide more flexibility in the mid-term, and also as an excuse to unrust my Rust knowledge a bit. That's how I wrote Meow, a tool to make cat speak (that is, to provide meaningful information).
The idea is that you can cat a file through meow and apply the filters, like this:
cat /tmp/log.txt | meow appsinknewsample n:V0 n:video ht: \
ft:-0:00:21.466607596 's:#([A-za-z][A-Za-z]*/)*#'
which means "select those lines that contain appsinknewsample (with case insensitive matching), but don't contain V0 nor video (that is, by exclusion, only that contain audio, probably because we've analyzed both and realized that we should focus on audio for our specific problem), highlight the different thread ids, only show those lines with timestamp lower than 21.46 sec, and change strings like Source/WebCore/platform/graphics/gstreamer/mse/AppendPipeline.cpp to become just AppendPipeline.cpp", to get an output as shown in this terminal screenshot:

Cool, isn't it? After all, I'm convinced that the answer to any GStreamer bug is always hidden in the logs (or will be, as soon as I add "just a couple of log lines more, bro").
05 Dec 2025 11:16am GMT
15 Oct 2025
Planet Maemo
Dzzee 1.9.0 for N800/N810/N900/N9/Leste
15 Oct 2025 11:31am GMT
05 Jun 2025
Planet Maemo
Mobile blogging, the past and the future
This blog has been running more or less continuously since the mid-nineties. The site has existed in multiple forms, and with different ways to publish. But what's common is that at almost all points there was a mechanism to publish while on the move.
Psion, documents over FTP
In the early 2000s we were into adventure motorcycling. To be able to share our adventures, we implemented a way to publish blogs while on the go. The device that enabled this was the Psion Series 5, a handheld computer that was very much a device ahead of its time.

The Psion had a reasonably sized keyboard and a good native word processing app. And battery life was good for weeks of usage. Writing while underway was easy. The Psion could use a mobile phone as a modem over an infrared connection, and with that we could upload the documents to a server over FTP.
Server-side, a cron job would grab the new documents, converting them to HTML and adding them to our CMS.
In the early days of GPRS, getting this to work while roaming was quite tricky. But the system served us well for years.
If we wanted to include photos to the stories, we'd have to find an Internet cafe.
- To the Alps is a post from these times. Lots more in the motorcycling category
SMS and MMS
For an even more mobile setup, I implemented an SMS-based blogging system. We had an old phone connected to a computer back in the office, and I could write to my blog by simply sending a text. These would automatically end up as a new paragraph in the latest post. If I started the text with NEWPOST, an empty blog post would be created with the rest of that message's text as the title.
- In the Caucasus is a good example of a post from this era
As I got into neogeography, I could also send a NEWPOSITION message. This would update my position on the map, connecting weather metadata to the posts.
As camera phones became available, we wanted to do pictures too. For the Death Monkey rally where we rode minimotorcycles from Helsinki to Gibraltar, we implemented an MMS-based system. With that the entries could include both text and pictures. But for that you needed a gateway, which was really only realistic for an event with sponsors.
- Mystery of the Missing Monkey is typical. Some more in Internet Archive
Photos over email
A much easier setup than MMS was to go slightly back to the old Psion setup, but instead of word processor documents, sending email with picture attachments. This was something that the new breed of (pre-iPhone) smartphones were capable of. And by now the roaming question was mostly sorted.
And so my blog included a new "moblog" section. This is where I could share my daily activities as poor-quality pictures. Sort of how people would use Instagram a few years later.

- Internet Archive has some of my old moblogs but nowadays, I post similar stuff on Pixelfed
Pause
Then there was sort of a long pause in mobile blogging advancements. Modern smartphones, data roaming, and WiFi hotspots had become ubiquitous.
In the meanwhile the blog also got migrated to a Jekyll-based system hosted on AWS. That means the old Midgard-based integrations were off the table.
And I traveled off-the-grid rarely enough that it didn't make sense to develop a system.
But now that we're sailing offshore, that has changed. Time for new systems and new ideas. Or maybe just a rehash of the old ones?
Starlink, Internet from Outer Space
Most cruising boats - ours included - now run the Starlink satellite broadband system. This enables full Internet, even in the middle of an ocean, even video calls! With this, we can use normal blogging tools. The usual one for us is GitJournal, which makes it easy to write Jekyll-style Markdown posts and push them to GitHub.
However, Starlink is a complicated, energy-hungry, and fragile system on an offshore boat. The policies might change at any time preventing our way of using it, and also the dishy itself, or the way we power it may fail.
But despite what you'd think, even on a nerdy boat like ours, loss of Internet connectivity is not an emergency. And this is where the old-style mobile blogging mechanisms come handy.
- Any of the 2025 Atlantic crossing posts is a good example of this setup in action
Inreach, texting with the cloud
Our backup system to Starlink is the Garmin Inreach. This is a tiny battery-powered device that connects to the Iridium satellite constellation. It allows tracking as well as basic text messaging.
When we head offshore we always enable tracking on the Inreach. This allows both our blog and our friends ashore to follow our progress.
I also made a simple integration where text updates sent to Garmin MapShare get fetched and published on our blog. Right now this is just plain text-based entries, but one could easily implement a command system similar to what I had over SMS back in the day.
One benefit of the Inreach is that we can also take it with us when we go on land adventures. And it'd even enable rudimentary communications if we found ourselves in a liferaft.
- There are various InReach integration hacks that could be used for more sophisticated data transfer
Sailmail and email over HF radio
The other potential backup for Starlink failures would be to go seriously old-school. It is possible to get email access via a SSB radio and a Pactor (or Vara) modem.
Our boat is already equipped with an isolated aft stay that can be used as an antenna. And with the popularity of Starlink, many cruisers are offloading their old HF radios.
Licensing-wise this system could be used either as a marine HF radio (requiring a Long Range Certificate), or amateur radio. So that part is something I need to work on. Thankfully post-COVID, radio amateur license exams can be done online.
With this setup we could send and receive text-based email. The Airmail application used for this can even do some automatic templating for position reports. We'd then need a mailbox that can receive these mails, and some automation to fetch and publish.
- Sailmail and No Foreign Land support structured data via email to update position. Their formats could be useful inspiration
05 Jun 2025 12:00am GMT
18 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub
I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices, interconnected by a new, efficient but still transparent TDMoIP protocol, in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.
So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.
Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.
Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org
But I digress. So today was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed specifically for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as a grandmaster clock for the TDM network.
The equipment deployed in this installation currently contains:
-
a rather beefy Supermicro 2U server with EPYC 7113P CPU and 4x PCIe, two of which are populated with Digium TE820 cards resulting in a total of 16 E1 ports
-
an icE1usb with RS422 interface board connected via 100m of RS422 to an Ericsson GPS03 receiver. There are two layers of over-voltage protection on the RS422 (each with gas discharge tubes and TVS) and two stages of over-voltage protection in the coaxial cable between antenna and GPS receiver.
-
a Livingston Portmaster3 RAS server
-
a Cisco AS5400 RAS server
For more details, see this wiki page and this ticket
Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.
In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.
Acknowledgements
I'd like to thank anyone helping this effort, specifically:
-
Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
-
noris.net for sponsoring the co-location
-
sysmocom for sponsoring the EPYC server hardware
18 Sep 2022 10:00pm GMT
08 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front
Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.
This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.
My primary interest is in Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.
The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about
In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.
So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.
08 Sep 2022 10:00pm GMT
Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables
If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.
In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.
In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.
So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.
Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.
bursty bit clock changes until link is up
The first problem is that the downstream port transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4
As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.
What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.
As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.
This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock irrespective whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.
But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.
clock drift between master and slave cards
Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.
When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(
Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.
The work-around
If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.
In this setup, your slave card[s] will have perfect bit clock sync/lock.
It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).
08 Sep 2022 10:00pm GMT