15 Nov 2025

feedTalkAndroid

Boba Story Lid Recipes – 2025

Look no further for all the latest Boba Story Lid Recipes. They are all right here!

15 Nov 2025 4:34pm GMT

Dice Dreams Free Rolls – Updated Daily

Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.

15 Nov 2025 4:33pm GMT

Here’s how to blur your house on Google Maps Street View in seconds

With Street View offering sharper and more detailed images than ever, some people have begun to notice just…

15 Nov 2025 4:30pm GMT

13 Nov 2025

feedAndroid Developers Blog

Introducing CameraX 1.5: Powerful Video Recording and Pro-level Image Capture

Posted by Scott Nien, Software Engineer




The CameraX team is thrilled to announce the release of version 1.5! This latest update focuses on bringing professional-grade capabilities to your fingertips while making the camera session easier to configure than ever before.

For video recording, users can now effortlessly capture stunning slow-motion or high-frame-rate videos. More importantly, the new Feature Group API allows you to confidently enable complex combinations like 10-bit HDR and 60 FPS, ensuring consistent results across supported devices.

On the image capture front, you gain maximum flexibility with support for capturing unprocessed, uncompressed DNG (RAW) files. Plus, you can now leverage Ultra HDR output even when using powerful Camera Extensions.

Underpinning these features is the new SessionConfig API, which streamlines camera setup and reconfiguration. Now, let's dive into the details of these exciting new features.

Powerful Video Recording: High-Speed and Feature Combinations

CameraX 1.5 significantly expands its video capabilities, enabling more creative and robust recording experiences.

Slow Motion & High Frame Rate Video

One of our most anticipated features, slow-motion video, is now available. You can capture high-speed video (e.g., 120 or 240 fps) and encode it directly into a dramatic slow-motion video. Alternatively, you can record at the same high frame rate to produce exceptionally smooth video.

Implementing this is straightforward if you're familiar with the VideoCapture API.

  1. Check for High-Speed Support: Use the new Recorder.getHighSpeedVideoCapabilities() method to query if the device supports this feature.

val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)

val highSpeedCapabilities = Recorder.getHighSpeedVideoCapabilities(cameraInfo)

if (highSpeedCapabilities == null) {
    // This camera device does not support high-speed video.
    return
}
  2. Configure and Bind the Use Case: Use the returned capabilities (which contain the supported video quality information) to build a HighSpeedVideoSessionConfig. You must then query the supported frame rate ranges via cameraInfo.getSupportedFrameRateRanges() and set the desired range. Invoke setSlowMotionEnabled(true) to record slow-motion video; otherwise a high-frame-rate video is recorded. Finally, use the regular Recorder.prepareRecording().start() to begin recording.


val preview = Preview.Builder().build()
val quality = highSpeedCapabilities
    .getSupportedQualities(DynamicRange.SDR)
    .first()

val recorder = Recorder.Builder()
    .setQualitySelector(QualitySelector.from(quality))
    .build()

val videoCapture = VideoCapture.withOutput(recorder)

val frameRateRange = cameraInfo.getSupportedFrameRateRanges(
    HighSpeedVideoSessionConfig(videoCapture, preview)
).first()

val sessionConfig = HighSpeedVideoSessionConfig(
    videoCapture,
    preview,
    frameRateRange = frameRateRange,
    // Set true for slow-motion playback, or false for high-frame-rate video.
    isSlowMotionEnabled = true
)

cameraProvider.bindToLifecycle(
    lifecycleOwner, cameraSelector, sessionConfig
)

// Start recording a slow-motion video.
val recording = recorder.prepareRecording(context, outputOption)
    .start(executor) {}

Compatibility and Limitations

High-speed recording requires specific CameraConstrainedHighSpeedCaptureSession and CamcorderProfile support. Always perform the capability check, and enable high-speed recording only on supported devices to prevent a bad user experience. Currently, this feature is supported on the rear cameras of almost all Pixel devices and on select models from other manufacturers.

Check the blog post for more details.

Combine Features with Confidence: The Feature Group API

CameraX 1.5 introduces the Feature Group API, which eliminates the guesswork of feature compatibility. Based on Android 15's feature combination query API, you can now confidently enable multiple features together, guaranteeing a stable camera session. The Feature Group currently supports: HDR (HLG), 60 fps, Preview Stabilization, and Ultra HDR. For instance, you can enable HDR, 60 fps, and Preview Stabilization simultaneously on Pixel 10 and Galaxy S25 series. Future enhancements are planned to include 4K recording and ultra-wide zoom.

The feature group API enables two essential use cases:

Use Case 1: Prioritizing the Best Quality

If you want to capture using the best possible combination of features, you can provide a prioritized list. CameraX will attempt to enable them in order, selecting the first combination the device fully supports.

val sessionConfig = SessionConfig(
    useCases = listOf(preview, videoCapture),
    preferredFeatureGroup = listOf(
        GroupableFeature.HDR_HLG10,
        GroupableFeature.FPS_60,
        GroupableFeature.PREVIEW_STABILIZATION
    )
).apply {
    // (Optional) Get a callback with the enabled features to update your UI.
    setFeatureSelectionListener { selectedFeatures ->
        updateUiIndicators(selectedFeatures)
    }
}
processCameraProvider.bindToLifecycle(activity, cameraSelector, sessionConfig)

In this example, CameraX tries to enable features in this order:

  1. HDR + 60 FPS + Preview Stabilization

  2. HDR + 60 FPS

  3. HDR + Preview Stabilization

  4. HDR

  5. 60 FPS + Preview Stabilization

  6. 60 FPS

  7. Preview Stabilization

  8. None

Use Case 2: Building a User-Facing Settings UI

You can now accurately reflect which feature combinations are supported in your app's settings UI, disabling toggles for unsupported options.

To determine whether to gray out a toggle, use the following code to check for feature combination support. Initially, query the status of every individual feature. Then, each time a feature is enabled, re-query the remaining features together with the already-enabled ones to see if their toggles must now be grayed out due to compatibility constraints.

fun disableFeatureIfNotSupported(
    enabledFeatures: Set<GroupableFeature>,
    featureToCheck: GroupableFeature
) {
    val sessionConfig = SessionConfig(
        useCases = useCases,
        requiredFeatureGroup = enabledFeatures + featureToCheck
    )
    val isSupported = cameraInfo.isFeatureGroupSupported(sessionConfig)

    if (!isSupported) {
        // Gray out the toggle for featureToCheck in the settings UI.
    }
}
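For example, you might re-run this check after every toggle change. A minimal sketch, assuming a hypothetical allFeatures set representing the toggles your UI offers:

val allFeatures = setOf(
    GroupableFeature.HDR_HLG10,
    GroupableFeature.FPS_60,
    GroupableFeature.PREVIEW_STABILIZATION
)

fun refreshToggles(enabledFeatures: Set<GroupableFeature>) {
    // Re-check every feature that is not currently enabled; each one that
    // can't be combined with the current selection gets grayed out.
    for (feature in allFeatures - enabledFeatures) {
        disableFeatureIfNotSupported(enabledFeatures, feature)
    }
}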

Please refer to the Feature Group blog post for more information.

More Video Enhancements

  • Concurrent Camera Improvements: With CameraX 1.5.1, you can now bind Preview + ImageCapture + VideoCapture use cases concurrently for each SingleCameraConfig in non-composition mode. Additionally, in composition mode (same use cases with CompositionSettings), you can now set the CameraEffect that is applied to the final composition result.

  • Dynamic Muting: You can now start a recording in a muted state using PendingRecording.withAudioEnabled(boolean initialMuted) and let the user unmute later using Recording.mute(boolean muted); see the sketch after this list.

  • Improved Insufficient Storage Handling: CameraX now reliably dispatches the VideoRecordEvent.Finalize.ERROR_INSUFFICIENT_STORAGE error, allowing your app to gracefully handle low storage situations and inform the user.

  • Low Light Boost: On supported devices (like the Pixel 10 series), you can enable CameraControl.enableLowLightBoostAsync to automatically brighten the preview and video streams in dark environments.
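The dynamic muting APIs mentioned above can be combined like this. A minimal sketch, assuming a prepared Recorder and output options as in the earlier examples:

// Start the recording with audio enabled but initially muted.
val recording = recorder.prepareRecording(context, outputOption)
    .withAudioEnabled(/* initialMuted = */ true)
    .start(executor) { event -> /* handle VideoRecordEvents */ }

// Later, when the user taps the unmute button:
recording.mute(false)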

Professional-Grade Image Capture

CameraX 1.5 brings major upgrades to ImageCapture for developers who demand maximum quality and flexibility.

Unleash Creative Control with DNG (RAW) Capture

For complete control over post-processing, CameraX now supports DNG (RAW) capture. This gives you access to the unprocessed, uncompressed image data directly from the camera sensor, enabling professional-grade editing and color grading. The API supports capturing the DNG file alone, or capturing simultaneous JPEG and DNG outputs. See the sample code below for how to capture JPEG and DNG files simultaneously.

val capabilities = ImageCapture.getImageCaptureCapabilities(cameraInfo)
val imageCapture = ImageCapture.Builder().apply {
    if (capabilities.supportedOutputFormats
             .contains(OUTPUT_FORMAT_RAW_JPEG)) {
        // Capture both RAW and JPEG formats.
        setOutputFormat(OUTPUT_FORMAT_RAW_JPEG)
    }
}.build()
// ... bind imageCapture to lifecycle ...


// Provide separate output options for each format.
val outputOptionRaw = /* ... configure for image/x-adobe-dng ... */
val outputOptionJpeg = /* ... configure for image/jpeg ... */
imageCapture.takePicture(
    outputOptionRaw,
    outputOptionJpeg,
    executor,
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(results: OutputFileResults) {
            // This callback is invoked twice: once for the RAW file
            // and once for the JPEG file.
        }

        override fun onError(exception: ImageCaptureException) {}
    }
)
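The output options above are elided; one way to build them is via MediaStore. A sketch under the assumption that both files go to the default images collection (the helper and file names are illustrative):

// Hypothetical helper building per-format output options via MediaStore.
fun outputOptionsFor(displayName: String, mimeType: String) =
    ImageCapture.OutputFileOptions.Builder(
        context.contentResolver,
        MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
        ContentValues().apply {
            put(MediaStore.MediaColumns.DISPLAY_NAME, displayName)
            put(MediaStore.MediaColumns.MIME_TYPE, mimeType)
        }
    ).build()

val outputOptionRaw = outputOptionsFor("capture.dng", "image/x-adobe-dng")
val outputOptionJpeg = outputOptionsFor("capture.jpg", "image/jpeg")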

Ultra HDR for Camera Extensions

Get the best of both worlds: the stunning computational photography of Camera Extensions (like Night Mode) combined with the brilliant color and dynamic range of Ultra HDR. This feature is now supported on many recent premium Android phones, such as the Pixel 9/10 series and Samsung S24/S25 series.

// Support Ultra HDR when an Extension is enabled.
val extensionsEnabledCameraSelector = extensionsManager
    .getExtensionEnabledCameraSelector(
        CameraSelector.DEFAULT_BACK_CAMERA, ExtensionMode.NIGHT)

val imageCapabilities = ImageCapture.getImageCaptureCapabilities(
    cameraProvider.getCameraInfo(extensionsEnabledCameraSelector))

val imageCapture = ImageCapture.Builder()
    .apply {
        if (imageCapabilities.supportedOutputFormats
                .contains(OUTPUT_FORMAT_JPEG_ULTRA_HDR)) {
            setOutputFormat(OUTPUT_FORMAT_JPEG_ULTRA_HDR)
        }
    }.build()

Core API and Usability Enhancements

A New Way to Configure: SessionConfig

As seen in the examples above, SessionConfig is a new concept in CameraX 1.5. It centralizes configuration and simplifies the API in two key ways:

  1. No More Manual unbind() Calls: CameraX is lifecycle-aware: use cases are implicitly "unbound" when the activity or other LifecycleOwner is destroyed. Previously, though, updating use cases or switching cameras still required you to call unbind() or unbindAll() before rebinding. With CameraX 1.5, when you bind a new SessionConfig, CameraX seamlessly updates the session for you, eliminating the need for unbind calls.

  2. Deterministic Frame Rate Control: The new SessionConfig API introduces a deterministic way to manage the frame rate. Unlike the previous setTargetFrameRate, which was only a hint, this new method guarantees that the specified frame rate range will be applied upon successful configuration. To ensure accuracy, you must query supported frame rates using CameraInfo.getSupportedFrameRateRanges(SessionConfig). By passing the full SessionConfig, CameraX can accurately determine the supported ranges based on the stream configurations; see the sketch after this list.
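A minimal sketch of that query-then-configure flow, assuming SessionConfig accepts a frameRateRange parameter analogous to the HighSpeedVideoSessionConfig example above:

// Query supported ranges for the exact use-case combination, then bind.
val candidateConfig = SessionConfig(useCases = listOf(preview, videoCapture))
val frameRateRange = cameraInfo
    .getSupportedFrameRateRanges(candidateConfig)
    .first()

val sessionConfig = SessionConfig(
    useCases = listOf(preview, videoCapture),
    frameRateRange = frameRateRange
)
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, sessionConfig)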

Camera-Compose is Now Stable

We know how much you enjoy Jetpack Compose, and we're excited to announce that the camera-compose library is now stable at version 1.5.1! This release includes critical bug fixes related to CameraXViewfinder usage with Compose features like moveableContentOf and Pager, as well as resolving a preview stretching issue. We will continue to add more features to camera-compose in future releases.

ImageAnalysis and CameraControl Improvements

  • Torch Strength Adjustment: Gain fine-grained control over the device's torch with new APIs. You can query the maximum supported strength using CameraInfo.getMaxTorchStrengthLevel() and then set the desired level with CameraControl.setTorchStrengthLevel(); see the sketch after this list.

  • NV21 Support in ImageAnalysis: You can now request the NV21 image format directly from ImageAnalysis, simplifying integration with other libraries and APIs. This is enabled by invoking ImageAnalysis.Builder.setOutputImageFormat(OUTPUT_IMAGE_FORMAT_NV21).
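A minimal sketch of the torch strength APIs, assuming Kotlin property accessors for the getters and that the torch has already been turned on via CameraControl.enableTorch(true):

// Ramp the torch to its maximum supported strength.
val maxLevel = cameraInfo.maxTorchStrengthLevel
if (maxLevel > 1) {
    cameraControl.setTorchStrengthLevel(maxLevel)
}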

Get Started Today

Update your dependencies to CameraX 1.5 today and explore the exciting new features. We can't wait to see what you build.

To use CameraX 1.5, please add the following dependencies to your libs.versions.toml. (We recommend using 1.5.1 which contains many critical bug fixes and concurrent camera improvements.)

[versions]

camerax = "1.5.1"


[libraries]

..

androidx-camera-core = { module = "androidx.camera:camera-core", version.ref = "camerax" }

androidx-camera-compose = { module = "androidx.camera:camera-compose", version.ref = "camerax" }

androidx-camera-view = { module = "androidx.camera:camera-view", version.ref = "camerax" }

androidx-camera-lifecycle = { group = "androidx.camera", name = "camera-lifecycle", version.ref = "camerax" }

androidx-camera-camera2 = { module = "androidx.camera:camera-camera2", version.ref = "camerax" }

androidx-camera-extensions = { module = "androidx.camera:camera-extensions", version.ref = "camerax" }

And then add these to your module build.gradle.kts dependencies:

dependencies {

  ..

  implementation(libs.androidx.camera.core)
  implementation(libs.androidx.camera.lifecycle)

  implementation(libs.androidx.camera.camera2)

  implementation(libs.androidx.camera.view) // for PreviewView 
  implementation(libs.androidx.camera.compose) // for compose UI

  implementation(libs.androidx.camera.extensions) // For Extensions 

}

Have questions or want to connect with the CameraX team? Join the CameraX developer discussion group or file a bug report.

13 Nov 2025 5:00pm GMT

#WeArePlay: Meet the game creators who entertain, inspire and spark imagination

Posted by Robbie McLachlan, Developer Marketing

In our latest #WeArePlay stories, we meet the game creators who entertain, inspire and spark imagination in players around the world on Google Play. From delivering action-packed 3D kart racing to creating a calming, lofi world for plant lovers - here are a few of our favourites:

Ralf and Matt, co-founders of Vector Unit

San Rafael (CA), U.S.

With over 557 million downloads, Ralf and Matt's game, Beach Buggy Racing, brings the joy of classic, action-packed kart racing to gamers worldwide.

After meeting at a California game company back in the late '90s, Matt and Ralf went on to work at major studios. Years later, they reunited to form Vector Unit, a new company where they could finally have full creative freedom. They channeled their passion for classic kart-racers into Beach Buggy Racing, a vibrant 3D title that brought a console-quality feel to phones. The fan reception was immense, with players celebrating by baking cakes and dressing up for in-game events. Today, the team keeps Beach Buggy Racing 2 updated with global collaborations and is already working on a new prototype, all to fulfill their mission: sparking joy.




Camilla, founder of Clover-Fi Games

Batangas, Philippines

Camilla's game, Window Garden, lets players slow down by decorating and caring for digital plants.

While living with her mother during the pandemic, tech graduate Camilla made the leap from software engineer to self-taught game developer. Her mom's indoor plants sparked an idea: Window Garden. She created the lofi idle game to encourage players to slow down. In the game, players water flowers and fruits, and decorate cozy spaces in their own style. With over 1 million downloads to date, this simple loop has become a calming daily ritual since its launch. The game's success earned it a "Best of 2024" award from Google Play, and Camilla now hopes to expand her studio and collaborate with other creatives.


Rodrigo, founder of Kolb Apps

Curitiba, Brazil

Rodrigo's game, Real Drum, puts a complete, realistic-sounding virtual drum set in your pocket, making it easy for anyone to play.

Rodrigo started coding at just 12 years old, creating software for his family's businesses. This technical skill later combined with his hobby as an amateur musician. While pursuing a career in programming, he noticed a clear gap: there were no high-quality percussion apps. He united his two passions, technology and rhythm, to create Real Drum. The result is a realistic, easy-to-use virtual set that has amassed over 437 million downloads, letting people around the world play drums and cymbals without the noise. His game has made learning music accessible to many and inspired new artists. Now, Rodrigo's team plans to launch new apps for children to continue nurturing musical creativity.

Discover other inspiring app and game founders featured in #WeArePlay.

13 Nov 2025 5:00pm GMT

12 Nov 2025

feedAndroid Developers Blog

Android developer verification: Early access starts now as we continue to build with your feedback

Posted by Matthew Forsythe, Director - Product Management, Android App Safety


We recently announced new developer verification requirements, which serve as an additional layer of defense in our ongoing effort to keep Android users safe. We know that security works best when it accounts for the diverse ways people use our tools. This is why we announced this change early: to gather input and ensure our solutions are balanced. We appreciate the community's engagement and have heard the early feedback - specifically from students and hobbyists who need an accessible path to learn, and from power users who are more comfortable with security risks. We are making changes to address the needs of both groups.

To understand how these updates fit into our broader mission, it is important to first look at the specific threats we are tackling.

Why verification is important

Keeping users safe on Android is our top priority. Combating scams and digital fraud is not new for us - it has been a central focus of our work for years. From Scam Detection in Google Messages to Google Play Protect and real-time alerts for scam calls, we have consistently acted to keep our ecosystem safe.

However, online scams and malware campaigns are becoming more aggressive. At the global scale of Android, this translates to real harm for people around the world - especially in rapidly digitizing regions where many are coming online for the first time. Technical safeguards are critical, but they cannot solve for every scenario where a user is manipulated. Scammers use high-pressure social engineering tactics to trick users into bypassing the very warnings designed to protect them.

For example, a common attack we track in Southeast Asia illustrates this threat clearly. A scammer calls a victim claiming their bank account is compromised and uses fear and urgency to direct them to sideload a "verification app" to secure their funds, often coaching them to ignore standard security warnings. Once installed, this app - actually malware - intercepts the victim's notifications. When the user logs into their real banking app, the malware captures their two-factor authentication codes, giving the scammer everything they need to drain the account.

While we have advanced safeguards and protections to detect and take down bad apps, without verification, bad actors can spin up new harmful apps instantly. It becomes an endless game of whack-a-mole. Verification changes the math by forcing them to use a real identity to distribute malware, making attacks significantly harder and more costly to scale. We have already seen how effective this is on Google Play, and we are now applying those lessons to the broader Android ecosystem to ensure there is a real, accountable identity behind the software you install.

Supporting students and hobbyists

We heard from developers who were concerned about the barrier to entry when building apps intended only for a small group, like family or friends. We are using your input to shape a dedicated account type for students and hobbyists. This will allow you to distribute your creations to a limited number of devices without going through the full verification requirements.

Empowering experienced users

While security is crucial, we've also heard from developers and power users who have a higher risk tolerance and want the ability to download unverified apps.

Based on this feedback and our ongoing conversations with the community, we are building a new advanced flow that allows experienced users to accept the risks of installing software that isn't verified. We are designing this flow specifically to resist coercion, ensuring that users aren't tricked into bypassing these safety checks while under pressure from a scammer. It will also include clear warnings to ensure users fully understand the risks involved, but ultimately, it puts the choice in their hands. We are gathering early feedback on the design of this feature now and will share more details in the coming months.

Getting started with early access

Today, we're excited to start inviting developers who distribute exclusively outside of Play to the early access for developer verification in the Android Developer Console; we will share invites to the Play Console experience for Play developers soon. We look forward to your questions and feedback on streamlining the experience for all developers.

Watch our video walkthrough of the new Android Developer Console experience and see our guides for more details and FAQs.

We are committed to working with you to keep the ecosystem safe while getting this right.


12 Nov 2025 11:47pm GMT

15 Oct 2025

feedPlanet Maemo

Dzzee 1.9.0 for N800/N810/N900/N9/Leste

I was playing around with Xlib this summer, and one thing led to another, and here we are with four fresh ports to retro mobile X11 platforms. There is even a Maemo Leste port, but due to some SGX driver woes on the N900, I opted for using XSHM and software rendering, which works well and has the nice, crisp pixel look (on Fremantle, it's using EGL+GLESv2). Even the N8x0 port has very fluid motion by utilizing Xv for blitting software-rendered pixels to the screen. The game is available over at itch.io.






15 Oct 2025 11:31am GMT

05 Jun 2025

feedPlanet Maemo

Mobile blogging, the past and the future

This blog has been running more or less continuously since the mid-nineties. The site has existed in multiple forms, and with different ways to publish. But what's common is that at almost all points there was a mechanism to publish while on the move.

Psion, documents over FTP

In the early 2000s we were into adventure motorcycling. To be able to share our adventures, we implemented a way to publish blogs while on the go. The device that enabled this was the Psion Series 5, a handheld computer that was very much a device ahead of its time.

Psion S5, also known as the Ancestor

The Psion had a reasonably sized keyboard and a good native word processing app, and its battery life was good for weeks of usage. Writing while underway was easy. The Psion could use a mobile phone as a modem over an infrared connection, and with that we could upload the documents to a server over FTP.

Server-side, a cron job would grab the new documents, converting them to HTML and adding them to our CMS.

In the early days of GPRS, getting this to work while roaming was quite tricky. But the system served us well for years.

If we wanted to include photos in the stories, we'd have to find an Internet cafe.

SMS and MMS

For an even more mobile setup, I implemented an SMS-based blogging system. We had an old phone connected to a computer back in the office, and I could write to my blog by simply sending a text. These would automatically end up as a new paragraph in the latest post. If I started the text with NEWPOST, an empty blog post would be created with the rest of that message's text as the title.

As I got into neogeography, I could also send a NEWPOSITION message. This would update my position on the map, connecting weather metadata to the posts.
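In today's terms, the command handling could be sketched like this (an illustrative reconstruction in Kotlin with hypothetical persistence helpers, not the original code, which long predates it):

// Hypothetical persistence helpers; stubs for illustration only.
fun createPost(title: String) { /* ... */ }
fun updatePosition(position: String) { /* ... */ }
fun appendParagraphToLatestPost(text: String) { /* ... */ }

fun handleIncomingSms(text: String) {
    when {
        // Check NEWPOSITION first, since it also starts with "NEWPOST".
        text.startsWith("NEWPOSITION") ->
            updatePosition(text.removePrefix("NEWPOSITION").trim())
        // NEWPOST: the rest of the message becomes the title of an empty post.
        text.startsWith("NEWPOST") ->
            createPost(text.removePrefix("NEWPOST").trim())
        // Anything else becomes a new paragraph in the latest post.
        else -> appendParagraphToLatestPost(text)
    }
}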

As camera phones became available, we wanted to do pictures too. For the Death Monkey rally where we rode minimotorcycles from Helsinki to Gibraltar, we implemented an MMS-based system. With that the entries could include both text and pictures. But for that you needed a gateway, which was really only realistic for an event with sponsors.

Photos over email

A much easier setup than MMS was to slightly come back to the old Psion setup, but instead of word documents, sending email with picture attachments. This was something that the new breed of (pre-iPhone) smartphones were capable of. And by now the roaming question was mostly sorted.

And so my blog included a new "moblog" section. This is where I could share my daily activities as poor-quality pictures. Sort of how people would use Instagram a few years later.

My blog from that era

Pause

Then there was sort of a long pause in mobile blogging advancements. Modern smartphones, data roaming, and WiFi hotspots had become ubiquitous.

In the meantime, the blog also got migrated to a Jekyll-based system hosted on AWS. That meant the old Midgard-based integrations were off the table.

And I traveled off-the-grid rarely enough that it didn't make sense to develop a system.

But now that we're sailing offshore, that has changed. Time for new systems and new ideas. Or maybe just a rehash of the old ones?

Starlink, Internet from Outer Space

Most cruising boats - ours included - now run the Starlink satellite broadband system. This enables full Internet, even in the middle of an ocean, even video calls! With this, we can use normal blogging tools. The usual one for us is GitJournal, which makes it easy to write Jekyll-style Markdown posts and push them to GitHub.

However, Starlink is a complicated, energy-hungry, and fragile system on an offshore boat. The policies might change at any time preventing our way of using it, and also the dishy itself, or the way we power it may fail.

But despite what you'd think, even on a nerdy boat like ours, loss of Internet connectivity is not an emergency. And this is where the old-style mobile blogging mechanisms come handy.

Inreach, texting with the cloud

Our backup system to Starlink is the Garmin Inreach. This is a tiny battery-powered device that connects to the Iridium satellite constellation. It allows tracking as well as basic text messaging.

When we head offshore we always enable tracking on the Inreach. This allows both our blog and our friends ashore to follow our progress.

I also made a simple integration where text updates sent to Garmin MapShare get fetched and published on our blog. Right now this is just plain text-based entries, but one could easily implement a command system similar to what I had over SMS back in the day.

One benefit of the Inreach is that we can also take it with us when we go on land adventures. And it'd even enable rudimentary communications if we found ourselves in a liferaft.

Sailmail and email over HF radio

The other potential backup for Starlink failures would be to go seriously old-school. It is possible to get email access via an SSB radio and a Pactor (or Vara) modem.

Our boat is already equipped with an isolated aft stay that can be used as an antenna. And with the popularity of Starlink, many cruisers are offloading their old HF radios.

Licensing-wise this system could be used either as a marine HF radio (requiring a Long Range Certificate), or amateur radio. So that part is something I need to work on. Thankfully post-COVID, radio amateur license exams can be done online.

With this setup we could send and receive text-based email. The Airmail application used for this can even do some automatic templating for position reports. We'd then need a mailbox that can receive these mails, and some automation to fetch and publish.


05 Jun 2025 12:00am GMT

16 Oct 2024

feedPlanet Maemo

Adding buffering hysteresis to the WebKit GStreamer video player

The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.

WebKit GStreamer regular playback class diagram

The player private can operate in three different buffering modes; the problems described below concern in-memory buffering.

The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.

So, one of the first things I tried in order to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add that to the levels reported by GstQueue2. There's also a GstMultiQueue in the pipeline that can hold a significant amount of buffers, so I asked it for its level too. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it.

All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were considered part of MediaPlayerPrivateGStreamer before. They still had to be somehow linked to the player private but only accessible by the platform specific code of the quirks. A special HashMap attribute stores those quirks attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stored the concrete subclass to the caller did the trick.

Even with all those changes, undesirable swings in the buffering level kept happening, and when doing a careful analysis of the causes I noticed that the monitoring of the buffering level was being done from different places (in different moments) and sometimes the level was regarded as "enough" and the moment right after, as "insufficient". This was because the buffering level threshold was one single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve. So, a logical level change to "full" would only happen when the level goes above the high watermark, and a logical level change to "low" when it goes under the low watermark level.
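To make the mechanism concrete, here is a minimal sketch of watermark-based hysteresis combined with a moving average (in Kotlin for consistency with the examples earlier in this digest; thresholds, window size, and names are illustrative, not WebKit's actual code):

// Illustrative hysteresis with low/high watermarks plus a moving average.
class BufferingMonitor(
    private val lowWatermark: Double = 0.2,   // switch to "low" below this
    private val highWatermark: Double = 0.8,  // switch to "full" above this
    private val windowSize: Int = 5
) {
    private val samples = ArrayDeque<Double>()
    var isFull = false
        private set

    // Feed a raw buffering level in [0.0, 1.0]; returns true only when the
    // logical state changes, so callers react exactly once per crossing.
    fun update(rawLevel: Double): Boolean {
        samples.addLast(rawLevel)
        if (samples.size > windowSize) samples.removeFirst()
        val smoothed = samples.average()

        val previous = isFull
        if (!isFull && smoothed >= highWatermark) isFull = true
        else if (isFull && smoothed <= lowWatermark) isFull = false
        return isFull != previous
    }
}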

For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().

So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, and now WebKit GStreamer has much more robust buffering code than before. The instabilities observed on Broadcom devices were gone and I could, at last, close Issue 1309.


16 Oct 2024 6:12am GMT

18 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that - in their early days as an ISP - they were the driving force in the early 90s behind one of the Linux kernel ISDN stacks, called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today, the new OCTOI hub setup was installed. It had been prepared for several weeks in advance, and the hub contains two circuit boards designed entirely for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several stages of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The equipment deployed in this installation currently includes the GPS-synchronized icE1usb, the two custom circuit boards mentioned above, and the EPYC server hardware.

For more details, see this wiki page and this ticket.

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup certainly has the capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter, the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.

Acknowledgements

I'd like to thank everyone who helped this effort, specifically:

  • Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)

  • noris.net for sponsoring the co-location

  • sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started searching for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page and sub-pages covering the individual modules/cards.

In short: unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k-based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that the downstream port transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock irrespective whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable drifts at about 12.5 ppb (parts per billion) when compared against the master clock reference. At the E1 bit rate of 2.048 Mbit/s, that corresponds to a slip of roughly one bit every 39 seconds, or about 1 ms of accumulated offset per day.

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT