16 Sep 2024

feedTalkAndroid

Android 15 Could Sync Notifications Across Your Android Devices

If you've got more than one Android device, you won't need to worry about dismissed notifications persisting on your other devices.

16 Sep 2024 1:22pm GMT

Car Dealership Tycoon Codes 

Find all the latest Car Dealership Tycoon codes right here! Read on for more!

16 Sep 2024 6:17am GMT

Base Battles Codes 

Here you will find all of the latest Base Battles Codes for Roblox! Read on for more!

16 Sep 2024 6:11am GMT

12 Sep 2024

feedAndroid Developers Blog

Developer Preview: Desktop windowing on Android Tablets

Posted by Francesco Romano - Developer Relations Engineer on Android, and Fahd Imtiaz - Product Manager, Android Developer


To empower tablet users to get more done, we're enhancing freeform windowing, allowing them to run multiple apps simultaneously and resize windows for optimal multitasking. Today, we're excited to share that desktop windowing on Android tablets is available in developer preview.

For app developers, the concept of Android apps running in freeform windows has already existed with solutions like Samsung DeX and ChromeOS. Updating your apps to support adaptive layouts, more robust multitasking, and adaptive inputs will ensure your apps work well on large screens across the Android ecosystem.

Let's explore how to optimize your apps for desktop windowing and deliver the optimal experience to users.

What is desktop windowing?

Desktop windowing allows users to run multiple apps simultaneously and resize app windows, offering a more flexible and desktop-like experience. This, along with a refreshed System UI and new APIs, allows users to be even more productive and creates a more seamless, desktop-like experience on tablets.

In Figure 1, you can see the anatomy of the screen with desktop windowing enabled. Things to make note of:

  • Users can run multiple apps side-by-side, simultaneously
  • The taskbar is fixed and shows the running apps; users can pin apps for quick access
  • A new header bar with window controls sits at the top of each window, and apps can customize it
Desktop windowing on a Pixel Tablet
Figure 1: Desktop windowing on a Pixel Tablet.
Note: Images are examples and subject to change


How can users invoke desktop windowing?

By default, apps open in full screen on Android tablets. To run an app as a desktop window on Pixel Tablet, press and hold the window handle at the top center of the screen and drag it within the UI, as seen in Figure 2.

Once you are in the desktop space, any apps you launch will open as desktop windows as well.

A moving image demonstrating what completing the action 'press, hold, and drag the window handle to enter desktop windowing' looks like.
Figure 2. Press, hold, and drag the window handle to enter desktop windowing.
Note: Images are examples and subject to change


You can also invoke desktop windowing from the menu that shows up below the window handle when you tap/click on it or use the keyboard shortcut meta key (Windows, Command, or Search) + Ctrl + Down.

You can exit desktop windowing and display an app as full screen by closing all active windows or by grabbing the window handle at the top of the window and dragging the app to the top of the screen. You can also use the meta + H keyboard shortcut to run apps as full screen again.

To return to the desktop, move a full screen app to the desktop space by using the methods mentioned above, or simply tap on the desktop space tile in the Recents screen.

What does this mean for app developers?

Desktop windowing on Android tablets creates new opportunities for your apps, particularly around productivity and multitasking. The ability to resize and reposition multiple app windows allows users to easily compare documents, reference information while composing emails, and multitask efficiently.

By optimizing for desktop windowing, you can deliver unique user experiences to match the growing demand for tablet-based productivity. At the same time, you'll enhance the overall user experience on tablets, making your apps more versatile and adaptable to different scenarios.

If your app already meets the Tier 2 (Large Screens optimized) quality bar in the Large screen app quality guidelines, then there is minimal additional optimization required! If your app has not been optimized for large screens yet, updating it according to the Large screen app quality guidelines becomes even more crucial in the context of desktop windowing. Let's see why:

  • Freeform resizing enables users to resize apps to their preference for maximized productivity. Considering this, developers should note:
    • Apps with locked orientation are freely resizable. That means, even if an activity is locked to portrait orientation, users can still resize the app into a landscape-orientation window. In a future update, apps declared as non-resizable will have their UI scaled while keeping the same aspect ratio.
    • Adaptive layouts: By adapting your UI, apps have an opportunity to effortlessly handle a wide range of window sizes, from compact to expanded screen layouts. In desktop windowing, apps can be resized down to a minimum size of 386dp x 352dp, so make sure to leverage window size classes to adjust your app's layout, content, and interactions to adapt to different window dimensions (see the sketch after this list).
    • State management: With freeform resizing, configuration changes happen each time the window resizes, so your app should either handle these configuration changes gracefully or make sure you are preserving the app state when the OS initiates the re-creation of the app. As a reminder, users can change the screen density while your app is running, so it's best to ensure that your app can handle screen density configuration changes as well.

    A moving image demonstrating how apps are fully resizable
    Figure 3. Apps with locked orientation are freely resizable.
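
For example, here is a minimal Compose sketch of branching on the window size class, assuming the androidx.compose.material3.adaptive dependency; TwoPaneContent and SinglePaneContent are hypothetical stand-ins for your own layouts:

import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.compose.runtime.Composable
import androidx.window.core.layout.WindowWidthSizeClass

@Composable
fun AdaptiveScreen() {
    // Recomputed on every resize, so the layout follows the window live.
    val windowSizeClass = currentWindowAdaptiveInfo().windowSizeClass
    when (windowSizeClass.windowWidthSizeClass) {
        WindowWidthSizeClass.EXPANDED -> TwoPaneContent()
        else -> SinglePaneContent() // COMPACT and MEDIUM widths
    }
}

// Hypothetical placeholders for the two layout variants.
@Composable fun TwoPaneContent() { /* side-by-side panes */ }
@Composable fun SinglePaneContent() { /* single-column layout */ }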

  • Desktop windowing takes productivity on tablets to the next level with multiple apps running simultaneously. Similar to split screen, desktop windowing encourages users to have multiple windows open. Considering this, developers should note:
    • Multitasking support: For enhanced productivity, users can have two or more apps open simultaneously, and they expect to easily share content between apps, so add support for drag and drop gestures. Also, ensure your app continues to function correctly even when not in focus, and if your app uses exclusive resources like the camera or microphone, it needs to handle resource loss gracefully when another app acquires the resource.
    • Multi-instance support: Users can run multiple instances of your app side-by-side; for example, a document editor application may allow users to start new documents while still being able to reference the already open documents. Apps can set the new multi-instance property to declare that the system UI should allow launching them as multiple instances. Also note that in desktop windowing, new tasks open in a new window, so double-check the user journey if your app starts multiple tasks.

    A moving image demonstrating how you can start another instance of Chrome by dragging a tab out of the app window.
    Figure 4. Start another instance of Chrome by dragging a tab out of the app window.
    Note: Images are examples and subject to change

    • With desktop windowing, input methods beyond touch and insets handling become even more important for a seamless user experience.
      • More input methods (keyboard, mouse): Users are more likely to use your app with a variety of input methods like external keyboards, mice, and trackpads. Check that users can interact smoothly with your app using keyboard and mouse peripherals or through the emulator. Developers can add support for app shortcuts and publish them using the keyboard shortcuts API, which allows users to easily view the supported app shortcuts through a standardized surface on Android devices.
      • Insets handling: All apps when running in desktop windowing have a header bar, even in immersive mode. Ensure your app's content isn't obscured by this. The new header bar is reported as a caption bar in Compose (androidx.compose.foundation:foundation-layout.WindowInsets.Companion.captionBar) and in Views (android.view.WindowInsets.Type.CAPTION_BAR), which is part of the system bars. API 35 also introduced a new appearance type, to make the header bar transparent, to allow apps to draw custom content inside.
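
As a sketch of the keyboard shortcuts API mentioned above (the activity name and shortcut labels are hypothetical; the API itself has been available since API level 24):

import android.app.Activity
import android.view.KeyEvent
import android.view.KeyboardShortcutGroup
import android.view.KeyboardShortcutInfo
import android.view.Menu

class MainActivity : Activity() {
    // Publishes the app's shortcuts so they appear on the standardized
    // keyboard shortcuts surface on Android devices.
    override fun onProvideKeyboardShortcuts(
        data: MutableList<KeyboardShortcutGroup>,
        menu: Menu?,
        deviceId: Int
    ) {
        data.add(
            KeyboardShortcutGroup(
                "Navigation",
                listOf(
                    KeyboardShortcutInfo("Search", KeyEvent.KEYCODE_SLASH, KeyEvent.META_CTRL_ON),
                    KeyboardShortcutInfo("New window", KeyEvent.KEYCODE_N, KeyEvent.META_CTRL_ON)
                )
            )
        )
    }
}

And a minimal Compose sketch of caption bar insets handling, padding the content so it isn't drawn under the window header:

import androidx.compose.foundation.layout.WindowInsets
import androidx.compose.foundation.layout.captionBar
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.windowInsetsPadding
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier

@Composable
fun WindowContent() {
    Text(
        text = "Not obscured by the window header",
        modifier = Modifier
            .fillMaxSize()
            .windowInsetsPadding(WindowInsets.captionBar) // part of the system bars
    )
}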

Get hands-on!

Today we're announcing a developer preview that provides you with an early opportunity to experience and test desktop windowing. You can try it out on Pixel Tablet before it's released to AOSP more broadly. The preview is available today: update your Pixel Tablet to the latest Android 15 QPR1 Beta 2 release to try out desktop windowing. If you don't have a Pixel Tablet handy, access the Pixel Tablet emulator in Android Studio Preview and select the Android 15.0 (Google APIs Tablet) target. Once your device is set up, select the Enable freeform windows option in Developer options to explore the capabilities of desktop windowing and how your app behaves within this new environment.

By optimizing your apps for desktop windowing on Pixel Tablet, you are not only enhancing the app experience on that specific device but also future-proofing your apps for the broader Android ecosystem where freeform windowing will become prevalent. We're excited about the windows of opportunities enabled by desktop windowing, and we look forward to seeing how you adapt your apps for an enhanced user experience.

We're committed to improving the desktop windowing experience through future updates. Make sure to test your app and give us feedback. Stay tuned for more developer guides and resources!

12 Sep 2024 6:00pm GMT

11 Sep 2024

feedAndroid Developers Blog

Streamlining Android authentication: Credential Manager replaces legacy APIs

Posted by Diego Zavala and Jason Lucibello - Product Managers


In 2023, we introduced Credential Manager for Android. Credential Manager creates a unified experience for passkeys, Sign in with Google, and passwords, allowing seamless sign-in and eliminating the need for users to type in usernames or passwords.

Fig 1. Sample app showing the Credential Manager dialog in a sign-in flow with passkey, password, and Sign in with Google options


To bring Credential Manager's benefits to more Android users and simplify developers' integration efforts, APIs that were previously deprecated will continue their phased removals and shutdowns. These APIs include Smart Lock for Passwords, the Credential Saving API, the Sign in with Google button, One Tap Sign-in, and Google Sign-In for Android.

Developers with apps that still use these APIs should migrate to Credential Manager as soon as possible. Credential Manager supports all authentication features included in these legacy APIs and modernizes the experience with passkey support and streamlined user journeys. Developers looking to implement authorization functionality for Google Accounts, such as scoped access to a service like Google Drive, should continue to use the AuthorizationClient API.

Below is the current status of each API as of September 2024, along with update plans and recommended migration guides.

API: Smart Lock for Passwords
Status: Removed
Next Update: Fully shut down in Q1 2025
Migration guide: Migrate from Smart Lock for Passwords to Credential Manager

API: Credential Saving API
Status: Deprecated
Next Update: Removed in H1 2025
Migration guide: Migrate passwords to Credential Manager

API: Sign in with Google button
Status: Deprecated
Next Update: Removed in H1 2025
Migration guide: Migrate to the Sign in with Google button and Credential Manager

API: One Tap Sign-in
Status: Deprecated
Next Update: Removed in H2 2025
Migration guide: Authenticate users with Sign in with Google and Credential Manager

API: Google Sign-In for Android
Status: Deprecated
Next Update: Removed in H2 2025
Migration guide: Migrate from legacy Google Sign-In to Credential Manager

What does each status mean?

  • Deprecated: API is still in the SDK and functional, but will be removed and fully shut down in the future. We recommend migrating to Credential Manager at this time.
  • Removed: API is still functional for users, but is no longer included in the SDK. New app versions compiled with the most recent SDK will fail to build if your code still uses the removed API. If your app still relies on any of these APIs, you should migrate to Credential Manager as soon as possible.
  • Fully shut down: API is no longer functional, and it will fail when a request is sent by an app.

Credential Manager offers streamlined, more secure auth journeys

Credential Manager delivers multiple advantages to users and developers over the deprecated APIs:

1. Easier, more secure sign-ins with passkeys: Passkeys are an alternative to passwords that provide an easier and more secure authentication experience, based on industry standards. Credential Manager brings support for passkeys to Android apps.


2. Frictionless, one-tap sign-in: Users select their preferred saved credential from the options offered, without needing to remember or type usernames or passwords.


3. Unified UI across all credentials: Credential Manager's one-tap sign-in works with passkeys, Sign in with Google, and passwords. It deduplicates methods for the same account, so users no longer need to remember which method they last used, or which one is the "right" method.


4. Extended support for password managers: Users benefit from using the credentials stored in their preferred password manager in Credential Manager flows, and can even enable multiple password managers at the same time! Password managers not only protect users' credentials, but also provide additional actions and protections to keep users safe, such as upgrading passwords to passkeys, alerting users to password reuse, and limiting credential usage to affiliated apps and domains.


5. Simplified development: Developers can consolidate their sign-in logic into a single, modern API, reducing development overhead and maintenance efforts. New authentication functionality will be released through Credential Manager going forward.
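
To illustrate that consolidation, here is a minimal sketch of a password-based sign-in call with the androidx.credentials API; passkey and Sign in with Google options attach to the same request in the same way, and error handling via GetCredentialException is omitted for brevity:

import android.app.Activity
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetPasswordOption
import androidx.credentials.PasswordCredential

suspend fun signIn(activity: Activity) {
    val credentialManager = CredentialManager.create(activity)

    // One request can carry password, passkey, and Sign in with Google options;
    // Credential Manager shows a single unified selector for all of them.
    val request = GetCredentialRequest(listOf(GetPasswordOption()))

    val result = credentialManager.getCredential(activity, request)
    when (val credential = result.credential) {
        is PasswordCredential -> {
            // Hypothetical app hook: validate credential.id / credential.password
            // against your backend.
        }
    }
}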


Adopting Credential Manager is an intuitive upgrade for developers

For developers previously using our deprecated APIs, the transition to Credential Manager is smooth and intuitive. Developers like X (formerly known as Twitter) and Pinterest have already experienced the benefits of the upgrade. X shared with us that Credential Manager's unified approach made migration and maintenance effortless, while Pinterest reported a smooth process for both users and engineers.

Quote text reads: 'The Credential Manager library allowed us to unify Smart Lock, Sign in with Google, and passkeys under one cohesive umbrella, significantly reducing the amount of code required. The unified process made migration and maintenance effortless, empowering us to enhance security and user experience with ease.' - Saurabh Arora, Staff Software Engineer, X (formerly Twitter)


Quote text reads: 'Migrating our codebase to Credential Manager on Android was a smooth process for users and engineers, which allowed us to have a more cohesive and simplified process to support and maintain authentication at Pinterest. Our Android users have benefited from frictionless sign-in and sign-up using Google, currently accounting for over 75% of user authentications.' - Jorge Garmendia, Identity, Product Safety and Compliance Client Engineering Lead, Pinterest

Developers can use the migration guides linked in the table above to make adopting Credential Manager even easier.

Share your feedback

Your input is very valuable to us as we continue to refine and improve our authentication services. Please keep providing us feedback on the issue tracker and share your experience integrating Credential Manager!

11 Sep 2024 4:00pm GMT

09 Sep 2024

feedAndroid Developers Blog

Jetpack Compose APIs for building adaptive layouts using Material guidance now stable

Posted by Alex Vanyo - Developer Relations Engineer


The 1.0 stable version of the Compose adaptive APIs with Material guidance is out, ready to be used in production. The library helps you build adaptive layouts that provide an optimized user experience on any window size.

The team at SAP Mobile Start were early adopters of the Compose adaptive APIs. It took their developers only five minutes to integrate the NavigationSuiteScaffold from the new Compose Material 3 adaptive library, rapidly adapting the app's navigation UI to different window sizes.

Each of the new components in the library (NavigationSuiteScaffold, ListDetailPaneScaffold, and SupportingPaneScaffold) is adaptive: based on the window size and posture, different components are displayed to the user, depending on which one is most appropriate in the current context. This helps build UI that adapts to a wide variety of window sizes instead of just stretching layouts.

For an overview of the components, check out the dedicated I/O session and our new documentation pages to get started.

In this post, we're going to take a more detailed look at the layering of the new library so you have a better understanding of how customisable it is, to fit a wide variety of use cases you might have.

Similar to Compose itself, the adaptive libraries are layered into multiple dependencies, so that you can choose the appropriate level of abstraction for your application. There are four new artifacts as part of the adaptive libraries (a Gradle sketch of the dependencies follows the list):

  • For the core building blocks for building adaptive UI, including computing the window size class and the current posture, add androidx.compose.material3.adaptive:adaptive:1.0.0

  • For implementing multi-pane layouts, add androidx.compose.material3.adaptive:adaptive-layout:1.0.0


  • For standalone navigators for the multi-pane scaffold layouts, add androidx.compose.material3.adaptive:adaptive-navigation:1.0.0

  • For implementing adaptive navigation UI, add androidx.compose.material3:material3-adaptive-navigation-suite:1.3.0
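
In a Gradle Kotlin DSL build file, pulling in all four artifacts looks like the following sketch; in practice, add only the layers your app actually uses:

// build.gradle.kts (app module)
dependencies {
    implementation("androidx.compose.material3.adaptive:adaptive:1.0.0")
    implementation("androidx.compose.material3.adaptive:adaptive-layout:1.0.0")
    implementation("androidx.compose.material3.adaptive:adaptive-navigation:1.0.0")
    implementation("androidx.compose.material3:material3-adaptive-navigation-suite:1.3.0")
}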

The libraries have the following dependencies:

Flow diagram showing dependencies between material3-adaptive 1.0.0 and material 1.3.0 libraries
New library dependency graph


To explore this layering more, let's start with the highest level example with the most built-in functionality using a NavigableListDetailPaneScaffold from androidx.compose.material3.adaptive:adaptive-navigation:

val navigator = rememberListDetailPaneScaffoldNavigator<Any>()

NavigableListDetailPaneScaffold(
    navigator = navigator,
    listPane = {
        // List pane
    },
    detailPane = {
        // Detail pane
    },
)


This snippet of code gives you all of our recommended adaptive behavior out of the box for a list-detail layout: determining how many panes to show based on the current window size, hiding and showing the correct pane when the window size changes depending on the previous state of the UI, and having the back button conditionally bring the user back to the list, depending on the window size and the current state.

A list layout adapting to and from a list detail layout depending on the window size


This encapsulates a lot of behavior - and this might be all you need, and you don't need to go any deeper!

However, there may be reasons why you may want to tweak this behavior, or more directly manage the state by hoisting parts of it in a different way.

Remember, each layer builds upon the last. This snippet is at the outermost layer, and we can start unwrapping the layers to customize it where we need.

Let's go one layer deeper with NavigableListDetailPaneScaffold. Behavior won't change at all with these direct inlinings, since we are just inlining the default behavior at each step:

(Fun fact: You can follow along directly in Android Studio with this or any other component you desire. If you choose Refactor > Inline function, you can directly replace a component with its implementation. You can't delete the original function in the library, of course.)

val navigator = rememberListDetailPaneScaffoldNavigator<Any>()

BackHandler(
    enabled = navigator.canNavigateBack(BackNavigationBehavior.PopUntilContentChange)
) {
    navigator.navigateBack(BackNavigationBehavior.PopUntilContentChange)
}
ListDetailPaneScaffold(
    directive = navigator.scaffoldDirective,
    value = navigator.scaffoldValue,
    listPane = {
        // List pane
    },
    detailPane = {
        // Detail pane
    },
)


With the first inlining, we see the BackHandler that NavigableListDetailPaneScaffold includes by default. If using ListDetailPaneScaffold directly, back handling is left up to the developer to include and hoist to the appropriate place.

This also reveals how the navigator provides two pieces of state to control the ListDetailPaneScaffold:

  • directive -- how the panes should be arranged in the ListDetailPaneScaffold, and
  • value -- the current state of the panes, as calculated from the directive and the current navigation state.

These are both controlled by the navigator, and the next unpeeling shows us the default arguments to the navigator for directive and the adapt strategy, which is used to calculate value:

val navigator = rememberListDetailPaneScaffoldNavigator<Any>(
    scaffoldDirective = calculatePaneScaffoldDirective(currentWindowAdaptiveInfo()),
    adaptStrategies = ListDetailPaneScaffoldDefaults.adaptStrategies(),
)

BackHandler(
    enabled = navigator.canNavigateBack(BackNavigationBehavior.PopUntilContentChange)
) {
    navigator.navigateBack(BackNavigationBehavior.PopUntilContentChange)
}
ListDetailPaneScaffold(
    directive = navigator.scaffoldDirective,
    value = navigator.scaffoldValue,
    listPane = {
        // List pane
    },
    detailPane = {
        // Detail pane
    },
)


The directive controls the behavior for how many panes to show and the pane spacing, based on currentWindowAdaptiveInfo, which contains the size and posture of the window.

This can be customized with a different directive, to show two panes side-by-side at a smaller medium width:

val navigator = rememberListDetailPaneScaffoldNavigator<Any>(
    scaffoldDirective = calculatePaneScaffoldDirectiveWithTwoPanesOnMediumWidth(currentWindowAdaptiveInfo()),
    adaptStrategies = ListDetailPaneScaffoldDefaults.adaptStrategies(),
)


By default, showing two panes at a medium width can result in UI that is too narrow, especially for complex content. However, this can be a good option to use the window space more optimally by showing two panes for less complex content.

The AdaptStrategy controls what happens to panes when there isn't enough space to show all of them. Right now, this always hides panes for which there isn't enough space.

This directive is used by the navigator to drive its logic; combined with the adapt strategy, it determines the scaffold value, the resulting target state for each of the panes.

The scaffold directive and the scaffold value are then passed to the ListDetailPaneScaffold, driving the behavior of the scaffold.

This layering allows hoisting the scaffold state away from the display of the scaffold itself. This layering also allows custom implementations for controlling how the scaffold works and for hoisting related state. For example, if you are using a custom navigation solution instead of the navigator, you could drive the ListDetailPaneScaffold directly with state derived from your custom navigation solution.

The layering is enforced in the library with the different artifacts:

  • androidx.compose.material3.adaptive:adaptive contains the underlying methods to calculate the current window adaptive info
  • androidx.compose.material3.adaptive:adaptive-layout contains the layouts ListDetailPaneScaffold and SupportingPaneScaffold
  • androidx.compose.material3.adaptive:adaptive-navigation contains the navigator APIs (like rememberListDetailPaneScaffoldNavigator)

Therefore, if you aren't going to use the navigator and instead use a custom navigation solution, you can skip using androidx.compose.material3.adaptive:adaptive-navigation and depend on androidx.compose.material3.adaptive:adaptive-layout directly.

When adding the Compose Adaptive library to your app, start with the most fully featured layer, and then unwrap if needed to tweak behavior. As we continue to work on the library and add new features, we'll keep adding them to the appropriate layer. Using the higher-level layers will mean that you will be able to get these new features most easily. If you need to, you can use lower layers to get more fine-grained control, but that also means that more responsibility for behavior is transferred to your app, just like the layering in Compose itself.

Try out the new components today, and send us your feedback for bugs and feature requests.

09 Sep 2024 5:01pm GMT

17 Jun 2024

feedPlanet Maemo

Incorporating 3D Gaussian Splats into the graphics pipeline

3D Gaussian splatting is the emerging rendering technique that is overtaking NeRFs. Since it is centered around point primitives, it is more compatible with traditional graphics pipelines that already support point rendering.

Gaussian splats essentially enhance the concept of point rendering by converting the point primitive into a 3D ellipsoid, which is then projected into 2D during the rendering process. This concept was initially described in 2002 [3], but the technique of extending Structure from Motion scans in this way was only detailed more recently [1].

In this post, I explore how to integrate Gaussian splats into the traditional graphics pipeline. This allows them to be used alongside triangle-based primitives and interact with them through the depth buffer for occlusion (see header image). This approach also simplifies deployment by eliminating the need for CUDA.

Storage

The original implementation uses .ply files as their checkpoint format, focusing on maintaining training-relevant data structures at the expense of storage efficiency, leading to increased file sizes.

For example, it stores the covariance as a scaling and a rotation quaternion, necessitating reconstruction during rendering. A more efficient approach would be to leverage the symmetry of the covariance matrix, storing only the diagonal and upper triangular values, thereby eliminating reconstruction and reducing storage requirements.

Further analysis of the storage usage for each attribute shows that the spherical harmonics of orders 1-3 are the main contributors to the file size. However, according to the ablation study in the original publication [1], these harmonics only lead to a modest PSNR improvement of 0.5.

Therefore, the most straightforward way to decrease storage is by discarding the higher-order spherical harmonics. Additionally, the level 0 spherical harmonics can be converted into a diffuse color and merged with opacity to form a single RGBA value. These simple yet effective methods were implemented in one of the early WebGL implementations, resulting in the .splat format. As an added benefit, this format can be easily interpreted by viewers unaware of Gaussian splats as a simple colored point cloud:

Results using a non Gaussian-splat aware renderer

By directly storing the covariance as previously mentioned we can reduce the precision from float32 to float16, thereby halving the storage needed for that data. Furthermore, since most splats have limited spatial extents, we can also utilize float16 for position data, yielding additional storage savings.

With these changes, we achieve a storage requirement of 22 bytes per splat, in contrast to the 44 bytes needed by the .splat format and 236 bytes in the original implementation. Thus, we have attained a 10x reduction in storage compared to the original implementation simply by using more suitable data types.
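
For reference, the 22 bytes break down as follows; the record layout below is my own illustrative sketch, not a prescribed format:

// Sketch of a 22-byte packed splat record (field order is hypothetical).
enum class SplatField(val count: Int, val bytesPerElement: Int) {
    POSITION(3, 2),      // x, y, z as float16
    COVARIANCE(6, 2),    // diagonal + upper triangle of the symmetric 3x3 matrix, float16
    COLOR_OPACITY(4, 1); // RGBA: order-0 spherical harmonics folded into a diffuse color

    val bytes: Int get() = count * bytesPerElement
}

fun main() {
    val total = SplatField.values().sumOf { it.bytes }
    println("bytes per splat = $total") // prints 22
}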

Blending

The image formation model presented in the original paper [1] is similar to the NeRF rendering, as it is compared to it. This involves casting a ray and observing its intersection with the splats, which leads to front-to-back blending. This is precisely the approach taken by the provided CUDA implementation.

Blending remains a component of the fixed-function unit within the graphics pipeline, which can be set up for front-to-back blending [2] by using the factors (one_minus_dest_alpha, one) and by multiplying color and alpha in the shader as color.rgb * color.a. This results in the following equation:

\begin{aligned}
C_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} C_{src} + C_{dst} \\
\alpha_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} + \alpha_{dst}
\end{aligned}

However, this method requires the framebuffer alpha value to be zero before rendering the splats, which is not typically the case as any previous render pass could have written an arbitrary alpha value.

A simple solution is to switch to back-to-front sorting and use the standard alpha blending factors (src_alpha, one_minus_src_alpha) for the following blending equation:

C_{dst} = \alpha_{src} \cdot C_{src} + (1 - \alpha_{src}) \cdot C_{dst}

This allows us to regard Gaussian splats as a special type of particles that can be rendered together with other transparent elements within a scene.
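
As a sketch of the two blend configurations discussed above (using Android's GLES2 bindings for concreteness; the same factors apply in any GL-style API, and splat sorting is assumed to happen elsewhere):

import android.opengl.GLES20

fun configureSplatBlending(frontToBack: Boolean) {
    GLES20.glEnable(GLES20.GL_BLEND)
    if (frontToBack) {
        // (one_minus_dest_alpha, one): the shader must output premultiplied color,
        // i.e. color.rgb * color.a, and the framebuffer alpha must start at zero.
        GLES20.glBlendFunc(GLES20.GL_ONE_MINUS_DST_ALPHA, GLES20.GL_ONE)
    } else {
        // Back-to-front with standard alpha factors (src_alpha, one_minus_src_alpha);
        // works with an arbitrary initial framebuffer alpha.
        GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA)
    }
}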

References

  1. Kerbl, Bernhard, et al. "3D Gaussian splatting for real-time radiance field rendering." ACM Transactions on Graphics 42.4 (2023): 1-14.
  2. Green, Simon. "Volumetric particle shadows." NVIDIA Developer Zone (2008).
  3. Zwicker, Matthias, et al. "EWA splatting." IEEE Transactions on Visualization and Computer Graphics 8.3 (2002): 223-238.


17 Jun 2024 1:28pm GMT

30 Apr 2024

feedPlanet Maemo

Dissecting GstSegments

During all these years using GStreamer, I've been having to deal with GstSegments in many situations. I've always had an intuitive understanding of the meaning of each field, but never had the time to properly write a good reference explanation for myself, ready to be checked at those times when the task at hand stops being so intuitive and the nuances start being important. I used the notes I took during an interesting conversation with Alba and Alicia about those nuances, during the GStreamer Hackfest in A Coruña, as the seed that evolved into this post.

But what actually are GstSegments? They are the structures that track the values needed to synchronize the playback of a region of interest in a media file.

GstSegments are used to coordinate the translation between Presentation Timestamps (PTS), supplied by the media, and Runtime.

PTS is the timestamp that specifies, in buffer time, when the frame must be displayed on screen. This buffer time concept (called buffer running-time in the docs) refers to the ideal time flow where rate isn't taken into account.

Decode Timestamp (DTS) is the timestamp that specifies, in buffer time, when the frame must be supplied to the decoder. On decoders supporting P-frames (forward-predicted) and B-frames (bi-directionally predicted), the PTSs of the frames reaching the decoder may not be monotonic, but the PTSs of the frames reaching the sinks are (the decoder outputs monotonic PTSs).

Runtime (called clock running time in the docs) is the amount of physical time that the pipeline has been playing back. More specifically, the Runtime of a specific frame indicates the physical time that has passed or must pass until that frame is displayed on screen. It starts from zero.

Base time is the point where the Runtime starts with respect to the input timestamp in buffer time (PTS or DTS). It's the Runtime corresponding to the start of the segment.

Start, stop, duration: Those fields are buffer timestamps that specify when the piece of media that is going to be played starts, stops, and how long that portion of the media is (the absolute difference between start and stop; I mean absolute because a segment being played backwards may have a higher start buffer timestamp than its stop buffer timestamp).

Position is like the Runtime, but in buffer time. This means that in a video being played back at 2x, Runtime would flow at 1x (it's physical time after all, and reality goes at 1x pace) and Position would flow at 2x (the video moves twice as fast as physical time).

The Stream Time is the position in the stream. It's not exactly the same concept as buffer time. When handling multiple streams, some of them can be offset with respect to each other, not starting to be played from the beginning, or can even have loops (e.g. repeating the same sound clip from PTS=100 until PTS=200 indefinitely). In this case of repeating, the Stream Time would flow from PTS=100 to PTS=200 and then go back again to the start position of the sound clip (PTS=100). There's a nice graphic in the docs illustrating this, so I won't repeat it here.

Time is the base of Stream Time. It's the Stream time of the PTS of the first frame being played. In our previous example of the repeating sound clip, it would be 100.

There are also concepts such as Rate and Applied Rate, but we didn't get into them during the discussion that motivated this post.

So, for translating between Buffer Time (PTS, DTS) and Runtime, we would apply this formula:

Runtime = ( BufferTime - Start ) / abs( Rate ) + BaseTime

And for translating between Buffer Time (PTS, DTS) and Stream Time, we would apply this other formula:

StreamTime = ( BufferTime - Start ) * abs( AppliedRate ) + Time
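
As a quick sanity check with the repeating sound clip from above (Start = 100, Time = 100, AppliedRate = 1): a buffer with PTS = 150 gives StreamTime = (150 - 100) * 1 + 100 = 150, halfway through the clip, and each time the loop restarts at PTS = 100 the Stream Time correctly returns to 100.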

And that's it. I hope these notes in the shape of a post serve me as reference in the future. Again, thanks to Alicia, and especially to Alba, for the valuable clarifications during the discussion we had that day in the Igalia office. This post wouldn't have been possible without them.


30 Apr 2024 6:00am GMT

02 Feb 2024

feedPlanet Maemo

libSDL2 and VVVVVV for the Wii

Just a quick update on something that I've been working on in my free time.

I recently refurbished my old Nintendo Wii, and for some reason I cannot yet explain (not even to myself) I got into homebrew programming and decided to port libSDL (the 2.x version -- a 1.x port already existed) to it. The result of this work is already available via the devkitPro distribution, and although I'm sure there are still many bugs, it's overall quite usable.

In order to prove it, I ported the game VVVVVV to the Wii:

During the process I had to fix quite a few bugs in my libSDL port and in a couple of other libraries used by VVVVVV, which will hopefully make it easier to port more games. There's still an issue that bothers me, where the screen resolution seems not to be fully supported by my TV (or is it the HDMI adaptor's fault?), resulting in a few pixels being cut off at the top and the bottom of the screen. But unless you are a perfectionist, it's a minor issue.

In case you have a Wii to spare, or wouldn't mind playing on the Dolphin emulator, here's the link to the VVVVVV release. Have fun! :-)


02 Feb 2024 5:50pm GMT

18 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized, open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today, there was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed specifically for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb, whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The equipment deployed in this installation currently contains:

For more details, see this wiki page and this ticket

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.

Acknowledgements

I'd like to thank everyone helping this effort, specifically:

  • Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
  • noris.net for sponsoring the co-location
  • sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, along with several sub-pages.

In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 ANs with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than are possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator is used as the transmit bit clock on all of the other ports, no matter on which card.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that the downstream ports' transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock, irrespective of whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
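
To put that drift in perspective (rough arithmetic): at 2.048 Mbit/s one E1 bit period is about 488 ns, so a 12.5 ppb drift accumulates a full bit of error roughly every 39 seconds, and a whole 125 µs frame about every 2.8 hours.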

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT