20 May 2025

Android Developers Blog

16 things to know for Android developers at Google I/O 2025

Posted by Matthew McCullough - VP of Product Management, Android Developer

Today at Google I/O, we announced the many ways we're helping you build excellent, adaptive experiences, and helping you stay more productive through updates to our tooling that put AI at your fingertips and throughout your development lifecycle. Here's a recap of 16 of our favorite announcements for Android developers; you can also see what was announced last week in The Android Show: I/O Edition. And stay tuned over the next two days as we dive into all of the topics in more detail!

Building AI into your Apps

1: Building intelligent apps with Generative AI

Generative AI makes apps intelligent, personalized, and agentic. This year, we announced new ML Kit GenAI APIs that use Gemini Nano for common on-device tasks like summarization, proofreading, rewriting, and image description. We also gave developers access to more powerful models such as Gemini Pro, Gemini Flash, and Imagen via Firebase AI Logic for more complex use cases like image generation and processing extensive data across modalities, including bringing AI to life in Android XR. A new AI sample app, Androidify, showcases how these APIs can transform your selfies into unique Android robots! To start building intelligent experiences with these new capabilities, explore the developer documentation and sample apps, and watch the overview session to choose the right solution for your app.
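To give a sense of the developer surface, here's a minimal Kotlin sketch of calling a hosted Gemini model through Firebase AI Logic. The package, entry point, and model names below are assumptions based on the existing Firebase generative AI SDKs, so check the Firebase AI Logic documentation for the current API.

import com.google.firebase.Firebase
import com.google.firebase.ai.ai                      // assumed Firebase AI Logic entry point
import com.google.firebase.ai.type.GenerativeBackend  // assumed location of the backend selector

suspend fun summarizeReview(review: String): String? {
    // Obtain a GenerativeModel backed by a hosted Gemini model (model name is an assumption).
    val model = Firebase.ai(backend = GenerativeBackend.googleAI())
        .generativeModel("gemini-2.5-flash")

    // generateContent is a suspend call; the response exposes the generated text.
    val response = model.generateContent("Summarize this review in one sentence: $review")
    return response.text
}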

New experiences across devices

2: One app, every screen: think adaptive and unlock 500 million screens

Mobile Android apps form the foundation across phones, foldables, tablets and ChromeOS, and this year we're helping you bring them to cars and XR and expanding their reach with desktop windowing and connected displays. This expansion means tapping into an ecosystem of 500 million devices - a significant opportunity to engage more users when you think adaptive, building a single mobile app that works across form factors. Resources, including Compose Layouts library and Jetpack Navigation updates, make building these dynamic experiences easier than before. You can see how Peacock, NBCUniversal's streaming service (available in the US), is building adaptively to meet users where they are.

Disclaimer: Peacock is available in the US only. This video will only be viewable to US viewers.
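To make "think adaptive" concrete, here's a small Compose sketch that chooses a layout from the window size class rather than the device type; ListDetailLayout and ListOnlyLayout are hypothetical stand-ins for your own UI.

import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.compose.runtime.Composable
import androidx.window.core.layout.WindowWidthSizeClass

@Composable
fun HomeScreen() {
    // The size class reflects the current window, not the device, so it also covers
    // desktop windowing, connected displays, and unfolded foldables.
    val widthClass = currentWindowAdaptiveInfo().windowSizeClass.windowWidthSizeClass

    if (widthClass == WindowWidthSizeClass.EXPANDED) {
        ListDetailLayout()   // two-pane layout for large windows
    } else {
        ListOnlyLayout()     // single-pane layout for compact windows
    }
}

@Composable fun ListDetailLayout() { /* your two-pane UI */ }
@Composable fun ListOnlyLayout() { /* your single-pane UI */ }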


3: Material 3 Expressive: design for intuition and emotion

The new Material 3 Expressive update provides tools to enhance your product's appeal by harnessing emotional UX, making it more engaging, intuitive, and desirable for users. Check out the I/O talk to learn more about expressive design and how it inspires emotion, clearly guides users toward their goals, and offers a flexible and personalized experience.

moving image of Material 3 Expressive demo


4: Smarter widgets, engaging live updates

Measure the return on investment of your widgets (available soon) and easily create personalized widget previews with Glance 1.2. Promoted Live Updates notify users of important ongoing notifications and come with a new Progress Style standardized template.



5: Enhanced Camera & Media: low light boost and battery savings

This year's I/O introduces several camera and media enhancements. These include a software low light boost for improved photography in dim lighting and native PCM offload, allowing the DSP to handle more audio playback processing, thus conserving user battery. Explore our detailed sessions on built-in effects within CameraX and Media3 for further information.

6: Build next-gen app experiences for Cars

We're launching expanded opportunities for developers to build in-car experiences, including new Gemini integrations, support for more app categories like Games and Video, and enhanced capabilities for media and communication apps via the Car App Library and new APIs. Alongside updated car app quality tiers and simplified distribution, we'll soon be providing improved testing tools like Android Automotive OS on Pixel Tablet and Firebase Test Lab access to help you bring your innovative apps to cars. Learn more from our technical session and blog post on new in-car app experiences.

7: Build for Android XR's expanding ecosystem with Developer Preview 2 of the SDK

We announced Android XR in December, and today at Google I/O we shared a number of updates coming to the platform, including Developer Preview 2 of the Android XR SDK and an expanding ecosystem of devices: in addition to the first Android XR headset, Samsung's Project Moohan, you'll also see more devices, including a new portable Android XR device from our partners at XREAL. There's lots more to cover for Android XR: watch the Compose and AI on Android XR session and the Building differentiated apps for Android XR with 3D content session, and learn more about building for Android XR.

product image of XREAL’s Project Aura against a nebulous black background
XREAL's Project Aura


8: Express yourself on Wear OS: meet Material Expressive on Wear OS 6

This year we are launching Wear OS 6: the most powerful and expressive version of Wear OS. Wear OS 6 features Material 3 Expressive, a new UI design with personalized visuals and motion for user creativity, coming to Wear, Android, and Google apps later this year. Developers gain access to Material 3 Expressive on Wear OS through new Jetpack libraries: Wear Compose Material 3, which provides components for apps, and Wear ProtoLayout Material 3, which provides components and layouts for tiles. Get started with the Material 3 libraries and other updates on Wear.

moving image displays examples of Material 3 Expressive on Wear OS experiences
Some examples of Material 3 Expressive on Wear OS experiences


9: Engage users on Google TV with excellent TV apps

You can leverage more resources within Compose's core and Material libraries with the stable release of Compose for TV, empowering you to build excellent adaptive UIs across your apps. We're also thrilled to share exciting platform updates and developer tools designed to boost app engagement, including bringing Gemini capabilities to TV in the fall, opening enrollment for our Video Discovery API, and more.

Developer productivity

10: Build beautiful apps faster with Jetpack Compose

Compose is our big bet for UI development. The latest stable BOM release provides the features, performance, stability, and libraries that you need to build beautiful adaptive apps faster, so you can focus on what makes your app valuable to users.

moving image of compose adaptive layouts updates in the Google Play app
Compose Adaptive Layouts Updates in the Google Play app


11: Kotlin Multiplatform: new Shared Template lets you build across platforms, easily

Kotlin Multiplatform (KMP) enables teams to reach new audiences across Android and iOS with less development time. We've released a new Android Studio KMP shared module template, updated Jetpack libraries and new codelabs (Getting started with Kotlin Multiplatform and Migrating your Room database to KMP) to help developers who are looking to get started with KMP. Shared module templates make it easier for developers to craft, maintain, and own the business logic. Read more on what's new in Android's Kotlin Multiplatform.
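To illustrate the kind of business logic a shared module holds, here's a small, generic expect/actual sketch (not taken from the new template itself): the common source set declares a platform hook, and each platform supplies its own implementation.

// commonMain - shared business logic used by both the Android and iOS apps
expect fun platformName(): String

class GreetingRepository {
    fun greeting(): String = "Hello from ${platformName()}!"
}

// androidMain - Android-specific implementation
actual fun platformName(): String = "Android"

// iosMain - iOS-specific implementation
actual fun platformName(): String = "iOS"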

12: Gemini in Android Studio: AI Agents to help you work

Gemini in Android Studio is the AI-powered coding companion that makes Android developers more productive at every stage of the dev lifecycle. In March, we introduced Image to Code to bridge the gap between UX teams and software engineers by intelligently converting design mockups into working Compose UI code. And today, we previewed new agentic AI experiences, Journeys for Android Studio and Version Upgrade Agent. These innovations make it easier to build and test code. You can read more about these updates in What's new in Android development tools.

13: Android Studio: smarter with Gemini

In this latest release, we're empowering devs with AI-driven tools like Gemini in Android Studio, streamlining UI creation, making testing easier, and ensuring apps are future-proofed in our ever-evolving Android ecosystem. These innovations accelerate development cycles, improve app quality, and help you stay ahead in a dynamic mobile landscape. To take advantage, upgrade to the latest Studio release. You can read more about these innovations in What's new in Android development tools.

moving image of Gemini in Android Studio Agentic Experiences including Journeys and Version Upgrade


And the latest on driving business growth

14: What's new in Google Play

Get ready for exciting updates from Play designed to boost your discovery, engagement and revenue! Learn how we're continuing to become a content-rich destination with enhanced personalization and fresh ways to showcase your apps and content. Plus, explore powerful new subscription features designed to streamline checkout and reduce churn. Read I/O 2025: What's new in Google Play to learn more.

a moving image of three mobile devices displaying how content is displayed on the Play Store


15: Start migrating to Play Games Services v2 today

Play Games Services (PGS) connects over 2 billion gamer profiles on Play, powering cross-device gameplay, personalized gaming content and rewards for your players throughout the gaming journey. We are moving PGS v1 features to v2 with more advanced features and an easier integration path. Learn more about the migration timeline and new features.

16: And of course, Android 16

We unpacked some of the latest features coming to users in Android 16, which we've been previewing with you for the last few months. If you haven't already, make sure to test your apps with the latest Beta of Android 16. Android 16 includes Live Updates, professional media and camera features, desktop windowing and connected displays, major accessibility enhancements and much more.

Check out all of the Android and Play content at Google I/O

This was just a preview of some of the cool updates for Android developers at Google I/O, but stay tuned to Google I/O over the next two days as we dive into a range of Android developer topics in more detail. You can check out the What's New in Android and the full Android track of sessions, and whether you're joining in person or around the world, we can't wait to engage with you!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


20 May 2025 6:03pm GMT

What’s new in Wear OS 6

Posted by Chiara Chiappini - Developer Relations Engineer

This year, we're excited to introduce Wear OS 6: the most power-efficient and expressive version of Wear OS yet.

Wear OS 6 introduces the new design system we call Material 3 Expressive. It features a major refresh with visual and motion components designed to give users an experience with more personalization. The new design offers a great level of expression to meet user demand for experiences that are modern, relevant, and distinct. Material 3 Expressive is coming to Wear OS, Android, and all your favorite Google apps on these devices later this year.

The good news is that you don't need to compromise battery for beauty: thanks to Wear OS platform optimizations, watches updating from Wear OS 5 to Wear OS 6 can see up to 10% improvement in battery life.1

Wear OS 6 developer preview

Today we're releasing the Developer Preview of Wear OS 6, the next version of Google's smartwatch platform, based on Android 16.

Wear OS 6 brings a number of developer-facing changes, such as refining the always-on display experience. Check out what's changed and try the new Wear OS 6 emulator to test your app for compatibility with the new platform version.

Material 3 Expressive on Wear OS

moving image displays examples of Material 3 Expressive on Wear OS experiences
Some examples of Material 3 Expressive on Wear OS experiences


Material 3 Expressive for the watch is fully optimized for the round display. We recommend developers embrace the new design system in their apps and tiles. To help you adopt Material 3 Expressive in your app, we have begun releasing new design guidance for Wear OS, along with corresponding Figma design kits.

As a developer, you can get access to Material 3 Expressive on Wear OS using two new Jetpack libraries:

  • Wear Compose Material 3, which provides components for apps.
  • Wear ProtoLayout Material 3, which provides components and layouts for tiles.

These two libraries provide implementations for the components catalog that adhere to the Material 3 Expressive design language.
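If you want to start experimenting, a minimal sketch of the Gradle setup for your watch module could look like this; the versions below are placeholders, so check the release notes for the current ones.

// build.gradle.kts (watch app module) - versions are placeholders
dependencies {
    // Material 3 Expressive components for Wear OS apps
    implementation("androidx.wear.compose:compose-material3:1.5.0-beta01")
    // Material 3 Expressive components and layouts for tiles
    implementation("androidx.wear.protolayout:protolayout-material3:1.3.0-beta01")
}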

Make it personal with richer color schemes using themes

moving image showing how dynamic color theme updates colors of apps and Tiles
Dynamic color theme updates colors of apps and Tiles


The Wear Compose Material 3 and Wear Protolayout Material 3 libraries provide updated and extended color schemes, typography, and shapes to bring both depth and variety to your designs. Additionally, your tiles now align with the system font by default (on Wear OS 6+ devices), offering a more cohesive experience on the watch.

Both libraries introduce dynamic color theming, which automatically generates a color theme for your app or tile to match the colors of the watch face of Pixel watches.
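In an app, opting in could look roughly like the sketch below. It assumes a dynamicColorScheme(context) helper in Wear Compose Material 3 that returns null when dynamic colors aren't available; treat the exact name and signature as an assumption and check the API reference.

import android.content.Context
import androidx.compose.runtime.Composable
import androidx.compose.ui.platform.LocalContext
import androidx.wear.compose.material3.MaterialTheme
import androidx.wear.compose.material3.dynamicColorScheme  // assumed helper

@Composable
fun MyAppTheme(content: @Composable () -> Unit) {
    val context: Context = LocalContext.current
    // Fall back to the default color scheme when dynamic colors aren't supported.
    val colors = dynamicColorScheme(context) ?: MaterialTheme.colorScheme
    MaterialTheme(colorScheme = colors, content = content)
}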

Make it more glanceable with new tile components

Tiles now support a new framework and a set of components that embrace the watch's circular form factor. These components make tiles more consistent and glanceable, so users can more easily take swift action on the information included in them.

We've introduced a 3-slot tile layout to improve visual consistency in the Tiles carousel. This layout includes a title slot, a main content slot, and a bottom slot, designed to work across a range of different screen sizes:

moving image showing some examples of Tiles with the 3-slot tile layout
Some examples of Tiles with the 3-slot tile layout.


Highlight user actions and key information with components optimized for round screen

The new Wear OS Material 3 components automatically adapt to larger screen sizes, building on the Large Display support added as part of Wear OS 5. Additionally, components such as Buttons and Lists support shape morphing in apps.

The following sections highlight some of the most exciting changes to these components.

Embrace the round screen with the Edge Hugging Button

We introduced a new EdgeButton for apps and tiles with an iconic design pattern that maximizes the space within the circular form factor, hugs the edge of the screen, and comes in 4 standard sizes.

moving image of a screenshot representing an EdgeButton in a scrollable screen.
Screenshot representing an EdgeButton in a scrollable screen.
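In code, using it can be as simple as the following sketch (optional size and color parameters omitted):

import androidx.compose.runtime.Composable
import androidx.wear.compose.material3.EdgeButton
import androidx.wear.compose.material3.Text

@Composable
fun ConfirmButton(onConfirm: () -> Unit) {
    // EdgeButton hugs the bottom edge of the round display; it is typically placed
    // below a scrolling list, for example in the bottom slot of a ScreenScaffold.
    EdgeButton(onClick = onConfirm) {
        Text("Confirm")
    }
}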


Fluid navigation through lists using new indicators

The new TransformingLazyColumn from the Foundation library makes expressive motion easy, with content that fluidly traces the edges of the display. Developers can customize the collapsing behavior of the list when scrolling to the top and bottom of the screen. For example, components like Cards can scale down as they get closer to the top of the screen.

moving image showing a TransformingLazyColumn with content that collapses and changes in size when approaching the edge of the screen
TransformingLazyColumn allows content to collapse and change in size when approaching the edge of the screen
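A minimal list built this way might look like the sketch below; it assumes the usual pairing of a TransformingLazyColumnState with a ScreenScaffold, so double-check the parameter names against the current API.

import androidx.compose.runtime.Composable
import androidx.wear.compose.foundation.lazy.TransformingLazyColumn
import androidx.wear.compose.foundation.lazy.rememberTransformingLazyColumnState
import androidx.wear.compose.material3.ScreenScaffold
import androidx.wear.compose.material3.Text

@Composable
fun ContactList(names: List<String>) {
    val listState = rememberTransformingLazyColumnState()
    // ScreenScaffold wires the list state up to the ScrollIndicator shown by default.
    ScreenScaffold(scrollState = listState) {
        TransformingLazyColumn(state = listState) {
            items(count = names.size) { index ->
                Text(names[index])
            }
        }
    }
}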


Material 3 Expressive also includes a ScrollIndicator that features a new visual and motion design to make it easier for users to visualize their progress through a list. The ScrollIndicator is displayed by default when you use a TransformingLazyColumn and ScreenScaffold.

moving image showing side by side examples of ScrollIndicator in action
ScrollIndicator


Lastly, you can now use segments with the new ProgressIndicator, which is available as a full-screen component for apps and as a small-size component for both apps and tiles.

moving image showing a full-screen ProgressIndicator
Example of a full-screen ProgressIndicator


To learn more about the new features and see the full list of updates, see the release notes of the latest beta release of the Wear Compose and Wear Protolayout libraries. Check out the migration guidance for apps and tiles on how to upgrade your existing apps, or try one of our codelabs if you want to start developing using Material 3 Expressive design.

Watch Faces

With Wear OS 6 we are launching updates for watch face developers:

  • New options for customizing the appearance of your watch face using version 4 of Watch Face Format, such as animated state transitions from ambient to interactive and photo watch faces.
  • A new API for building watch face marketplaces.

Learn more about what's new in Watch Face updates.

Look for more information about the general availability of Wear OS 6 later this year.

Library updates

ProtoLayout

Since our last major release, we've improved the capabilities and the developer experience of the Tiles and ProtoLayout libraries to address feedback we received from developers.

The example below shows how to display a layout with text on a Tile using these new enhancements:

// returns a LayoutElement for use in onTileRequest()
materialScope(context, requestParams.deviceConfiguration) {
    primaryLayout(
        mainSlot = {
            text(
                text = "Hello, World!".layoutString,
                typography = BODY_LARGE,
            )
        }
    )
}


For more information, see the migration instructions.

Credential Manager for Wear OS

The CredentialManager API is now available on Wear OS, starting with Google Pixel Watch devices running Wear OS 5.1. It introduces passkeys to Wear OS with a platform-standard authentication UI that is consistent with the experience on mobile.

The Credential Manager Jetpack library provides developers with a unified API that simplifies and centralizes their authentication implementation. Developers with an existing implementation on another form factor can use the same CredentialManager code, and most of the same supporting code to fulfill their Wear OS authentication workflow.

Credential Manager provides integration points for passkeys, passwords, and Sign in With Google, while also allowing you to keep your other authentication solutions as backups.
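For example, a sign-in call that already works on phones carries over to the watch unchanged. Here's a minimal sketch using the Jetpack Credential Manager API; the request JSON comes from your own server and is only a placeholder here.

import android.content.Context
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetPasswordOption
import androidx.credentials.GetPublicKeyCredentialOption
import androidx.credentials.exceptions.GetCredentialException

suspend fun signIn(context: Context, requestJson: String) {
    val credentialManager = CredentialManager.create(context)
    val request = GetCredentialRequest(
        credentialOptions = listOf(
            GetPublicKeyCredentialOption(requestJson = requestJson), // passkeys
            GetPasswordOption()                                      // saved passwords
        )
    )
    try {
        val result = credentialManager.getCredential(context, request)
        // Inspect result.credential (e.g. PublicKeyCredential or PasswordCredential)
        // and complete authentication against your backend.
    } catch (e: GetCredentialException) {
        // Fall back to your other authentication methods.
    }
}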

Users will benefit from a consistent, platform-standard authentication UI, the introduction of passkeys and other passwordless authentication methods, and the ability to authenticate without their phone nearby.

Check out the Authentication on Wear OS guidance to learn more.

Richer Wear Media Controls

New media controls for a Podcast
New media controls for a Podcast


Devices that run Wear OS 5.1 or later support enhanced media controls. Users who listen to media content on phones and watches can now benefit from the following new media control features on their watch:

  • They can fast-forward and rewind while listening to podcasts.
  • They can access the playlist and controls such as shuffle, like, and repeat through a new menu.

Developers with an existing implementation of action buttons and playlists can benefit from this feature without additional effort. Check out how users will get more controls from your media app on a Google Pixel Watch device.

Start building for Wear OS 6 now

With these updates, there's never been a better time to develop an app on Wear OS. The Wear OS developer documentation and codelabs are a great place to learn how to get started.

Earlier this year, we expanded our smartwatch offerings with Galaxy Watch for Kids, a unique, phone-free experience designed specifically for children. This launch gives families a new way to stay connected, allowing children to explore Wear OS independently with a dedicated smartwatch. Consult our developer guidance to create a Wear OS app for kids.

We're looking forward to seeing the experiences that you build on Wear OS!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


1 Actual battery performance varies.

20 May 2025 6:02pm GMT

What’s new in Watch Faces

Posted by Garan Jenkin - Developer Relations Engineer

Wear OS has a thriving watch face ecosystem featuring a variety of designs that also aims to minimize battery impact. Developers have embraced the simplicity of creating watch faces using Watch Face Format - in the last year, the number of published watch faces using Watch Face Format has grown by over 180%*.

Today, we're continuing our investment and announcing version 4 of the Watch Face Format, available as part of Wear OS 6. These updates allow developers to express even greater levels of creativity through the new features we've added. And we're supporting marketplaces, which gives flexibility and control to developers and more choice for users.

In this blog post we'll cover the key new features; check out the documentation for more details on the changes introduced in recent versions.

Supporting marketplaces with Watch Face Push

We're also announcing a completely new API, the Watch Face Push API, aimed at developers who want to create their own watch face marketplaces.

Watch Face Push, available on devices running Wear OS 6 and above, works exclusively with watch faces built using the Watch Face Format.

We've partnered with well-known watch face developers - including Facer, TIMEFLIK, WatchMaker, Pujie, and Recreative - in designing this new API. We're excited that all of these developers will be bringing their unique watch face experiences to Wear OS 6 using Watch Face Push.

Three mobile devices representing watch face marketplace apps for watches running Wear OS 6
From left to right, Facer, Recreative and TIMEFLIK watch faces have been developing marketplace apps to work with watches running Wear OS 6.


Watch faces managed and deployed using Watch Face Push are all written using Watch Face Format. Developers publish these watch faces in the same way as publishing through Google Play, though there are some additional checks the developer must make which are described in the Watch Face Push guidance.

A flow diagram demonstrating the flow of information from Cloud-based storage to the user's phone where the app is installed, then transferred to be installed on a wearable device using the Wear OS App via the Watch Face Push API


The Watch Face Push API covers only the watch part of this typical marketplace system diagram - as the app developer, you have control and responsibility for the phone app and cloud components, as well as for building the Wear OS app using Watch Face Push. You're also in control of the phone-watch communications, for which we recommend using the Data Layer APIs.
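As an illustration of the phone-to-watch leg, here's a hypothetical sketch that pushes a downloaded watch face package to the watch as a Data Layer asset; the path and key names are placeholders, and on the watch side you would read the asset back and hand its file descriptor to addWatchFace().

import android.content.Context
import com.google.android.gms.wearable.Asset
import com.google.android.gms.wearable.PutDataMapRequest
import com.google.android.gms.wearable.Wearable
import kotlinx.coroutines.tasks.await

// Phone side: publish the watch face bytes so the Wear OS app can pick them up.
suspend fun sendWatchFaceToWatch(context: Context, watchFaceApk: ByteArray) {
    val request = PutDataMapRequest.create("/watchface/pending").apply {
        dataMap.putAsset("watchface_apk", Asset.createFromBytes(watchFaceApk))
        dataMap.putLong("updated_at", System.currentTimeMillis())
    }.asPutDataRequest().setUrgent()

    Wearable.getDataClient(context).putDataItem(request).await()
}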

Adding Watch Face Push to your project

To start using Watch Face Push on Wear OS 6, include the following dependency in your Wear OS app:

// Ensure latest version is used by checking the repository
implementation("androidx.wear.watchface:watchface-push:1.3.0-alpha07")


Declare the necessary permission in your AndroidManifest.xml:

<uses-permission android:name="com.google.wear.permission.PUSH_WATCH_FACES" />


Obtain a Watch Face Push client:

val manager = WatchFacePushManagerFactory.createWatchFacePushManager(context)


You're now ready to start using the Watch Face Push API, for example to list the watch faces you have already installed, or add a new watch face:

// List existing watch faces, installed by this app
val listResponse = manager.listWatchFaces()

// Add a watch face
manager.addWatchFace(watchFaceFileDescriptor, validationToken)


Understanding Watch Face Push

While the basics of the Watch Face Push API are easy to understand and access through the WatchFacePushManager interface, it's important to consider several other factors when working with the API in practice to build an effective marketplace app, including:

  • Setting active watch faces - Through an additional permission, the app can set the active watch face. Learn about how to integrate this feature, as well as how to handle the different permission scenarios.

To learn more about using Watch Face Push, see the guidance and reference documentation.

Updates to Watch Face Format

Photos

Available from Watch Face Format v4

The new Photos element allows the watch face to contain user-selectable photos. The element supports both individual photos and a gallery of photos. For a gallery of photos, developers can choose whether the photos advance automatically or when the user taps the watch face.

a wearable device and small screen mobile device side by side demonstrating how a user may configure photos for the watch face through the Companion app on the mobile device
Configuring photos through the watch Companion app


The user is able to select the photos of their choice through the companion app, making this a great way to include true personalization in your watch face. To use this feature, first add the necessary configuration:

<UserConfigurations>
  <PhotosConfiguration id="myPhoto" configType="SINGLE"/>
</UserConfigurations>


Then use the Photos element within any PartImage, in the same way as you would for an Image element:

<PartImage ...>
  <Photos source="[CONFIGURATION.myPhoto]"
          defaultImageResource="placeholder_photo"/>
</PartImage>


For details on how to support multiple photos, and how to configure the different change behaviors, refer to the Photos section of the guidance and reference, as well as the GitHub samples.

Transitions

Available from Watch Face Format v4

Watch Face Format now supports transitions when exiting and entering ambient mode.

moving image demonstrating an overshoot effect adjusting the time on a watch face to reveal the seconds digit
State transition animation: Example using an overshoot effect in revealing the seconds digits


This is achieved through the existing Variant tag. For example, the hours and minutes in the above watch face are animated as follows:

<DigitalClock ...>
  <Variant mode="AMBIENT" target="x" value="100" interpolation="OVERSHOOT" />

   <!-- Rest of "hh:mm" clock definition here -->
</DigitalClock>

By default, the animation takes the full extent of allowed time for the transition. The new interpolation attribute controls the animation effect - in this case the use of OVERSHOOT adds a playful experience.

The seconds are implemented in a separate DigitalClock element, which shows the use of the new duration attribute:

<DigitalClock ...>
  <Variant mode="AMBIENT" target="alpha" value="0" duration="0.5"/>
   <!-- Rest of "ss" clock definition here -->
</DigitalClock>


The duration attribute takes a value between 0.0 and 1.0, with 1.0 representing the full extent of the allowed time. In this example, by using a value of 0.5, the seconds animation is quicker - taking half the allowed time, in comparison to the hours and minutes, which take the entire transition period.

For more details on using transitions, see the guidance documentation, as well as the reference documentation for Variant.

Color Transforms

Available from Watch Face Format v4

We've extended the usefulness of the Transform element by allowing color to be transformed on the majority of elements where it is an attribute, and also allowing tintColor to be transformed on Group and Part* elements such as PartDraw and PartText.

The main exceptions to this addition are the clock elements, DigitalClock and AnalogClock, and also ComplicationSlot, which do not currently support Transform.

In addition to extending the list of transformable attributes to include colors, we've also added functions for manipulating color, including extractColorFromColors and extractColorFromWeightedColors, both of which are used in the example below.

To see these in action, let's consider an example.

The Weather data source provides the current UV index through [WEATHER.UV_INDEX]. When representing the UV index, these values are typically also assigned a color:

image showing the colors typically assigned to UV index values


We want to represent this information as an Arc, not only showing the value, but also using the appropriate color. We can achieve this as follows:

<Arc centerX="0" centerY="0" height="420" width="420"
  startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
  <Transform target="endAngle"
    value="165 - 40 * (clamp(11, 0.0, 11.0) / 11.0)" />
  <Stroke thickness="20" color="#ffffff" cap="ROUND">
    <Transform target="color"
      value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
  </Stroke>
</Arc>


Let's break this down:

  • The first Transform restricts the UV index to the range 0.0 to 11.0 and adjusts the sweep of the Arc according to that value.
  • The second Transform uses the new extractColorFromWeightedColors function.
    • The first argument is our list of colors
    • The second argument is a list of weights - you can see from the chart above that green covers 3 values, whereas orange only covers 2, so we use weights to represent this.
    • The third argument is whether or not to interpolate the color values. In this case we want to stick strictly to the color convention for UV index, so this is false.
    • Finally in the fourth argument we coerce the UV value into the range 0.0 to 1.0, which is used as an index into our weighted colors.

The result looks like this:

side by side quadrants of watch face examples showing using the new color functions in applying color transforms to a Stroke in an Arc
Using the new color functions in applying color transforms to a Stroke in an Arc.


As well as being able to provide raw colors and weights to these functions, they can also be used with values from complications, such as HR, temperature or steps goal. For example, to use the color range specified in a goal complication:

<Transform target="color"
    value="extractColorFromColors(
        [COMPLICATION.GOAL_PROGRESS_COLORS],
        [COMPLICATION.GOAL_PROGRESS_COLOR_INTERPOLATE],
        [COMPLICATION.GOAL_PROGRESS_VALUE] /    
            [COMPLICATION.GOAL_PROGRESS_TARGET_VALUE]
)"/>


Introducing the Reference element

Available from Watch Face Format v4

The new Reference element allows you to refer to any transformable attribute from one part of your watch face scene in other parts of the scene tree.

In our UV index example above, we'd also like the text labels to use the same color scheme.

We could perform the same color transform calculation as on our Arc, using [WEATHER.UV_INDEX], but this is duplicative work which could lead to inconsistencies, for example if we change the exact color hues in one place but not the other.

Returning to the Arc definition, let's create a Reference to the color:

<Arc centerX="0" centerY="0" height="420" width="420"
  startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
  <Transform target="endAngle"
    value="165 - 40 * (clamp(11, 0.0, 11.0) / 11.0)" />
  <Stroke thickness="20" color="#ffffff" cap="ROUND">
    <Reference source="color" name="uv_color" defaultValue="#ffffff" />
    <Transform target="color"
      value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
  </Stroke>
</Arc>


The color of the Arc is calculated from the relatively complex extractColorFromWeightedColors function. To avoid repeating this elsewhere in our watch face, we have added a Reference element, which takes as its source the Stroke color.

Let's now look at how we can consume this value in a PartText elsewhere in the watch face. We gave the Reference the name uv_color, so we can simply refer to this in any expression:

<PartText x="0" y="225" width="450" height="225">
  <TextCircular centerX="225" centerY="0" width="420" height="420"
    startAngle="120" endAngle="90"
    align="START" direction="COUNTER_CLOCKWISE">
    <Font family="SYNC_TO_DEVICE" size="24">
      <Transform target="color" value="[REFERENCE.uv_color]" />
      <Template>%d<Parameter expression="[WEATHER.UV_INDEX]" /></Template>
    </Font>
  </TextCircular>
</PartText>
<!-- Similar PartText here for the "UV:" label -->

As a result, the color of the Arc and the UV numeric value are now coordinated:

side by side quadrants of watch face examples showing Coordinating colors across elements using the Reference element
Coordinating colors across elements using the Reference element


For more details on how to use the Reference element, refer to the Reference guidance.

Text autosizing

Available from Watch Face Format v3

Sometimes the exact length of the text to be shown on the watch face can vary, and as a developer you want to display text that is both legible and complete.

Auto-sizing text can help solve this problem, and can be enabled through the isAutoSize attribute introduced to the Text element:

<Text align="CENTER" isAutoSize="true">


Having set this attribute, text will then automatically fit the available space, starting at the maximum size specified in your Font element, and with a minimum size of 12.

As an example, step count could range from tens or hundreds through to many thousands, and the new isAutoSize attribute enables best use of the available space for every possible value:

side by side examples of text sizing adjustments on a watch face using isAutoSize
Making the best use of the available text space through isAutoSize


For more details on isAutoSize, see the Text reference.

Android Studio support

For developers working in Android Studio, we've added support to make working with Watch Face Format easier, including:

  • Run configuration support
  • Auto-complete and resource reference
  • Lint checking

This is available from the Android Studio canary channel, starting with version 2025.1.1 Canary 10.

Learn More

To learn more about building watch faces, please take a look at the Watch Face Format guidance and reference documentation.

We've also recently launched a codelab for Watch Face Format and have updated samples on GitHub to showcase new features. The issue tracker is available for providing feedback.

We're excited to see the watch face experiences that you create and share!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


* Google Play data for period 2024-03-24 to 2025-03-23

20 May 2025 6:01pm GMT

TalkAndroid

The New Thinnest Foldable? Honor Magic V5 Will Arrive in June

Honor refuses to let any other company take its "thin and light" crown.

20 May 2025 5:00pm GMT

“So good it’s scary”: the Netflix series getting a flawless 10/10

The Four Seasons has taken Netflix by storm this May, captivating audiences with its perfect 10/10 rating. This…

20 May 2025 3:30pm GMT

Qualcomm’s Snapdragon 8 Elite 2 Now Has an Official Arrival Date

The Snapdragon 8 Elite will launch about a month ahead of last year's schedule.

20 May 2025 3:30pm GMT

16 Oct 2024

Planet Maemo

Adding buffering hysteresis to the WebKit GStreamer video player

The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.

WebKit GStreamer regular playback class diagram

The player private can have 3 buffering modes:

The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.

So, one of the first things I tried in order to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add that to the levels reported by GstQueue2. There's also a GstMultiqueue in the pipeline that can hold a significant amount of buffers, so I also asked it for its level. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it.

All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were considered part of MediaPlayerPrivateGStreamer before. They still had to be somehow linked to the player private but only accessible by the platform specific code of the quirks. A special HashMap attribute stores those quirks attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stored the concrete subclass to the caller did the trick.

Even with all those changes, undesirable swings in the buffering level kept happening. A careful analysis of the causes showed that the buffering level was being monitored from different places (at different moments), and sometimes the level was regarded as "enough" and, the moment right after, as "insufficient". This was because the buffering level threshold was a single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve: a logical level change to "full" only happens when the level goes above the high watermark, and a logical level change to "low" only when it goes below the low watermark.

For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().
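To make the idea concrete, here's a small, language-agnostic sketch of that hysteresis logic (written in Kotlin for brevity; the real implementation lives in WebKit's C++ code). The logical state only flips when a watermark is crossed, and the decision is taken in exactly one place per cycle.

class BufferingHysteresis(
    private val lowWatermark: Int = 20,   // percent; values are illustrative
    private val highWatermark: Int = 80,
) {
    var bufferingEnough = false
        private set

    // Called once per update cycle (the equivalent of updateBufferingStatus()).
    // Returns true if the logical state changed, so the caller (the equivalent of
    // updateStates()) reacts exactly once to each threshold crossing.
    fun update(levelPercent: Int): Boolean {
        val previous = bufferingEnough
        if (!bufferingEnough && levelPercent >= highWatermark) bufferingEnough = true
        else if (bufferingEnough && levelPercent <= lowWatermark) bufferingEnough = false
        return previous != bufferingEnough
    }
}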

So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, so now WebKit GStreamer has buffering code that is much more robust than before. The instabilities observed on Broadcom devices were gone and I could, at last, close Issue 1309.


16 Oct 2024 6:12am GMT

10 Sep 2024

Planet Maemo

Don’t shoot yourself in the foot with the C++ move constructor

Move semantics can be very useful to transfer ownership of resources, but as many other C++ features, it's one more double edge sword that can harm yourself in new and interesting ways if you don't read the small print.

For instance, if object moving involves super and subclasses, you have to keep an extra eye on what's actually happening. Consider the following classes A and B, where the latter inherits from the former:

#include <stdio.h>
#include <utility>

#define PF printf("%s %p\n", __PRETTY_FUNCTION__, this)

class A {
 public:
 A() { PF; }
 virtual ~A() { PF; }
 A(A&& other)
 {
  PF;
  std::swap(i, other.i);
 }

 int i = 0;
};

class B : public A {
 public:
 B() { PF; }
 virtual ~B() { PF; }
 B(B&& other)
 {
  PF;
  std::swap(i, other.i);
  std::swap(j, other.j);
 }

 int j = 0;
};

If your project is complex, it would be natural that your code involves abstractions, with part of the responsibility held by the superclass, and some other part by the subclass. Consider also that some of that code in the superclass involves move semantics, so a subclass object must be moved to become a superclass object, then perform some action, and then moved back to become the subclass again. That's a really bad idea!

Consider this usage of the classes defined before:

int main(int, char* argv[]) {
 printf("Creating B b1\n");
 B b1;
 b1.i = 1;
 b1.j = 2;
 printf("b1.i = %d\n", b1.i);
 printf("b1.j = %d\n", b1.j);
 printf("Moving (B)b1 to (A)a. Which move constructor will be used?\n");
 A a(std::move(b1));
 printf("a.i = %d\n", a.i);
 // This may be reading memory beyond the object boundaries, which may not be
 // obvious if you think that (A)a is sort of a (B)b1 in disguise, but it's not!
 printf("(B)a.j = %d\n", reinterpret_cast<B&>(a).j);
 printf("Moving (A)a to (B)b2. Which move constructor will be used?\n");
 B b2(reinterpret_cast<B&&>(std::move(a)));
 printf("b2.i = %d\n", b2.i);
 printf("b2.j = %d\n", b2.j);
 printf("^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place\n");
 printf("Destroying b2, a, b1\n");
 return 0;
}

If you've read the code, those printfs will have already given you some hints about the harsh truth: if you move a subclass object to become a superclass object, you're losing all the subclass specific data, because no matter if the original instance was one from a subclass, only the superclass move constructor will be used. And that's bad, very bad. This problem is called object slicing. It's specific to C++ and can also happen with copy constructors. See it with your own eyes:

Creating B b1
A::A() 0x7ffd544ca690
B::B() 0x7ffd544ca690
b1.i = 1
b1.j = 2
Moving (B)b1 to (A)a. Which move constructor will be used?
A::A(A&&) 0x7ffd544ca6a0
a.i = 1
(B)a.j = 0
Moving (A)a to (B)b2. Which move constructor will be used?
A::A() 0x7ffd544ca6b0
B::B(B&&) 0x7ffd544ca6b0
b2.i = 1
b2.j = 0
^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place
Destroying b2, a, b1
virtual B::~B() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6a0
virtual B::~B() 0x7ffd544ca690
virtual A::~A() 0x7ffd544ca690

Why can something that seems so obvious become such a problem, you may ask? Well, it depends on the context. It's not unusual for the codebase of a long lived project to have started using raw pointers for everything, then switched to references as a way to get rid of null pointer issues where possible, and finally switched to whole objects and copy/move semantics to get rid of pointer issues (references are just pointers in disguise after all, and there are ways to produce null and dangling references by mistake). But this last step of moving from references to copy/move semantics on whole objects comes with the small object slicing nuance explained in this post, and when the size of the project and all the other things you have to keep in mind steal your focus, it's easy to forget about this.

So, please remember: never use move semantics that convert your precious subclass instance to a superclass instance thinking that the subclass data will survive. You may regret it and inadvertently create difficult-to-debug problems.

Happy coding!


10 Sep 2024 7:58am GMT

17 Jun 2024

Planet Maemo

Incorporating 3D Gaussian Splats into the graphics pipeline

3D Gaussian splatting is the emerging rendering technique that is overtaking NeRFs. Since it is centered around point primitives, it is more compatible with traditional graphics pipelines that already support point rendering.

Gaussian splats essentially enhance the concept of point rendering by converting the point primitive into a 3D ellipsoid, which is then projected into 2D during the rendering process. This concept was initially described in 2002 [3], but the technique of extending Structure from Motion scans in this way was only detailed more recently [1].

In this post, I explore how to integrate Gaussian splats into the traditional graphics pipeline. This allows them to be used alongside triangle-based primitives and interact with them through the depth buffer for occlusion (see header image). This approach also simplifies deployment by eliminating the need for CUDA.

Storage

The original implementation uses .ply files as their checkpoint format, focusing on maintaining training-relevant data structures at the expense of storage efficiency, leading to increased file sizes.

For example, it stores the covariance as scaling and a rotation quaternion, necessitating reconstruction during rendering. A more efficient approach would be to leverage orthogonality, storing only the diagonal and upper triangular vectors, thereby eliminating reconstruction and reducing storage requirements.

Further analysis of the storage usage for each attribute shows that the spherical harmonics of orders 1-3 are the main contributors to the file size. However, according to the ablation study in the original publication [1], these harmonics only lead to a modest PSNR improvement of 0.5.

Therefore, the most straightforward way to decrease storage is by discarding the higher-order spherical harmonics. Additionally, the level 0 spherical harmonics can be converted into a diffuse color and merged with opacity to form a single RGBA value. These simple yet effective methods were implemented in one of the early WebGL implementations, resulting in the .splat format. As an added benefit, this format can be easily interpreted by viewers unaware of Gaussian splats as a simple colored point cloud:

Results using a non Gaussian-splat aware renderer

By directly storing the covariance as previously mentioned we can reduce the precision from float32 to float16, thereby halving the storage needed for that data. Furthermore, since most splats have limited spatial extents, we can also utilize float16 for position data, yielding additional storage savings.

With these changes, we achieve a storage requirement of 22 bytes per splat, in contrast to the 44 bytes needed by the .splat format and 236 bytes in the original implementation. Thus, we have attained a 10x reduction in storage compared to the original implementation simply by using more suitable data types.
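Assuming the color is stored as four 8-bit RGBA channels as in the .splat format, those 22 bytes break down as:

\underbrace{3 \times 2}_{\text{position (float16)}} + \underbrace{6 \times 2}_{\text{covariance (float16)}} + \underbrace{4 \times 1}_{\text{RGBA (uint8)}} = 22 \text{ bytes per splat}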

Blending

The image formation model presented in the original paper [1] is similar to the NeRF rendering, as it is compared to it. This involves casting a ray and observing its intersection with the splats, which leads to front-to-back blending. This is precisely the approach taken by the provided CUDA implementation.

Blending remains a component of the fixed-function unit within the graphics pipeline, which can be set up for front-to-back blending [2] by using the factors (one_minus_dest_alpha, one) and by multiplying color and alpha in the shader as color.rgb * color.a. This results in the following equation:

\begin{aligned}C_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} C_{src} &+ C_{dst}\\ \alpha_{dst} &= (1 - \alpha_{dst})\cdot\alpha_{src} &+ \alpha_{dst}\end{aligned}

However, this method requires the framebuffer alpha value to be zero before rendering the splats, which is not typically the case as any previous render pass could have written an arbitrary alpha value.

A simple solution is to switch to back-to-front sorting and use the standard alpha blending factors (src_alpha, one_minus_src_alpha) for the following blending equation:

C_{dst} = \alpha_{src} \cdot C_{src} + (1 - \alpha_{src}) \cdot C_{dst}

This allows us to regard Gaussian splats as a special type of particles that can be rendered together with other transparent elements within a scene.

References

  1. Kerbl, Bernhard, et al. "3d gaussian splatting for real-time radiance field rendering." ACM Transactions on Graphics 42.4 (2023): 1-14.
  2. Green, Simon. "Volumetric particle shadows." NVIDIA Developer Zone (2008).
  3. Zwicker, Matthias, et al. "EWA splatting." IEEE Transactions on Visualization and Computer Graphics 8.3 (2002): 223-238.


17 Jun 2024 1:28pm GMT

18 Sep 2022

Planet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks, called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today, there was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed entirely only for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~ 100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The equipment deployed in this installation currently contains:

For more details, see this wiki page and this ticket

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.

Acknowledgements

I'd like to thank everyone helping this effort, specifically:

  • Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
  • noris.net for sponsoring the co-location
  • sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

Planet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, along with sub-pages covering the individual cards.

In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that the downstream port transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock irrespective whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT