17 Feb 2025

feedTalkAndroid

King Legacy Codes

Find all the latest King Legacy codes right here!

17 Feb 2025 6:11am GMT

The Resistance Tycoon Codes

Here you will find all the latest The Resistance Tycoon Codes! Tap the link for more!

17 Feb 2025 6:10am GMT

Base Battles Codes 

Here you will find all of the latest Base Battles Codes for Roblox! Read on for more!

17 Feb 2025 6:09am GMT

13 Feb 2025

feedAndroid Developers Blog

The Second Beta of Android 16

Posted by Matthew McCullough - VP of Product Management, Android Developer


Today we're releasing the second beta of Android 16, continuing our work to build a platform that enables creative expression. You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air.

This build adds new support for professional camera experiences and graphical effects, extends our performance framework, and continues the evolution of features related to privacy, security, and background tasks. We're looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that works for everyone.

Media and camera updates

Android 16 enhances support for professional camera users, allowing for hybrid auto exposure along with precise color temperature and tint adjustments. It's easier than ever to capture motion photos with new Intent actions, and we're continuing to improve UltraHDR images, with support for HEIC encoding and new parameters from the ISO 21496-1 draft standard.

Hybrid auto-exposure

Android 16 adds new hybrid auto-exposure modes to Camera2, allowing you to manually control specific aspects of exposure while letting the auto-exposure (AE) algorithm handle the rest. You can pair a manual ISO with AE, or a manual exposure time with AE, giving you greater flexibility than the previous all-or-nothing choice between full manual control and full auto-exposure.

fun setISOPriority() {
    // ...

    val availablePriorityModes = mStaticInfo.characteristics.get(
        CameraCharacteristics.CONTROL_AE_AVAILABLE_PRIORITY_MODES
    )
    // ...

    // Turn on AE mode to set priority mode
    reqBuilder[CaptureRequest.CONTROL_AE_MODE] = CameraMetadata.CONTROL_AE_MODE_ON
    reqBuilder[CaptureRequest.CONTROL_AE_PRIORITY_MODE] = CameraMetadata.CONTROL_AE_PRIORITY_MODE_SENSOR_SENSITIVITY_PRIORITY
    reqBuilder[CaptureRequest.SENSOR_SENSITIVITY] = TEST_SENSITIVITY_VALUE
    val request: CaptureRequest = reqBuilder.build()

    // ...
}

Precise color temperature and tint adjustments

Android 16 adds camera support for fine color temperature and tint adjustments to better support professional video recording applications. White balance settings are currently controlled through CONTROL_AWB_MODE, whose options are limited to a preset list such as Incandescent, Cloudy, and Twilight. The new COLOR_CORRECTION_MODE_CCT enables the use of COLOR_CORRECTION_COLOR_TEMPERATURE and COLOR_CORRECTION_COLOR_TINT for precise white balance adjustments based on the correlated color temperature.

fun setCCT() {
    // ... (Your existing code before this point) ...

    val colorTemperatureRange: Range<Int> =
        mStaticInfo.characteristics[CameraCharacteristics.COLOR_CORRECTION_COLOR_TEMPERATURE_RANGE]

    // Set to manual mode to enable CCT mode
    reqBuilder[CaptureRequest.CONTROL_AWB_MODE] = CameraMetadata.CONTROL_AWB_MODE_OFF
    reqBuilder[CaptureRequest.COLOR_CORRECTION_MODE] = CameraMetadata.COLOR_CORRECTION_MODE_CCT
    reqBuilder[CaptureRequest.COLOR_CORRECTION_COLOR_TEMPERATURE] = 5000
    reqBuilder[CaptureRequest.COLOR_CORRECTION_COLOR_TINT] = 30

    val request: CaptureRequest = reqBuilder.build()

    // ... (Your existing code after this point) ...
}
Five photos of the back of a Google Pixel phone demonstrate different color temperatures and tints. The original photo is in the top left, followed by Tint -50, Tint +50, Temp 3000, and Temp 7000.


Motion photo capture intent actions

Android 16 adds standard Intent actions - ACTION_MOTION_PHOTO_CAPTURE and ACTION_MOTION_PHOTO_CAPTURE_SECURE - that request that the camera application capture a motion photo and return it.

Moving image of a diverse group of friends playing a game of horseshoe


You must pass an EXTRA_OUTPUT extra or a Uri through Intent setClipData to control where the image will be written. If you pass EXTRA_OUTPUT without setting a ClipData, the Uri is copied into one for you when you call Context.startActivity.
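
A minimal sketch of dispatching such a capture request is below. Hedged: the action constants are named in this post, but their exact home (assumed here to be MediaStore) and the result contract should be confirmed against the Android 16 API reference.

import android.app.Activity
import android.content.ClipData
import android.content.Intent
import android.net.Uri
import android.provider.MediaStore

// Hedged sketch: request a motion photo capture and grant the camera app
// write access to the target Uri. That ACTION_MOTION_PHOTO_CAPTURE lives on
// MediaStore is an assumption for illustration.
fun requestMotionPhoto(activity: Activity, outputUri: Uri) {
    val intent = Intent(MediaStore.ACTION_MOTION_PHOTO_CAPTURE).apply {
        putExtra(MediaStore.EXTRA_OUTPUT, outputUri)
        // Explicit ClipData; if omitted, EXTRA_OUTPUT is copied into one
        // for you during Context.startActivity, as noted above.
        clipData = ClipData.newRawUri("output", outputUri)
        addFlags(Intent.FLAG_GRANT_WRITE_URI_PERMISSION)
    }
    activity.startActivityForResult(intent, /* requestCode = */ 1)
}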

UltraHDR image enhancements

A split-screen image compares Standard Dynamic Range (SDR) and High Dynamic Range (HDR) image quality side-by-side using a singular image of a detailed landscape. The HDR side is more vivid and vibrant.

Android 16 continues our work to deliver dazzling image quality with UltraHDR images. It adds support for UltraHDR images in the HEIC file format. These images will get ImageFormat type HEIC_ULTRAHDR and will contain an embedded gainmap similar to the existing UltraHDR JPEG format. We're working on AVIF support for UltraHDR as well, so stay tuned.

In addition, Android 16 implements additional parameters in UltraHDR from the ISO 21496-1 draft standard, including the ability to get and set the colorspace that gainmap math should be applied in, as well as support for HDR encoded base images with SDR gainmaps.

Custom graphical effects with AGSL

Android 16 adds RuntimeColorFilter and RuntimeXfermode, allowing you to author complex effects like Threshold, Sepia, and Hue Saturation and apply them to draw calls. Since Android 13, you've been able to use AGSL to create custom RuntimeShaders that extend Shaders. The new API mirrors this, adding an AGSL-powered RuntimeColorFilter that extends ColorFilters, and an Xfermode effect that allows you to implement AGSL-based custom compositing and blending between source and destination pixels.

private val thresholdEffectString = """
    uniform half threshold;
    half4 main(half4 c) {
        half luminosity = dot(c.rgb, half3(0.2126, 0.7152, 0.0722));
        half bw = step(threshold, luminosity);
        return bw.xxx1 * c.a;
    }"""

fun setCustomColorFilter(paint: Paint) {
    val filter = RuntimeColorFilter(thresholdEffectString)
    // The uniform name must match the declaration in the AGSL source above
    filter.setFloatUniform("threshold", 0.5f)
    paint.colorFilter = filter
}

Behavior changes

With every Android release, we seek to make the platform more efficient, privacy conscious, internationalization friendly, and robust, balancing the needs of apps against hardware support, system performance, user privacy, and battery life. This can result in behavior changes that impact compatibility.

Edge to edge opt-out going away

Android 15 enforced edge-to-edge for apps targeting Android 15 (SDK 35), but your app could opt out by setting R.attr#windowOptOutEdgeToEdgeEnforcement to true. Once your app targets Android 16 (Baklava), R.attr#windowOptOutEdgeToEdgeEnforcement is deprecated and disabled, and your app cannot opt out of going edge-to-edge. To be compatible with Android 16 Beta 2, ensure your app supports edge-to-edge and remove any use of R.attr#windowOptOutEdgeToEdgeEnforcement. To support edge-to-edge, see the Compose and Views guidance. Please let us know about concerns in our tracker on the feedback page.
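
If you use the AndroidX Activity library, the usual explicit opt-in is the enableEdgeToEdge() helper; a minimal sketch:

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.enableEdgeToEdge

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        // Draw behind the system bars up front instead of relying on the
        // removed windowOptOutEdgeToEdgeEnforcement escape hatch.
        enableEdgeToEdge()
        super.onCreate(savedInstanceState)
        // ... set content and handle insets per the Compose/Views guidance.
    }
}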

Health and fitness permissions

For apps targeting Android 16 or higher, BODY_SENSORS permissions are transitioning to the granular permissions under android.permission.health also used by Health Connect. Any API previously requiring BODY_SENSORS or BODY_SENSORS_BACKGROUND will now require the corresponding android.permission.health permission, so if your app uses any of the affected data types, APIs, or foreground service types, it should now request the respective granular permissions.

These permissions are the same as those that guard access to reading data from Health Connect, the Android datastore for health, fitness, and wellness data.
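
As an illustration, requesting one of the granular permissions at runtime follows the standard permission flow. READ_HEART_RATE is one example from the android.permission.health namespace; check which permissions your specific data types and APIs require.

import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Example granular permission; it must also be declared in AndroidManifest.xml.
const val HEART_RATE_PERMISSION = "android.permission.health.READ_HEART_RATE"
const val REQUEST_CODE_HEALTH = 100

fun Activity.ensureHeartRatePermission(): Boolean {
    val granted = ContextCompat.checkSelfPermission(this, HEART_RATE_PERMISSION) ==
        PackageManager.PERMISSION_GRANTED
    if (!granted) {
        ActivityCompat.requestPermissions(
            this, arrayOf(HEART_RATE_PERMISSION), REQUEST_CODE_HEALTH
        )
    }
    return granted
}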

Abandoned empty jobs stop reason

An abandoned job occurs when the JobParameters object associated with the job has been garbage collected, but jobFinished has not been called to signal job completion. This indicates that the job may be running and being rescheduled without the application's awareness.

Applications in Android 16 that rely on JobScheduler without maintaining a strong reference to the JobParameters object will now receive the new job stop reason STOP_REASON_TIMEOUT_ABANDONED on timeout, instead of STOP_REASON_TIMEOUT.

If there are frequent occurrences of the new abandoned stop reason, the system will take mitigation steps to reduce job frequency. Please use the new stop reason to detect and reduce abandoned jobs.
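
A minimal JobService sketch of the pattern that avoids the abandoned state: keep a strong reference to the JobParameters for as long as the work runs, and always call jobFinished.

import android.app.job.JobParameters
import android.app.job.JobService
import kotlin.concurrent.thread

class SyncJobService : JobService() {
    // Strong reference keeps JobParameters from being garbage collected
    // while the work is still in flight.
    private var runningParams: JobParameters? = null

    override fun onStartJob(params: JobParameters): Boolean {
        runningParams = params
        thread {
            try {
                // ... do the actual work ...
            } finally {
                // Always signal completion; omitting this is what produces
                // STOP_REASON_TIMEOUT_ABANDONED on Android 16.
                jobFinished(params, /* wantsReschedule = */ false)
                runningParams = null
            }
        }
        return true // work continues asynchronously
    }

    override fun onStopJob(params: JobParameters): Boolean {
        runningParams = null
        return true // ask the system to reschedule
    }
}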

Note: If you're using WorkManager, you're not expected to be impacted by this change - one nice side effect of using Android Jetpack to schedule your work.

Intent redirect changes

Android 16 introduces default security hardening against Intent redirection attacks regardless of your app's targetSDK version. The removeLaunchSecurityProtection API allows you to opt-out of this protection if your testing reveals issues.

Note: Opting out of security protections should be done with caution and only when absolutely necessary, as it can increase the risk of security vulnerabilities.

val iSublevel = intent.getParcelableExtra("sub_intent", Intent::class.java)
iSublevel?.let {
    it.removeLaunchSecurityProtection()
    startActivity(it)
}

Elegant font APIs deprecated and disabled

Apps targeting Android 15 (API level 35) have the elegantTextHeight TextView attribute set to true by default, replacing the compact font with one that is much more readable. You could override this by setting the elegantTextHeight attribute to false.

Android 16 deprecates the elegantTextHeight attribute, and the attribute will be ignored once your app targets Android 16. The "UI fonts" controlled by these APIs are being discontinued, so you should adapt any layouts to ensure consistent and future-proof text rendering in Arabic, Lao, Myanmar, Tamil, Gujarati, Kannada, Malayalam, Odia, Telugu, or Thai.

Example of default elegantTextHeight behavior for apps targeting Android 14 (API level 34) and lower


Example of default elegantTextHeight behavior for apps targeting Android 15 (API level 35) and higher


16 KB page size compatibility mode

Android 15 introduced support for 16KB memory pages to optimize performance of the platform. Android 16 adds a compatibility mode, allowing some apps built for 4KB memory pages to run on a device configured for 16KB memory pages.

If Android detects that your app has 4KB-aligned memory pages, it will automatically use compatibility mode and display a notification dialog to the user. Setting the android:pageSizeCompat property in AndroidManifest.xml to enable backwards compatibility mode will prevent the dialog from being displayed when your app launches. For best performance, reliability, and stability, your app should still be 16KB-aligned. Read our recent blog post about updating your apps to support 16KB memory pages for more details.

Screenshot of PageSizeCompatTestApp in Android 16


Measurement system customization

Users can now customize their measurement system in regional preferences within Settings. The user preference is included as part of the locale code, so you can register a BroadcastReceiver on ACTION_LOCALE_CHANGED to handle locale configuration changes when regional preferences change.

Using formatters can help match the local experience. For example, "0.5 in" in English (United States) becomes "12,7 mm" for a user who has set their phone to English (Denmark), or who uses their phone in English (United States) with the metric system as the measurement system preference.
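
A sketch using the ICU classes bundled with Android: format a measure for the current locale, and refresh when the locale (which now carries the measurement-system preference) changes. Note that picking which unit to display is still your app's decision; MeasureFormat only renders the measure you hand it.

import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter
import android.icu.text.MeasureFormat
import android.icu.util.Measure
import android.icu.util.MeasureUnit
import java.util.Locale

// Renders e.g. "0.5 in" under en-US conventions.
fun formatLength(value: Double, unit: MeasureUnit): String =
    MeasureFormat.getInstance(Locale.getDefault(), MeasureFormat.FormatWidth.SHORT)
        .formatMeasures(Measure(value, unit))

// Re-render measurements when regional preferences change.
val localeReceiver = object : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action == Intent.ACTION_LOCALE_CHANGED) {
            // e.g. switch to formatLength(12.7, MeasureUnit.MILLIMETER)
        }
    }
}

fun registerLocaleReceiver(context: Context) {
    context.registerReceiver(localeReceiver, IntentFilter(Intent.ACTION_LOCALE_CHANGED))
}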

To find these settings in Android 16 Beta 2, open the Settings app and navigate to System > Languages & region.

Content handling for live wallpapers

In Android 16, the live wallpaper framework is gaining a new content API to address the challenges of dynamic, user-driven wallpapers. Currently, live wallpapers incorporating user-provided content require complex, service-specific implementations. Android 16 introduces WallpaperDescription and WallpaperInstance. WallpaperDescription allows you to identify distinct instances of a live wallpaper from the same service. For example, a wallpaper that has instances on both the home screen and on the lock screen may have unique content in both places. The wallpaper picker and WallpaperManager use this metadata to better present wallpapers to users, streamlining the process for you to create diverse and personalized live wallpaper experiences.

Headroom APIs in ADPF

The SystemHealthManager introduces the getCpuHeadroom and getGpuHeadroom APIs, designed to provide games and resource-intensive apps with estimates of available CPU and GPU resources. These methods offer a way for you to gauge how your app or game can best improve system health, particularly when used in conjunction with other Android Dynamic Performance Framework (ADPF) APIs that detect thermal throttling. By using CpuHeadroomParams and GpuHeadroomParams on supported devices, you will be able to customize the time window used to compute the headroom and select between average or minimum resource availability. This can help you reduce your CPU or GPU resource usage accordingly, leading to better user experiences and improved battery life.

Key sharing API

Android 16 adds APIs that support sharing access to Android Keystore keys with other apps. The new KeyStoreManager class supports granting and revoking access to keys by app uid, and includes an API for apps to access shared keys.

Standardized picture and audio quality framework for TVs

The new MediaQuality package in Android 16 exposes a set of standardized APIs for access to audio and picture profiles and hardware-related settings. This allows streaming apps to query profiles and apply them to media dynamically:

  • Movies mastered with a wider dynamic range require greater color accuracy to see subtle details in shadows and adjust to ambient light, so a profile that prefers color accuracy over brightness may be appropriate.
  • Live sporting events are often mastered with a narrow dynamic range, but are often watched in daylight, so a profile that gives preference to brightness over color accuracy can give better results.
  • Fully interactive content wants minimal processing to reduce latency, and wants higher frame rates, which is why many TVs ship with a game profile.

The API allows apps to switch between profiles and users to enjoy the benefits of tuning supported TVs to best suit their content.

Accessibility

Android 16 adds additional APIs to enhance UI semantics that help improve consistency for users who rely on accessibility services, such as TalkBack.

Duration added to TtsSpan

Android 16 extends TtsSpan with a TYPE_DURATION, consisting of ARG_HOURS, ARG_MINUTES, and ARG_SECONDS. This allows you to directly annotate a time duration, ensuring accurate and consistent text-to-speech output with services like TalkBack.
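
A sketch of annotating "1h 30m" as a duration. TYPE_DURATION, ARG_HOURS, and ARG_MINUTES are the new constants named above; the TtsSpan(type, args) constructor is the long-standing API.

import android.os.PersistableBundle
import android.text.SpannableString
import android.text.Spanned
import android.text.style.TtsSpan

// Annotate "1h 30m" so screen readers speak it as a duration.
fun buildDurationText(): SpannableString {
    val text = SpannableString("1h 30m")
    val args = PersistableBundle().apply {
        putInt(TtsSpan.ARG_HOURS, 1)
        putInt(TtsSpan.ARG_MINUTES, 30)
    }
    text.setSpan(
        TtsSpan(TtsSpan.TYPE_DURATION, args),
        0, text.length, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE
    )
    return text
}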

Support elements with multiple labels

Android currently allows UI elements to derive their accessibility label from another element, and now offers the ability for multiple labels to be associated with one element, a common scenario in web content. By introducing a list-based API within AccessibilityNodeInfo, Android can directly support these multi-label relationships. As part of this change, we've deprecated AccessibilityNodeInfo setLabeledBy and getLabeledBy in favor of addLabeledBy, removeLabeledBy, and getLabeledByList.

Improved support for expandable elements

Android 16 adds accessibility APIs that allow you to convey the expanded or collapsed state of interactive elements, such as menus and expandable lists. By setting the expanded state using setExpandedState and dispatching TYPE_WINDOW_CONTENT_CHANGED AccessibilityEvents with a CONTENT_CHANGE_TYPE_EXPANDED content change type, you can ensure that screen readers like TalkBack announce state changes, providing a more intuitive and inclusive user experience.

Indeterminate ProgressBars

Android 16 adds RANGE_TYPE_INDETERMINATE, giving a way for you to expose RangeInfo for both determinate and indeterminate ProgressBar widgets, allowing services like TalkBack to more consistently provide feedback for progress indicators.
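
A hedged sketch for a custom indeterminate indicator: RANGE_TYPE_INDETERMINATE is the new constant from this release, while the RangeInfo constructor and setter are the existing API.

import android.view.accessibility.AccessibilityNodeInfo

// An indeterminate indicator has no meaningful min/max/current values,
// so zeros accompany the new range type.
fun markIndeterminate(info: AccessibilityNodeInfo) {
    info.rangeInfo = AccessibilityNodeInfo.RangeInfo(
        AccessibilityNodeInfo.RangeInfo.RANGE_TYPE_INDETERMINATE,
        /* min = */ 0f, /* max = */ 0f, /* current = */ 0f
    )
}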

Tri-state CheckBox

The new AccessibilityNodeInfo getChecked and setChecked(int) methods in Android 16 now support a "partially checked" state in addition to "checked" and "unchecked." This replaces the deprecated boolean isChecked and setChecked(boolean).

Two Android API releases in 2025

This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. The Q2 major release will be the only release in 2025 to include behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; like our non-SDK quarterly releases, it will not include any intentional app-impacting behavior changes.

2025 SDK release timeline showing a features only update in Q1 and Q3, a major SDK release with behavior changes, APIs, and features in Q2, and a minor SDK release with APIs and features in Q4

We'll continue to have quarterly Android releases. The Q1 and Q3 releases provide incremental updates to ensure continuous quality. We're putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.

There's no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.

How to get ready

In addition to performing compatibility testing on this next major release, make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes as they become available for early testing.

App compatibility

The Android 16 production timeline shows the release stages, highlighting 'Beta Releases' and 'Platform Stability' in blue and green, respectively, from December to the final release.

The Android 16 Preview program runs from November 2024 until the final public release in Q2 of 2025. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight critical APIs as they are ready to test in the preview program in blogs and on the Android 16 developer website.

We're targeting March of 2025 for our Platform Stability milestone. At this milestone, we'll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. From that time you'll have several months before the final release to complete your testing. Learn more by checking the release timeline details.

Get started with Android 16

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 1 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 2.

We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release.

For the best development experience with Android 16, we recommend that you use the latest preview of Android Studio (Meerkat). Once you're set up, here are some of the things you should do:

  • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
  • Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or emulator running Android 16 and extensively test it.

We'll update the beta system images and SDK regularly throughout the Android 16 release cycle. Once you've installed a beta build, you'll automatically get future updates over-the-air for all later previews and Betas.

For complete information, visit the Android 16 developer site.

13 Feb 2025 6:58pm GMT

12 Feb 2025

feedAndroid Developers Blog

Meet the Android Studio Team: A Conversation with Staff Developer Programs Engineer, Trevor Johns

Posted by Ashley Tschudin - Social Media Specialist, MTP at Google

Android Studio isn't just code and algorithms - it's built by real people with fascinating stories. Our "Meet the Android Studio Team" series gives you a glimpse into the lives and passions of the talented individuals who craft the tools you use every day. Tune in each month to meet new team members and discover their unique journey.


Trevor Johns: Building Android Studio for You

Trevor Johns, Staff Developer Programs Engineer

Meet Trevor Johns, a seasoned Staff Developer Programs Engineer at Google.

Reflecting on his journey, Trevor sheds light on the most impactful advancements in the Android ecosystem and offers a glimpse into his vision for the future where AI plays a pivotal role in streamlining development workflows.

Trevor discusses the Android Studio team's dedication to enhancing developer productivity through AI, highlighting their focus on understanding and addressing developer needs, and reflects on the dynamic journey of Android development while sharing valuable insights.


Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?

I've been at Google in various roles since 2007, and transferred to the Android team in 2009, shortly after the launch of the HTC G1 - the first publicly available Android phone. Even in those early days it was clear that mobile computing was a unique opportunity to reimagine many of the limitations of desktop computers and how users interact with the digital world.

Among my first projects were helping developers optimize their apps for the MyTouch 3G and Motorola Droid, as well as creating developer resources for Android's 1.6 Donut release.

Over the years, I've worked on various parts of the Android OS including our first tablet devices, Android Wear, helping develop the original Android support libraries (which later became Jetpack), and the migration to Kotlin.

Recently I joined the Android Studio team to help improve developer productivity, using AI to streamline common developer tasks and help developers have more time to focus on creativity.

How does the Android Studio team ensure that products or features meet the ever-changing needs of developers?

Like the rest of Android, we approach development of new features by listening to our developer community. We hold regular listening sessions with publishers, work with our UX research team to conduct case studies, and participate in online discussions to get a sense for where developers face the most friction - and then try to find ways to reduce that friction.

For example, we developed Gemini in Android Studio's integration with Play Vitals and Firebase Crashlytics based on feedback from members of the developer community who commented to let us know where they would find AI most useful across their developer workflow.

Speaking of, if you'd like to provide us with feedback, you can always file a bug or feature request on the Android Studio issue tracker.

How does the Studio team contribute to Google's broader vision for the Android platform?

In addition to listening to the Android community, we also keep an eye on what's being developed across the rest of the Android team and make sure that Android Studio has the right tools to help developers quickly migrate between Android versions and adopt those new platform features.

Beyond that, the Studio team provides leading edge editing tools to make sure that Android remains one of the easiest computing platforms to develop for - unlocking this unique computing platform for millions of developers.

In your opinion, what is the most impactful feature or improvement the Android team has introduced in recent years, and why?

For developers, my answer would have to be the migration to Kotlin. This language has modernized the Android developer experience - letting developers write apps with less code and fewer errors. It's also the foundation for Jetpack Compose, which is the future of Android UI development.

If you could wave a magic wand and add one dream feature to the Android universe, what would it be and why?

I'd love to see Gemini be able to not just autocomplete code for me, but generate scaffolds for new projects. That way I can focus on building features rather than worrying about basic structure when starting a new project.

Develop Android Apps with Kotlin

Follow Trevor's lead and embrace the power of Kotlin for modern Android development. Enhance your skills and write better Android apps faster with Kotlin.

Stay tuned!

Get ready for another inspiring story! The "Meet the Android Studio Team" series continues next week with a new team member in the spotlight. Don't miss their unique insights and journey.

Find Trevor Johns on LinkedIn, X, Bluesky, and Medium.

12 Feb 2025 10:00pm GMT

TrustedTime API: Introducing a reliable approach to time keeping for your apps

Posted by Kanyinsola Fapohunda - Software Engineer, and Geoffrey Boullanger - Technical Lead

Accurate time is crucial for a wide variety of app functionalities, from scheduling and event management to transaction logging and security protocols. However, a user can change the device's time, so a more accurate source of time than the device's local system time may be required. That's why we're introducing the TrustedTime API, which leverages Google's infrastructure to deliver a trustworthy timestamp, independent of the device's potentially manipulated local time settings.

How does TrustedTime work?

The new API leverages Google's secure infrastructure to provide a trusted time source to your app. TrustedTime periodically syncs its clock to Google's servers, which have access to a highly accurate time source, so that you do not need to make a server request every time you want to know the current network time. Additionally, we've integrated a unique model that calculates the device's clock drift. This will inform you when the time may be inaccurate between network synchronizations.

Why is an accurate source of time important?

Many apps rely on the device's clock for various features. However, users can change their device's time settings, either intentionally or unintentionally, therefore changing the time that your app gets. This can lead to problems such as:

  • Data Inconsistency: Apps relying on chronological event ordering are vulnerable to data corruption if users manipulate device time. TrustedTime mitigates this risk by providing a trustworthy time source.
  • Security Gaps: Time-based security measures, like one-time passwords or timed access controls require an unaltered time source to be effective.
  • Unreliable Scheduling: Apps that depend on accurate scheduling, like calendar or reminder apps, can malfunction if the device clock (i.e. Unix timestamp) is incorrect.
  • Inaccurate Time: The device's internal clock can drift due to various factors, such as temperature, doze mode, battery level, etc. This can lead to problems in applications that require more precision. The TrustedTime API also provides the estimated error with the timestamps, so that you can ensure your app's time-sensitive operations are performed correctly.
  • Lack of Consistency Between Devices: Inconsistent time across devices can cause problems in multi-device scenarios, such as gaming or collaborative applications. The TrustedTime API helps ensure that all devices have a consistent view of time, improving the user experience.
  • Unnecessary Power and Data Consumption: TrustedTime is designed to be more efficient than calling an NTP server every time an app needs the current time. It avoids the overhead of repeated network requests by periodically syncing its clock with time servers. This synced time is then used as a reference point, and the TrustedTime API calculates the current time based on the device's internal clock. This approach reduces network usage and improves performance for apps that need frequent time checks.

TrustedTime Use Cases

The TrustedTime API opens up a range of possibilities for enhancing the reliability and security of your apps, with use cases in areas such as:

  • Financial Applications: Ensure the accuracy of transaction timestamps even when the device is offline, preventing fraud and disputes.
  • Gaming: Implement fair play by preventing users from manipulating the game clock to gain an unfair advantage.
  • Limited-Time Offers: Guarantee that promotions and offers expire at the correct time, regardless of the user's device settings.
  • E-commerce: Accurately track order processing and delivery times.
  • Content Licensing: Enforce time-based restrictions on digital content, like rentals or subscriptions.
  • IoT Devices: Synchronize clocks across multiple devices for consistent data logging and control.
  • Productivity apps: Accurately record the time of any changes made to cloud documents while offline.

Getting started with the TrustedTime API

The TrustedTime API is built on top of Google Play services, making integration seamless for most Android developers.

The simplest way to integrate is to initialize the TrustedTimeClient early in your app lifecycle, such as in the onCreate() method of your Application class. The following example uses dependency injection with Hilt to make the time client available to components throughout the app.

[Optional] Set up dependency injection

// TrustedTimeClientAccessor.kt
import com.google.android.gms.tasks.Task
import com.google.android.gms.time.TrustedTimeClient

interface TrustedTimeClientAccessor {
  fun createClient(): Task<TrustedTimeClient>
}

// TrustedTimeModule.kt
@Module
@InstallIn(SingletonComponent::class)
class TrustedTimeModule {
  @Provides
  fun provideTrustedTimeClientAccessor(
    @ApplicationContext context: Context
  ): TrustedTimeClientAccessor {
    return object : TrustedTimeClientAccessor {
      override fun createClient(): Task<TrustedTimeClient> {
        return TrustedTime.createClient(context)
      }
    }
  }
}


Initialize early in your app's lifecycle

// TrustedTimeDemoApplication.kt
@HiltAndroidApp
class TrustedTimeDemoApplication : Application() {

  @Inject
  lateinit var trustedTimeClientAccessor: TrustedTimeClientAccessor

  var trustedTimeClient: TrustedTimeClient? = null
    private set

  override fun onCreate() {
    super.onCreate()
    trustedTimeClientAccessor.createClient().addOnCompleteListener { task ->
      if (task.isSuccessful) {
        // Stash the client
        trustedTimeClient = task.result
      } else {
        // Handle error, maybe retry later
        val exception = task.exception
      }
    }
    // To use Kotlin Coroutine, you can use the await() method, 
    // see https://developers.google.com/android/guides/tasks#kotlin_coroutine for more info.
  }
}

NOTE: If you don't use dependency injection in your app, you can simply call
`TrustedTime.createClient(context)` instead of using a TrustedTimeClientAccessor.


Use TrustedTimeClient anywhere in your app

// Retrieve the TrustedTimeClient from your application class
val myApp = applicationContext as TrustedTimeDemoApplication

// In this example, System.currentTimeMillis() is used as a fallback if the
// client is null (i.e. client creation task failed) or when there is no time
// signal available. You may not want to do this if using the system clock is
// not suitable for your use case.
val currentTimeMillis =
  myApp.trustedTimeClient?.computeCurrentUnixEpochMillis()
    ?: System.currentTimeMillis()
// trustedTimeClient.computeCurrentInstant() can be used if Instant is
// preferred to long for Unix epoch times and you are able to use the APIs.

Use in short-lived components like Activity

@AndroidEntryPoint
class MainActivity : AppCompatActivity() {
  @Inject
  lateinit var trustedTimeAccessor: TrustedTimeClientAccessor

  private var trustedTimeClient: TrustedTimeClient? = null

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    ...
    trustedTimeAccessor.createClient().addOnCompleteListener { task ->
      if (task.isSuccessful) {
        // Stash the client
        trustedTimeClient = task.result
      } else {
        // Handle error, maybe retry later or use another time source.
        val exception = task.exception
      }
    }
  }

  private fun getCurrentTimeInMillis(): Long? {
    return trustedTimeClient?.computeCurrentUnixEpochMillis()
  }
}

TrustedTime API availability and limitations

The TrustedTime API is available on all devices running Google Play services on Android 5 (Lollipop) and above. You need to add the dependency com.google.android.gms:play-services-time:16.0.1 (or above) to access the new API. No additional permission is required to use this API. However, TrustedTime needs an internet connection after the device starts up to provide timestamps. If the device hasn't connected to the internet since booting, the TrustedTime APIs won't return timestamps.

It's important to note that the device's internal clock can drift due to factors like temperature, doze mode, and battery level. TrustedTime doesn't prevent this drift, but its APIs provide an error estimate for each timestamp. Use this estimate to determine if the timestamp's accuracy meets your application's requirements. While TrustedTime makes it more difficult for users to manipulate the time accessed by your app, it does not guarantee complete safety. Advanced techniques can still be used to tamper with the device's time.

Next steps

To learn more about the TrustedTime API, check out the following resources:

12 Feb 2025 5:00pm GMT

16 Oct 2024

feedPlanet Maemo

Adding buffering hysteresis to the WebKit GStreamer video player

The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of glib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.

WebKit GStreamer regular playback class diagram

The player private can have 3 buffering modes:

The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.

So, one of the first things I tried to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add that to the levels reported by GstQueue2. There's also a GstMultiqueue in the pipeline that can hold a significant amount of buffers, so I also asked it for its level. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it.

All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were considered part of MediaPlayerPrivateGStreamer before. They still had to be somehow linked to the player private but only accessible by the platform specific code of the quirks. A special HashMap attribute stores those quirks attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stored the concrete subclass to the caller did the trick.

Even with all those changes, undesirable swings in the buffering level kept happening, and when doing a careful analysis of the causes I noticed that the monitoring of the buffering level was being done from different places (in different moments) and sometimes the level was regarded as "enough" and the moment right after, as "insufficient". This was because the buffering level threshold was one single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve. So, a logical level change to "full" would only happen when the level goes above the high watermark, and a logical level change to "low" when it goes under the low watermark level.
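
The core of the idea is tiny. Here's an illustrative sketch (in Kotlin for brevity; the names are mine, not WebKit's): the logical state only flips when the level crosses the corresponding watermark, so small oscillations between the watermarks cause no transitions.

// Illustrative hysteresis sketch, not WebKit code.
class BufferingHysteresis(
    private val lowWatermark: Int = 20,   // percent
    private val highWatermark: Int = 80,  // percent
) {
    var bufferIsFull = false
        private set

    // Returns true when the logical buffering state changed this cycle.
    fun update(levelPercent: Int): Boolean {
        val previous = bufferIsFull
        if (!bufferIsFull && levelPercent >= highWatermark) {
            bufferIsFull = true
        } else if (bufferIsFull && levelPercent <= lowWatermark) {
            bufferIsFull = false
        }
        return bufferIsFull != previous
    }
}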

For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would properly react. The other places would miss the detection and operate improperly, because the "previous buffering level value" had been overwritten with the new one when the evaluation had been done before. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates().

So, with all this in mind, I refactored the buffering logic as https://commits.webkit.org/284072@main, so now WebKit GStreamer has buffering code that is much more robust than before. The instabilities observed on Broadcom devices were gone and I could, at last, close Issue 1309.


16 Oct 2024 6:12am GMT

10 Sep 2024

feedPlanet Maemo

Don’t shoot yourself in the foot with the C++ move constructor

Move semantics can be very useful to transfer ownership of resources, but like many other C++ features, it's one more double-edged sword that can harm you in new and interesting ways if you don't read the small print.

For instance, if object moving involves super and subclasses, you have to keep an extra eye on what's actually happening. Consider the following classes A and B, where the latter inherits from the former:

#include <stdio.h>
#include <utility>

#define PF printf("%s %p\n", __PRETTY_FUNCTION__, this)

class A {
 public:
 A() { PF; }
 virtual ~A() { PF; }
 A(A&& other)
 {
  PF;
  std::swap(i, other.i);
 }

 int i = 0;
};

class B : public A {
 public:
 B() { PF; }
 virtual ~B() { PF; }
 B(B&& other)
 {
  PF;
  std::swap(i, other.i);
  std::swap(j, other.j);
 }

 int j = 0;
};

If your project is complex, it would be natural for your code to involve abstractions, with part of the responsibility held by the superclass, and some other part by the subclass. Consider also that some of that code in the superclass involves move semantics, so a subclass object must be moved to become a superclass object, perform some action, and then be moved back to become the subclass again. That's a really bad idea!

Consider this usage of the classes defined before:

int main(int, char* argv[]) {
 printf("Creating B b1\n");
 B b1;
 b1.i = 1;
 b1.j = 2;
 printf("b1.i = %d\n", b1.i);
 printf("b1.j = %d\n", b1.j);
 printf("Moving (B)b1 to (A)a. Which move constructor will be used?\n");
 A a(std::move(b1));
 printf("a.i = %d\n", a.i);
 // This may be reading memory beyond the object boundaries, which may not be
 // obvious if you think that (A)a is sort of a (B)b1 in disguise, but it's not!
 printf("(B)a.j = %d\n", reinterpret_cast<B&>(a).j);
 printf("Moving (A)a to (B)b2. Which move constructor will be used?\n");
 B b2(reinterpret_cast<B&&>(std::move(a)));
 printf("b2.i = %d\n", b2.i);
 printf("b2.j = %d\n", b2.j);
 printf("^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place\n");
 printf("Destroying b2, a, b1\n");
 return 0;
}

If you've read the code, those printfs will have already given you some hints about the harsh truth: if you move a subclass object to become a superclass object, you're losing all the subclass-specific data, because no matter if the original instance was one from a subclass, only the superclass move constructor will be used. And that's bad, very bad. This problem is called object slicing. It's specific to C++ and can also happen with copy constructors. See it with your own eyes:

Creating B b1
A::A() 0x7ffd544ca690
B::B() 0x7ffd544ca690
b1.i = 1
b1.j = 2
Moving (B)b1 to (A)a. Which move constructor will be used?
A::A(A&&) 0x7ffd544ca6a0
a.i = 1
(B)a.j = 0
Moving (A)a to (B)b2. Which move constructor will be used?
A::A() 0x7ffd544ca6b0
B::B(B&&) 0x7ffd544ca6b0
b2.i = 1
b2.j = 0
^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place
Destroying b2, a, b1
virtual B::~B() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6a0
virtual B::~B() 0x7ffd544ca690
virtual A::~A() 0x7ffd544ca690

Why can something that seems so obvious become such a problem, you may ask? Well, it depends on the context. It's not unusual for the codebase of a long-lived project to have started using raw pointers for everything, then switched to using references as a way to get rid of null pointer issues when possible, and finally switched to whole objects and copy/move semantics to get rid of pointer issues (references are just pointers in disguise after all, and there are ways to produce null and dangling references by mistake). But this last step of moving from references to copy/move semantics on whole objects comes with the small object slicing nuance explained in this post, and when the size of the project and all the different things to take into account steal your focus, it's easy to forget about this.

So, please remember: never use move semantics to convert your precious subclass instance to a superclass instance thinking that the subclass data will survive. You may regret it and inadvertently create problems that are difficult to debug.

Happy coding!


10 Sep 2024 7:58am GMT

17 Jun 2024

feedPlanet Maemo

Incorporating 3D Gaussian Splats into the graphics pipeline

3D Gaussian splatting is the emerging rendering technique that is overtaking NeRFs. Since it is centered around point primitives, it is more compatible with traditional graphics pipelines that already support point rendering.

Gaussian splats essentially enhance the concept of point rendering by converting the point primitive into a 3D ellipsoid, which is then projected into 2D during the rendering process. This concept was initially described in 2002 [3], but the technique of extending Structure from Motion scans in this way was only detailed more recently [1].

In this post, I explore how to integrate Gaussian splats into the traditional graphics pipeline. This allows them to be used alongside triangle-based primitives and interact with them through the depth buffer for occlusion (see header image). This approach also simplifies deployment by eliminating the need for CUDA.

Storage

The original implementation uses .ply files as their checkpoint format, focusing on maintaining training-relevant data structures at the expense of storage efficiency, leading to increased file sizes.

For example, it stores the covariance as scaling and a rotation quaternion, necessitating reconstruction during rendering. A more efficient approach would be to leverage the symmetry of the covariance matrix, storing only the diagonal and upper triangular values, thereby eliminating reconstruction and reducing storage requirements.

Further analysis of the storage usage for each attribute shows that the spherical harmonics of orders 1-3 are the main contributors to the file size. However, according to the ablation study in the original publication [1], these harmonics only lead to a modest PSNR improvement of 0.5.

Therefore, the most straightforward way to decrease storage is by discarding the higher-order spherical harmonics. Additionally, the level 0 spherical harmonics can be converted into a diffuse color and merged with opacity to form a single RGBA value. These simple yet effective methods were implemented in one of the early WebGL implementations, resulting in the .splat format. As an added benefit, this format can be easily interpreted by viewers unaware of Gaussian splats as a simple colored point cloud:

Results using a non Gaussian-splat aware renderer

By directly storing the covariance as previously mentioned, we can reduce the precision from float32 to float16, thereby halving the storage needed for that data. Furthermore, since most splats have limited spatial extents, we can also utilize float16 for position data, yielding additional storage savings.

With these changes, we achieve a storage requirement of 22 bytes per splat, in contrast to the 44 bytes needed by the .splat format and 236 bytes in the original implementation. Thus, we have attained a 10x reduction in storage compared to the original implementation simply by using more suitable data types.
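
For concreteness, one layout consistent with those numbers (my reading of the text, not an official spec) is:

  • position: 3 × float16 = 6 bytes
  • covariance (symmetric 3×3, diagonal + upper triangle): 6 × float16 = 12 bytes
  • diffuse color + opacity: 4 × uint8 (RGBA) = 4 bytes
  • total: 6 + 12 + 4 = 22 bytes per splat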

Blending

The image formation model presented in the original paper [1] is similar to NeRF rendering, to which it is explicitly compared. This involves casting a ray and observing its intersection with the splats, which leads to front-to-back blending. This is precisely the approach taken by the provided CUDA implementation.

Blending remains a component of the fixed-function unit within the graphics pipeline, which can be set up for front-to-back blending [2] by using the factors (one_minus_dest_alpha, one) and by multiplying color and alpha in the shader as color.rgb * color.a. This results in the following equation:

\begin{aligned}
C_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} C_{src} + C_{dst}\\
\alpha_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} + \alpha_{dst}
\end{aligned}

However, this method requires the framebuffer alpha value to be zero before rendering the splats, which is not typically the case as any previous render pass could have written an arbitrary alpha value.

A simple solution is to switch to back-to-front sorting and use the standard alpha blending factors (src_alpha, one_minus_src_alpha) for the following blending equation:

C_{dst} = \alpha_{src} \cdot C_{src} + (1 - \alpha_{src}) \cdot C_{dst}

This allows us to regard Gaussian splats as a special type of particles that can be rendered together with other transparent elements within a scene.
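
In terms of API calls, this is just the standard alpha-blend state. For example, with Android's GLES bindings (any GL-style API is configured the same way):

import android.opengl.GLES20

// Back-to-front alpha blending:
// C_dst = a_src * C_src + (1 - a_src) * C_dst
fun enableBackToFrontBlending() {
    GLES20.glEnable(GLES20.GL_BLEND)
    GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA)
}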

References

  1. Kerbl, Bernhard, et al. "3d gaussian splatting for real-time radiance field rendering." ACM Transactions on Graphics 42.4 (2023): 1-14.
  2. Green, Simon. "Volumetric particle shadows." NVIDIA Developer Zone (2008).
  3. Zwicker, Matthias, et al. "EWA splatting." IEEE Transactions on Visualization and Computer Graphics 8.3 (2002): 223-238.


17 Jun 2024 1:28pm GMT

18 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today, there was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed entirely for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~ 100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The equipment deployed in this installation currently contains:

For more details, see this wiki page and this ticket

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.

Acknowledgements

I'd like to thank everyone helping this effort, specifically:

  • Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
  • noris.net for sponsoring the co-location
  • sysmocom for sponsoring the EPYC server hardware

18 Sep 2022 10:00pm GMT

08 Sep 2022

feedPlanet Openmoko

Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages about

In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

08 Sep 2022 10:00pm GMT

Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator should be used as transmit bit-clock on all of the other ports, no matter on which card.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

bursty bit clock changes until link is up

The first problem is that the downstream ports' transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4

As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.

What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.

As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.

This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock irrespective of whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.

But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.

clock drift between master and slave cards

Once any of the spans of a slave card on the timing bus is fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports just for achieving proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

08 Sep 2022 10:00pm GMT