05 Feb 2025
TalkAndroid
Best cases for Samsung Galaxy S25+
Your Galaxy S25+ deserves a case that's not just attractive, but one that can survive clumsy moments. Here are the best picks to keep your phone protected.
05 Feb 2025 2:07pm GMT
Verizon’s New Deal Offers Gemini Advanced at Half the Price
You can't avoid AI in 2025. Might as well have it by your side, especially if you can get it at half the price.
05 Feb 2025 2:00pm GMT
Microsoft Kills Off Defender VPN Service
Say goodbye to Microsoft Defender's VPN as the company shuts it down for good.
05 Feb 2025 1:55pm GMT
30 Jan 2025
Android Developers Blog
Meet the Android Studio Team: A Conversation with Product Manager, Paris Hsu
Posted by Ashley Tschudin - Social Media Specialist, MTP at Google
Welcome to "Meet the Android Studio Team," a short blog series where we pull back the curtain and introduce you to the passionate people who build your favorite Android development tools. Get to know the talented minds - engineers, designers, product managers, and more - who pour their hearts into crafting the best possible experience for Android developers.
Join us each week to meet a new member of the team and explore their unique perspectives.
Paris Hsu: Empowering Android developers with Compose tools
Meet Paris Hsu, a Product Manager at Google passionate about empowering developers to build incredible Android apps.
Her journey to the Android Studio team started with a serendipitous internship at Microsoft, where she discovered the power of developer tools. Now, as part of the UI Tools team, Paris champions intuitive solutions that streamline the development process, like the innovative Compose Tools suite.
In this installment of "Meet the Android Studio Team," Paris shares insights into her work, the importance of developer feedback, and her dream Android feature (hint: it involves acing that forehand).
Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?
Honestly, I joined a bit by chance! The summer before my last year of grad school, I was in the Microsoft Garage incubator internship program. Our project, InkToCode, turned handwritten designs into code. It was my first experience building developer tools and made me realize how powerful they can be, which led me to the Android Studio team. Now, after 6 years, I'm constantly amazed by what Android developers create - from innovative productivity apps to immersive games. It's incredibly rewarding to build tools that empower developers to create more.
In your opinion, what is the most impactful feature or improvement the Android Studio team has introduced in recent years, and why?
As part of the UI Tools team in Android Studio, I'm biased towards Compose Tools! Our team spent a lot of time rethinking how we can take a code-first approach for tools as we transition the community from XML to Compose. Features like the Compose Preview and its submodes (Interactive, Animation, Deploy preview) enable fast UI iteration, while features such as Layout Inspector and Compose UI Check help developers find and diagnose UI issues with ease. We are also exploring ways to apply multimodal AI in these tools to help developers write high-quality, adaptive, and inclusive Compose code more quickly.
How does the Android Studio team ensure that products or features meet the ever-changing needs of developers?
We are constantly engaging with developers and listening to their feedback to ensure we are meeting their needs! Some examples:
- Direct feedback: UXR studies, Annual developer surveys, and Buganizer reports provide valuable insights.
- Early access: We release Early Access Programs (EAPs) for new features, allowing developers to test them and provide feedback before official launch.
- Community engagement: We have advisory boards with experienced Android developers, gather feedback from Google Developer Experts (GDEs), and attend conferences to connect directly with the community.
How does the Studio team contribute to Google's broader vision for the Android platform?
I think Android Studio contributes to Google's broader mission by providing Android developers with powerful and intuitive tools. This way, developers are empowered to create amazing apps that bring the best of Google's services and information to our users. Whether it's accessing knowledge through Search, leveraging Gemini, staying connected with Maps, or enjoying entertainment on YouTube, Android Studio helps developers build the experiences that connect people to what matters most.
If you could wave a magic wand and add one dream feature to the Android universe, what would it be and why?
Anyone who knows me knows that I am recently super obsessed with tennis. I would love to see more coaching wearables (e.g. Pixel Watch, Pixel Racket?!). I would love real-time feedback on my serve and especially forehand stroke analysis.
Learn more about Compose Tools
Inspired by Paris' passion for empowering developers to build incredible Android apps? To learn more about how Compose Tools can streamline your app development process, check out the Compose Tools documentation and get started with the Jetpack Compose Tutorial.
Stay tuned
Keep an eye out for the next installment in our "Meet the Android Studio Team" series, where we'll shine the spotlight on another team member and delve into their unique insights.
Find Paris Hsu on LinkedIn, X, and Medium.
30 Jan 2025 9:00pm GMT
29 Jan 2025
Android Developers Blog
Production-ready generative AI on Android with Vertex AI in Firebase
Posted by Thomas Ezan - Sr. Developer Relations Engineer (@lethargicpanda)
Gemini can help you build and launch new user features that will boost engagement and create personalized experiences for your users.
The Vertex AI in Firebase SDK lets you access Google's Gemini cloud models (like Gemini 1.5 Flash and Gemini 1.5 Pro) and add GenAI capabilities to your Android app. It became generally available last October, which means it's now ready for production, and it is already used by many apps on Google Play.
Here are tips for a successful deployment to production.
Implement App Check to prevent API abuse
When using the Vertex AI in Firebase API, it is crucial to implement robust security measures to prevent unauthorized access and misuse.
Firebase App Check helps protect backend resources (like Vertex AI in Firebase, Cloud Functions for Firebase, or even your own custom backend) from abuse. It does this by attesting that incoming traffic is coming from your authentic app running on an authentic and untampered Android device.
![A flow diagram illustrating App Check, with green lines depicting 'User Request' going through App Check to 'Backend'. A red line depicting 'Bad Request' is being blocked by App Check.](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBEmVpndsWYei5SO0HXPb6WWPILxMscIK4WfXbHXT3x2R3HcGq4fKwYTC1TuABK12FBMSIfnta8Z5z5lNjHWfXudXvxAc9DCImSw0bIOdL7NrWwgEABdXId3pb-cPXQiDgIupe8dF9FjZNq774DrWUupGZAWbExFrSOzzbbH9eCSKsRViOhFxgz2eXsR0/s1600/image3.png)
To get started, add Firebase to your Android project and enable the Play Integrity API for your app in the Google Play console. Back in the Firebase console, go to the App Check section of your Firebase project to register your app by providing its SHA-256 fingerprint.
Then, update your Android project's Gradle dependencies with the App Check library for Android:
```kotlin
dependencies {
    // BoM for the Firebase platform
    implementation(platform("com.google.firebase:firebase-bom:33.7.0"))

    // Dependency for App Check
    implementation("com.google.firebase:firebase-appcheck-playintegrity")
}
```
Finally, in your Kotlin code, initialize App Check before using any other Firebase SDK:
```kotlin
Firebase.initialize(context)
Firebase.appCheck.installAppCheckProviderFactory(
    PlayIntegrityAppCheckProviderFactory.getInstance(),
)
```
To enhance the security of your generative AI feature, you should implement and enforce App Check before releasing your app to production. Additionally, if your app utilizes other Firebase services like Firebase Authentication, Firestore, or Cloud Functions, App Check provides an extra layer of protection for those resources as well.
Once App Check is enforced, you'll be able to monitor your app's requests in the Firebase console.
![An area chart of the Apps Check metrics page in Firebase console, showing the percentages of verified and unverified requests over several days. Numerical breakdowns of verified (51%) and unverified requests (49%) are shown.](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVVLy95XmqxX2WgBXmEc2URFkBVlE8NFBn_7IOZ0ApoLGohZxS1cDDU-gEidnzac4gR9sFgPjkgRkDi7mMTVcwJS14Qz3GIBdS67IPXA-02fgj8G_VV9cw_ELVfUWl5kJtFbdnD5418OF5-Fw0DK2_42-NgTyRINI3MwTIDaNa_67dSmrNcriP-Q4YhjM/s1600/image2.png)
You can learn more about App Check on Android in the Firebase documentation.
Use Remote Config for server-controlled configuration
The generative AI landscape evolves quickly. Every few months, new Gemini model iterations become available and some models are removed. See the Vertex AI in Firebase Gemini models page for details.
Because of this, instead of hardcoding the model name in your app, we recommend using a server-controlled variable via Firebase Remote Config. This lets you dynamically update the model your app uses without having to deploy a new version of your app or requiring your users to update.
You define parameters that you want to control (like model name) using the Firebase console. Then, you add these parameters into your app, along with default "fallback" values for each parameter. Back in the Firebase console, you can change the value of these parameters at any time. Your app will automatically fetch the new value.
Here's how to implement Remote Config in your app:
```kotlin
// Initialize the remote configuration by defining the refresh time
val remoteConfig: FirebaseRemoteConfig = Firebase.remoteConfig
val configSettings = remoteConfigSettings {
    minimumFetchIntervalInSeconds = 3600
}
remoteConfig.setConfigSettingsAsync(configSettings)

// Set default values defined in your app resources
remoteConfig.setDefaultsAsync(R.xml.remote_config_defaults)

// Load the model name
val modelName = remoteConfig.getString("model_name")
```
Read more about using Remote Config with Vertex AI in Firebase.
Gather user feedback to evaluate impact
As you roll out your AI-enabled feature to production, it's critical to build feedback mechanisms into your product so users can easily signal whether the AI output was helpful, accurate, or relevant. For example, you can incorporate interactive elements such as thumbs-up and thumbs-down buttons and detailed feedback forms within the user interface. The Material Icons in Compose package provides ready-to-use icons to help you implement them.
You can easily track user interaction with these elements as custom analytics events by using the Google Analytics logEvent() function:
```kotlin
Row {
    Button(
        onClick = {
            firebaseAnalytics.logEvent("model_response_feedback") {
                param("feedback", "thumb_up")
            }
        }
    ) {
        Icon(Icons.Default.ThumbUp, contentDescription = "Thumb up")
    }
    Button(
        onClick = {
            firebaseAnalytics.logEvent("model_response_feedback") {
                param("feedback", "thumb_down")
            }
        }
    ) {
        Icon(Icons.Default.ThumbDown, contentDescription = "Thumb down")
    }
}
```
Learn more about Google Analytics and its event logging capabilities in the Firebase documentation.
User privacy and responsible AI
When you use Vertex AI in Firebase for inference, you have the guarantee that the data sent to Google won't be used by Google to train AI models (see Vertex AI documentation for details).
It's also important to be transparent with your users when they're engaging with generative AI technology. You should highlight the possibility of unexpected model behavior.
Finally, users should have control within your app over how their activity related to AI model interactions is stored and deleted.
You can learn more about how Google is approaching Generative AI responsibly in the Google Cloud documentation.
29 Jan 2025 5:00pm GMT
28 Jan 2025
Android Developers Blog
Helping users find trusted apps on Google Play
Posted by JJ Zou - Product Manager, and Scott Lin - Product Manager
At Google Play, we're committed to empowering you with the tools and resources you need to build successful and secure apps that users can rely on. That's why we're introducing a new way to recognize VPN apps that go above and beyond to protect their users: a "Verified" badge for consumer-facing VPN apps.
This new badge is designed to highlight apps that prioritize user privacy and safety, help users make more informed choices about the VPN apps they use, and build confidence in the apps they ultimately download. This badge complements existing features such as the Google Play Store banner for VPNs and Data Safety section declaration in the Play Store.
![A screenshot of the NordVPN app page on the Google Play Store. The app has a 4.6-star rating and is verified by Google Play Protect and description mentions 6,000+ servers in 110+ locations and highlights its data safety features.](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbaD2ONt66HN0zu09uvHSNjT8SoO9ezWZLuoLw8QSgae-_S_lO0as66ri2Nq-e5GQEEz65ElVOQwhn0z43mzSqmZOMRyt50fT08xtHKcOtuDvU9rbRtfcc8JzzEYyB0wbIgzbiIxDXc3CPXT-QPOg-HefqfLoDIApNcLXjJbvVJS1eVXLMuz5nQavoAaI/s1600/VPN%20Detail%20Page@2x.png)
Build user trust with more transparency
Earning the VPN badge isn't just about checking a box - it's proof that your VPN app invests in app safety. This badge signifies that your app has gone above and beyond, adhering to the Play safety and security guidelines and successfully completing a Mobile Application Security Assessment (MASA) Level 2 validation.
The VPN badge helps your app stand out in a crowded marketplace. Once awarded, the badge is prominently displayed on your app's details page and in search results. Additionally, we have built new surfaces to showcase verified VPN applications.
Demonstrating commitment to security and safety
We're excited to share insights from some of our partners who have already earned the VPN badge and are leading the way in building a safe and trusted Google Play ecosystem. Learn how partners like NordVPN, hide.me, and Aloha are using the badge and implementing best practices for user security:
NordVPN
![NordVPN Logo](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWMnOLJ4H3gUmnljPeRA4BGVvQ9-ZCsRMD6n17_K4c9-4LYm8F4RgRLEm0OwHM0k9GUdUfuiQOSNKQguXWq89udBN5cYDl5ZbQCIUonNod4Pw9Fu8-T65cL3dcyMSG9uCqJdNbqxStOymZd9Grj_ZBoworN_4vlI6NxIE3gYqpmbvIxK-DNe7ixO4yyhw/w200-h189/Nord-VPN-logo.png)
"We're excited that the new 'Verified' badge will help users easily identify VPNs that meet high standards for security and privacy. In a market where trust is key, this badge not only provides reassurance to customers, but also highlights the integrity of developers committed to delivering secure and reliable products."
hide.me
![hide.me Logo](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhR3j4fCIg1Jo1v6K5YGGUzUCr5QQH8v1iei_aQxvfjr_mzLPs7ueq3rMZxc-09bJ4dryfZ5O91d2rARejs5s6Do3AfmGBNIQyACcS7vuvoaIAFURCU-TUgCne_CwYyQpdxytPmrszyYup5ojRTo-_a6hpMzODvspPBZD55b6ZeFaFsNQzfZmaRrgv1rNs/s1600/image5.png)
"Privacy and user safety are fundamental to our VPN's architecture. The MASA program has been valuable in validating our security practices and maintaining high standards. This accreditation provides independent verification of our commitment to protecting user privacy."
Aloha Browser
![Aloha Logo](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHW3vmSLpvueeL1kbJCf9upIg1hJHCIHM4YBJHkz-9DDPbRpAWajj_oFprNBjbMaBMO7aRNOT4qSvu6pCa-5E07Du9-FenltZ6l0WY4-JmpY2g2UEGuAbknCTJywB6LUXEsRAMLj3Tc9QmR4aIhKy2lH_bDrWpScnkTFzAeAN8A5-rwSu7v7jqs5veCnI/s1600/image4.png)
"The certification process is well-organized and accessible to any company. If your product is developed with security as a core focus, passing the required audits should not pose any difficulty. We regularly conduct third-party audits and have been active participants in the MASA program since its inception. Additionally, it fosters discipline in your development practices, knowing that regular re-certification is required. Ultimately, it's the end user who benefits the most-a secure and satisfied user is the ultimate goal for every app developer."
Getting your App Badge-Ready
To take advantage of this opportunity to enhance your app's profile and attract more users, learn more about the specific criteria and start the validation process today.
To be considered for the "Verified" badge, your VPN app needs to:
- Complete a Mobile Application Security Assessment (MASA) Level 2 validation
- Have an Organization developer account type
- Meet target API level requirements for Google Play apps
- Have at least 10,000 installs and 250 reviews
- Be published on Google Play for at least 90 days
- Submit a Data Safety section declaration, opting into:
- Independent security review, under 'Additional badges'
- Encryption in transit
Note: This list is not exhaustive and doesn't fully represent all the criteria used to display the badge. While other factors contribute to the evaluation, fulfilling these requirements significantly increases your chances of seeing your VPN app "Verified."
Join us in our mission to create a safer and more transparent Google Play ecosystem. We're here to support you with the tools and resources you need to build trusted apps.
28 Jan 2025 6:00pm GMT
16 Oct 2024
Planet Maemo
Adding buffering hysteresis to the WebKit GStreamer video player
The <video> element implementation in WebKit does its job by using a multiplatform player that relies on a platform-specific implementation. In the specific case of GLib platforms, which base their multimedia on GStreamer, that's MediaPlayerPrivateGStreamer.
The player private can have 3 buffering modes:
- On-disk buffering: This is the typical mode on desktop systems, but it is frequently disabled on purpose on embedded devices to avoid wearing out their flash storage memories. All the video content is downloaded to disk, and the buffering percentage refers to the total size of the video. A GstDownloadBuffer element is present in the pipeline in this case. Buffering level monitoring is done by polling the pipeline every second, using the fillTimerFired() method.
- In-memory buffering: This is the typical mode on embedded systems and on desktop systems in case of streamed (live) content. The video is downloaded progressively and only the part of it ahead of the current playback time is buffered. A GstQueue2 element is present in the pipeline in this case. Buffering level monitoring is done by listening to GST_MESSAGE_BUFFERING bus messages and using the buffering level stored in them. This is the case that motivated the refactoring described in this blog post: it's what we actually wanted to correct on Broadcom platforms, and what prompted the addition of hysteresis on all platforms.
- Local files: Files, MediaStream sources and other special origins of video don't do buffering at all (no GstDownloadBuffer nor GstQueue2 element is present in the pipeline). They work like the on-disk buffering mode in the sense that fillTimerFired() is used, but the reported level is relative, much like in the streaming case. In the initial version of the refactoring I was unaware of this third case, and only realized it existed when tests triggered the assert that I added to ensure that the on-disk buffering method was working in GST_BUFFERING_DOWNLOAD mode.
The current implementation (actually, its wpe-2.38 version) was showing some buffering problems on some Broadcom platforms when doing in-memory buffering. The buffering levels monitored by MediaPlayerPrivateGStreamer weren't accurate because the Nexus multimedia subsystem used on Broadcom platforms was doing its own internal buffering. Data wasn't being accumulated in the GstQueue2 element of playbin, because BrcmAudFilter/BrcmVidFilter was accepting all the buffers that the queue could provide. Because of that, the player private buffering logic was erratic, leading to many transitions between "buffer completely empty" and "buffer completely full". This, in turn, caused many transitions between the HaveEnoughData, HaveFutureData and HaveCurrentData readyStates in the player, leading to frequent pauses and unpauses on Broadcom platforms.
![](https://eocanha.org/blog/wp-content/uploads/2024/10/0.00.03.621786886-video-player-0_PAUSED_PLAYING-1024x137.png)
So, one of the first things I tried to solve this issue was to ask the Nexus PlayPump (the subsystem in charge of internal buffering in Nexus) about its internal levels, and add that to the levels reported by GstQueue2. There's also a GstMultiQueue in the pipeline that can hold a significant amount of buffers, so I also asked it for its level. Still, the buffering level instability was too high, so I added a moving average implementation to try to smooth it, as sketched below.
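For illustration, here is a minimal sketch of such a smoothing filter (not the actual WebKit code; the class name and window size are mine):

```cpp
#include <array>
#include <cstddef>

// Fixed-window moving average used to smooth the combined buffering level
// (GstQueue2 + PlayPump + GstMultiQueue). The window size is arbitrary.
class MovingAverage {
public:
    double push(double sample)
    {
        // Replace the oldest sample with the new one and update the running sum.
        m_sum += sample - m_samples[m_index];
        m_samples[m_index] = sample;
        m_index = (m_index + 1) % m_samples.size();
        if (m_count < m_samples.size())
            m_count++;
        return m_sum / m_count;
    }

private:
    std::array<double, 10> m_samples { };
    size_t m_index { 0 };
    size_t m_count { 0 };
    double m_sum { 0 };
};
```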
All these tweaks only make sense on Broadcom platforms, so they were guarded by ifdefs in a first version of the patch. Later, I migrated those dirty ifdefs to the new quirks abstraction added by Phil. A challenge of this migration was that I needed to store some attributes that were previously considered part of MediaPlayerPrivateGStreamer. They still had to be somehow linked to the player private, but only accessible by the platform-specific code of the quirks. A special HashMap attribute stores those quirk attributes in an opaque way, so that only the specific quirk they belong to knows how to interpret them (using downcasting). I tried to use move semantics when storing the data, but was bitten by object slicing when trying to move instances of the superclass. In the end, moving the responsibility of creating the unique_ptr that stores the concrete subclass to the caller did the trick. The pattern looks roughly like the sketch below.
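In outline, the pattern is something like this (a simplified sketch, with std::unordered_map standing in for WTF::HashMap; all names are illustrative, not the actual WebKit types):

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Opaque base type: the player private stores these without knowing what's
// inside; only the owning quirk interprets them, via downcasting.
struct QuirkState {
    virtual ~QuirkState() = default;
};

struct BroadcomQuirkState final : QuirkState {
    double smoothedBufferingLevel { 0 };
};

class PlayerPrivate {
public:
    // The caller creates the unique_ptr of the concrete subclass. Moving the
    // pointer transfers ownership without the object slicing that moving
    // superclass instances by value would cause.
    void setQuirkState(const std::string& key, std::unique_ptr<QuirkState> state)
    {
        m_quirkStates[key] = std::move(state);
    }

    QuirkState* quirkState(const std::string& key)
    {
        auto it = m_quirkStates.find(key);
        return it == m_quirkStates.end() ? nullptr : it->second.get();
    }

private:
    std::unordered_map<std::string, std::unique_ptr<QuirkState>> m_quirkStates;
};

// Inside the Broadcom-specific quirk, which knows the concrete type:
// auto* state = static_cast<BroadcomQuirkState*>(player.quirkState("broadcom"));
```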
Even with all those changes, undesirable swings in the buffering level kept happening, and a careful analysis of the causes showed that the buffering level was being monitored from different places (at different moments), so sometimes the level was regarded as "enough" and, the moment right after, as "insufficient". This was because the buffering level threshold was one single value. That's something that a hysteresis mechanism (with low and high watermarks) can solve: a logical level change to "full" only happens when the level goes above the high watermark, and a logical level change to "low" only when it goes below the low watermark.
For the threshold change detection to work, we need to know the previous buffering level. There's a problem, though: the current code checked the levels from several scattered places, so only one of those places (the first one that detected the threshold crossing at a given moment) would react properly. The other places would miss the detection and operate improperly, because the "previous buffering level" value had been overwritten with the new one by the earlier evaluation. To solve this, I centralized the detection in a single place "per cycle" (in updateBufferingStatus()), and then used the detection conclusions from updateStates(), along the lines of the sketch below.
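The resulting watermark logic boils down to something like this (a simplified sketch, not the actual WebKit code; the threshold values are illustrative):

```cpp
// Illustrative low/high watermarks, as buffering percentages.
static constexpr double lowWatermark = 20.0;
static constexpr double highWatermark = 80.0;

// Called from a single place per cycle (cf. updateBufferingStatus()), so the
// previous state isn't clobbered before every interested party has seen it.
bool updateBufferingState(bool wasBufferFull, double bufferingLevel)
{
    if (!wasBufferFull && bufferingLevel >= highWatermark)
        return true;  // only flips to "full" above the high watermark
    if (wasBufferFull && bufferingLevel <= lowWatermark)
        return false; // only flips to "low" below the low watermark

    // Levels between the watermarks keep the previous state, absorbing the
    // small oscillations that caused the pause/unpause churn.
    return wasBufferFull;
}
```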
So, with all this in mind, I refactored the buffering logic in https://commits.webkit.org/284072@main, and now WebKit GStreamer has much more robust buffering code than before. The instabilities observed on Broadcom devices were gone and I could, at last, close Issue 1309.
16 Oct 2024 6:12am GMT
10 Sep 2024
Planet Maemo
Don’t shoot yourself in the foot with the C++ move constructor
Move semantics can be very useful to transfer ownership of resources, but like many other C++ features, it's one more double-edged sword that can harm you in new and interesting ways if you don't read the small print.
For instance, if object moving involves super and subclasses, you have to keep an extra eye on what's actually happening. Consider the following classes A and B, where the latter inherits from the former:
```cpp
#include <stdio.h>
#include <utility>

#define PF printf("%s %p\n", __PRETTY_FUNCTION__, this)

class A {
public:
    A() { PF; }
    virtual ~A() { PF; }
    A(A&& other) { PF; std::swap(i, other.i); }

    int i = 0;
};

class B : public A {
public:
    B() { PF; }
    virtual ~B() { PF; }
    B(B&& other) { PF; std::swap(i, other.i); std::swap(j, other.j); }

    int j = 0;
};
```
If your project is complex, it would be natural that your code involves abstractions, with part of the responsibility held by the superclass, and some other part by the subclass. Consider also that some of that code in the superclass involves move semantics, so a subclass object must be moved to become a superclass object, then perform some action, and then moved back to become the subclass again. That's a really bad idea!
Consider this usage of the classes defined before:
```cpp
int main(int, char* argv[])
{
    printf("Creating B b1\n");
    B b1;
    b1.i = 1;
    b1.j = 2;
    printf("b1.i = %d\n", b1.i);
    printf("b1.j = %d\n", b1.j);

    printf("Moving (B)b1 to (A)a. Which move constructor will be used?\n");
    A a(std::move(b1));
    printf("a.i = %d\n", a.i);
    // This may be reading memory beyond the object boundaries, which may not be
    // obvious if you think that (A)a is sort of a (B)b1 in disguise, but it's not!
    printf("(B)a.j = %d\n", reinterpret_cast<B&>(a).j);

    printf("Moving (A)a to (B)b2. Which move constructor will be used?\n");
    B b2(reinterpret_cast<B&&>(std::move(a)));
    printf("b2.i = %d\n", b2.i);
    printf("b2.j = %d\n", b2.j);
    printf("^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place\n");

    printf("Destroying b2, a, b1\n");
    return 0;
}
```
If you've read the code, those printfs will have already given you some hints about the harsh truth: if you move a subclass object to become a superclass object, you're losing all the subclass specific data, because no matter if the original instance was one from a subclass, only the superclass move constructor will be used. And that's bad, very bad. This problem is called object slicing. It's specific to C++ and can also happen with copy constructors. See it with your own eyes:
```
Creating B b1
A::A() 0x7ffd544ca690
B::B() 0x7ffd544ca690
b1.i = 1
b1.j = 2
Moving (B)b1 to (A)a. Which move constructor will be used?
A::A(A&&) 0x7ffd544ca6a0
a.i = 1
(B)a.j = 0
Moving (A)a to (B)b2. Which move constructor will be used?
A::A() 0x7ffd544ca6b0
B::B(B&&) 0x7ffd544ca6b0
b2.i = 1
b2.j = 0
^^^ Oops!! Somebody forgot to copy the j field when creating (A)a. Oh, wait... (A)a never had a j field in the first place
Destroying b2, a, b1
virtual B::~B() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6b0
virtual A::~A() 0x7ffd544ca6a0
virtual B::~B() 0x7ffd544ca690
virtual A::~A() 0x7ffd544ca690
```
Why can something that seems so obvious become such a problem, you may ask? Well, it depends on the context. It's not unusual for the codebase of a long-lived project to have started using raw pointers for everything, then switched to using references as a way to get rid of null pointer issues when possible, and finally switched to whole objects and copy/move semantics to get rid of pointer issues (references are just pointers in disguise after all, and there are ways to produce null and dangling references by mistake). But this last step of moving from references to copy/move semantics on whole objects comes with the small object slicing nuance explained in this post, and when the size of the project and all the different things to take into account steal your focus, it's easy to forget about this.
So, please remember: never use move semantics that convert your precious subclass instance into a superclass instance thinking that the subclass data will survive. You may regret it and inadvertently create difficult-to-debug problems.
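If you want the compiler to catch this mistake for you, one common guard (a sketch in the spirit of C++ Core Guidelines rule C.67, not code from any particular project) is to make the copy/move constructors of the polymorphic base class protected:

```cpp
#include <utility>

class A {
public:
    A() = default;
    virtual ~A() = default;
    int i = 0;

protected:
    // Only subclasses may use these, so slicing a B into an A now fails to
    // compile instead of silently dropping B's fields.
    A(const A&) = default;
    A(A&&) = default;
};

class B : public A {
public:
    B() = default;
    B(B&&) = default; // still free to move whole B objects
    int j = 0;
};

int main()
{
    B b1;
    B b2(std::move(b1)); // fine: the full subclass object is moved
    // A a(std::move(b2)); // compile error: A's move constructor is protected
    return 0;
}
```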
Happy coding!
10 Sep 2024 7:58am GMT
17 Jun 2024
Planet Maemo
Incorporating 3D Gaussian Splats into the graphics pipeline
3D Gaussian splatting is the emerging rendering technique that is overtaking NeRFs. Since it is centered around point primitives, it is more compatible with traditional graphics pipelines that already support point rendering.
Gaussian splats essentially enhance the concept of point rendering by converting the point primitive into a 3D ellipsoid, which is then projected into 2D during the rendering process. This concept was initially described in 2002 [3], but the technique of extending Structure from Motion scans in this way was only detailed more recently [1].
In this post, I explore how to integrate Gaussian splats into the traditional graphics pipeline. This allows them to be used alongside triangle-based primitives and interact with them through the depth buffer for occlusion (see header image). This approach also simplifies deployment by eliminating the need for CUDA.
Storage
The original implementation uses .ply files as its checkpoint format, focusing on maintaining training-relevant data structures at the expense of storage efficiency, which leads to increased file sizes.
For example, it stores the covariance as a scaling vector and a rotation quaternion, necessitating reconstruction during rendering. A more efficient approach is to exploit the symmetry of the covariance matrix, storing only the diagonal and upper triangular values, thereby eliminating the reconstruction step and reducing storage requirements.
![](https://www.rojtberg.net/wp-content/uploads/2024/06/psize-1024x579.webp)
Further analysis of the storage usage for each attribute shows that the spherical harmonics of orders 1-3 are the main contributors to the file size. However, according to the ablation study in the original publication [1], these harmonics only lead to a modest PSNR improvement of 0.5.
Therefore, the most straightforward way to decrease storage is by discarding the higher-order spherical harmonics. Additionally, the level 0 spherical harmonics can be converted into a diffuse color and merged with opacity to form a single RGBA value. These simple yet effective methods were implemented in one of the early WebGL implementations, resulting in the .splat format. As an added benefit, this format can be easily interpreted by viewers unaware of Gaussian splats as a simple colored point cloud:
![](https://www.rojtberg.net/wp-content/uploads/2024/06/opoints-1024x576.webp)
By directly storing the covariance as previously mentioned, we can reduce the precision from float32 to float16, thereby halving the storage needed for that data. Furthermore, since most splats have limited spatial extents, we can also utilize float16 for position data, yielding additional storage savings.
![](https://www.rojtberg.net/wp-content/uploads/2024/06/psize_splat-1024x564.webp)
With these changes, we achieve a storage requirement of 22 bytes per splat (three float16 values for position, six float16 values for the covariance, and four bytes for RGBA: 6 + 12 + 4 bytes), in contrast to the 44 bytes needed by the .splat format and 236 bytes in the original implementation. Thus, we have attained a 10x reduction in storage compared to the original implementation simply by using more suitable data types.
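As an illustration, such a 22-byte layout could look like this (a sketch; the struct and field names are mine, not a formal specification of the format):

```cpp
#include <cstdint>

#pragma pack(push, 1)
// One splat, packed to 22 bytes. The uint16_t fields hold IEEE 754 half
// floats (float16); conversion from float32 happens when writing the file.
struct PackedSplat {
    uint16_t position[3];   // x, y, z as float16                  ->  6 bytes
    uint16_t covariance[6]; // symmetric 3x3: diagonal + upper tri -> 12 bytes
    uint8_t rgba[4];        // diffuse color merged with opacity   ->  4 bytes
};
#pragma pack(pop)

static_assert(sizeof(PackedSplat) == 22, "expected 22 bytes per splat");
```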
Blending
The image formation model presented in the original paper [1] is similar to NeRF rendering, to which it is explicitly compared. It involves casting a ray and observing its intersection with the splats, which leads to front-to-back blending. This is precisely the approach taken by the provided CUDA implementation.
Blending remains a component of the fixed-function unit within the graphics pipeline, which can be set up for front-to-back blending [2] by using the factors (one_minus_dest_alpha, one) and by multiplying color and alpha in the shader as color.rgb * color.a. This results in the following equation:
\begin{aligned}C_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} C_{src} &+ C_{dst}\\ \alpha_{dst} &= (1 - \alpha_{dst})\cdot\alpha_{src} &+ \alpha_{dst}\end{aligned}
However, this method requires the framebuffer alpha value to be zero before rendering the splats, which is not typically the case as any previous render pass could have written an arbitrary alpha value.
A simple solution is to switch to back-to-front sorting and use the standard alpha blending factors (src_alpha, one_minus_src_alpha) for the following blending equation:
C_{dst} = \alpha_{src} \cdot C_{src} + (1 - \alpha_{src}) \cdot C_{dst}
This allows us to regard Gaussian splats as a special type of particles that can be rendered together with other transparent elements within a scene.
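In OpenGL terms, the two setups discussed above would be configured roughly as follows (an illustrative sketch; the shader-side premultiplication required by the front-to-back case is not shown):

```cpp
#include <GL/gl.h>

// Front-to-back compositing: assumes destination alpha starts at zero and
// the shader outputs premultiplied color (color.rgb * color.a).
void setupFrontToBackBlending()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
}

// Back-to-front compositing: standard "over" alpha blending, with no
// assumption about the framebuffer's initial alpha value.
void setupBackToFrontBlending()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
```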
References
- Kerbl, Bernhard, et al. "3d gaussian splatting for real-time radiance field rendering." ACM Transactions on Graphics 42.4 (2023): 1-14.
- Green, Simon. "Volumetric particle shadows." NVIDIA Developer Zone (2008).
- Zwicker, Matthias, et al. "EWA splatting." IEEE Transactions on Visualization and Computer Graphics 8.3 (2002): 223-238.
17 Jun 2024 1:28pm GMT
18 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Deployment of future community TDMoIP hub
I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.
So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.
Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.
Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org
But I digress. So today, there was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed specifically for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~ 100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb whose internal GPS-DO serves as a grandmaster clock for the TDM network.
The equipment deployed in this installation currently contains:
- a rather beefy Supermicro 2U server with an EPYC 7113P CPU and 4x PCIe slots, two of which are populated with Digium TE820 cards, resulting in a total of 16 E1 ports
- an icE1usb with RS422 interface board, connected via 100m of RS422 to an Ericsson GPS03 receiver. There are two layers of over-voltage protection on the RS422 (each with gas discharge tubes and TVS) and two stages of over-voltage protection in the coaxial cable between antenna and GPS receiver.
- a Livingston Portmaster3 RAS server
- a Cisco AS5400 RAS server
For more details, see this wiki page and this ticket.
Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.
In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-Bus) connectivity will become available.
Acknowledgements
I'd like to thank everyone helping this effort, specifically:
- Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)
- noris.net for sponsoring the co-location
- sysmocom for sponsoring the EPYC server hardware
18 Sep 2022 10:00pm GMT
08 Sep 2022
Planet Openmoko
Harald "LaF0rge" Welte: Progress on the ITU-T V5 access network front
Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.
This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.
My primary interest is in Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations, and started to search for the related data sheets.
The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, and sub-pages covering the individual modules.
In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.
So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 ANs with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.
08 Sep 2022 10:00pm GMT
Harald "LaF0rge" Welte: Clock sync trouble with Digium cards and timing cables
If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.
In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.
In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.
So this Digium timing cable is needed in applications where you have more PRI lines than are possible with one card, but only a subset of your lines (spans) are connected to the public operator. The timing cable should make sure that the clock received on one port from the public operator is used as the transmit bit clock on all of the other ports, no matter on which card.
Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.
bursty bit clock changes until link is up
The first problem is that the downstream ports' transmit bit clock was jumping around in bursts every two or so seconds. You can see an oscillogram of the E1 master signal (yellow) received by one TE820 card and the transmit of the slave ports on the other card at https://people.osmocom.org/laforge/photos/te820_timingcable_problem.mp4
As you can see, for some seconds the two clocks seem to be in perfect lock/sync, but in between there are periods of immense clock drift.
What I'd have expected is the behavior that can be seen at https://people.osmocom.org/laforge/photos/te820_notimingcable_loopback.mp4 - which shows a similar setup but without the use of a timing cable: Both the master clock input and the clock output were connected on the same TE820 card.
As I found out much later, this problem only occurs until any of the downstream/slave ports is fully OK/GREEN.
This is surprising, as any other E1 equipment I've seen always transmits at a constant bit clock irrespective whether there's any signal in the opposite direction, and irrespective of whether any other ports are up/aligned or not.
But ok, once you adjust your expectations to this Digium peculiarity, you can actually proceed.
clock drift between master and slave cards
Once any of the spans of a slave card on the timing bus are fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at the very first glance.
When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(
Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
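To put that figure into perspective, a quick back-of-the-envelope calculation (my own, assuming the nominal E1 bit rate of 2.048 Mbit/s): 2.048 × 10⁶ bit/s × 12.5 × 10⁻⁹ ≈ 0.026 bit/s, i.e. roughly one slipped bit every 39 seconds, or a full 256-bit E1 frame of slip every three hours or so.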
This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.
The work-around
If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.
In this setup, your slave card[s] will have perfect bit clock sync/lock.
It's just rather sad that you need to sacrifice ports just to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).
08 Sep 2022 10:00pm GMT