01 Nov 2024
JBoss Blogs
Keycloak 26.0.5 released
To download the release go to .

HIGHLIGHTS

LDAP USERS ARE CREATED AS ENABLED BY DEFAULT WHEN USING MICROSOFT ACTIVE DIRECTORY

If you are using Microsoft AD and creating users through the administrative interfaces, the user will be created as enabled by default. In previous versions, it was only possible to update the user status after setting a (non-temporary) password for the user. This behavior was not consistent with other built-in user storages, nor with the other LDAP vendors supported by the LDAP provider.

UPGRADING

Before upgrading refer to for a complete list of changes.

ALL RESOLVED ISSUES

BUGS

* Selection list does not close after outside click admin/ui
* Fix v2 login layout login/ui
* No message for `policyGroupsHelp` admin/ui
* Customizable footer (Keycloak 26) not displaying in keycloak.v2 login theme login/ui
* Remove inaccurate statement about master realm imports docs
* [26.0.2] Migration from 25.0.1 Identity Provider Errors identity-brokering
* Do not rely on the `pwdLastSet` attribute when updating AD entries ldap
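For illustration, with this change a user created through the Admin REST API (or the admin console) now starts out enabled, without a password having been set first. A minimal sketch, assuming a realm named myrealm and an admin access token in ${TOKEN} (host, realm, and user names are placeholders):

```shell
curl -X POST "https://keycloak.example.com/admin/realms/myrealm/users" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"username": "jdoe", "enabled": true}'
```

Previously, against Active Directory, such a user would effectively remain disabled until a non-temporary password was set; as of this release the enabled state takes effect immediately, matching the other user storage providers.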
01 Nov 2024 12:00am GMT
30 Oct 2024
JBoss Blogs
Quarkus 3.16 - OpenTelemetry Logging, LGTM Quarkus dashboard and too many things to list here
After the 3.15 LTS release comes Quarkus 3.16 and a lot of new features and enhancements. Quarkus 3.16 is the result of two months' worth of work, so it is more packed than your usual Quarkus minor release. We went directly to 3.16.1 due to the inclusion of a last-minute fix. Notable changes are:
* - Drop the compatibility layer for the Big Reactive Rename
* - OpenTelemetry Logging support
* - LGTM Quarkus dashboard
* - Generate reflection-free Jackson deserializers
* - Quarkus REST - Support record parameter containers
* - Introduce per-invocation override of the REST Client's base URL
* - Add quarkus-oidc-client-registration extension
* - Add new AuthorizationPolicy annotation to bind a named HttpSecurityPolicy to Jakarta REST endpoints
* - Add OIDC Client SPI
* - Support @PermissionsAllowed defined on a meta-annotation
* - Introduce OidcResponseFilter
* - Support for two or more authentications for a single request
* - Support Keycloak Dev Service when OIDC client is used without the OIDC extension
* - Integrate GraphQL clients with the TLS registry extension
* - Integrate Keycloak Admin Client with the TLS registry
* - Auto log for Dev Services in containers
* - Add HTTP access log to the Dev UI
* - Allow multiple formats and themes in the Config Doc generator

Note that we haven't forgotten 3.15 LTS: a 3.15.2 LTS release is in the works and will be released in November. We are carefully selecting the fixes we will backport to it.

UPDATE

To update to Quarkus 3.16, we recommend updating to the latest version of the Quarkus CLI and running:

quarkus update

Note that quarkus update can update your applications from any version of Quarkus (including 2.x) to Quarkus 3.16. For more information about the adjustments you need to make to your applications, please refer to the .

WHAT'S NEW?

BIG REACTIVE RENAME COMPATIBILITY LAYER DROPPED

Remember the Big Reactive Rename?
It happened in 3.9 with the ultimate goal of avoiding confusion between extensions that have a reactive core and support reactive and non-reactive workloads equally well, and extensions that are purely designed as reactive. A lot of extensions were renamed, and we put in place relocations both for the artifacts and for the configuration. Compatibility layers have a cost, and we decided to drop this one in 3.16, after the 3.15 LTS release. If you encounter issues with this change, please refer to the . If you are using quarkus update to update to each new version, the changes were handled for you already.

OPENTELEMETRY LOGGING

Quarkus 3.16 supports distributed logging via OpenTelemetry Logging. This is the natural continuation of the OpenTelemetry work in Quarkus. This support is part of the existing OpenTelemetry extension and can easily be enabled via configuration properties. To learn more about it, have a look at the .

LGTM DASHBOARD

When using the LGTM Dev Services, an out-of-the-box Quarkus dashboard is now provided.

QUARKUS REST

In Quarkus REST, you can use custom classes as parameters of your REST methods, but records were not supported. They now are, as of Quarkus 3.16. When using the REST Client, providing a URL is mandatory and you usually configure it globally. However, from time to time, you might want to configure it per invocation. The @Url annotation was introduced for that: annotate a parameter of your REST Client method with it and you can provide a URL dynamically.

JACKSON

You might remember that in Quarkus 3.14, we introduced . And you might have wondered "where are my faster reflection-free deserializers"? They just landed in 3.16!
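The record parameter containers mentioned above can be pictured with a plain Java record grouping related parameters; the Quarkus-specific wiring (using the record as a resource-method parameter) is omitted here, and all names are hypothetical:

```java
// A plain record grouping related query parameters. In Quarkus 3.16+,
// Quarkus REST can bind request parameters to such record containers.
record SearchFilters(String q, int page, int size) {
    // Helper used here only to make the sketch self-contained and testable.
    String toQueryString() {
        return "q=" + q + "&page=" + page + "&size=" + size;
    }
}

public class RecordParamSketch {
    public static void main(String[] args) {
        SearchFilters filters = new SearchFilters("quarkus", 1, 20);
        System.out.println(filters.toQueryString()); // q=quarkus&page=1&size=20
    }
}
```

The appeal of records here is that the container is immutable and the component names map naturally onto parameter names, avoiding a hand-written bean with getters and setters.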
SECURITY

As usual, this version comes with several new features and enhancements related to our security layer:
* - Add quarkus-oidc-client-registration extension - see here
* - Add new AuthorizationPolicy annotation to bind a named HttpSecurityPolicy to Jakarta REST endpoints - see here
* - Add OIDC Client SPI - see here
* - Support @PermissionsAllowed defined on a meta-annotation - see here
* - Introduce OidcResponseFilter - see here
* - Support for two or more authentications for a single request - described
* - Support Keycloak Dev Service when OIDC client is used without the OIDC extension

TLS REGISTRY EVERYWHERE

The TLS registry was introduced in Quarkus a while ago, and we are iterating in each version to migrate more extensions to it. In Quarkus 3.16, two new extensions have been adapted to rely on the centralized TLS registry:
* SmallRye GraphQL Client
* Keycloak Admin Client

DEV UI

The Dev UI is continuously enhanced, but we wanted to highlight a very nice addition: logs from your Dev Services containers and HTTP access logs are now available in the Dev UI.

CONFIGURATION DOCUMENTATION

When developing extensions, it can be handy to publish your configuration documentation. Until now, it was only possible to publish it in Asciidoc. With 3.16, you can also generate Markdown by passing markdown to the configuration of the Config Doc Maven Plugin.

PLATFORM COMPONENT UPGRADES

CAMEL QUARKUS

Camel Quarkus has been updated to 3.16.0.

QUARKUS CXF

Quarkus CXF 3.16 was released and is now available in . Check the release notes for more information about what is new in this release.

FULL CHANGELOG

You can get the full changelog of , , and on GitHub.

CONTRIBUTORS

The Quarkus community is growing and has now . Many thanks to each and every one of them.
In particular for the 3.16 release, thanks to AB, Adriano Moreira, Akulov S V, Ales Justin, Alex Martel, Alexandros Antonakakis, Alexey Loubyansky, Andreas Stangl, Andy Damevin, Auri Munoz, AxiomaticFixedChimpanzee, Bassel Rachid, Bruno Baptista, Chris Cranford, Chris Laprun, Christian Navolskyi, Claudio Miranda, Clement Escoffier, Dale Peakall, Daniel Bobbert, Daniel Cunha, Daniel Ezihe, Dannier Leonides Galicia Chinchilla, David M. Lloyd, Davide D'Alto, Dimitris Polissiou, Domenico Briganti, Falko Modler, Foivos Zakkak, Francesco Nigro, Galder Zamarreño, George Gastaldi, Georgios Andrianakis, Guillaume Smet, Gunnar Morling, Gunther C. Wenda, Holly Cummins, Inaki Villar, Ioannis Canellos, Jakub Gardo, Jakub Jedlicka, Jan Martiska, jcarranzan, Jeremy Whiting, Jerome Prinet, Jonathan Kolberg, Jorge Solórzano, Julien Ponge, Jérémie Bresson, Jérémie Panzer, Katia Aresti, KERN Christian, Konrad Durnoga, KS, Ladislav Thon, Lars, Laurent Perez, Loic Hermann, Lorenzo De Francesco, Loïc Hermann, Loïc Mathieu, luneo7, Marc Nuri, Marcel Stör, Marcelo Ataxexe Guimarães, Marek Skacelik, mariofusco, marko-bekhta, Martin Bartoš, Martin Kouba, Matej Novotny, Matheus Cruz, Matthias Schorsch, mauroantonio.depalma, Max Rydahl Andersen, Maximilian Rehberger, Melloware, Michael Edgar, Michal Maléř, Michal Vavřík, Nathan Erwin, Nicholas Kolatsis, Ozan Gunalp, Ozzy Osborne, Paul6552, Paulo Casaes, Peer Bech Hansen, Peter Palaga, peubouzon, PhilKes, Phillip Krüger, polarctos, Ralf Ueberfuhr, rghara, Robert Stupp, Roberto Cortez, RobinDM, Rod Cheater, Rolfe Dlugy-Hegwer, Roman Lovakov, Rostislav Svoboda, Sanne Grinovero, Sebastian Schuster, Sergey Beryozkin, Seto, sku20, Stéphane Épardaud, Thomas Canava, Thomas Segismont, Tiago Bento, tmulle, Vincent Sevel, xstefank, yamada-y0, Yasser Greyeb, Yoann Rodière, Yurii Dubinka, and Žan Ožbot.

COME JOIN US

We value your feedback a lot so please report bugs, ask for improvements… Let's build something great together!
If you are a Quarkus user or just curious, don't be shy and join our welcoming community:
* provide feedback on ;
* craft some code and ;
* discuss with us on and on the ;
* ask your questions on .
30 Oct 2024 12:00am GMT
Keycloak 26.0.4 released
To download the release go to .

HIGHLIGHTS

LDAP USERS ARE CREATED AS ENABLED BY DEFAULT WHEN USING MICROSOFT ACTIVE DIRECTORY

If you are using Microsoft AD and creating users through the administrative interfaces, the user will be created as enabled by default. In previous versions, it was only possible to update the user status after setting a (non-temporary) password for the user. This behavior was not consistent with other built-in user storages, nor with the other LDAP vendors supported by the LDAP provider.

UPGRADING

Before upgrading refer to for a complete list of changes.

ALL RESOLVED ISSUES

BUGS

* Selection list does not close after outside click admin/ui
* Fix v2 login layout login/ui
* No message for `policyGroupsHelp` admin/ui
* Customizable footer (Keycloak 26) not displaying in keycloak.v2 login theme login/ui
* Remove inaccurate statement about master realm imports docs
* [26.0.2] Migration from 25.0.1 Identity Provider Errors identity-brokering
* Do not rely on the `pwdLastSet` attribute when updating AD entries ldap
30 Oct 2024 12:00am GMT
28 Oct 2024
JBoss Blogs
WildFly 35 moves to SE 17, drops SE 11
As I noted in the , our plan was that WildFly 34.x would be the final release series to run on Java SE 11. Beginning with the WildFly 35 release, the minimum Java SE version for a WildFly server will be SE 17. We have now executed on this plan by updating the WildFly 'main' branch to build SE 17 binaries. If you are consuming WildFly nightly builds or building WildFly snapshots, you will need to move to an SE 17 baseline in those workflows. Any WildFly 35 releases will require SE 17 as a minimum.

Note: We recommend that you run WildFly on SE 21, as that is the latest supported LTS release.

Note: The WildFly 34.0.1 release will support SE 11. If we do any further releases in the 34.x series, they will support SE 11.

The primary driver for this change is the fact that libraries we integrate are beginning to withdraw SE 11 support in their feature development branches, and the availability of bug fix releases in older branches is increasingly uncertain. We also don't wish to continue to act as an innovation constraint on projects we integrate that are looking to move past SE 11. We recognize that changing SE versions can be a significant task for some users, so we didn't make this decision lightly. I hope that the fact that WildFly has supported SE 17 since the WildFly 25 release has made it easier for our users to prepare for this change.

Best regards,
Brian
28 Oct 2024 12:00am GMT
Save the Date: WildFly Mini Conference on Nov 20th
Hello, WildFly Community! We are happy to announce that the next WildFly Mini Conference is scheduled for November 20th, 2024! Building on the success of our previous mini conference event, we are eager to bring together our community for another day of insightful discussions and online networking. As you know, a few weeks ago we sent a . Based on the feedback we received, the most voted topics were:
* Jakarta EE11: Dive deep into the latest advancements and explore how Jakarta EE11 is shaping the future of enterprise Java.
* Latest News on WildFly: Stay updated with the latest news on WildFly.
* DevOps Related Topics: Discover best practices and innovative approaches to integrating WildFly within your DevOps pipelines.

We are committed to not only covering these popular topics but also addressing the suggestions submitted by our community. Your input is invaluable, and we're dedicated to crafting a conference agenda that reflects your interests and needs. Stay tuned for upcoming announcements regarding the detailed schedule and our lineup of expert speakers. We are looking forward to another successful event and can't wait to connect with all of you on November 20th!

Best regards,
Flavia
28 Oct 2024 12:00am GMT
25 Oct 2024
JBoss Blogs
Keycloak DevDay 2025 Pre-Conf Event Announcement
is just around the corner, and we would like to invite you to a special pre-event: the Keycloak Hackathon!

HACKATHON: ACTIVELY HELP SHAPE KEYCLOAK

On the day before DevDay, on March 5, our hackathon will give you the opportunity to actively contribute to the further development of Keycloak. Whether you write code, work on the documentation, improve translations or maintain issues in the issue tracker - everyone can take part. The hackathon offers you the opportunity to pitch new ideas and work together in small groups on exciting projects.

SCHEDULE OF THE HACKATHON

10:00: Start of the first iteration with a pitch round. Here you can present your ideas and topics, ranging from new features and bug fixes to documentation improvements. The teams start working on the pitched topics. Our goal is to achieve measurable results by the end of the day - be it through code contributions, documentation or other important improvements for the Keycloak community.
12:30: Lunch break
13:30: Another start for everyone arriving later in the day.
17:00: Closing with presentation and honouring of the results

WHY SHOULD YOU PARTICIPATE?

The hackathon is a great opportunity to network and actively participate with other members of the Keycloak community. It's the perfect chance for:
* Participants arriving early who want to make good use of the previous day.
* Experienced contributors and maintainers who want to advance their projects or work on new topics.
* Newcomers who want to contribute for the first time and get involved in the community - whether through code, documentation or organisational tasks.

IDEAS AND TOPICS

If you have an idea or a topic that you would like to work on at the hackathon, get in touch with us! We will be happy to support you with the preparation and help you present your topic successfully. If you would like to work on a topic but don't yet know exactly what you would like to take part in, please let us know.
We try to organize teams and topics at an early stage so that you can get in touch with like-minded people in advance. HOW CAN YOU TAKE PART? Participation is easy: Grab your free pre-event ticket on the and join us! The hackathon offers a great opportunity to contribute in a relaxed atmosphere and to talk to other participants. We look forward to seeing you at the hackathon and working together on the future of Keycloak. Let's code, document, and contribute - together for Keycloak!
25 Oct 2024 12:00am GMT
24 Oct 2024
JBoss Blogs
Keycloak 26.0.2 released
To download the release go to .

HIGHLIGHTS

LDAP USERS ARE CREATED AS ENABLED BY DEFAULT WHEN USING MICROSOFT ACTIVE DIRECTORY

If you are using Microsoft AD and creating users through the administrative interfaces, the user will be created as enabled by default. In previous versions, it was only possible to update the user status after setting a (non-temporary) password for the user. This behavior was not consistent with other built-in user storages, nor with the other LDAP vendors supported by the LDAP provider.

UPGRADING

Before upgrading refer to for a complete list of changes.

ALL RESOLVED ISSUES

BUGS

* Selection list does not close after outside click admin/ui
* Fix v2 login layout login/ui
* No message for `policyGroupsHelp` admin/ui
* Customizable footer (Keycloak 26) not displaying in keycloak.v2 login theme login/ui
* Remove inaccurate statement about master realm imports docs
* [26.0.2] Migration from 25.0.1 Identity Provider Errors identity-brokering
* Do not rely on the `pwdLastSet` attribute when updating AD entries ldap
24 Oct 2024 12:00am GMT
18 Oct 2024
JBoss Blogs
Connecting to Redis from a Jakarta EE application
In this tutorial, we'll learn how to integrate Redis into a Jakarta EE application using the Lettuce client library. Redis is a powerful in-memory data structure store, often used as a cache, message broker, or database. Lettuce is a popular Java client for Redis, providing both synchronous and asynchronous capabilities. We will use a simple ... The post appeared first on .
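As a taste of what the tutorial covers, a minimal synchronous Lettuce interaction might look like the following sketch. It assumes the io.lettuce:lettuce-core dependency on the classpath and a Redis server listening on localhost:6379, so it is not runnable standalone; the key and value are arbitrary:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class RedisSketch {
    public static void main(String[] args) {
        // Connect to a local Redis instance (assumed to be running).
        RedisClient client = RedisClient.create("redis://localhost:6379");
        try (StatefulRedisConnection<String, String> conn = client.connect()) {
            // The sync() API blocks; Lettuce also offers async and reactive APIs.
            RedisCommands<String, String> cmds = conn.sync();
            cmds.set("greeting", "hello");
            System.out.println(cmds.get("greeting"));
        } finally {
            client.shutdown();
        }
    }
}
```

In a Jakarta EE application, the RedisClient would typically be produced once by a CDI producer and injected where needed rather than created per request, since it holds connection resources.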
18 Oct 2024 5:28pm GMT
17 Oct 2024
JBoss Blogs
WildFly 34 is released!
I'm pleased to announce that the new WildFly and WildFly Preview 34.0.0.Final releases are available for download at .

NEW AND NOTABLE

This quarter we had a heavy focus on .
* WildFly Preview now includes . This feature is provided at the .
* WildFly Preview now .
* WildFly Preview now .
* As part of a , we introduced new BOMs for WildFly Preview.
* org.wildfly.bom:wildfly-ee-preview is the WildFly Preview analogue to the existing standard WildFly org.wildfly.bom:wildfly-ee BOM.
* org.wildfly.bom:wildfly-expansion-preview is the WildFly Preview analogue to the standard WildFly BOM formerly known as org.wildfly.bom:wildfly-microprofile, now renamed to org.wildfly.bom:wildfly-expansion.
* Previously, four system properties with default values were added to enable configuration of constraints affecting the HTTP management interface. In WildFly 35, attributes (backlog, connection-high-water, connection-low-water, no-request-timeout) will allow these constraints to be configured directly in the management model. This feature is provided at the .
* We updated our Hibernate ORM version .
* Along with that we updated Hibernate Search .

SUPPORTED SPECIFICATIONS

JAKARTA EE

Standard WildFly 34 is a compatible implementation of the EE 10 as well as the and the . WildFly is EE 10 Platform, Web Profile and Core Profile compatible when running on Java SE 11, Java SE 17 and Java SE 21. Evidence supporting our certification is available in the repository on GitHub:
* Jakarta EE 10 Full Platform
* Jakarta EE 10 Web Profile
* Jakarta EE 10 Core Profile

MICROPROFILE

WildFly supports numerous MicroProfile specifications. Because we no longer support MicroProfile Metrics, WildFly 34 cannot claim to be a compatible implementation of the MicroProfile 6.1 specification. However, WildFly's MicroProfile support includes implementations of the following specifications in our "full" (e.g.
standalone-full.xml) and "default" (e.g. standalone.xml) configurations, as well as our "microprofile" configurations (e.g. standalone-microprofile.xml):

MicroProfile Technology | Full/Default Configurations | MicroProfile Configuration
MicroProfile Config 3.1 | X | X
MicroProfile Fault Tolerance 4.0 | - | X
MicroProfile Health 4.0 | - | X
MicroProfile JWT Authentication 2.1 | X | X
MicroProfile LRA 2.0 | - | X
MicroProfile OpenAPI 3.1 | - | X
MicroProfile Reactive Messaging 3.0 | - | -
MicroProfile Reactive Streams Operators 3.0 | - | -
MicroProfile Rest Client 3.0 | X | X
MicroProfile Telemetry 1.1 | - | X

Certification evidence for the above specifications that are part of MicroProfile 6.1 can be found in the WildFly Certifications repository on GitHub. As noted in the section, instead of the versions listed above, WildFly Preview 34 now supports later releases of two MicroProfile specifications:
*
*

WILDFLY PREVIEW, EE 11 AND SE 17

As I noted in the , beginning with that release we are using WildFly Preview to provide a look at what we're doing for Jakarta EE 11 support. EE 11 won't go GA until later this year, and we don't expect standard WildFly to support EE 11 before the WildFly 36 release. But there are milestone, Release Candidate and Final releases of many EE 11 specs and implementations available, so we are providing those in WildFly Preview. This means that for a number of EE APIs, WildFly Preview no longer provides an EE 10 compatible implementation. However, for a number of specifications that are planning changes for EE 11, we are still offering the EE 10 variant. In future releases we'll shift those to the EE 11 variants. As a result of this shift to EE 11 APIs, WildFly Preview no longer supports running on Java SE 11. If you want to use WildFly Preview you'll need to use SE 17 or higher. A number of EE 11 APIs no longer produce SE 11 compatible binaries, which means an EE 11 runtime can no longer support SE 11.
The following table lists the various Jakarta EE technologies offered by WildFly Preview 34, along with information about which EE platform version the specification relates to. Note that a number of Jakarta specifications are unchanged between EE 10 and EE 11, while other EE technologies that WildFly offers are not part of EE 11.

Jakarta EE Technology | WildFly Preview Version | EE Version
Jakarta Activation | 2.1 | 10 & 11
Jakarta Annotations | 3.0 | 11
Jakarta Authentication | 3.0 | 10
Jakarta Authorization | 3.0 | 11
Jakarta Batch | 2.1 | 10 & 11
Jakarta Concurrency | 3.1 | 11
Jakarta Connectors | 2.1 | 10 & 11
Jakarta Contexts and Dependency Injection | 4.1 | 11
Jakarta Data (preview stability only) | 1.0 | 11
Jakarta Debugging Support for Other Languages | 2.0 | 10 & 11
Jakarta Dependency Injection | 2.0 | 10 & 11
Jakarta Enterprise Beans | 4.0 | 10 & 11
Jakarta Enterprise Web Services | 2.0 | 10
Jakarta Expression Language | 6.0 | 11
Jakarta Faces | 4.1 | 11
Jakarta Interceptors | 2.2 | 11
Jakarta JSON Binding | 3.0 | 10 & 11
Jakarta JSON Processing | 2.1 | 10 & 11
Jakarta Mail | 2.1 | 10 & 11
Jakarta Messaging | 3.1 | 10 & 11
Jakarta MVC (preview stability only) | 2.1 | N/A
Jakarta Pages | 3.1 | 10
Jakarta Persistence | 3.2.0 | 11
Jakarta RESTful Web Services | 4.0 | 11
Jakarta Security | 4.0.0 | 11
Jakarta Servlet | 6.1.0 | 11
Jakarta SOAP with Attachments | 3.0 | 10
Jakarta Standard Tag Library | 3.0 | 10 & 11
Jakarta Transactions | 2.0 | 10 & 11
Jakarta Validation | 3.1.0 | 11
Jakarta WebSocket | 2.2.0 | 11
Jakarta XML Binding | 4.0 | 10
Jakarta XML Web Services | 4.0 | 10

Notes:
1. This Jakarta EE 10 technology is not part of EE 11 but is still provided by WildFly.
2. Jakarta Data is a new specification in EE 11.
3. Jakarta MVC is not part of the Jakarta EE Platform or the Web or Core Profile.

JAVA SE SUPPORT

Our recommendation is that you run WildFly 34 on Java SE 21, as that is the latest LTS JDK release where we have completed the full set of testing we like to do before recommending a particular SE version. WildFly 34 is also heavily tested and runs well on Java 17 and Java 11.
Our recommendation of SE 21 over earlier LTS releases is solely because, as a general principle, we recommend being on later LTS releases, not because of any problems with WildFly on SE 17 or SE 11. However, one reason to use later SE versions is that it gets you ahead of the curve as WildFly and other projects begin to move on from supporting older SE releases. This is certainly happening, and we do not intend to support SE 11 in WildFly 35!

Warning: The WildFly 34 series will be the last to support SE 11, so if you are running WildFly on SE 11 you should move to SE 17 or 21 as soon as possible. WildFly Preview no longer supports SE 11, as the baseline for Jakarta EE 11 is SE 17.

While we recommend using an LTS JDK release, I do believe WildFly runs well on SE 23. By runs well, I mean the main WildFly testsuite runs with no more than a few failures in areas not expected to be commonly used. We want developers who are trying to evaluate what a newer JVM means for their applications to be able to look to WildFly as a useful development platform. Please note that WildFly runs in classpath mode.

INCOMPATIBLE CHANGES

We changed the Maven artifactId of the org.wildfly.bom:wildfly-microprofile user BOM to org.wildfly.bom:wildfly-expansion, so users of this BOM will need to update their poms. This BOM is intended to help developers develop applications that can run in a server provisioned using the wildfly feature pack, but which can't run in a server only using its wildfly-ee feature pack dependency. (The org.wildfly.bom:wildfly-ee BOM is used for the wildfly-ee feature pack dependencies.) For a while now the additional functionality in the wildfly feature pack has gone beyond MicroProfile, to include things like Micrometer, so we've updated to the more general 'expansion' term that we use to describe this feature pack.

RELEASE NOTES

The full WildFly 34 release notes are .
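For affected poms, the BOM rename described above amounts to a dependencyManagement update along these lines (a sketch; the version property name is a placeholder for whatever your pom already uses):

```xml
<dependencyManagement>
  <dependencies>
    <!-- formerly org.wildfly.bom:wildfly-microprofile -->
    <dependency>
      <groupId>org.wildfly.bom</groupId>
      <artifactId>wildfly-expansion</artifactId>
      <version>${version.wildfly}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Only the artifactId changes; the groupId and the import-scoped pom usage stay the same.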
Issues fixed in the underlying WildFly Core 26.0.0 and 26.0.1 releases are listed in the . Please try it out and give us your feedback, in the , or . And, with that, I'm moving on to what I think will be a very busy WildFly 35! Best regards, Brian
17 Oct 2024 12:00am GMT
Strengthening the Release Process for Quarkiverse and SmallRye
In May, we were alerted about a potential leak in the release process. We acted swiftly to mitigate the issue; fortunately, no damage was done. Even though Quarkiverse had no reported leak, during our investigation we uncovered a deeper flaw that affected not only SmallRye but also Quarkiverse. In this blog post, we'll explain the vulnerability we discovered and introduce a more secure release pipeline for both Quarkiverse and SmallRye repositories.

TL;DR: We've uncovered a security flaw in the release process for Quarkiverse and SmallRye that could have allowed malicious actors to impersonate projects and publish compromised artifacts. We've implemented a new, more secure release pipeline to address this. If you're a maintainer, you've received a pull request to migrate to the new process. Quarkus itself is not affected by this issue, only SmallRye and Quarkiverse. Please act immediately, as the old release process will be retired by October 16th, 2024. So make sure to merge the pull request before then to avoid any disruptions in your releases. If you have any questions or concerns, please contact us on or . Details on this change are . For more details on the issue, the solution, and how to adapt, read on!

THE FLAW: A CLOSER LOOK AT THE RELEASE PROCESS

To understand the flaw, it's important to first outline the release process Quarkiverse and SmallRye used. Quarkiverse and SmallRye offer development facilities to ease the development of Quarkus extensions and of SmallRye projects used in Quarkus, respectively. There is no central supervision of all these repositories; they evolve at their own pace, individually. Both organizations use GitHub repositories and GitHub Actions as their CI and automation framework. Here's how the release process worked:
1. A developer opens a pull request in the repository, updating the version number in the project's project.yaml file (see as an example).
2. The regular build workflow runs to ensure it builds successfully.
A specific pre-release flow also runs to verify that the YAML file is correctly formatted.
3. Once the pull request is merged, a release workflow is triggered.
4. The release workflow starts by preparing the release. It sets the project's version to the configured version and creates a tag with the newly updated code. It also updates the main branch (or the source branch of the pull request) to the next development version and commits this change to the branch.
5. Once the preparation step is complete, the tag is checked out, and the release artifacts are created. This phase is called release perform. During this phase, binary artifacts are created from the tagged sources. The artifacts are signed and pushed to Maven Central.

The last step, the release perform, is where the flaw exists. Here's why:
* To sign the artifacts, the workflow uses an organization-wide GPG key.
* To publish the artifacts, the workflow uses organization-wide credentials.

The GPG passphrase and the Maven Central credentials are stored as secrets in the project's GitHub repository but are shared across the entire organization. They are not freely accessible: they cannot be printed in the log (without a bit of magic) and cannot be accessed from forks. At this point, everything seems fine. Both SmallRye and Quarkiverse give maintainers great freedom to customize GitHub Actions workflows to fit their needs. This flexibility, while empowering, also introduces risks. And here we go…

THE PROBLEM: A RISK OF CREDENTIAL EXPOSURE (AND IMPERSONATION)

We said that secrets are not freely accessible. That's true, except for one case: GitHub Actions workflows (see the GitHub Actions security overview) running in the project itself can access them. Even tests can access them. Anything running during the workflow (actions, scripts…) can access these secrets… and leak them.
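The exposure can be pictured with a workflow fragment along these lines (a sketch; the secret names, action names, and script are hypothetical, not the actual Quarkiverse or SmallRye configuration):

```yaml
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # A third-party action runs with the same access as the rest of the job.
      - uses: some-org/third-party-action@v1
      - name: Perform release
        env:
          # Org-wide secrets: readable by every plugin, script, and test
          # that executes during this step.
          MAVEN_CENTRAL_TOKEN: ${{ secrets.MAVEN_CENTRAL_TOKEN }}
          GPG_PASSPHRASE: ${{ secrets.GPG_PASSPHRASE }}
        run: ./release-perform.sh
```

Nothing here is misconfigured in the usual sense; the problem is structural, since any code pulled into the job inherits the environment.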
When a developer includes an external or third-party GitHub Action, Maven or Gradle plugin, or JUnit extension in their workflow, that code gains access to the organization-wide credentials. Any code running during the workflow on the repository - not a fork - can potentially expose these secrets. The ramifications are severe:
* An attacker could release compromised yet legitimate-looking project versions signed with the organization's GPG key to Maven Central.
* Worse still, they could push malicious artifacts to Maven Central under the Quarkiverse or SmallRye banner, impersonating the entire organization.

In short, with access to these credentials, an attacker could impersonate Quarkiverse or SmallRye, bypassing typical protections like signed commits or branch protection. The vulnerability arises from the fact that these credentials are shared and available to any code running during the workflow. Despite quickly mitigating the initial SmallRye leak, discovering this larger flaw prompted us to reevaluate our release process. It became clear that we needed a more secure and resilient approach to prevent such risks in the future.

THE SOLUTION: A NEW RELEASE PROCESS

After careful consideration, we concluded that relying on organization-wide secrets for releases was no longer viable. We needed a more secure approach. At first, we explored the idea of using repository-specific credentials. While this would limit the blast radius in case of a leak, it would be difficult to manage at scale and would slow down the onboarding process. Additionally, an individual repository could still be compromised and impersonated even with this approach. Therefore, we decided against this solution. Instead, we devised a more robust and secure solution involving two repositories: one for the code being released and a separate one for executing the release perform phase itself.
Crucially, the repository with the source code no longer has access to organization-wide credentials; only the second repository does. When the second workflow is complete, it unblocks the first one, so you know when the second workflow has completed and whether it was successful.

HOW IT WORKS: A STEP-BY-STEP BREAKDOWN

With this new approach, the initial stages of the release process remain unchanged. Here's what happens now:
1. A developer opens a pull request, updating the version number in the project.yaml file.
2. The pre-release workflow is triggered within the repository, ensuring the build is correct and the version is appropriately updated.
3. Once the pull request is merged, the release process diverges from the previous approach:
* The first repository executes the preparation steps, such as version updates, tag creation, and setting the next development version.
* The release artifacts are generated but not signed or pushed to Maven Central.

At this point, a second workflow is triggered in a separate repository. This is where the critical actions happen:
* The second repository, which contains the necessary credentials (Maven Central credentials and the GPG passphrase), downloads the release artifacts.
* It verifies the integrity of the artifacts using attestations.
* The artifacts are then signed and pushed to Maven Central.

This second repository is crucial for security. It's locked down and non-modifiable, meaning no developer can customize the workflow or inadvertently introduce a vulnerability. By isolating the sensitive release steps in this secured environment, we've significantly reduced the risk of leaks or unauthorized access. This new process provides a much-needed layer of separation, ensuring that the credentials remain protected and that the possibility of a leak is greatly diminished.
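The secured side of the handoff can be sketched roughly as follows; this is a hedged illustration with hypothetical repository, artifact, and script names, not the actual locked-down workflow:

```yaml
# Runs only in the separate, non-modifiable release repository.
jobs:
  perform:
    runs-on: ubuntu-latest
    steps:
      - name: Download staged artifacts
        run: gh run download --repo quarkiverse/some-extension --name release-artifacts
      - name: Verify build provenance
        # Reject artifacts that were not built by the component repository.
        run: gh attestation verify artifacts/*.jar --owner quarkiverse
      - name: Sign and publish to Maven Central
        env:
          GPG_PASSPHRASE: ${{ secrets.GPG_PASSPHRASE }}
          MAVEN_CENTRAL_TOKEN: ${{ secrets.MAVEN_CENTRAL_TOKEN }}
        run: ./release-perform.sh
```

The key property is that the secrets only ever exist in this repository, whose workflow maintainers cannot edit, so code in the component repository never sees them.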
BALANCING SECURITY WITH DEVELOPER FREEDOM As highlighted earlier, both Quarkiverse and SmallRye strongly emphasize empowering developers by minimizing the overhead of maintaining open-source components. Our new release process maintains this philosophy, ensuring developers still have the flexibility to adjust workflows in their component repositories as needed. Developers and maintainers can continue to modify workflows, introduce custom CI steps, and tailor their processes to meet specific project needs. The only significant change is that the critical signing and publishing steps of the release process now occur in a separate, secured repository. Importantly, maintainers retain the ability to trigger a release at any time, from any branch, just as they could before. The handoff to the second repository happens seamlessly, so the developer experience remains largely the same. This flexibility remains intact for projects that have heavily customized their release pipelines (for example, incorporating pre-release validations or automating tasks like website updates, release note generation, or breaking change detection). These projects can still trigger: * Validation workflows when the project.yaml file is updated via a pull request. * Post-release workflows when a new tag is created, allowing tasks such as documentation updates or notifications to continue unhindered. By preserving this level of freedom, we ensure that developers can adapt their workflows to the needs of their projects while benefiting from a more secure release pipeline. RESILIENCE: PREPARING FOR THE UNEXPECTED The release process, by its nature, is a complex and multi-step operation where things can occasionally go wrong. While the new release pipeline adds another layer of complexity due to its split-repository design, we have built resilience into the system to mitigate potential issues.
To address this, we've ensured that the new process is idempotent, meaning it can be safely retried without causing inconsistencies or errors. If a failure occurs at any point during the release - whether due to network issues, build failures, or artifact verification problems - the process can be restarted from the failed workflow. This allows the release to proceed without needing to repeat previous steps unnecessarily. Additionally, we have built in various checks and verifications at key stages of the release process, such as verifying artifact integrity (using attestations), which must complete before moving on to the next stage. These safeguards help reduce the risk of an incomplete or erroneous release. Should any unexpected issues arise, both the component repository and the secured release repository provide detailed logs, allowing developers to diagnose and resolve problems quickly. This transparency ensures that maintainers remain in control, even when things don't go as planned. These measures aim to provide a more resilient, fault-tolerant release process that doesn't compromise on security or developer experience. CALL TO ACTION: MIGRATING TO THE NEW RELEASE PROCESS If you are a Quarkiverse or SmallRye project maintainer, you've received a pull request that updates your project to the new, more secure release process. For most maintainers, this update will be seamless and require no other changes. However, as mentioned earlier, if your project uses a customized or more sophisticated release pipeline, you may need to make a few adjustments to ensure compatibility with the new system. This could involve updating custom workflows that handle pre-validation steps, website publishing, or release note generation. Please take the time to review and test the changes in your repository to ensure everything works as expected.
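The idempotency described in the resilience section above boils down to a simple property: re-running a publish step must be a no-op if the artifact already went out. Here is a minimal Java sketch of that idea (hypothetical names; a stand-in set plays the role of Maven Central, not the real pipeline code):

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative sketch of an idempotent publish step: a retried workflow
// checks the target repository first and never double-publishes.
public class IdempotentPublish {

    private final Set<String> published = new LinkedHashSet<>(); // stands in for Maven Central

    // Returns true if this call actually published, false if it was a no-op retry.
    public boolean publish(String artifactCoordinates) {
        if (published.contains(artifactCoordinates)) {
            return false; // already released: safe to skip when the workflow is restarted
        }
        published.add(artifactCoordinates);
        return true;
    }

    public int publishedCount() {
        return published.size();
    }

    public static void main(String[] args) {
        IdempotentPublish central = new IdempotentPublish();
        System.out.println(central.publish("io.smallrye:demo:1.0")); // true: first attempt publishes
        System.out.println(central.publish("io.smallrye:demo:1.0")); // false: retry is skipped
    }
}
```

Restarting a failed workflow then simply re-invokes every step; completed steps detect their own prior success and fall through.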
IMPORTANT TIMELINE: DEPRECATION OF THE OLD RELEASE PROCESS The previous release process has now been deprecated and will be fully blocked by October 16th, 2024. After this date, releasing your project using the old pipeline will no longer be possible. Thus, you must merge the new release process pull request before this deadline to avoid disrupting your project's release cycle. For maintainers with more complex setups, we encourage you to start the migration as soon as possible to ensure a smooth transition. Roberto Cortez, George Gastaldi, and the rest of the Quarkus and SmallRye teams are here to help if you need assistance. Next Steps: * Review the Pull Request: Check the automated pull request in your repository and verify that it updates your release process to the new system. * Merge the Changes: Merge the changes before the deprecation date to avoid release interruptions. * Test Your Workflow: If you've customized your release process, run tests to ensure everything still functions as expected under the new pipeline. * Reach Out for Support: If you have any questions or need help with the migration, please contact us on or . This new release process is a vital step in improving the security of Quarkiverse and SmallRye, and your swift action in migrating will help us ensure the integrity of these projects moving forward. SUMMARY: NOTHING CHANGES FOR YOU - IT'S JUST MORE SECURE From a SmallRye and Quarkiverse developer's perspective, the release process for Quarkiverse and SmallRye remains essentially the same. You still have the freedom to modify workflows, customize release steps, and trigger releases as needed. The flexibility and control you've come to rely on haven't changed. The main difference is behind the scenes: a separate, secured repository now handles the critical steps of signing and publishing your release. This means the process is more robust, with sensitive credentials locked down, and the risk of leaks or impersonation significantly reduced.
In short, while we've enhanced the security of the release pipeline, we've done so in a way that minimizes disruption. You'll still enjoy the same developer experience - only now, with the added peace of mind that your releases are more secure than ever. A SPECIAL THANK YOU Redesigning the release process to be more secure and reliable was no small feat, and it certainly wasn't something we could accomplish without some dedicated and enthusiastic developers. I'd like to extend our heartfelt thanks to George Gastaldi and Roberto Cortez for carrying out much of the heavy lifting throughout this process. Your dedication and expertise were invaluable. I'd also like to give a special shoutout to Andres Almiray, whose support with JReleaser was absolutely instrumental. The new release process simply wouldn't have been possible without his responsiveness and guidance.
17 Oct 2024 12:00am GMT
Quarkus Newsletter #49 - October
Explore how the combination of Quarkus and LangChain4j boosts productivity and efficiency in developing Java-based AI systems, as well as the key concepts underlying the development of such applications, in "Leveraging Quarkus and LangChain4j" by Thiago dos Santos Hora. Read "Enhancing the Quarkus developer experience: New updates for IntelliJ and VS Code tools" by Mohit Suman & Angelo Zerr for information on the latest releases of Quarkus Tools for IntelliJ 2.0.2 and Quarkus Tools for Visual Studio Code 1.18.1, which deliver significant enhancements to support Quarkus and Qute, making development smoother. Check out Yatin Batra's article "Quarkus Citrus Test Tutorial" to learn how to implement and run integration tests using Quarkus with the Citrus framework for effective testing. "Effective Project Structuring for Microservices with Quarkus" by Ivelin Yanev is a great way to learn the key to leveraging Quarkus effectively by understanding how to structure your project correctly. Kostiantyn Ivanov wrote a great tutorial, "Connecting to Elasticsearch Using Quarkus", that helps you explore how to integrate Quarkus with Elasticsearch, a well-known full-text search engine and NoSQL datastore. You will also see the latest Quarkus Insights episodes, top tweets/discussions and upcoming Quarkus attended events. Check out ! Want to get newsletters in your inbox? using the on-page form.
17 Oct 2024 12:00am GMT
Keycloak 26.0.1 released
To download the release go to . HIGHLIGHTS LDAP USERS ARE CREATED AS ENABLED BY DEFAULT WHEN USING MICROSOFT ACTIVE DIRECTORY If you are using Microsoft AD and creating users through the administrative interfaces, the user will be created as enabled by default. In previous versions, it was only possible to update the user status after setting a (non-temporary) password for the user. This behavior was not consistent with other built-in user storages, nor with other LDAP vendors supported by the LDAP provider. UPGRADING Before upgrading refer to for a complete list of changes. ALL RESOLVED ISSUES BUGS * Selection list does not close after outside click admin/ui * Fix v2 login layout login/ui * No message for `policyGroupsHelp` admin/ui * Customizable footer (Keycloak 26) not displaying in keycloak.v2 login theme login/ui * Remove inaccurate statement about master realm imports docs * [26.0.2] Migration from 25.0.1 Identity Provider Errors identity-brokering * Do not rely on the `pwdLastSet` attribute when updating AD entries ldap
17 Oct 2024 12:00am GMT
10 Oct 2024
JBoss Blogs
Meet Keycloak at KubeCon Salt Lake City, Utah in Nov 2024
We are thrilled to announce that Keycloak will be at KubeCon Salt Lake City, Utah in Nov 2024. There are several Keycloak specific sessions lined up during this conference, and we will be hosting a kiosk at the Project Pavilion at KubeCon. WHAT IS KUBECON? Keycloak's presence at the previous KubeCons was a huge success, and we continue to have a lot of fun interacting with Keycloak enthusiasts, users, and newcomers alike. KubeCon is a fast-growing Cloud Native tech conference expected to have up to 8,000 developers, architects, and technical leaders onsite as well as thousands of participants virtually. KubeCon Salt Lake City will be held from Nov. 12th, 2024 through Nov. 15th, 2024, with many of the co-located events happening on Tuesday, Nov 12th, 2024. KEYCLOAK COMMUNITY MEET & GREET AT THE PROJECT PAVILION from Hitachi, , , from Red Hat and other contributors will be at the Keycloak kiosk at the . This is a great chance to meet people who use Keycloak, contribute to Keycloak, take our survey about new Keycloak features, and get some cool swag! Keycloak kiosk opening hours: * Wednesday, November 13: 3:15pm-8:00pm * Thursday, November 14: 1:45pm-5:00pm * Friday, November 15: 12:30pm-2:30pm OPENSHIFT COMMONS GATHERING The OpenShift Commons Gathering happens on Tuesday (Nov. 12th, 2024) and builds connections and collaboration across OpenShift communities, projects and stakeholders. Some maintainers from the Keycloak development team will be there during the afternoon. This gives a chance for more community Keycloak maintainers, contributors, and users to meet and share their ideas or just hang out. Access to the OpenShift Commons event is free and does not require a paid KubeCon ticket. KEYCLOAK SPECIFIC EVENTS AT KUBECON Below is the Keycloak-specific event that attendees, both in-person and virtual, can plan to attend to learn more about a Highly Available Keycloak deployed in a Multi-Site environment.
* Friday, November 15, 4:55pm - 5:30pm MST (UTC-7) By Ryan Emerson & Kameswararao Akella, Red Hat. We're preparing for KubeCon SLC 2024 and can't wait to connect with our community. Mark your calendars and join us. See you in Salt Lake City, Utah!
10 Oct 2024 12:00am GMT
Introducing Jakarta Data in WildFly Preview
I'm excited that in the 34 Beta release we were able to introduce support for into WildFly Preview. It was a bit of an unexpected last-minute thing that we were able to do this, which left us without time to provide much in the way of documentation. We'll correct that for WildFly 35, but in the meantime I'll use this blog post as a way to introduce the basics. Note In the 34 release, Jakarta Data is only available in WildFly Preview, and not in standard WildFly. It is provided at the , which is enabled out-of-the-box in WildFly Preview. JAKARTA DATA OVERVIEW My purpose in this post isn't to dive much into the details of Jakarta Data itself; there are other resources that do a good job of covering that. I want to focus here on how the WildFly Preview and Hibernate ORM integration of Jakarta Data works, so users can get going with using Jakarta Data in a WildFly server. So this next bit is very brief and high level. Jakarta Data brings the repository pattern to the Jakarta ecosystem. As explained in the :

> a repository is a mediator between an application's domain logic and the underlying data storage, be it a relational database, NoSQL database, or any other data source.
>
> In Jakarta Data, a Repository provides a structured and organized way to interact with data. It abstracts data storage and retrieval complexities, allowing you to work with domain-specific objects and perform common operations on data without writing low-level database queries.

An application developer defines a repository by providing an interface annotated with the Jakarta Data @Repository annotation. The repository interface declares methods used for data retrieval and modification of a particular . A repository interface can include different methods that deal with different entity types, giving application authors flexibility to define repositories that fit the needs of their application domain.
Following is an example repository:

@Repository
interface Publishing {

    @Find
    Book book(String isbn);

    @Find
    Author author(String ssn);

    @Insert
    void publish(Book book);

    @Insert
    void create(Author author);

    // query methods ...
}

Book and Author are typical entity classes. The repository interface methods are annotated with various Jakarta Data annotations (@Insert, @Find, etc.) that define the expected persistence behavior of the method. There's much more to the Jakarta Data programming model than this; for all the details see: * The * The * Gavin King's excellent on Jakarta Data A Jakarta Data implementation like WildFly Preview can support one or more Jakarta Data . A provider understands one or more Java annotation types that are used to define entities, and it understands how to interact with a particular type of back end datastore. WildFly Preview's Jakarta Data implementation supports the provider, which uses Hibernate ORM to interact with a variety of different relational databases. Hibernate Data Repositories supports the jakarta.persistence.Entity annotation as the mechanism for application authors to define entities. USING HIBERNATE DATA REPOSITORIES IN YOUR APPLICATION There are two key things to understand in order to use WildFly Preview's Hibernate Data Repositories provider: * How to configure build time generation of the implementation of your @Repository interfaces. * How to configure the runtime behavior of the Hibernate ORM instance that will interact with the database. BUILD-TIME GENERATION OF REPOSITORY IMPLEMENTATIONS An application author using Jakarta Data simply writes an interface for their repository, but of course for that to work at runtime there must be an actual implementation of that interface. It's the responsibility of the Jakarta Data provider to provide that implementation. Hibernate Data Repositories does this by generating the implementation classes as part of the build of your application.
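To make the repository pattern concrete, here is a deliberately simplified, hand-written sketch of what a generated implementation of a repository like Publishing conceptually does: @Insert stores an entity, @Find retrieves it by key. This is plain Java with a hypothetical Book record and an in-memory map standing in for the database - not Jakarta Data or Hibernate code, and nothing like what jpamodelgen actually emits.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch only: the repository pattern in miniature,
// with a Map standing in for the datastore.
public class PublishingSketch {

    public record Book(String isbn, String title) {} // hypothetical entity

    private final Map<String, Book> booksByIsbn = new HashMap<>();

    // Corresponds in spirit to: @Insert void publish(Book book);
    public void publish(Book book) {
        booksByIsbn.put(book.isbn(), book);
    }

    // Corresponds in spirit to: @Find Book book(String isbn);
    public Book book(String isbn) {
        return booksByIsbn.get(isbn);
    }

    public static void main(String[] args) {
        PublishingSketch repo = new PublishingSketch();
        repo.publish(new Book("978-0-0000-0000-0", "Example Title"));
        System.out.println(repo.book("978-0-0000-0000-0").title());
    }
}
```

The real provider generates an implementation backed by Hibernate ORM sessions and SQL rather than a map, but the application-facing contract is the same: you call interface methods, the provider handles persistence.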
So, to use Jakarta Data with WildFly Preview you need to configure the generation of those classes as part of your application build. In a Maven build this is done by configuring the Maven compiler plugin to use the org.hibernate.orm:hibernate-jpamodelgen artifact as an annotation processor:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.12.1</version>
    <configuration>
        <annotationProcessorPaths>
            <path>
                <groupId>org.hibernate.orm</groupId>
                <artifactId>hibernate-jpamodelgen</artifactId>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>

Note that there is no version element in the org.hibernate.orm:hibernate-jpamodelgen declaration above. You could provide one, but best practice is to control the version in your pom's dependencyManagement. Importing the org.wildfly.bom:wildfly-ee-preview-with-tools BOM lets you align the version of Hibernate artifacts with what's used in your target WildFly Preview runtime:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.wildfly.bom</groupId>
            <artifactId>wildfly-ee-preview-with-tools</artifactId>
            <version>34.0.0.Beta1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Warning Some users may have learned to configure Hibernate annotation processing by declaring org.hibernate.orm:hibernate-jpamodelgen as a provided dependency in their pom. With the Hibernate version used with WildFly, . Use the maven-compiler-plugin configuration approach described above. If you're using Gradle, you'll need to use annotationProcessor:

annotationProcessor 'org.hibernate.orm:hibernate-jpamodelgen:6.6.1'

The generated repository implementation classes internally use various Hibernate ORM classes, so to compile the generated code you'll need to add a dependency on Hibernate:

<dependency>
    <groupId>org.hibernate.orm</groupId>
    <artifactId>hibernate-core</artifactId>
    <scope>provided</scope>
</dependency>

CONFIGURING HIBERNATE ORM Under the covers, your repository implementation will use Hibernate ORM to interact with the database. You configure ORM by providing a META-INF/persistence.xml file, the same as you would with a Jakarta Persistence application, declaring the datasource it should use:

<jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>

The jta-data-source value should match the value of the jndi-name attribute in a datasource you've declared in the WildFly Preview datasources or datasources-agroal subsystem configuration.
CONFIGURING WILDFLY PREVIEW TO SUPPORT JAKARTA DATA Jakarta Data in WildFly Preview is configured using the new jakarta-data subsystem. This subsystem isn't included in any of WildFly Preview's out-of-the-box configuration files, so to use it you'll need to add it to your configuration. If you're using a complete WildFly Preview installation, like the ones available from the , then you can use the JBoss CLI to add the Jakarta Data extension and subsystem to your configuration:

$ /extension=org.wildfly.extension.jakarta.data:add
$ /subsystem=jakarta-data:add

If you're using Galleon to provision a slimmed WildFly Preview installation, you'll need to specify the jakarta-data Galleon layer. For example, if you are using the WildFly Maven Plugin to provision a server that supports a Jakarta REST application interacting with a PostgreSQL database, the configuration in your application's pom.xml might look like this:

<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <configuration>
        <feature-packs>
            <feature-pack>
                <location>wildfly-preview@maven(org.jboss.universe:community-universe)</location>
            </feature-pack>
            <feature-pack>
                <groupId>org.wildfly</groupId>
                <artifactId>wildfly-datasources-preview-galleon-pack</artifactId>
                <version>8.0.1.Final</version>
            </feature-pack>
        </feature-packs>
        <layers>
            <!-- the layers your application needs, including: -->
            <layer>jakarta-data</layer>
        </layers>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>package</goal>
            </goals>
        </execution>
    </executions>
</plugin>

The subsystem itself is very simple and doesn't expose any configuration attributes. Note that for the jakarta-data subsystem to work, the jpa subsystem must be present in your configuration. It's present in our out-of-the-box configurations and will be included if you provision a server using the jakarta-data Galleon layer. Please try out Jakarta Data in WildFly Preview and give us your feedback! We'll continue to work on the integration, with a goal of including it in standard WildFly in one of the next couple of releases.
10 Oct 2024 12:00am GMT
08 Oct 2024
JBoss Blogs
Backwards compatibility in Keycloak releases
With four major releases of Keycloak every year it can be a daunting task to keep deployments up to date. Especially, since . Combined with the importance of patching deployments quickly for vulnerabilities, this can leave many deployments open to known vulnerabilities, as the time and effort required to update is too costly. Additionally, Keycloak client libraries are currently released together with the server, resulting in new major versions of a client library where in fact there may be no changes at all, or perhaps only a bug fix or two. For these reasons, after Keycloak 26.0 is released there will be some changes to how Keycloak is released: * Keycloak server will have 4 minor releases every year, and a major release every 2-3 years * Keycloak client libraries will be released separately. The latest client library release will support all currently supported Keycloak server releases We will continue to bring new features and enhancements to Keycloak in each release, and we are committed to doing so in a backwards compatible way, making it seamless and easy to upgrade. When a minor comes with breaking changes, such changes will be opt-in. This will be driven through versioning, where the current default version for a Feature or an API cannot change in a minor release, and there will be a new version that can be explicitly enabled. The current version of a Feature or API can be deprecated in a minor, but will not be removed until the next major version. This will allow you to gradually roll out new Feature or API versions separately from upgrading. You can choose to get ready for the next major release early, or wait and do it in one go. Backwards compatibility guarantees will only be given to Features and APIs that are fully supported. Preview features or preview APIs, as well as non-public APIs, may change at any time.
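The opt-in versioning model described above can be sketched in a few lines of Java. This is purely illustrative (hypothetical class and method names, not Keycloak code): each feature has a default version that never changes within a major release, and a deployment gets a newer version only by explicitly enabling it.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of opt-in feature/API versioning (not Keycloak's actual API).
public class FeatureVersions {

    private final Map<String, Integer> defaults = new HashMap<>();
    private final Map<String, Integer> overrides = new HashMap<>();

    public FeatureVersions register(String feature, int defaultVersion) {
        defaults.put(feature, defaultVersion);
        return this;
    }

    // Opting in to a newer (possibly breaking) version is always explicit.
    public void enable(String feature, int version) {
        overrides.put(feature, version);
    }

    // Without an explicit opt-in, a minor release never changes what you get.
    public int effectiveVersion(String feature) {
        return overrides.getOrDefault(feature, defaults.get(feature));
    }

    public static void main(String[] args) {
        FeatureVersions versions = new FeatureVersions().register("example-feature", 1);
        System.out.println(versions.effectiveVersion("example-feature")); // 1: the stable default
        versions.enable("example-feature", 2);                            // explicit opt-in
        System.out.println(versions.effectiveVersion("example-feature")); // 2: the new behavior
    }
}
```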
08 Oct 2024 12:00am GMT
07 Oct 2024
JBoss Blogs
Supporting Multiple Redis Databases with Infinispan cache aliases enhancement
In Infinispan 15 we provided a large set of commands to make it possible to replace your Redis Server with Infinispan, without changing your code. In this tutorial you will learn how Infinispan cache aliases will help you replace your Redis Server with Infinispan for multiple Redis databases. Key takeaways: * What cache aliases are and how to create caches with aliases or update existing ones * Learn how Infinispan and Redis differ in data organization * Support multiple databases in Infinispan with cache aliases when using the RESP protocol Supporting multiple Redis databases has been available since Infinispan 15.0 (the latest stable release at the time of this writing). However, Hot Rod, CLI and Infinispan Console support is Tech Preview in Infinispan 15.1 (in development right now). INFINISPAN AS A HOT REPLACEMENT FOR REDIS Since Infinispan 15, you can use Infinispan as a hot replacement for Redis because it supports most Redis commands through the RESP protocol. This works because Infinispan Server has the RESP endpoint enabled by default. Redis clients will automatically connect and be routed to Infinispan's internal connector. RUNNING THE INFINISPAN SERVER AND USING A REDIS CLIENT Testing a Redis client with Infinispan Server is very easy. First run the Infinispan Server as explained in the . Important: Cache aliases fully work as of the 15.1.0.Dev04 release. Make sure you pull the latest 15.1 image locally. Command line with Docker or Podman:

docker run -it -p 11222:11222 -e USER="admin" -e PASS="password" quay.io/infinispan/server:15.1
podman run -it -p 11222:11222 -e USER="admin" -e PASS="password" --net=host quay.io/infinispan/server:15.1

Next, connect to Infinispan using . Use port 11222 instead of the default 6379. Since Infinispan is secured by default, make sure to provide the admin credentials.

> redis-cli -p 11222 --user admin --pass password
127.0.0.1:11222> set hello world
OK
127.0.0.1:11222> get hello
"world"

That's all!
If you're wondering where the data is stored, it's in the "respCache". This is the default cache used by the Infinispan RESP connector, and it's pre-configured with sensible defaults. It's ready to use and serves as a good replacement for Redis. Please note that starting with Infinispan 15.1, the data container cache list includes a new column called "Aliases". We'll cover that later. You can log into the Infinispan Server Console with the (admin/password) credentials. REDIS DATABASES VERSUS INFINISPAN CACHES In Redis, databases are essentially separate, isolated namespaces within a single Redis server. Each database can store its own set of key-value pairs independently from the others. By default, Redis provides 16 databases, numbered from 0 to 15. You can switch between these databases using the SELECT command. This feature helps organize data and isolate different applications or use cases within the same Redis instance, though it's important to note that all databases share the same memory space and configuration settings. Infinispan on the other hand employs a distributed cache model where data is spread across multiple nodes. It doesn't use the concept of separate databases; instead, it organizes data using caches, which can be configured with different settings and partitioned across a cluster. Data is distributed and replicated across multiple nodes, offering high availability and scalability. There isn't a direct equivalent to Redis's databases, but data can be segmented using different caches and configurations. Here is a summary of the main differences between Redis databases and Infinispan caches:

* Definition. Redis database: a logical namespace within a single Redis instance, allowing isolation of keys and values; default of 16 databases per instance; all databases share server resources and configuration. Infinispan cache: a container for key-value pairs within a distributed or in-memory cache; can be distributed across multiple nodes, with multiple caches configurable within the same instance.
* Storage model. Redis database: stores data in a single server's memory with simple key-value storage; databases are isolated from each other but share server resources. Infinispan cache: stores data across a cluster of nodes with features like partitioning, replication, and distributed caching; suitable for large-scale and high-availability scenarios.
* Isolation. Redis database: provides isolation between databases using the SELECT command; all databases share memory and configuration settings. Infinispan cache: provides isolation and configuration flexibility at the cache level; each cache can be independently configured and may be distributed or replicated.
* Configuration and flexibility. Redis database: limited to basic configuration options related to the database index and server settings; all databases share the same server resources. Infinispan cache: extensive configuration options for each cache, including different modes (e.g., local, distributed, replicated), eviction policies, and more.

While the Infinispan connector uses a single cache named "respCache" by default, you can support multiple Redis databases… by using cache aliases. CACHE ALIASES TO THE RESCUE In Infinispan, cache aliases are alternative names you can assign to a cache. They allow you to refer to the same underlying cache configuration using different names. Cache aliases in Infinispan allow for efficient switching between different versions or states of cached data, without having to modify or reload your application logic. This makes cache aliases especially useful in scenarios where data needs to be updated, but you want to ensure high availability and minimal impact to application performance. USE CASES FOR CACHE ALIASES Cache aliases in Infinispan are great for managing changing data without disrupting your application. They let you switch between data snapshots easily. You can keep using an old data version while loading a new one.
When the new data is ready, you just switch the alias to point to it, without downtime. You get better performance and high availability: since your app doesn't touch the cache that's being updated, it runs smoothly without slowdowns or errors. If something goes wrong, you can quickly roll back and switch to the previous data version with the alias. For example, imagine an online shop that needs to update its catalog: 1. The shop keeps showing products using the current data (current_catalog pointing to catalog_snapshot_1). 2. While customers browse, new product data is loaded into catalog_snapshot_2 in the background. 3. Once catalog_snapshot_2 is fully updated, the alias (current_catalog) is switched to point to catalog_snapshot_2. 4. The old catalog_snapshot_1 cache is now free to be cleared and used for the next update. The website updates its catalog data without causing big delays or downtime for users. CREATING A CACHE WITH AN ALIAS Before learning how to use cache aliases for the RESP protocol and multiple databases, let's first learn how to create and update cache aliases. There are several ways to create a cache or cache configuration in Infinispan, but my favorite is using the Infinispan Server Console. Run the Infinispan Server and access the Console as explained in the . To create a cache, use the cache creation wizard by clicking the "Create Cache" button. In the cache tuning step, you'll find the "Aliases" option, where you can add as many aliases as you want. In the final step, you'll be able to review the configuration in JSON, XML, or YAML formats. When you create a cache with aliases, the list will show the cache's aliases. You can filter caches by name or alias using the "search by" field. ADDING AN ALIAS AT RUNTIME For existing caches, good news! The aliases attribute in a cache configuration can be changed at runtime.
You can do this in several ways: * Using the administration API in Hot Rod * Using the Infinispan Server Command Line Interface (CLI) * Using the Server Console or REST API To perform this operation, you need ADMIN access in Infinispan. USING THE HOT ROD CLIENT To modify an alias at runtime, use the administration API. Below is an example for client/server mode. If you're using Infinispan Embedded in your application, a similar API is available.

RemoteCacheManager remoteCacheManager = ...; // created or injected if using Quarkus or Spring Boot
remoteCacheManager.administration().updateConfigurationAttribute("myCache", "aliases", "alias alias2");
RemoteCache cacheFromAlias = remoteCacheManager.getCache("alias");

Check this example and more in the . USING THE COMMAND LINE TOOL The Command Line Tool (CLI) of Infinispan provides a way to change cache aliases at runtime. First, run the CLI with the following command:

podman/docker run -it --net=host infinispan/cli

From the command line, connect to the running server:

[disconnected]> connect
Username: admin
Password: ********
[6b0130c153e3-50183@cluster//containers/default]>

Then, use the "alter cache" command to update the aliases attribute:

alter cache myCache2 --attribute=aliases --value=current_catalog

Finally, describe the configuration of the cache and verify the change:

[6b0130c153e3-50183@cluster//containers/default]> describe caches/myCache2
{ "myCache2" : { "distributed-cache" : { "aliases" : [ "current_catalog" ], "owners" : "2", "mode" : "SYNC", "statistics" : true, "encoding" : { "media-type" : "application/x-protostream" } } } }

TIP: Use the help command:

[6b0130c153e3-50183@cluster//containers/default]> alter cache -h
Usage: alter cache []
Alters a cache configuration
Options:
--attribute The configuration attribute
--value The value for the configuration attribute. If the attribute supports multiple values, separate them with commas
-f, --file
-h, --help
Argument: The cache name

USING THE SERVER CONSOLE From the list of caches, select the "Edit aliases" action. A modal dialog will open. You can add or remove aliases from there. SUPPORTING MULTIPLE DATABASES Let's try selecting databases 0 and 1 using the Redis CLI. To switch databases in Redis, use the SELECT command followed by the database number. Let's try it with Infinispan again. First, use SELECT 0 to start in database 0. Then, use SELECT 1 to switch to database 1.

> redis-cli --user admin --pass password
127.0.0.1:11222[1]> select 0
OK
127.0.0.1:11222[1]> select 1
(error) ERR DB index is out of range

Database 0 works, but database 1 does not. On closer inspection of the respCache configuration, we see the default respCache is defined with the alias "0". To select database "1", you need to create a new cache. Let's use the Infinispan Console again to do that. Go to the cache creation wizard and choose the "add cache configuration" option this time. Choose the RESP.DIST template and create the cache. This template is specifically designed for RESP caches. Finally, add the alias "1" to the new cache as described in the section on adding an alias at runtime. Alternatively, you can copy and paste the configuration from respCache, changing the alias 0 to alias 1. Now that we have a cache with alias 1, we can select it and add data:

> redis-cli --user admin --pass password
127.0.0.1:11222[1]> select 0
OK
127.0.0.1:11222[1]> select 1
OK
127.0.0.1:11222[1]> set hello world
OK

It is important to highlight that, unlike Redis databases, each cache can be set up differently based on your application's needs. This lets you take advantage of Infinispan's flexible configuration (for example, you can add backups using Cross-Site Replication for some "databases" and not all of them) while still keeping the simplicity of using your Redis client in your app.
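The alias indirection that makes this work can be illustrated with a small toy model in plain Java. This is not Infinispan code - the class and method names are hypothetical - but it captures the mechanism: each cache carries aliases such as "0" or "1", SELECT resolves the database index through the alias map, and re-pointing an alias (as in the catalog-snapshot example) is just a cheap map update.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// Toy model of alias-based database resolution (not Infinispan internals).
public class AliasResolver {

    private final Map<String, Map<String, String>> caches = new HashMap<>();
    private final Map<String, String> aliasToCache = new HashMap<>();

    public void createCache(String name, String... aliases) {
        caches.put(name, new HashMap<>());
        for (String alias : aliases) {
            aliasToCache.put(alias, name); // re-pointing an alias later is just another put
        }
    }

    // What SELECT <db> conceptually does: find the cache whose alias is the index.
    public Map<String, String> select(String dbIndex) {
        String cacheName = aliasToCache.get(dbIndex);
        if (cacheName == null) {
            throw new NoSuchElementException("ERR DB index is out of range");
        }
        return caches.get(cacheName);
    }

    public static void main(String[] args) {
        AliasResolver resolver = new AliasResolver();
        resolver.createCache("respCache", "0");        // the out-of-the-box default
        resolver.select("0").put("hello", "world");
        try {
            resolver.select("1");                      // no cache holds alias "1" yet
        } catch (NoSuchElementException e) {
            System.out.println(e.getMessage());        // ERR DB index is out of range
        }
        resolver.createCache("respCache1", "1");       // add a cache with alias "1"
        resolver.select("1").put("hello", "world");    // now SELECT 1 works
    }
}
```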
CONCLUSIONS In this tutorial, you've learned how to support multiple databases with the RESP protocol and how to use Infinispan caches as a replacement for Redis databases. By using different caches instead of Redis databases, you gain several advantages, as discussed. You can now approach your data needs in a more flexible and effective way, tailored to your specific scenarios. You have also learned what cache aliases are and how helpful they can be in different situations, not just for replacing Redis databases.
07 Oct 2024 12:00am GMT