20 Dec 2024
JBoss Blogs
Quarkus 3.17.5 - Maintenance release
We released Quarkus 3.17.5, a maintenance release for our 3.17 release train, and the last release for 2024.

UPDATE

To update to Quarkus 3.17, we recommend updating to the latest version of the Quarkus CLI and running:

quarkus update

Note that quarkus update can update your applications from any version of Quarkus (including 2.x) to Quarkus 3.17. For more information about the adjustments you need to make to your applications, please refer to the .

FULL CHANGELOG

You can get the full changelog of on GitHub.

COME JOIN US

We value your feedback a lot so please report bugs, ask for improvements… Let's build something great together!

If you are a Quarkus user or just curious, don't be shy and join our welcoming community:

* provide feedback on ;
* craft some code and ;
* discuss with us on and on the ;
* ask your questions on .
20 Dec 2024 12:00am GMT
18 Dec 2024
JBoss Blogs
Using the management console on OpenShift
In this blog post I'd like to show how you can use the management console (aka ) for WildFly instances running on OpenShift.

PREREQUISITES

The console is an integral part of WildFly and is activated by default when running on-premise. For instances running on OpenShift, though, the console is not available by default. To use the console on OpenShift, you need a WildFly image that meets the following requirements:

* Management user: The management console is protected by default and requires a management user added with the add-user.sh script.
* Public route to the management interface: The management interface has to be publicly reachable from outside the cluster.
* Allowed origin: The console uses the to talk to the management interface of a running WildFly instance. In an OpenShift environment, the origins of the public route and the management interface itself are different. That's why we need to tell WildFly that it is OK to make requests to the management interface from another origin (see for more details).

You can build such an image on your own based on the official WildFly images available at (see "Extending the image"). Another way is to use the pre-built images from . These images are mainly meant for HAL development and testing but already meet these requirements, which makes them suitable for our use case. In particular, the images add a management user admin:admin and have a list of preconfigured allowed origins.

Warning: The additions in these images are meant for development and testing purposes only. Under no circumstances must they be used in production! Do not rely on the management user admin:admin or the preconfigured allowed origins.

To add the allowed origin for the public route, we make use of the kubectl plugin. This plugin makes it straightforward to connect to a WildFly instance running on OpenShift and execute CLI commands. Please visit to find out how to install and use the plugin.
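The allowed-origins mechanism that makes this necessary is conceptually simple: the server compares the Origin header of the browser request against a configured list, and scheme, host and port must all match. A minimal sketch of that comparison (my illustration, not WildFly code; the hostname is made up):

```java
import java.net.URI;
import java.util.List;

public class AllowedOrigins {
    // An origin matches only if scheme, host and port are all equal.
    public static boolean isAllowed(String origin, List<String> allowed) {
        URI o = URI.create(origin);
        for (String a : allowed) {
            URI u = URI.create(a);
            if (u.getScheme().equalsIgnoreCase(o.getScheme())
                    && u.getHost().equalsIgnoreCase(o.getHost())
                    && u.getPort() == o.getPort()) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> allowed = List.of("https://wildfly-myproject.apps.example.com");
        System.out.println(isAllowed("https://wildfly-myproject.apps.example.com", allowed)); // true
        System.out.println(isAllowed("https://evil.example.com", allowed));                   // false
    }
}
```

This is why the hostname of the public route has to be added to the allowed-origins list below: without it, the console's cross-origin requests are rejected.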
INSTRUCTIONS

The steps below assume you have access to an OpenShift cluster and have installed kubectl and the jboss-cli plugin.

1. Create the application:

   oc new-app quay.io/halconsole/wildfly
   oc create route edge --service wildfly --port 9990

2. Add the allowed origin. Use oc get pods to find the name of the pod to connect to and oc get routes to get the hostname of the public route to the management interface.

   kubectl jboss-cli -p 

   Log in using admin:admin and execute these CLI commands:

   /core-service=management/management-interface=http-interface:list-add(name=allowed-origins,value=https://)
   reload
   exit

3. Open the management console at https:// and log in using admin:admin.

ONLINE VERSION OF THE MANAGEMENT CONSOLE

As an alternative to adding the allowed origin, you can also use the online version of the management console available at . This URL ships the latest version of the management console.

Note: The management console is a single-page application () without any server-side dependencies. As such, it can run on its own and connect to an arbitrary management interface. The online version of the console makes use of this fact. See for more details.

1. Create the application as above and find the hostname of the public route using oc get routes.
2. Open 
3. Add a management interface for the public route: give it an arbitrary name, select https as the scheme, and enter the hostname of the public route without https and port 80.
4. Click Add and then Connect.
5. Log in using admin:admin.

THINGS TO KEEP IN MIND

Please note that the above instructions are just a workaround to access the management console on OpenShift as long as there is no compatible, container-friendly alternative. In particular, the approach violates some principles of running in a cloud environment:

* Changing the management configuration of a pod is an antipattern, as the change will not outlive a pod restart. At that point, you'll have to reconfigure the allowed origin.
* With a route, you are accessing pods behind a service. If your deployment has multiple pods, it's complex and hacky to access a specific pod or to configure all pods.
* Do not use the images in production under any circumstances. They contain preconfigured, insecure credentials and are meant only for development and testing purposes.

OUTLOOK

We're currently working on the . This version will also support a dedicated variant for OpenShift that will integrate with the OpenShift console and address the limitations mentioned above. For more information, you can watch the on the next-gen management console from the last , get the or reach out to us in the HAL Zulip .
18 Dec 2024 12:00am GMT
Thanks for a great 2024!
We are almost at the end of 2024 and we wanted to take this opportunity to thank all our community members for their help this year. 2024 was a busy year for WildFly and a lot was accomplished. We continued with our quarterly releases and delivered 4 major versions (31, 32, 33, 34) and 5 micro updates. WildFly 35 Beta also just came out last week. Many developments were completed this year, but here are some of the highlights from our :

Stability levels

The WildFly project has high standards related to quality, stability and backwards compatibility. A key way an open source project like WildFly can ensure high standards are met is by "community bake", allowing interested users to have access to features that are still undergoing a hardening process, while not forcing users who are not interested in such things to consume them. To better facilitate this, WildFly 31 introduced the notion of formal "stability levels" that can be associated with functionality.

MicroProfile 7.0

We have added support for MicroProfile 7.0, with updated specifications for MicroProfile Fault Tolerance, MicroProfile OpenAPI, MicroProfile REST Client and MicroProfile Telemetry.

Jakarta EE 11

Work on support for Jakarta EE 11 is ongoing, with preview stability support for in both standard WildFly and WildFly Preview, in addition to the updates of Jakarta EE 10 APIs in WildFly Preview.

WildFly Glow

The WildFly Glow tools (a CLI application and a Maven plugin) will analyze your application artifact, determine what WildFly feature-packs and Galleon layers are needed to run your application, and make suggestions about other features (e.g. TLS support) that you may want to include in your optimized WildFly installation. You can take the information WildFly Glow provides and use it in your own provisioning configuration, or you can have WildFly Glow provision a server, bootable jar or Docker image for you.
Jakarta MVC

We have added a preview stability extension and subsystem for .

Vert.x extension

We have added a preview stability extension and subsystem for to configure and share the Vert.x instance within the WildFly server with other subsystems that need it.

WildFly AI

We have created a for Generative AI that simplifies the integration of AI into WildFly applications.

Overall, more than 300 feature, enhancement and bug issues were resolved in our main code by more than 60 contributors, and if you include task and component upgrade issues we resolved over 700. This is not counting all the work done in the components integrated in WildFly.

We have updated our release process with an emphasis on to enhance and expand WildFly. We continued to expand our to cover more use cases. We also organized two mini conferences around WildFly in March and November: * *

We will start 2025 with the release of WildFly 35 and we have a lot of exciting news for the future of WildFly!

Thanks to all of you for your work and help in making WildFly a successful open source project. We are looking forward to continuing to work with you next year!

Best wishes from the WildFly team and Happy New Year 🎉🎊!

Brian
18 Dec 2024 12:00am GMT
Quarkus Infinispan Embedded extension
We are excited to announce the first release of the Quarkus Infinispan Embedded extension! This extension is now available in the Quarkiverse Hub. It is a big step forward for developers who want to use Infinispan in embedded mode with Quarkus.

WHAT IS INFINISPAN EMBEDDED MODE?

Infinispan is a powerful, distributed in-memory data store and cache. In embedded mode, Infinispan runs within your application, in library mode, without needing a separate server. This means your app can handle data caching and storage directly in its own process, making it faster and simpler.

WHY USE THE QUARKUS INFINISPAN EMBEDDED EXTENSION?

The new extension makes it easy to use Infinispan with Quarkus, requiring minimal setup and delivering fast in-memory performance to your Quarkus apps.

USE CASES FOR INFINISPAN EMBEDDED IN QUARKUS

Here are some scenarios where using Infinispan in embedded mode with Quarkus might be a great fit:

* In-memory caching: Use Infinispan as a local cache to speed up data retrieval and reduce database load in your application.
* Temporary data processing: Manage and process temporary or short-lived data directly within the application.
* Local data storage for microservices: Use Infinispan as a lightweight, in-memory store for individual microservices that don't require centralized data persistence.
* Offline applications: When working with offline or edge applications where an external server is not available, Infinispan embedded mode ensures data is stored locally and efficiently.
* Data replication in small clusters: Use Infinispan to handle data replication across a few nodes without the overhead of a separate Infinispan server.

TRADE-OFFS OF USING INFINISPAN IN EMBEDDED MODE

While running Infinispan in embedded mode offers simplicity and speed, there are some trade-offs to consider. Since Infinispan runs within your application's process, it shares the same memory and CPU resources.
This can increase your application's resource usage, especially as the data size grows. Additionally, embedded mode is best suited for single-node or small-scale deployments; for larger, distributed systems, using Infinispan in remote mode with a dedicated server may offer better scalability and separation of concerns.

INFINISPAN EMBEDDED AND KUBERNETES DEPLOYMENTS

When running applications on Kubernetes, using Infinispan in embedded mode can introduce additional challenges. For instance, scaling an embedded Infinispan setup requires scaling the entire application pod, which may not be as efficient as scaling an external Infinispan cluster independently. Kubernetes' ability to handle distributed workloads aligns better with remote Infinispan setups, where storage and application layers can scale separately for improved resource management. For more information, check the in the official Infinispan documentation.

HOW TO GET STARTED

Getting started is very easy. Just add the dependency to your Quarkus application:

<dependency>
    <groupId>io.quarkiverse.infinispan</groupId>
    <artifactId>quarkus-infinispan-embedded</artifactId>
    <version>1.0.1</version>
</dependency>

Then you can inject the EmbeddedCacheManager and interact with Infinispan:

@Inject
private EmbeddedCacheManager cacheManager;

To enable Protobuf serialization, you define a schema like this:

@Proto
public record Greeting(String name, String message) {

    @ProtoSchema(includeClasses = { Greeting.class }, schemaPackageName = "io.quarkiverse.infinispan")
    public interface GreetingSchema extends GeneratedSchema {
    }
}

Using the EmbeddedCacheManager, you can create caches on the fly.
Configuration config = new ConfigurationBuilder()
        .encoding().mediaType(MediaType.APPLICATION_PROTOSTREAM)
        .clustering().cacheMode(CacheMode.DIST_ASYNC)
        .build();

// Create a cache
Cache cache = cacheManager.administration()
        .withFlags(CacheContainerAdmin.AdminFlag.VOLATILE)
        .getOrCreateCache("mycache", config);

// Put a value in the cache
cache.put(id, greeting);

// Read a value from the cache
cache.get(id);

NATIVE SUPPORT AND FUTURE FEATURES

The Quarkus Infinispan Embedded extension supports native mode, but some advanced features may be limited. We encourage developers to test it, share feedback, and help us enhance its capabilities.

WHERE TO LEARN MORE

For detailed documentation and examples, check out the project in the Quarkiverse Hub:

COME JOIN US

We welcome your feedback and contributions to improve the extension. Feel free to open issues, suggest features, or contribute code on the project's GitHub repository. Thank you for being part of the Quarkus community. We hope you enjoy the new Infinispan Embedded extension!

If you are a Quarkus user or just curious, don't be shy and join our welcoming community:

* provide feedback on ;
* craft some code and ;
* ask your questions on ;
* discuss with us on , or on the .
18 Dec 2024 12:00am GMT
Our next LTS will be Quarkus 3.20
You are probably familiar by now with our biannual (as in twice a year) LTS releases. Every 6 months, we pick a Quarkus release that will be supported for a full year. LTS releases are designed for users who want to keep a given version for a longer period of time instead of following our monthly release pace. We have already released a few LTS versions (namely 3.2, 3.8, and 3.15, released at the end of September) and we know that some of our users are now planning their work according to our LTS schedule.

Now it is time to announce the schedule for our next LTS: Quarkus 3.20.

tl;dr

Quarkus 3.20 will be our next LTS release. It will be the direct continuation of the 3.19 branch. If you contribute to Quarkus or the Quarkus Platform and need a feature in the next Quarkus LTS, make sure it has been merged in the before February 11th included (the day before the 3.19.0.CR1 release). February 11th is the date of the feature freeze for Quarkus 3.19 and 3.20 LTS.

QUARKUS 3.18

Quarkus 3.18 will be a regular minor version of Quarkus. It will be released on January 29th 2025. See our for all the details.

QUARKUS 3.19

Quarkus 3.19 will be released on February 26th. We will branch Quarkus 3.19 from main when we release 3.19.0.CR1 on February 12th, as usual. After branching, main will host the development for Quarkus 3.21.

QUARKUS 3.20

Quarkus 3.20 will be our next LTS version. It will be released on March 26th. This release will be the direct continuation of the 3.19 cycle and will actually be branched from the 3.19 branch. The focus for the 3.20 cycle will be on hardening 3.19 and fixing issues. It won't contain any new features. It might contain some additional component upgrades to fix CVEs or important bugs.

Consequently, and this is important: if you contribute to Quarkus or the Quarkus Platform and need a feature in the next Quarkus LTS, make sure it has been merged in the before February 11th included (the day before the 3.19.0.CR1 release).
February 11th is the date of the feature freeze for Quarkus 3.19 and 3.20 LTS.

As this release will be maintained for 12 months, we recommend that extension maintainers and contributors consider bug fixes and enhancements for LTS releases. This will ensure that LTS releases are as stable and robust as possible, while still offering the full breadth of the Quarkus ecosystem. This means that extension maintainers and contributors will need to consider having branches and versioning in place to support 3.20 during the whole LTS cycle.

QUARKUS 3.21

The plan is to release Quarkus 3.21 the same day as Quarkus 3.20 LTS. It will contain the new features developed in the main branch during the 3.19 → 3.20 cycle, as Quarkus 3.20 LTS will be branched from the 3.19 branch.

QUESTIONS?

If you have any questions about this plan, feel free to ask in the comments of this blog post or on .

COME JOIN US

We value your feedback a lot so please report bugs, ask for improvements… Let's build something great together!

If you are a Quarkus user or just curious, don't be shy and join our welcoming community:

* provide feedback on ;
* craft some code and ;
* discuss with us on and on the ;
* ask your questions on .
18 Dec 2024 12:00am GMT
17 Dec 2024
JBoss Blogs
Storing sessions in Keycloak 26
As you may know, Keycloak 26 now uses the Persistent user sessions feature by default. In this blog post I would like to uncover a little bit more background on why we introduced this feature, what the alternatives are and what the future holds.

SESSION STORAGES IN KEYCLOAK 26 CHEATSHEET

This section provides a TL;DR guidance on what session storages exist and when each of them should be used with Keycloak 26. The following sections provide more details on each storage type and the reasoning behind introducing or dropping each of them.

Single site — Persistent sessions
* Characteristics: Sessions stored in the database and cached in memory. Sessions available after cluster restart. Lower memory usage. Higher database usage.
* When to use: Default and recommended for standard installations. You want your sessions to survive restarts and upgrades and accept higher database usage.
* CLI options: No additional configuration needed.

Single site — Sessions stored in memory
* Characteristics: Faster reads and writes. Sessions lost after cluster restart. Higher memory usage (all sessions must be in memory).
* When to use: You can't use the persistent user sessions feature. Please provide your feedback, as we want to understand why you can't use persistent user sessions.
* CLI options: --features-disabled="persistent-user-sessions"

Single site — Sessions stored in external Infinispan
* Characteristics: Sessions stored only in external Infinispan. Reduced database usage. Uses the Hot Rod client for communication with external Infinispan. Experimental feature.
* When to use: Do not use in production as it is experimental. Evaluate and provide your feedback if you are interested in this feature and want to help make it supported.
* CLI options: --features="clusterless" --features-disabled="persistent-user-sessions"

Single site — Sessions stored in memory and external Infinispan
* Characteristics: 4 copies of each session: 2x in Keycloak memory and 2x in Infinispan memory. Sessions available after Keycloak cluster restarts. High memory usage. Experimental and will be removed soon.
* When to use: When you used this setup with previous releases and cannot switch to persistent user sessions now.
* CLI options: --features="cache-embedded-remote-store" --features-disabled="persistent-user-sessions"

Multiple sites — Persistent user sessions
* Characteristics: Sessions stored in the database without caching in Keycloak memory. Sessions synchronously replicated to the second site (depending on database configuration).
* When to use: When resiliency to a whole-site outage is needed.
* CLI options: --features="multi-site"

Multiple sites — Sessions stored in external Infinispan
* Characteristics: Sessions stored only in external Infinispan. Uses the Hot Rod client for communication with external Infinispan. Reduced database usage. Experimental feature.
* When to use: Do not use in production as it is experimental. Evaluate and provide your feedback if you are interested in this feature and want to help make it supported.
* CLI options: --features="multi-site,clusterless" --features-disabled="persistent-user-sessions"

EVOLUTION OF STORING SESSIONS

In the old Keycloak days, all sessions were stored only in embedded Infinispan - in the memory of each Keycloak node, in a distributed cache (each Keycloak node storing some portion of the sessions, with each session present on at least 2 nodes). This worked well in a single site with a small to medium number of sessions, and the setup was resilient to losing one Keycloak node without losing any data. This could be extended to losing more than one node by increasing the number of nodes storing each session.

WHAT ABOUT WHOLE-SITE DISASTERS?

The problem occurred when more nodes failed or when a whole site failed. Users asked for more resilient setups. For this, we introduced a technical preview of the cross-site feature.
The impact on the session data was that we replicated all of it across 4 locations - 2 Keycloak clusters and 2 Infinispan clusters - with each of these locations needing to store all of the sessions in order to be able to search/query them. In the beginning, this setup didn't perform very well; one of the reasons was that we needed to synchronously replicate the data 4 times to keep the system in a correct state. As a consequence of this poor performance we initially wanted to drop the feature; however, due to significant community interest we decided to evolve it instead. After several optimisations and performance-tuning rounds, we were able to release this in Keycloak 24 under the name multi-site, which allowed active-passive setups. This architecture replicated some data asynchronously to the second Keycloak cluster and therefore we could not use this setup in an active-active way.

I WANT MY SESSIONS TO SURVIVE!

Even though we were more resilient with this setup, we were still losing sessions when the whole deployment went down, which happens, for example, during updates. We received a lot of complaints about this. That is where persistent sessions came into consideration as a rescue for both of these problems - asynchronous replication of updates to the other site and losing sessions. The idea is to store sessions in the database - the source of truth for sessions. We already stored offline sessions in the database, so we reused the concept and introduced a new feature named Persistent user sessions, which is now enabled by default in Keycloak 26.

IS THE DATABASE THE CORRECT PLACE FOR SUCH WRITE-HEAVY OBJECTS?

Almost every request coming to Keycloak needs to check whether a session exists, whether it is valid, and usually also update its validity period. This makes sessions read- and write-heavy objects, and the question whether the database is the correct place to store them is a fair one.
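The per-request work described above, look the session up, reject it if expired, and push its expiry forward, can be sketched with a plain in-memory map. This is purely illustrative (not Keycloak code); it just shows why every request implies both a read and a write on the session store:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionCheck {
    static final long IDLE_TIMEOUT_MS = 30 * 60 * 1000; // 30-minute idle timeout (made-up value)
    static final Map<String, Long> expiryById = new ConcurrentHashMap<>();

    // Called on almost every request: read + validate + write back.
    public static boolean touch(String sessionId, long nowMs) {
        Long expiresAt = expiryById.get(sessionId);          // read
        if (expiresAt == null || expiresAt < nowMs) {
            expiryById.remove(sessionId);                    // expired or unknown session
            return false;
        }
        expiryById.put(sessionId, nowMs + IDLE_TIMEOUT_MS);  // write: extend validity
        return true;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        expiryById.put("sess-1", now + 1000);
        System.out.println(touch("sess-1", now));                         // valid, expiry extended
        System.out.println(touch("sess-1", now + IDLE_TIMEOUT_MS + 2000)); // expired by then
    }
}
```

With persistent sessions, the read is usually served from the cache while the write lands on the database, which is exactly where the extra database load comes from.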
At the moment of writing this blog post, we have no reports that would show performance problems with persistent user sessions, and it seems the advantages outweigh the disadvantages. Still, we have an additional feature in experimental mode that you can evaluate. As explained above, some of the problems with the multiple-sites setup in Keycloak 24 were that we needed to have sessions replicated in 4 locations and that the second Keycloak cluster received some updates asynchronously. This can also be solved by storing sessions only in the external Infinispan, as sessions are then replicated only twice instead of four times. Also, asynchronous replication is not needed anymore, as we do not need to replicate changes to Keycloak nodes. Infinispan also provides query and indexing capabilities for searching sessions, which avoids the sequential scans needed when sessions are stored in embedded Infinispan. Note this is an experimental feature and therefore it is not yet fully finished and performance-optimised. We are eager to hear your feedback to understand where persistent user sessions fall short and where pure Infinispan storage for sessions could shine.

WHAT OPTIONS DO I HAVE AND WHICH OF THEM SHOULD I CONSIDER?

Since we could not remove any of the options from the list above without a proper deprecation period, all of them can still be used in Keycloak 26; however, some of them are more blessed than others.

SINGLE SITE WITH SESSIONS STORED IN THE DATABASE AND CACHED IN MEMORY

This is the default setup in Keycloak 26.

SINGLE SITE WITH SESSIONS STORED IN MEMORY

This is the default setup used in Keycloak versions prior to 26 and at the moment probably the most commonly used of them all. The recommendation is to switch to persistent user sessions; with no additional configuration, the switch happens automatically in Keycloak 26.
However, if you have problems with persistent user sessions (we are eager to hear your feedback), and you don't mind losing your sessions on restarts, you can enable this setup by disabling the persistent-user-sessions feature:

bin/kc.[sh|bat] build --features-disabled="persistent-user-sessions"

SINGLE SITE WITH SESSIONS STORED IN EXTERNAL INFINISPAN

This is the experimental setup mentioned above. To configure it, disable the persistent-user-sessions feature and enable the clusterless feature:

bin/kc.[sh|bat] build --features="clusterless" --features-disabled="persistent-user-sessions"

SINGLE SITE WITH SESSIONS STORED IN MEMORY AND EXTERNAL INFINISPAN

This setup uses the functionality aimed at multi-site deployments; however, it was often used in a single site as well, because of its benefit of not losing sessions on Keycloak restarts. We believe persistent user sessions make this setup obsolete, and Keycloak will refuse to start with this setup, complaining with this message: Remote stores are not supported for embedded caches….. This functionality is deprecated and will be removed in the next Keycloak major release. To run this configuration, disable the persistent-user-sessions feature, enable the cache-embedded-remote-store feature and configure embedded Infinispan accordingly:

bin/kc.[sh|bat] build --features="cache-embedded-remote-store" --features-disabled="persistent-user-sessions"

OPTIONS FOR MULTIPLE SITES

Running Keycloak in multiple sites requires two building blocks to make data available and synchronized in both sites: a synchronously replicated database, and an external Infinispan in each site with cross-site replication enabled. The whole setup is described . From the point of view of storing sessions, this setup always forces the usage of the Persistent user sessions feature, and sessions are stored only in the database with no caching in Keycloak's memory. To configure this, enable the multi-site feature:
bin/kc.[sh|bat] build --features="multi-site"

It is possible to evaluate the experimental clusterless feature described for the single site with multiple sites as well. In this setup the sessions are not stored in the database but in the external Infinispan. Note this is an experimental feature and as such it is not yet fully documented and performance-optimised. To configure this, disable the persistent-user-sessions feature and enable the multi-site and clusterless features:

bin/kc.[sh|bat] build --features="multi-site,clusterless" --features-disabled="persistent-user-sessions"

FEEDBACK WELCOMED

If you have any questions or feedback on this, head to the following GitHub discussions: * * *

FREQUENTLY ASKED QUESTIONS

WHY DO WE NEED EXTERNAL INFINISPAN IN A MULTI-SITE SETUP WITH PERSISTENT USER SESSIONS?

In this case external Infinispan is not used for storing sessions; however, we still need it for communication between the two Keycloak sites, for example for invalidation messages, for synchronization of background tasks, and also for storing some usually short-lived objects like authentication sessions, login failures and action tokens.
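Instead of passing the feature flags on every build command, the same switches can also be kept in conf/keycloak.conf, which Keycloak reads at startup. A sketch for the experimental clusterless multi-site variant (adjust the lists to whichever setup above you chose):

```properties
# conf/keycloak.conf
features=multi-site,clusterless
features-disabled=persistent-user-sessions
```

This keeps the storage choice versioned alongside the rest of your server configuration rather than buried in a build script.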
17 Dec 2024 12:00am GMT
Introducing Vertx Subsystem in WildFly Preview
I'm excited to announce the integration of the into WildFly Preview, starting with the WildFly 35 Beta release. This feature pack introduces Vert.x configuration capabilities through the WildFly management model, making it easier to manage and integrate Vert.x with existing WildFly subsystems.

Note: In the 35 release, the vertx subsystem is only available in WildFly Preview, not in standard WildFly. It is provided at the , which is enabled out of the box in WildFly Preview.

ECLIPSE VERT.X OVERVIEW

is an open-source toolkit designed for building event-driven, asynchronous applications. Currently, Vert.x instances are used by the opentelemetry and microprofile-reactive-messaging-smallrye subsystems within WildFly to provide features powered by Vert.x components underneath, but there was no central mechanism to configure them.

KEY FEATURES OF THE WILDFLY VERTX FEATURE PACK

This feature pack provides centralized configuration and management of the Vert.x instance, so administrators now have a unified way to define and manage it. Following the recommendation from the Vert.x team, it is good to have a single Vert.x instance for everything, which ensures optimal efficiency and simplicity.

1. Configurable VertxOptions: Administrators can define Vert.x configurations using the WildFly management model, ensuring consistency across subsystems.
2. Expose the Vert.x instance to the CDI container: When administrators set up a Vert.x instance in the vertx subsystem, it is exposed to the CDI container with a fixed qualifier, so other subsystems like opentelemetry and microprofile-reactive-messaging-smallrye can consume it via the CDI API.

CONFIGURING THE VERTX INSTANCE IN WILDFLY PREVIEW

The Vert.x instance in WildFly Preview is configured using the new vertx subsystem. This subsystem isn't included in any of WildFly Preview's out-of-the-box configuration files, so to use it you'll need to add it to your configuration.
If you're using a complete WildFly Preview installation, like the ones available from the , then you can use the JBoss CLI to add the vertx extension and subsystem to your configuration:

$ /extension=org.wildfly.extension.vertx:add
$ /subsystem=vertx:add

Once the vertx subsystem is added, you can define some VertxOptions and set up the Vert.x instance to refer to the options you just configured:

$ /subsystem=vertx/vertx-option=vo:add(event-loop-pool-size=20, max-eventloop-execute-time=5, max-eventloop-execute-time-unit=SECONDS)
$ /subsystem=vertx/vertx=vertx:add(option-name=vo)

The corresponding configuration is written to standalone.xml. For more configuration options, please refer to the in the wildfly-vertx-feature-pack wiki page.

USE CASES

With the above configuration, there is a Vert.x instance exposed in the CDI container with a qualifier, which has been integrated into the opentelemetry subsystem (and soon the microprofile-reactive-messaging-smallrye subsystem) by setting the associated configuration item internally. So when you play with the quickstart using the Vert.x configuration above, you will see a log message:

[org.wildfly.extension.vertx] (default task-1) WFLYVTX0008: Use Vertx instance from vertx subsystem

which indicates that the Vert.x instance from the vertx subsystem is used underneath. The Vert.x instance has 20 event loop threads, and it will log a warning if it detects that an event loop thread hasn't returned within 5 seconds.

FUTURE PLAN

* There is a plan to increase the stability level to community and finally to the default level, so it can be used in the standard WildFly distributions.
* Currently the vertx subsystem is integrated internally whenever it is available; it may be better to leave the decision to administrators, so that they can configure whether the opentelemetry and microprofile-reactive-messaging-smallrye subsystems use the Vert.x instance coming from the vertx subsystem.
* When the vertx subsystem becomes mature enough and reaches a higher stability level, we will also consider moving it to the WildFly codebase to align the release cycles.

Please try out the vertx subsystem in WildFly Preview and give us your feedback! We'll continue to work on the integration, with a goal of including it in standard WildFly in one of the next couple of releases.
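For reference, the CLI commands above should result in a subsystem fragment in standalone.xml roughly along these lines. The namespace and exact element/attribute names here are my illustration of the management model shown in the CLI commands; check your generated file for the authoritative schema:

```xml
<extension module="org.wildfly.extension.vertx"/>
...
<subsystem xmlns="urn:wildfly:vertx:1.0">
    <vertx-options>
        <vertx-option name="vo"
                      event-loop-pool-size="20"
                      max-eventloop-execute-time="5"
                      max-eventloop-execute-time-unit="SECONDS"/>
    </vertx-options>
    <vertx name="vertx" option-name="vo"/>
</subsystem>
```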
17 Dec 2024 12:00am GMT
16 Dec 2024
JBoss Blogs
Infinispan 15.1.0.Final
"It Was All A Dream" No, man. This time it's for real! Freshly brewed, for the fine connoisseurs of distributed caching, we are proud to present Infinispan 15.1, codenamed . Just like its beer namesake, this is a stout release, packed with flavor and features. Here's a quick rundown of what's new:

QUERY

SPATIAL QUERIES

Infinispan now supports geographical queries. This feature allows users to perform queries based on geographical criteria. Spatial predicates can be used in combination with other predicates to implement additional filtering. Moreover, spatial fields can be used to project distances and to order the results according to their distance from a given geographical point.

You can define one or more spatial fields on the same entity. Each of them denotes a pair of geographical coordinates: latitude and longitude. Infinispan's query language supports three spatial predicates: within circle, within polygon and within box. If we want to sort our results according to the distance from a given query point, we can use the order by distance clause. We can also project the distances from the same query point. Different units of measurement can be used to denote the radius of the circle predicate or to project the distances from a given query point. Read our recent and the for more information.

NESTED ENTITIES JOINS

This contribution was made by András Gyuró and Gabor Ori from our amazing Infinispan open source community. A big thanks to them!

The feature allows you to exploit the nested (not-flattened) relations between root entities and embedded entities in order to join their values in queries. As an example, let's suppose we have an entity Team with a nested embedded field named players. It is possible to execute a query which selects all the teams having at least one player with number 7 whose name is Ryan or Will.
A possible query in this case could be: select t.name from model.Team t join t.players p where (p.name='Ryan' AND p.number=7) or (p.name='Will' AND p.number=7)

NON-BLOCKING QUERY API

New non-blocking/reactive alternative methods have been added to the Query API: Publisher<T> publish(int maxBatchSize); CompletionStage<QueryResult<T>> executeAsync(); CompletionStage<Integer> executeStatementAsync(); The new methods are experimental, meaning they may change in the future, and are only available for the Hot Rod client (remote query).

NEW JAVA HOT ROD CLIENT

A brand-new client implementation has been introduced, replacing the current hotrod-client module. The public API is unchanged, so existing code can be used without modification. The new client completely removes the prior connection pool, instead opting for a single pipelined channel to each server. The connection pool configuration is thus ignored and has been deprecated. With only a single client connection to each server, users should see a substantial decrease in the number of file descriptors in use for both server and client applications. The majority of usage should also see performance gains with the new client; the opposite may occur in cases of a single server with extremely high concurrency usage on the client. The new client has dropped support for Hot Rod protocol versions older than 3.0, which dates from Infinispan 10. This was mostly done because some features in some 2.x protocol versions require dedicated sockets, which is not acceptable in the new client. The streaming cache commands (InputStream- and OutputStream-based) had to be reworked to support a single socket, so we added a new Hot Rod protocol 4.1 to support these commands. With the new client you can only use these streaming commands if your server also supports 4.1 or newer. If you need a Hot Rod protocol version older than 3.0, the prior client can be used by importing the hotrod-client-legacy module instead.
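As a sketch of the spatial predicates described above (the Restaurant entity, its location field and the coordinates are hypothetical, and the exact syntax should be checked against the query documentation), spatial queries could look like this:

```
/* circle predicate: entities whose location lies within 100 m of a point */
from geo.Restaurant r where r.location within circle(41.90, 12.45, 100)

/* order results by distance from the query point and project that distance */
select r.name, distance(r.location, 41.90, 12.45)
from geo.Restaurant r
order by distance(r.location, 41.90, 12.45)
```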
CONSOLE

The console has also been updated with support for cache aliases, the index metamodel and tracing.

CACHE ALIASES

In Infinispan 15.0, cache aliases only worked within the context of the RESP connector. This functionality has now been extended to all parts of Infinispan, including all other protocols as well as embedded mode.

CERTIFICATE RELOADING

SSL/TLS certificates have an expiration date, after which they are no longer valid. The process of renewing a certificate is also known as rotation. Infinispan now monitors the keystore files for changes and automatically reloads them without requiring a server or client restart. Note: to ensure seamless operations during certificate rotation, use certificates signed by a Certificate Authority and configure both server and client trust stores with the CA certificate. Using self-signed certificates will cause temporary handshake failures until all clients and servers have been updated.

TIME QUANTITIES IN CONFIGURATION

Wherever a time quantity, such as a timeout or an interval, is specified within a declarative configuration, it can be expressed using time units:
* ms: milliseconds
* s: seconds
* m: minutes
* h: hours
* d: days
For example: { "distributed-cache": { "remote-timeout": "35s" } }

FIXES

Too many to count. We want to thank our amazing community members for helping out with the detailed information that helps us debug and solve problems.

DEPRECATIONS AND REMOVALS

The main change is the removal of the old server templates (such as org.infinispan.DIST_SYNC), which were redundant and didn't provide any advantage over defining configurations explicitly.

JDK REQUIREMENTS

As with 15.0, you will need at least JDK 17 in order to use Infinispan 15.1. Infinispan also supports JDK 21 and the recently released JDK 23.

DOCUMENTATION

As usual, many improvements, updates and fixes.

RELEASE NOTES

You can look at the to see what has changed since our last development build. Get them from our .
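Returning to the time-quantity support above, the same 35-second remote timeout can also be expressed in the XML configuration dialect (the cache name here is hypothetical):

```xml
<distributed-cache name="mycache" remote-timeout="35s"/>
```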
16 Dec 2024 12:00am GMT
12 Dec 2024
JBoss Blogs
WildFly 35 Beta is released!
I'm pleased to announce that the new WildFly 35.0.0.Beta1 release is available for download at .

NEW AND NOTABLE

This quarter we had a heavy focus on MicroProfile, particularly .
* WildFly now .
* WildFly now .
* Standard WildFly now . This was previously supported in WildFly Preview.
* Standard WildFly now . This was previously supported in WildFly Preview.
* Our MicroProfile Reactive Messaging subsystem has added OpenTelemetry tracing integration for and .
There are plenty of new things beyond the MicroProfile area as well:
* WildFly now includes in the bin/systemd directory, replacing the old, unsupported docs/contrib/scripts/systemd files. The new units include support for a managed domain.
* The jaxrs subsystem now . This feature allows a client to send a JSON HTTP request with Content-Type "application/merge-patch+json"; the JSON content is merged directly into the target resource.
* We added to standard WildFly. This was previously ; now it is available in standard WildFly as well. This feature is provided at the .
* WildFly Preview has a new , intended to give users greater control over the configuration of Vert.x instances running in the server. This feature is provided at the .
* WildFly Glow has also received a new feature to ; this allows feature packs to be grouped into spaces, such as an incubating space, to reflect the stability of a feature pack and to let users select which spaces they want to use.
For further details please see the detailed in GitHub. As we approach the end of 2024, this is the time of year when we release the Beta of the next major version just before we wrap up for the end of the year; the Final release will follow once we return in the new year. Please take this time as an opportunity to try out this release and provide us with any feedback.
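To illustrate the jaxrs JSON merge-patch feature above (the resource path and field names are hypothetical), a client request could look like the following; per RFC 7386 semantics, fields in the body are merged into the target resource, with a null value removing a field:

```
PATCH /library/books/42 HTTP/1.1
Host: example.com
Content-Type: application/merge-patch+json

{ "title": "Updated title", "outOfPrint": null }
```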
Finally, I would like to thank everyone who has contributed to making this release happen, both via direct contributions to WildFly and via the countless contributions to the projects that WildFly depends upon. We also had a couple of first-time contributors this release, so I would like to thank for taking on one of our good-first-issue Jira issues to remove redundant code from our application client implementation and for contributing updates to our testsuite to move tests to a more appropriate location.
12 Dec 2024 12:00am GMT
Videos for the holidays and meet us at FOSDEM!
VIDEOS TO RE-WATCH

This year, the Keycloak project was present at multiple conferences. Here are the videos to watch over the holiday break if you haven't watched them yet: , , , and . When going through the list, I found that at least two of the talks have not been published on the Keycloak blog yet. So here they are:
* FOSDEM in February with the talk ,
* FrOSCon in August with .
Did we miss another video that we should have shared here?

WE ARE EXCITED TO CONNECT WITH THE COMMUNITY

All conferences were exciting for us: we met with the community to share the latest developments of Keycloak, engaged in discussions and heard interesting stories from people running Keycloak in their production environments. The FrOSCon and KubeCon conferences were special, as we had our own stand where we connected with both new and existing users of Keycloak. At FrOSCon, we even had our own signage up, as this photo proves! If you have not met us at a conference yet, please take this : let us know if you want to share your story with the broader community, and we will be in contact with you about the next steps.

MEET US NEXT AT FOSDEM!

The good news is that we will be back at . We got some talks accepted and will share the details once the schedule is out. In the meantime, save the date to either join us in Brussels or watch live on the stream. If you want to connect on-site, . Some of our team members will also be at the , which is unfortunately already sold out. We are already planning for other upcoming events in 2025, so return to this blog to read the latest news here!
12 Dec 2024 12:00am GMT
Quarkus Newsletter #51 - December
"Quarkus has surpassed the 1,000 contributor milestone" by Dimitrios Andreadis will explore the evolution and future of Quarkus, Red Hat's next-generation Java framework designed to optimize applications for cloud-native environments. "Java Quarkus LangChain4j - Building a Chatbot" by Bhagvan Kommadi will guide you through leveraging two powerful tools - Quarkus and LangChain4j - to create chatbots that are not only efficient and scalable but also capable of understanding and generating human-like responses. "Testing Apache Camel Routes with Testcontainers" by Andras Fejes describes how Apache Camel and Testcontainers combine EIPs for integration and containerized dependencies for robust, automated testing. Hamid Khanjani's "Hibernate Reactive with Panache: The Quarkus-Powered ORM Revolution for Reactive Java Applications and Kubernetes" describes how Hibernate, when combined with Panache, an intuitive and declarative ORM layer from Quarkus, becomes a game-changer for Java developers. Explore the different choices available for developing a new Quarkus application in "Getting Started with Quarkus: A Guide to Application Creation" by Jagnya Datta Panigrahi. You will also see the latest Quarkus Insights episodes, top tweets/discussions and upcoming events Quarkus will attend. Check out ! Want to get newsletters in your inbox? using the on-page form.
12 Dec 2024 12:00am GMT
A kubectl plugin to run WildFly management operations on Kubernetes
In this article, Jeff Mesnil presents a kubectl plugin to run WildFly management operations on Kubernetes.
12 Dec 2024 12:00am GMT
11 Dec 2024
JBoss Blogs
What's new in Vert.x 5
11 Dec 2024 12:00am GMT
Quarkus 3.17.4 - Maintenance release
We released Quarkus 3.17.4, a maintenance release for our 3.17 release train.

UPDATE

To update to Quarkus 3.17, we recommend updating to the latest version of the Quarkus CLI and running: quarkus update Note that quarkus update can update your applications from any version of Quarkus (including 2.x) to Quarkus 3.17. For more information about the adjustments you need to make to your applications, please refer to the .

FULL CHANGELOG

You can get the full changelog of on GitHub.

PLATFORM COMPONENT UPGRADES

CAMEL QUARKUS

Camel Quarkus has been updated to 3.17.0.

QUARKUS CXF

We updated to Quarkus CXF 3.17.3 in the Quarkus Platform.

COME JOIN US

We value your feedback a lot, so please report bugs, ask for improvements… Let's build something great together! If you are a Quarkus user or just curious, don't be shy and join our welcoming community:
* provide feedback on ;
* craft some code and ;
* discuss with us on and on the ;
* ask your questions on .
11 Dec 2024 12:00am GMT
Migrate from Vert.x 4 to Vert.x 5
11 Dec 2024 12:00am GMT
Eclipse Vert.x 5 candidate 3 released!
11 Dec 2024 12:00am GMT