01 May 2024

JBoss Blogs

How to start WildFly with a custom Banner

WildFly 32 introduces, as an experimental feature, the ability to print a banner when the application server is starting up. In this article, we will learn how to customize the banner and how to use a simple Ansible playbook to install a custom banner on every server in your inventory. Running WildFly in experimental mode ...
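Because the banner ships as an experimental-stability feature (see the WildFly 32 release announcement later in this feed), the server has to be started with the experimental stability level before anything is printed. A minimal sketch, assuming a standard standalone installation; customizing the banner itself is what the article covers:

    # enable experimental-stability features; the banner is printed during boot
    $ ./bin/standalone.sh --stability=experimental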

01 May 2024 1:57pm GMT

30 Apr 2024

JBoss Blogs

Configuring a Caching Realm with Elytron

This tutorial guides you through configuring a caching realm in Elytron to improve authentication performance for your WildFly applications. By caching user credentials retrieved from a separate security realm (e.g., LDAP), you can significantly reduce the load on your identity store and enhance application responsiveness. Setting up the Base Realm: Securing your WildFly applications often ...
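For orientation, a caching realm simply wraps an existing realm and keeps a bounded, time-limited credential cache in front of it. A minimal sketch using the WildFly CLI, assuming an existing LDAP realm named ldap-realm; the realm name, cache size and age are illustrative values, not taken from the tutorial:

    # wrap the existing LDAP realm in a caching realm (up to 256 entries, cached for ~5 minutes)
    /subsystem=elytron/caching-realm=cached-ldap-realm:add(realm=ldap-realm, maximum-entries=256, maximum-age=300000)

    # point the security domain at the caching realm instead of the LDAP realm directly
    /subsystem=elytron/security-domain=exampleDomain:write-attribute(name=realms, value=[{realm=cached-ldap-realm}])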

30 Apr 2024 10:19am GMT

Vlog: WildFly Glow, WildFly 32 deployment to OpenShift with database access

In this video we use WildFly Glow 1.0 to deploy the todo-backend quickstart application to OpenShift with an automatic connection to a PostgreSQL database. No configuration needed!

30 Apr 2024 12:00am GMT

Quarkus 3.10 - Hibernate Search standalone POJO mapper, Flyway 10, security enhancements

After some big changes in Quarkus 3.9, we have the pleasure to announce Quarkus 3.10. Quarkus 3.10 is for developers who want the latest features; if you are looking for an extended support cycle, you are encouraged to stay on 3.8 LTS.

Here are the main changes for 3.10:

* Extension for the Hibernate Search Standalone POJO Mapper with Elasticsearch
* Update Flyway to 10.10.0
* Move package config to an interface
* Allow authentication mechanism selection for a REST endpoint with an annotation
* Optional support for the OIDC session cookie dir encryption
* Support for verifying OIDC JWT claims with a custom Jose4j Validator
* Support resolving of static OIDC tenants based on token issuers
* Add OIDC TokenCertificateValidator

UPDATE

To update to Quarkus 3.10, we recommend updating to the latest version of the Quarkus CLI and running:

quarkus update

Note that quarkus update can update your applications from any version of Quarkus (including 2.x) to Quarkus 3.10. For more information about the adjustments you need to make to your applications, please refer to the migration guide. If you are not already using 3.x, please refer to the 3.0 migration guide for all the details. Once you have upgraded to 3.0, also have a look at the subsequent migration guides.

WHAT'S NEW?

HIBERNATE SEARCH STANDALONE POJO MAPPER

Hibernate Search has been part of the Quarkus ecosystem for a long time but until now we only supported the Hibernate ORM integration. Quarkus 3.10 adds a brand new extension to add support for the Hibernate Search standalone POJO Mapper, which lets you index arbitrary POJOs directly. If you want to index structured data coming from files, MongoDB entities, … this new extension should make you happy. Interested in this extension? We have a dedicated guide for you, which covers the Quarkus integration and the basics. For more advanced information about the Hibernate Search standalone POJO mapper, please consult the reference documentation.

FLYWAY 10

We are a bit late to the party as we encountered some incompatibility between native executables and the new Flyway 10, but Quarkus 3.10 comes with an upgrade to Flyway 10, more precisely 10.10. If you are not using quarkus update, have a look at the Flyway release notes, as more database support has been split out of Flyway core.
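As a concrete illustration of that module split (my own sketch, not taken from the announcement): with Flyway 10 the database-specific support lives in separate artifacts, so an application running PostgreSQL migrations without quarkus update would typically add the matching module next to the Quarkus Flyway extension:

    <!-- Quarkus Flyway extension -->
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-flyway</artifactId>
    </dependency>
    <!-- Flyway 10 moved PostgreSQL support into its own artifact -->
    <dependency>
        <groupId>org.flywaydb</groupId>
        <artifactId>flyway-database-postgresql</artifactId>
    </dependency>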
CHANGES TO QUARKUS.PACKAGE.* CONFIG

The quarkus.package.* config was refactored. The configuration changes are handled by quarkus update but if you are upgrading manually, have a closer look at the dedicated section of the migration guide. Note that the previous configuration properties are deprecated but should still work, so you don't have to upgrade right away. They will be dropped in a (relatively distant) future version.

SECURITY ENHANCEMENTS

This new version comes with several improvements related to security:

* Allow authentication mechanism selection for a REST endpoint with an annotation
* Optional support for the OIDC session cookie dir encryption
* Support for verifying OIDC JWT claims with a custom Jose4j Validator
* Support resolving of static OIDC tenants based on token issuers
* Add OIDC TokenCertificateValidator

QUARKUS CXF

Quarkus CXF 3.10.0 was released and is now available in the Quarkus Platform. Check for more information about what is new in this release.

FULL CHANGELOG

You can get the full changelog on GitHub.

CONTRIBUTORS

The Quarkus community keeps growing. Many thanks to each and every one of its contributors. In particular for the 3.10 release, thanks to Ales Justin, Alexander Schwartz, Alexey Loubyansky, Andy Damevin, Antonio Musarra, asjervanasten, avivmu, Bas Passon, Bruno Baptista, Clement Escoffier, Damiano Renfer, David M. Lloyd, ennishol, Eric Deandrea, Erin Schnabel, fdlane, Foivos Zakkak, Fouad Almalki, Francesco Nigro, Galder Zamarreño, George Gastaldi, Georgios Andrianakis, Guillaume Smet, Gwenneg Lepage, Holly Cummins, Ioannis Canellos, Jakub Jedlicka, James Netherton, Jan Martiska, Jason T. Greene, Jean Bisutti, Jerome Prinet, Jonas Jensen, Juan Jose Garcia, Julien Ponge, Jérémie Bresson, Katia Aresti, Klaus Nguetsa, Ladislav Thon, Laurent Broudoux, luneo7, Marc Nuri, Marco Sappé Griot, Marco Schaub, Marek Skacelik, marko-bekhta, Martin Kouba, Matej Novotny, Max Rydahl Andersen, Michal Maléř, Michal Vavřík, Michiel Thomassen, Monhemius, B. (Bart), Ozan Gunalp, Peter Palaga, Phillip Krüger, Pierre Adam, Robbie Gemmell, Roberto Cortez, Sanne Grinovero, Sebastian Davids, Sergey Beryozkin, Stéphane Épardaud, Thomas Canava, Thomas Segismont, Vinicius A. Santos, xstefank, Yoann Rodière, Yoshikazu Nojima, Yukihiro Okada, and Žan Horvat.

COME JOIN US

We value your feedback a lot so please report bugs, ask for improvements… Let's build something great together!

If you are a Quarkus user or just curious, don't be shy and join our welcoming community:

* provide feedback on ;
* craft some code and ;
* discuss with us on and on the ;
* ask your questions on .

30 Apr 2024 12:00am GMT

29 Apr 2024

JBoss Blogs

Secure WildFly applications with OpenID Connect

WildFly 25 enables you to secure deployments using OpenID Connect (OIDC) without installing a Keycloak client adapter. This tutorial will show a proof of concept example of it. Keycloak with Quarkus: The latest version of Keycloak is based on the Quarkus runtime. If you are still running the old Keycloak engine, we recommend checking this article ...
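For context, the adapter-less approach uses WildFly's built-in elytron-oidc-client support: the deployment opts in to the OIDC mechanism and ships its client settings with the application. A minimal sketch, assuming a Keycloak realm and client created for the demo; the URLs and client id below are placeholders, not values from the article.

In WEB-INF/web.xml, select the OIDC authentication mechanism:

    <login-config>
        <auth-method>OIDC</auth-method>
    </login-config>

And describe the client in WEB-INF/oidc.json (placeholder values):

    {
        "provider-url" : "http://localhost:8180/realms/demo",
        "client-id" : "my-webapp",
        "public-client" : true,
        "ssl-required" : "EXTERNAL"
    }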

29 Apr 2024 7:48am GMT

Quarkus 3.9.5 released - Maintenance release

Today, we released Quarkus 3.9.5, our fourth (we skipped 3.9.0) maintenance release for the 3.9 release train. This release contains bugfixes and documentation improvements. It should be a safe upgrade for anyone already using 3.9.

UPDATE

To update to Quarkus 3.9, we recommend updating to the latest version of the Quarkus CLI and running:

quarkus update

Note that quarkus update can update your applications from any version of Quarkus (including 2.x) to Quarkus 3.9. If you are not already using 3.x, please refer to the 3.0 migration guide for all the details. Once you have upgraded to 3.0, also have a look at the subsequent migration guides.

FULL CHANGELOG

You can get the full changelog on GitHub.

COME JOIN US

We value your feedback a lot so please report bugs, ask for improvements… Let's build something great together!

If you are a Quarkus user or just curious, don't be shy and join our welcoming community:

* provide feedback on ;
* craft some code and ;
* discuss with us on and on the ;
* ask your questions on .

29 Apr 2024 12:00am GMT

28 Apr 2024

JBoss Blogs

Product-Market Fit Framework for B2B Startups

Finding product-market fit (PMF) is arguably the most critical challenge faced by startups. Navigating the path to PMF can often feel like moving through a labyrinth, with each turn posing potential setbacks or breakthroughs. The PMF framework is designed to guide startups through the various stages of validating and scaling their product in the market. It helps founders identify where they are in the product-market fit journey, what their immediate focus should be, and how to recognize if they are on the right track or if adjustments are needed.

The PMF framework is divided into four distinct levels, each representing a stage in a startup's lifecycle:

1. Nascent: Focus on identifying a critical problem and delivering a solution that is deeply satisfying to a small group of customers.
2. Developing: Transition from initial traction to a scalable business model by increasing the customer base and establishing reliable demand generation processes.
3. Strong: Scale business operations efficiently while maintaining product quality that meets market demands.
4. Extreme: Achieve widespread market acceptance, continuously optimize the product offerings, and explore new market opportunities for further expansion.

The table below provides a look at each level, featuring key characteristics such as company and financial metrics, primary focus areas, benchmarks for measuring progress, and danger signals that might indicate potential issues.

Product-Market Fit Framework Summary

Reaching PMF means you've successfully developed a product that meets a substantial market demand and is capable of sustaining growth. Achieving PMF is not a one-time activity but an ongoing process that involves three dimensions: Satisfaction, Demand, and Efficiency. These dimensions evolve over the lifetime of a startup, each taking precedence at different stages of the company's growth.

* Satisfaction: In the early stages, ensuring customer satisfaction is paramount. Founders must concentrate on solving a critical problem that is important and urgent for a select group of customers. This involves creating a product or service that not only addresses the problem effectively but also delivers a superior experience compared to existing solutions. As startups move to higher levels of PMF, maintaining satisfaction remains crucial but becomes part of a broader strategy that includes scaling and optimizing operations.
* Demand: As a startup transitions from the nascent to the developing stage, the focus shifts towards generating and scaling demand. At this point, the product or service has been validated with an initial customer base, and the challenge becomes attracting a larger audience. This involves fine-tuning marketing strategies, diversifying channels, and increasing market outreach to capture a broader segment of potential customers. Successful demand generation is marked by an increasing customer base and the establishment of repeatable sales processes.
* Efficiency: In the later stages of PMF, particularly when a startup reaches the strong and extreme levels, efficiency becomes a critical focus. At these stages, the startup must optimize its operations to handle the scaling business effectively. This includes streamlining processes, reducing costs, improving operational throughput, and leveraging technology to enhance productivity. Efficiency gains are crucial for sustaining growth at scale, managing large teams, and expanding to new markets without compromising on quality or customer satisfaction.
Focus areas during each stage of a startup's lifecycle

The PMF framework is not a roadmap; it's a diagnostic tool that helps founders make strategic decisions and pivot when necessary, ensuring they stay aligned with market needs and business objectives. Through these evolving focus areas, the PMF framework not only helps startups understand what is required at each stage but also prepares them to anticipate changes in focus as they grow.

This amazing framework was thoroughly outlined by Todd Jackson during a session on Lenny's Product Growth Podcast. Impressed by the depth and practicality of the discussion, I felt compelled to distill and document the key points to better understand and apply these principles. For a comprehensive exploration of the framework and to hear from Todd Jackson himself, listen to the full episode. Todd's expertise and clarity in presenting this framework make it an extremely useful resource for any startup looking to navigate the complex journey to market fit.

28 Apr 2024 6:35pm GMT

25 Apr 2024

JBoss Blogs

WildFly 32 is released!

I'm pleased to announce that the new WildFly and WildFly Preview 32.0.0.Final releases are available for download. There's a lot to talk about this time, so let's get going!

NEW AND NOTABLE

WILDFLY GLOW 1.0 FINAL

Ever since the introduction of Galleon several years back, a major WildFly focus has been tooling to improve our users' ability to easily provision an optimal WildFly installation, on-premise and particularly for the cloud. I'm very excited to announce Final availability of a major advance in this area - the set of provisioning tools we call WildFly Glow. The WildFly Glow tools (a command line tool and a Maven plugin) analyze your application artifact, determine what WildFly feature-packs and Galleon layers are needed to run your application, and make suggestions about other features (e.g. TLS support) that you may want to include in your optimized WildFly installation. You can take the information WildFly Glow provides and use it in your own provisioning configuration, or you can have WildFly Glow provision a server, bootable jar or Docker image for you.

The WildFly Glow documentation gives you a good sense of what WildFly Glow is about. But to really help you understand WildFly Glow's benefits, I encourage you to read or watch the various articles, presentations and videos that the WildFly community has published this year. Among them, Jean Francois Denise presented WildFly Glow in March; his slides are available, and his talk starts at the 2:47:52 mark of the recording. Also, keep an eye out for an upcoming post from Jean Francois on using WildFly Glow to help with automatic connection to a database when deploying on OpenShift.

USER GUIDES

We've added a new guides page to the wildfly.org site. Each guide will show the steps to accomplish a specific, focused task, with links to guides showing any prerequisites and to guides for related tasks. This is something WildFly has long needed, and we're very excited to see it happening! We're now up to 10 guides in a variety of topic areas. Please have a look and give us your feedback and suggestions for other guides you'd like to see.

INDIVIDUAL FEATURES

There are a number of new individual features in WildFly 32, but before getting into the individual items I want to highlight again the capability introduced in WildFly 31 to introduce features at different stability levels. Features can be introduced at one of four stability levels - experimental, preview, community or default - with the ideal outcome being that we promote them in subsequent releases to higher levels. The goal here is to allow users who want to look at features in earlier stages of the development lifecycle to easily do so, without leaving users who are not interested in that in a situation where they may inadvertently use those features. We introduced this capability in WildFly 31, and added some features at community stability, but in WildFly 32 we've significantly expanded our use of the concept, and added tooling support for it. I'll talk more about feature stability levels below, but first let's talk about the new features.

SECURITY

* For outbound requests, we've added support for an SSLContext that can be selected dynamically. This feature is provided at the community stability level.
* The elytron-oidc-client subsystem has new options for OpenID Connect authentication requests. This feature is provided at the preview stability level.
* Authentication using credentials updated outside of WildFly is now handled correctly. This feature is provided at the default stability level.

PROVISIONING

* The WildFly provisioning tooling has been updated to support feature stability levels. See the tooling section below for more on this.
This feature is provided at the community stability level.

* Support has been added for creating channels defining component versions used to provision WildFly, which can be maintained separately from WildFly's feature packs. This ability has been used for a while now in component testing and by provisioning projects like Prospero. WildFly now provides channels as part of each release to make such use easier. This feature is provided at the community stability level. We'll continue to make further use of WildFly Channels in upcoming WildFly releases. To learn more about Prospero and WildFly Channels, have a look at the articles the community has published.

WILDFLY DEVELOPMENT

The following two features are focused on people who are developing either WildFly itself or extensions to it.

* Subsystem development enhancements previously used in the wildfly-clustering-common Maven module have been extracted to make them more broadly usable.
* New utilities for the testsuite are now available. This feature is provided at the community stability level.

OTHER GOODIES

* Standard WildFly now supports Jakarta MVC via a new mvc-krazo subsystem. This capability was previously introduced in WildFly Preview 31; now it is available in standard WildFly. This feature is provided at the preview stability level.
* When you start WildFly, instead of always typing long things like -c standalone-microprofile-ha.xml, you can now use a shorter alternative. This feature is provided at the community stability level.
* For all you asciiart fans, when you start WildFly with the --stability=experimental flag, you now get a banner. This feature is provided at the experimental stability level.

WILDFLY PREVIEW, EE 11 AND SE 17

The 32 release introduces a significant inflection in how we are using WildFly Preview. Beginning with this release we are starting to use WildFly Preview to provide a look at what we're doing for Jakarta EE 11 support. EE 11 won't go GA before this summer, and standard WildFly won't support EE 11 before the WildFly 34 release, at the earliest. But when we wrapped up 32 development there were milestone, Release Candidate and Final releases of many EE 11 specs and implementations available, so we decided to provide those in WildFly Preview. This means for a number of EE APIs, WildFly Preview no longer provides an EE 10 compatible implementation. However, for a number of specifications that are planning changes for EE 11 we are still offering the EE 10 variant. In future releases we'll shift those to the EE 11 variants.

As a result of this shift to EE 11 APIs, WildFly Preview no longer supports running on Java SE 11. Going forward, if you want to use WildFly Preview you'll need to use SE 17 or higher. A number of EE 11 APIs no longer produce SE 11 compatible binaries, which means an EE 11 runtime can no longer support SE 11.

Note: This removal of support for SE 11 has no impact on standard WildFly. Standard WildFly 32 continues to support running on SE 11. We do, however, encourage users to move to SE 17 or later, as the general Java ecosystem is moving away from SE 11 support, and eventually standard WildFly will as well.

The following table lists the various Jakarta EE technologies offered by WildFly Preview 32, along with information about which EE platform version the specification relates to. Note that a number of Jakarta specifications are unchanged between EE 10 and EE 11, while other EE technologies that WildFly offers are not part of EE 11.
Jakarta EE Technology | WildFly Preview Version | EE Version
Jakarta Activation | 2.1 | 10 & 11
Jakarta Annotations | 3.0.0 | 11
Jakarta Authentication | 3.0 | 10
Jakarta Authorization | 3.0.0-M2 | 11
Jakarta Batch | 2.1 | 10 & 11
Jakarta Concurrency | 3.1.0-M1 | 11
Jakarta Connectors | 2.1 | 10 & 11
Jakarta Contexts and Dependency Injection | 4.1.0 | 11
Jakarta Debugging Support for Other Languages | 2.0 | 10 & 11
Jakarta Dependency Injection | 2.0 | 10 & 11
Jakarta Enterprise Beans | 4.0 | 10 & 11
Jakarta Enterprise Web Services | 2.0 | 10
Jakarta Expression Language | 6.0.0 | 11
Jakarta Faces | 4.1.0-M1 | 11
Jakarta Interceptors | 2.2.0 | 11
Jakarta JSON Binding | 3.0 | 10 & 11
Jakarta JSON Processing | 2.1 | 10 & 11
Jakarta Mail | 2.1 | 10 & 11
Jakarta Messaging | 3.1 | 10 & 11
Jakarta MVC (preview stability only) | 2.1 | N/A
Jakarta Pages | 3.1 | 10
Jakarta Persistence | 3.2.0-M2 | 11
Jakarta RESTful Web Services | 3.1 | 10
Jakarta Security | 4.0.0-M2 | 11
Jakarta Servlet | 6.1.0-M2 | 11
Jakarta SOAP with Attachments | 3.0 | 10
Jakarta Standard Tag Library | 3.0 | 10 & 11
Jakarta Transactions | 2.0 | 10 & 11
Jakarta Validation | 3.1.0-M2 | 11
Jakarta WebSocket | 2.2.0-M1 | 11
Jakarta XML Binding | 4.0 | 10
Jakarta XML Web Services | 4.0 | 10

Notes:
1. This Jakarta EE 10 technology is not part of EE 11 but is still provided by WildFly.
2. Jakarta MVC is not part of the Jakarta EE Platform or the Web or Core Profile.

Warning: Jakarta EE 11 no longer supports running with a Java SecurityManager enabled. As a result, individual Jakarta specification projects may have removed SecurityManager calls from the API jars WildFly Preview integrates, and the associated implementation artifacts may have done the same. Consequently, WildFly Preview should not be run with the SecurityManager enabled. Future releases will prohibit use with the SecurityManager enabled if EE 11 APIs are used.

FEATURE STABILITY LEVELS

As I noted above, WildFly now provides new features at four stability levels - experimental, preview, community or default. Out of the box, standard WildFly allows use of features at community or default stability, while WildFly Preview allows preview, community or default. If you wish to allow lower stability level features than the out-of-the-box setting, this can be done using the stability command line parameter:

bin/standalone.sh --stability=experimental

In WildFly 32 we've introduced features at all four stability levels. You can identify the stability level of new features by looking at the title of the Jira issue in the "Feature Request" section of the release notes. For features at anything other than default stability, the issue title will be prefaced by one of [Experimental], [Preview] or [Community].

TOOLING SUPPORT FOR FEATURE STABILITY LEVELS

Our Galleon-based provisioning tooling has also had updates related to feature stability levels: we've added configuration options to allow you to control the stability level of features in your installation. This can be used to do things like:

* Prevent the provisioning of lower stability features, so they are not available for use even when the --stability server start param is used.
* Enable the inclusion of lower stability features in the configuration files the provisioning tool generates, avoiding the need to use a post-provisioning tool like the WildFly CLI to incorporate them into the configuration.

To limit your installation to the highest stability features, you would set the stability level to default in your Maven plugin configuration. To allow Galleon to include lower stability features in your installation's generated configuration files, you could instead set it to preview; a hedged configuration sketch follows below.
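A rough sketch of what that Maven plugin configuration could look like, assuming the wildfly-maven-plugin and its Galleon stability option (the exact option names and nesting can differ between plugin versions, so treat this as illustrative rather than authoritative):

    <plugin>
        <groupId>org.wildfly.plugins</groupId>
        <artifactId>wildfly-maven-plugin</artifactId>
        <configuration>
            <!-- keep only default-stability features in the provisioned server -->
            <galleon-options>
                <stability-level>default</stability-level>
            </galleon-options>
        </configuration>
    </plugin>

Replacing default with preview (or experimental) would let the generated configuration include lower stability features, as described above.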
Note: If one wants to have different values for configuration files and packages (i.e. filesystem resources like JBoss Modules modules), then the config-stability-level and package-stability-level options should be used instead of stability-level. The use case for using config-stability-level and package-stability-level as an alternative to stability-level is when the user wishes to generate configurations with features at a given stability level while allowing provisioning of packages at a lower level. The presence of the lower stability level packages allows subsequent update of the configuration, e.g. with the WildFly CLI, to enable lower stability features.

The latest provisioning tools, including the Maven plugins (for servers and for bootable jars), all support these stability level configuration options. I encourage you to try them out.

SUPPORTED SPECIFICATIONS

JAKARTA EE

Standard WildFly 32 is a compatible implementation of the EE 10 Platform as well as the EE 10 Web Profile and the EE 10 Core Profile. WildFly is EE 10 Platform, Web Profile and Core Profile compatible when running on both Java SE 11 and Java SE 17. WildFly is also a compatible EE 10 Core Profile implementation when running on SE 21. Evidence supporting our certification is available in the WildFly Certifications repository on GitHub, covering the Jakarta EE 10 Full Platform, the Jakarta EE 10 Web Profile and the Jakarta EE 10 Core Profile.

MICROPROFILE

WildFly supports numerous MicroProfile specifications. Because we no longer support MicroProfile Metrics, WildFly 32 cannot claim to be a compatible implementation of the MicroProfile 6.1 specification. However, WildFly's MicroProfile support includes implementations of the following specifications in our "full" (e.g. standalone-full.xml) and "default" (e.g. standalone.xml) configurations as well as our "microprofile" configurations (e.g. standalone-microprofile.xml):

MicroProfile Technology | WildFly Full/Default Configurations | WildFly MicroProfile Configuration
MicroProfile Config 3.1 | X | X
MicroProfile Fault Tolerance 4.0 | - | X
MicroProfile Health 4.0 | - | X
MicroProfile JWT Authentication 2.1 | X | X
MicroProfile LRA 2.0 | - | X
MicroProfile OpenAPI 3.1 | - | X
MicroProfile Reactive Messaging 3.0 | - | -
MicroProfile Reactive Streams Operators 3.0 | - | -
MicroProfile Rest Client 3.0 | X | X
MicroProfile Telemetry 1.1 | - | X

Compatibility evidence for the above specifications that are part of MicroProfile 6.1 can be found in the WildFly Certifications repository on GitHub.

JAVA SE SUPPORT

RECOMMENDED SE VERSIONS

I'm pleased to be able to say that our recommendation is that you run WildFly 32 on Java SE 21, as that is the latest LTS JDK release where we have completed the full set of testing we like to do before recommending a particular SE version. WildFly 32 also is heavily tested and runs well on Java 17 and Java 11. This recommendation to run on SE 21 is a shift from previous releases, where we recommended SE 17. This is because during the WildFly 32 development cycle we completed the qualification exercise that we go through before recommending an LTS SE release. Our recommendation of SE 21 over earlier LTS releases is solely because as a general principle we recommend being on later LTS releases, not because of any problems with WildFly on SE 17 or SE 11.
One reason to use later SE versions is that it gets you ahead of the curve as WildFly and other projects begin to move on from supporting older SE releases. In a previous release announcement I indicated that WildFly 30 would likely be the last feature release to support SE 11. Obviously that is not the case, as we still support SE 11 in standard WildFly 32. However, as noted above, WildFly Preview no longer supports SE 11. We're continuing to evaluate our plans around SE 11 support, and I'll be sure to post here as we make decisions. I do encourage WildFly users to prepare now for an eventual move off of SE 11.

While we recommend using an LTS JDK release, I do believe WildFly runs well on JDK 22. By runs well, I mean the main WildFly testsuite runs with no more than a few failures in areas not expected to be commonly used. We want developers who are trying to evaluate what a newer JVM means for their applications to be able to look to WildFly as a useful development platform. Please note that WildFly runs in classpath mode.

INCOMPATIBLE CHANGES

Some functionality has been removed from WildFly 32; we suggest any users of this functionality investigate the available alternatives. As noted above, WildFly Preview no longer supports running on Java SE 11. Users also should not run WildFly Preview 32 with a Java SecurityManager enabled.

RELEASE NOTES

The full WildFly 32 release notes are available. Issues fixed in the underlying WildFly Core release are listed in the WildFly Core JIRA. Please try it out and give us your feedback. Meanwhile, we're busy at work on WildFly 33!

Best regards,

Brian

25 Apr 2024 12:00am GMT

Indexing rollover with Quarkus and Hibernate Search

This is the first post in the series diving into the implementation details of the application backing the guide search of quarkus.io.

Does your application need full-text search capabilities? Do you need to keep your application running and producing search results without any downtime, even when reindexing all your data? Look no further. In this post, we'll cover how you can approach this problem and solve it in practice with a few low-level APIs, provided you use Hibernate Search. The approach suggested in this post is based on the fact that Hibernate Search uses index aliases, and communicates with the actual index through a read/write alias, depending on the operation it needs to perform. For example, a search operation will be routed to a read index alias, while an indexing operation will be sent to a write index alias. This approach is implemented and successfully used in our Quarkus application that backs the guides' search of quarkus.io. You can see the complete implementation in the application's source repository.

Applications using Hibernate Search can keep their search indexes up-to-date by updating the index gradually, as the data on which the index documents are based is modified, providing near real-time index synchronisation. On the other hand, if the search requirements allow for a delay in synchronisation or the data is updated only at certain times of day, the option of mass indexing can effectively keep the indexes up-to-date. The Hibernate Search documentation provides more information about these approaches and other Hibernate Search capabilities. The application discussed in this post is using the mass indexing approach. This means that at certain events, e.g., when a new version of the application is deployed or a scheduled time is reached, the application has to process the documentation guides and create search index documents from them.

Now, since we want our application to keep providing results to any search requests while we add/update documents to the indexes, we cannot perform a simple reindexing operation using a mass indexer (or the recently added alternatives), as these would drop all existing documents from the index before indexing them: search operations would not be able to match them anymore until reindexing finishes. Instead, we can create a new index with the same schema and route any write operations to it. Since Hibernate Search does not provide the rollover feature out of the box, we will need to resort to using the lower-level APIs to access the Elasticsearch client and perform the required operations ourselves.

To do so, we need to follow a few simple steps:

1. Get the mapping information for the index we want to reindex using the schema manager.

    @Inject
    SearchMapping searchMapping; (1)
    // ...
    searchMapping.scope(MyIndexedEntity.class).schemaManager() (2)
            .exportExpectedSchema((backendName, indexName, export) -> { (3)
                var createIndexRequestBody = export.extension(ElasticsearchExtension.get())
                        .bodyParts().get(0); (4)
                var mappings = createIndexRequestBody.getAsJsonObject("mappings"); (5)
                var settings = createIndexRequestBody.getAsJsonObject("settings"); (6)
            });

1. Inject SearchMapping somewhere in your app so that we can use it to access a schema manager.
2. Get a schema manager for the indexed entity we are interested in (MyIndexedEntity). If all entities should be targeted, then Object.class can be used to create the scope.
3. Use the export schema API to access the mapping information.
4. Use the extension to get access to the Elasticsearch-specific .bodyParts() method that returns a JSON representing the JSON HTTP body needed to create the indexes.
5. Get the mapping information for the particular index.
6. Get the settings for the particular index.

2. Get the reference to the Elasticsearch client, so we can perform API calls to the search backend cluster:

    @Inject
    SearchMapping searchMapping; (1)
    // ...
    RestClient client = searchMapping.backend() (2)
            .unwrap(ElasticsearchBackend.class) (3)
            .client(RestClient.class); (4)

1. Inject SearchMapping somewhere in your app so that we can use it to access a schema manager.
2. Access the backend from a search mapping instance.
3. Unwrap the backend to the ElasticsearchBackend, so that we can access backend-specific APIs.
4. Get a reference to the Elasticsearch's rest client.

3. Create a new index using the OpenSearch/Elasticsearch rollover API that would allow us to keep using the existing index for read operations, while write operations will be sent to the new index:

    @Inject
    SearchMapping searchMapping; (1)
    // ...
    SearchIndexedEntity<?> entity = searchMapping.indexedEntity(MyIndexedEntity.class);
    var index = entity.indexManager().unwrap(ElasticsearchIndexManager.class).descriptor(); (2)
    var request = new Request("POST", "/" + index.writeName() + "/_rollover"); (3)
    var body = new JsonObject();
    body.add("mappings", mappings);
    body.add("settings", settings);
    body.add("aliases", new JsonObject()); (4)
    request.setEntity(new StringEntity(gson.toJson(body), ContentType.APPLICATION_JSON));
    var response = client.performRequest(request); (5)
    // ...

1. Inject SearchMapping somewhere in your app so that we can use it to access a schema manager.
2. Get the index descriptor to get the aliases from it.
3. Start building the rollover request body using the write index alias from the index descriptor.
4. Note that we are including an empty "aliases" so that the aliases are not copied over to the new index, except for the write alias (which is implicitly updated since the rollover request is targeting it directly). We don't want the read alias to start pointing to the new index immediately.
5. Perform the rollover API request using the Elasticsearch REST client obtained in the previous step.

With this successfully completed, indexes are in the state we wanted: we can start populating our write index without affecting search requests. Once we are done with indexing, we can either commit or rollback depending on the results.

Committing the index rollover means that we are happy with the results and ready to switch to the new index for both reading and writing operations while removing the old one. To do that, we need to send a request to the cluster:

    var client = ... (1)
    var request = new Request("POST", "_aliases"); (2)
    request.setEntity(new StringEntity("""
            {
              "actions": [
                {
                  "add": { (3)
                    "index": "%s",
                    "alias": "%s",
                    "is_write_index": false
                  },
                  "remove_index": { (4)
                    "index": "%s"
                  }
                }
              ]
            }
            """.formatted( newIndexName, readAliasName, oldIndexName ), (5)
            ContentType.APPLICATION_JSON));
    var response = client.performRequest(request);
    // ...

1. Get access to the Elasticsearch REST client as described above.
2. Start creating an _aliases API request.
3. Add an action to update the index aliases to use the new index for both read and write operations. Here, we must make the read alias point to the new index.
4. Add an action to remove the old index.
5. The names of the new/old index can be retrieved from the response of the initial _rollover API request, while the aliases can be retrieved from the index descriptor.
Otherwise, if we have encountered an error or decided for any other reason to stop the rollover, we can roll back to using the initial index:

    var client = ... (1)
    var request = new Request("POST", "_aliases"); (2)
    request.setEntity(new StringEntity("""
            {
              "actions": [
                {
                  "add": { (3)
                    "index": "%s",
                    "alias": "%s",
                    "is_write_index": true
                  },
                  "remove_index": { (4)
                    "index": "%s"
                  }
                }
              ]
            }
            """.formatted( oldIndexName, writeAliasName, newIndexName ), (5)
            ContentType.APPLICATION_JSON));
    var response = client.performRequest(request);
    // ...

1. Get access to the Elasticsearch REST client as described above.
2. Start creating an _aliases API request.
3. Add an action to update the index aliases to use the old index for both read and write operations. Here, we must make the write alias point back to the old index.
4. Add an action to remove the new index.
5. The names of the new/old index can be retrieved from the response of the initial _rollover API request, while the aliases can be retrieved from the index descriptor.

Keep in mind that in case of a rollback, your initial index may be out of sync if any write operations were performed while the write alias was pointing to the new index.

With this knowledge, we can organize the rollover process as follows:

    try (Rollover rollover = Rollover.start(searchMapping)) {
        // Perform the indexing operations
        // ...
        rollover.commit();
    }

Where the Rollover class will look as follows:

    class Rollover implements Closeable {

        public static Rollover start(SearchMapping searchMapping) {
            // initiate the rollover process by sending the _rollover request
            // ...
            return new Rollover( client, rolloverResponse ); (1)
        }

        @Override
        public void close() {
            if ( !done ) { (2)
                rollback();
            }
        }

        public void commit() {
            // send the `_aliases` request to switch to the *new* index
            // ...
            done = true;
        }

        public void rollback() {
            // send the `_aliases` request to switch to the *old* index
            // ...
            done = true;
        }
    }

1. Keep the reference to the Elasticsearch REST client to perform API calls.
2. If we haven't successfully committed the rollover, it'll be rolled back on close.

Once again, for a complete working example of this rollover implementation, check out the application's source repository. If you find this feature useful and would like to have it built into your Hibernate Search and Quarkus apps, feel free to reach out to us to discuss your ideas and suggestions. Stay tuned for more details in the coming weeks as we publish more blog posts diving into other interesting implementation aspects of this application.

Happy searching and rolling over!

25 Apr 2024 12:00am GMT

23 Apr 2024

JBoss Blogs

Obtaining heap dump on OutOfMemoryError with Quarkus native

Starting with GraalVM for JDK 21, native executables can run with the -XX:+HeapDumpOnOutOfMemoryError option to generate a heap dump when a java.lang.OutOfMemoryError is thrown. In this blog post we will explore how to use the flag, inspect what a GraalVM Native Image heap dump looks like, and see how it compares with one produced by HotSpot.

Note: The heap of GraalVM Native Image executables has both read-only and read-write segments. The read-only part is referred to as the "image heap" and contains the data pre-initialized during the image build to help speed up start-up time. The read-write part is where allocations at runtime are made. Therefore, heap dumps generated at runtime will contain content from both the "image heap" and the read-write heap.

To see this flag in action, we need to manufacture a situation where a Quarkus application runs out of memory. One easy way to achieve this is to configure the application with a garbage collector that doesn't do any memory reclamation, i.e. the Epsilon GC. Once the Quarkus application is running with Epsilon GC, apply some load and within a short space of time it will run out of memory and produce a heap dump.

Let's do this using a Quarkus application that simply responds to an HTTP endpoint request as a starting point. The sample application can be downloaded from code.quarkus.io using a browser or via the command line:

    $ wget "https://code.quarkus.io/d?S=io.quarkus.platform%3A3.8&cn=code.quarkus.io" -O code.zip
    $ unzip code.zip
    $ cd code-with-quarkus

Next, build a Quarkus native executable with GraalVM for JDK 21, configuring it to use Epsilon GC:

    $ ./mvnw package -DskipTests -Dnative -Dquarkus.native.additional-build-args=--gc=epsilon

Note: The GC selection needs to be done at build time for Quarkus native applications.

While the Quarkus native executable is being produced you will be able to observe that the GC is indeed configured to be Epsilon GC:

    [INFO] --- quarkus-maven-plugin:3.9.3:build (default) @ getting-started-reactive ---
    ...
    [1/8] Initializing...
    Java version: 21.0.2+13, vendor version: GraalVM CE 21.0.2+13.1
    Graal compiler: optimization level: 2, target machine: compatibility
    C compiler: gcc (redhat, x86_64, 13.2.1)
    Garbage collector: Epsilon GC (max heap size: 80% of RAM)

Once the build completes, start Quarkus with -XX:+HeapDumpOnOutOfMemoryError -XX:+ExitOnOutOfMemoryError. The latter forces the application to shut down when an OutOfMemoryError occurs rather than leave the process in an indeterminate state:

    $ getting-started-reactive-1.0.0-SNAPSHOT-runner -XX:+HeapDumpOnOutOfMemoryError -XX:+ExitOnOutOfMemoryError -Xmx64m

    (Quarkus startup banner and subsequent log output elided)

After applying some load, the application runs out of memory and writes a heap dump. A class histogram of its contents looks like this:

    ------------------------------------------------------------------------------------------------
     1,231,054     9,438,264    58,748,168

    === Class Histogram
    Table is sorted by "SIZE". Printing first 6 lines. Use -DprintFirst=# to override.

     INSTANCES        SIZE    SUM SIZE  CLASS
    ------------------------------------------------------------------------------------------------
             1   3,480,544   3,480,544  byte[3480528]
             1   3,236,728   3,236,728  byte[3236705]
             1     642,648     642,648  byte[642626]
             1     289,824     289,824  byte[289808]
             1     173,664     173,664  byte[173645]
             1     157,728     157,728  byte[157710]
           ...         ...         ...  ...
     1,231,048   1,457,128  50,767,032
    ------------------------------------------------------------------------------------------------
     1,231,054   9,438,264  58,748,168
    === Class Histogram
    Table is sorted by "SUM SIZE". Printing first 6 lines. Use -DprintFirst=# to override.

     INSTANCES        SIZE    SUM SIZE  CLASS
    ------------------------------------------------------------------------------------------------
             1   3,480,544   3,480,544  byte[3480528]
             1   3,236,728   3,236,728  byte[3236705]
       132,330          24   3,175,920  java.lang.String
        50,277          40   2,011,080  io.vertx.core.http.impl.headers.HeadersMultiMap$MapEntry
        10,054         184   1,849,936  io.quarkus.resteasy.reactive.server.runtime.QuarkusResteasyReactiveRequestContext
        44,852          40   1,794,080  com.oracle.svm.core.monitor.JavaMonitor
           ...         ...         ...  ...
       993,539   2,720,704  43,199,880
    ------------------------------------------------------------------------------------------------
     1,231,054   9,438,264  58,748,168

The presence of SubstrateVM, the VM that powers native images built with GraalVM, can be clearly observed because of the instances of com.oracle.svm.core.monitor.JavaMonitor present in the heap dump.

What would the heap dump look like if we repeat exactly the same exercise but instead we use Quarkus JVM mode? Let's see. Rebuild the Quarkus app for JVM mode and run it with Epsilon GC:

    $ mvnw package -DskipTests
    $ java -XX:+HeapDumpOnOutOfMemoryError -XX:+ExitOnOutOfMemoryError -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xmx64m -jar quarkus-run.jar

    (Quarkus startup banner and subsequent log output elided)

Applying load again and letting the application run out of memory produces this class histogram:

    ------------------------------------------------------------------------------------------------
       947,377    19,428,176    66,477,312

    === Class Histogram
    Table is sorted by "SIZE". Printing first 6 lines. Use -DprintFirst=# to override.

     INSTANCES        SIZE    SUM SIZE  CLASS
    ------------------------------------------------------------------------------------------------
             1     972,120     972,120  int[243026]
             1     416,136     416,136  int[104030]
             1     282,056     282,056  int[70510]
             1     237,608     237,608  byte[237587]
             1     131,920     131,920  int[32976]
             1     129,672     129,672  int[32414]
           ...         ...         ...  ...
       947,371  17,258,664  64,307,800
    ------------------------------------------------------------------------------------------------
       947,377  19,428,176  66,477,312

    === Class Histogram
    Table is sorted by "SUM SIZE". Printing first 6 lines. Use -DprintFirst=# to override.

     INSTANCES        SIZE    SUM SIZE  CLASS
    ------------------------------------------------------------------------------------------------
        91,335          24   2,192,040  java.lang.String
         7,189         232   1,667,848  io.quarkus.resteasy.reactive.server.runtime.QuarkusResteasyReactiveRequestContext
        35,946          40   1,437,840  io.vertx.core.http.impl.headers.HeadersMultiMap$MapEntry
        15,528          80   1,242,240  java.util.HashMap$Node[16]
        14,380          80   1,150,400  io.vertx.core.http.impl.headers.HeadersMultiMap$MapEntry[16]
        34,942          32   1,118,144  java.util.HashMap$Node
           ...         ...         ...  ...
       748,057  19,427,688  57,668,800
    ------------------------------------------------------------------------------------------------
       947,377  19,428,176  66,477,312

As expected, no SubstrateVM classes are present in this heap dump, leaving only Quarkus, Vert.x and OpenJDK types in the heap dump.

23 Apr 2024 12:00am GMT

22 Apr 2024

JBoss Blogs

Ship.Cars leverages Quarkus to reach its goals

Ship.Cars is a revolutionary partner in auto transport logistics, offering customizable software solutions specially tailored to accommodate all your car hauling requirements. Our tools are impeccably designed to amplify your business's ability to streamline, automate, and organize the entire car hauling process, from start to finish.

Through the development of various products, Ship.Cars has helped the automotive logistics industry to transition into the modern age. Our industry solutions, such as LoadMate and LoadMate Pro, cater to the various needs of dealerships, rental car companies, and other shippers. Meanwhile, innovations like our SmartHaul TMS and SmartHaul APP have become indispensable tools for our car haulers to book and manage their loads.

CONTENDING WITH CHALLENGES

As a product-centric organization, we utilize the microservice paradigm to deliver a diverse array of functionality via numerous distinct software products. Thus far, we've developed over 50 microservices. Each of these not only meets the requisite functional requirements but also adheres to rigorous technical specifications. These specifications ensure seamless provisioning of services, consistent performance under load, and easy identification and resolution of any arising issues.

The construction of these services, over a long period of time, has relied on various frameworks, including Quarkus, Spring Boot and Django. Each framework exhibits its own strengths and weaknesses. However, with time, we've determined that Quarkus optimally fulfills a large portion of our requirements. This explains our current shift from Django to Quarkus for a significant portion of our development.

As Ship.Cars deploys its microservices on Kubernetes within the Google Cloud platform, we continually seek efficient ways to scale our development capabilities while reducing cloud resource consumption. With cloud resource costs always being a priority, we strive to find effective ways to optimize memory and processor use in the cloud. Common challenges often arise when deploying microservices in the cloud, including:

1. Lower cloud resource consumption: Multiple active microservices can consume a significant amount of memory and CPU, escalating costs rapidly. Hence, effective management of cloud resources is crucial.
2. Faster boot-up times: In a microservices architecture, it's important for services to stop, start, and scale swiftly. Slow boot-up times can have a severe impact on system performance and responsiveness.
3. Streamlined microservices development: Building and ensuring interoperability within microservices can be complex, requiring deft management and specialized tooling.
4. Resilience and fault tolerance: Microservices must be resilient and capable of quick recovery from unexpected failures. Implementing such fault tolerance mechanisms, however, can be challenging.
5. Service discovery: The ability to discover and communicate between services becomes critical as their number increases. Traditional hard-coded endpoints do not scale well in these scenarios.
6. Event-driven microservices: Implementing an event-driven architectural model in microservices enables distinct services to communicate asynchronously. Yet, orchestrating this can be difficult.
7. Reactive and imperative programming: The selection of an appropriate programming model for the cloud, especially one that supports scalability and system responsiveness, can be daunting.
Quarkus addresses these challenges beautifully, as follows:

1. Lower cloud resource consumption: Known for their high memory usage, traditional Java applications can get expensive in a cloud environment where resources cost money. Quarkus significantly reduces the memory footprint of applications, leading to more efficient cloud resource management.
2. Faster boot-up times: Slow startup times are quite common with traditional Java applications, an issue that presents a particular problem in the cloud where applications need to scale up and down quickly. Quarkus drastically improves start-up performance, with applications often starting in sub-second times.
3. Streamlined microservices development: Quarkus has been designed to work with popular Java standards and technologies such as Eclipse MicroProfile, Jakarta EE, OpenTelemetry, Hibernate, etc., simplifying the development process and reducing the time and complexity involved.
4. Resilience and fault tolerance: Quarkus employs the MicroProfile Fault Tolerance specification to provide features like timeout, retry, bulkhead, circuit breaker, and fallback. These features render your microservices more resilient and fault-tolerant (see the sketch after this list).
5. Service discovery: Quarkus supports Kubernetes service discovery natively, allowing services to discover and communicate with each other in a reliable manner.
6. Event-driven microservices: Quarkus supports event-driven architecture, enabling services to communicate through events, thereby reducing the complexity and coupling between the services.
7. Reactive and imperative programming: Quarkus gives developers the freedom to use reactive or imperative programming models or even combine both in the same application, creating a perfect solution for scalability and system responsiveness.
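As a small illustration of the fault-tolerance point above, here is a hedged sketch of MicroProfile Fault Tolerance usage in a Quarkus service; this is not code from Ship.Cars, and the service and method names are invented:

    import jakarta.enterprise.context.ApplicationScoped;
    import org.eclipse.microprofile.faulttolerance.Fallback;
    import org.eclipse.microprofile.faulttolerance.Retry;
    import org.eclipse.microprofile.faulttolerance.Timeout;

    @ApplicationScoped
    public class CarrierRateService {

        // Retry transient failures a few times, give up after 2 seconds,
        // and fall back to a default rate if the dependency stays down.
        @Retry(maxRetries = 3)
        @Timeout(2000)
        @Fallback(fallbackMethod = "defaultRate")
        public double currentRate(String lane) {
            return remoteRateLookup(lane); // placeholder for a call to an external pricing service
        }

        double defaultRate(String lane) {
            return 0.0; // illustrative fallback value
        }

        private double remoteRateLookup(String lane) {
            throw new IllegalStateException("placeholder for a remote call");
        }
    }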
TACKLING CLOUD RESOURCE CONSUMPTION

For businesses like ours, one of our organizational goals is to reduce costs without sacrificing the platform's performance, to ensure a premium user experience. However, traditional JVM-based services often present challenges like substantial memory footprints, extended startup times, and high CPU usage. These problems not only impact technical aspects but also have financial implications, significantly affecting the overall cost of running and maintaining software solutions.

Native images are standalone executables that include both the application code and the necessary runtime components. With the advent of GraalVM, a high-performance, polyglot virtual machine able to run applications written in different programming languages, the concept of native images has gained popularity. Native images offer several advantages, such as:

* Faster startup time: As pre-compiled entities, native images can start incredibly quickly, often in milliseconds. This aspect is hugely beneficial when applications need to start and stop almost instantly, like in serverless functions or cloud-based microservices architectures. For instance, one of our microservices, built as a native image with Quarkus 3.2.7.Final, starts in just 0.677s.
* Lower memory footprint: Applications' memory footprints can be significantly reduced with native images as they only include the runtime components actually used by the applications. This efficiency is important in cloud environments where resource usage directly affects costs.

Figure 1. Memory usage of a Quarkus native image (real service memory usage)

* Easier distribution: As standalone executables, native images can be easily distributed and run on any environment without requiring the installation of a separate runtime.
* Reduced container size: Being fully self-contained, the container images for native images are more efficient to distribute due to their reduced size. This leads to faster start-up times in containerized environments like Kubernetes. For example, the size comparison between Quarkus Native (85.1 MB), Quarkus Non-Native (648.4 MB) and Spring Boot (861.9 MB) provides a clear picture of the difference in resource efficiency between them.

With Quarkus, you can compile your application into a native image by leveraging the GraalVM native-image compiler, allowing your Java applications to experience these advantages in cloud platforms, containerization, and serverless architectures due to their swift startup times and lower resource consumption.

OPTIMIZING DEVELOPER PRODUCTIVITY

Quarkus brings several benefits which enhance developer productivity, such as:

1. Live coding: With no build time and deploy time, developers can test changes to the code instantaneously.
2. Zero configuration with Dev Services: Quarkus can automatically configure some services for development and testing purposes, enhancing efficiency.
3. Continuous testing: Continuous testing is implemented via the command line and the Dev UI, enhancing the quality of the end product without depending on third-party tools and processes.
4. Dev UI: Developers can configure extensions, monitor the application, and test components with great ease.
5. Unified config: All of the application's configurations are consolidated in one place, improving accessibility.
6. Standards-based.

EMBRACING QUARKUS EXTENSIONS

Quarkus Extensions are pre-configured feature sets designed to simplify several common tasks during application development. They offer an efficient way to add new capabilities or direct integrations to your project with minimum effort. In our organization, we managed to implement our internal extensions swiftly, effectively addressing maintenance issues and configuration incompatibilities we encountered earlier while trying to create native images. Today, we benefit from an extension hub that quells all previous concerns and enhances our productivity. While Quarkus extensions are powerful tools offering deep integration, optimization, and enhanced developer experience, it's essential to weigh the trade-offs and consider if simpler solutions like standard JAR libraries might suit the need better.

LOOKING AHEAD

In the graphical representation below, I want to illustrate the inherent relationship between the process of adopting Quarkus and the subsequent outcomes over time.

Figure 2. Comparison of difficulty/cost and ease-of-use/returns over time in adopting Quarkus features

On the "Y-Axis", we define difficulty or cost in terms of story-points per sprint, reflecting the relative effort required for the features' implementation. This also represents costs in terms of time and resources spent in the adoption of Quarkus features. Simultaneously, ease-of-use/returns take into account metrics such as decreased debugging time, faster feature development, and improvements in team productivity post successful implementation.
The graph clearly demonstrates that at the outset (tagged as "Begin" on the "X-Axis"), both the difficulty (illustrated in higher story points) and costs are at their peak, signifying a challenging initial phase. However, as we move along the timeline from "Begin" through "Middle" and onto "Future", we see a notable drop in story-points per sprint, indicating a reduced difficulty level and cost. In parallel to this, the ease-of-use and returns curve starts at a comparatively low point at the beginning. It escalates gradually as we advance along the timeline towards "Middle" and "Future", showing a tangible increase in productivity and other gains from adopting and integrating Quarkus features into our practices.

By the time we reach "Future", we see a substantial decrease in difficulty and cost, while the ease-of-use and returns have considerably increased. This dual progression effectively highlights the significant benefits of investing in the adoption of Quarkus, despite the initial challenges.

Investing in Quarkus is a strategic maneuver towards creating efficient, scalable, and modern applications aptly suited for the cloud era. With its robust capabilities and supportive community, Quarkus is well-positioned to pioneer the future of cloud-native application development. The decision to adopt Quarkus is a significant leap towards optimizing for efficiency, scalability, and cutting-edge application performance that will provide us with a considerable competitive edge in the rapidly evolving tech landscape.

22 Apr 2024 12:00am GMT

19 Apr 2024

JBoss Blogs

How to debug Quarkus applications

In this article, we will learn how to debug a Quarkus application using two popular development environments: IntelliJ IDEA and VS Code. We'll explore how these IDEs can empower you to effectively identify, understand, and resolve issues within your Quarkus projects. Enabling Debugging in Quarkus: When running in development mode, Quarkus, by default, ...
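For quick reference, a minimal sketch of the command-line side of this (standard Quarkus dev-mode behaviour; the alternative port below is just an example):

    # dev mode listens for a remote debugger on port 5005 by default
    $ ./mvnw quarkus:dev

    # pick a different port, or suspend the JVM until a debugger attaches
    $ ./mvnw quarkus:dev -Ddebug=5006
    $ ./mvnw quarkus:dev -Dsuspend

From the IDE side, attaching is then a matter of creating a remote JVM debug configuration pointing at that port, which is what the article walks through for each IDE.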

19 Apr 2024 8:24am GMT

18 Apr 2024

JBoss Blogs

Revolutionizing time tracking: how Quarkus transformed our backend development

GRAN Software Solutions is a German company that designs and builds modern backend solutions. We work with large automotive clients and others to restructure and create new solutions. We also develop and offer SaaS tools to help us and others in our daily work. One such tool we built for ourselves and others is a time tracking application.

THE TIME TRACKING CHALLENGE

We needed to create a time tracking application because the existing solutions on the market did not meet our specific requirements. They were either not designed for developers, lacked the simplicity we needed, or were loaded with unnecessary features. We wanted to build a tool that was perfectly tailored to our needs, using the extensive experience we had gained from working on client projects over the years. We also wanted to create a more modern and user-friendly design, which would be fun to use and incorporate newer technologies such as Quarkus.

The main issue we faced with existing time tracking solutions was the lack of an easy way to switch between clients. We also found that they did not support quick actions or shortcuts, which we were used to, and there was no visual way to see the time entries we made during the day. Additionally, we wanted to track time within the context of contracts signed with our clients in terms of daily rates and contract caps. That's why we decided to create a custom solution to address all of these specific needs.

DISCOVERING QUARKUS

When we were choosing the technology stack to use for our backend, our main goal was to use technologies that we were already familiar with, such as the Kotlin programming language, the Spring Boot framework, and the Postgres database. We also wanted to select an ecosystem that could provide us with libraries for database connectivity, web clients, caching, and other similar features. Additionally, we wanted to use a high-performance solution to keep our hosting costs low and avoid high memory requirements. After analyzing various solutions on the market, we decided to use the Quarkus framework as it met all of our requirements.

OUR BACKEND DEVELOPMENT EXPERIENCE WITH QUARKUS: THE KEY FEATURES

We have designed our application architecture to separate the frontend and backend parts. To secure our backend APIs in a modern and secure way, we opted to use JSON web tokens, and Quarkus has excellent support for them. We also use role-based security for our APIs, and Quarkus makes it easy for us to implement this. We have different roles in our application, such as regular users and admins, and this information is encoded in our JSON web tokens. Quarkus ensures that these tokens are not tampered with or manipulated when they reach our back-end systems.

Using @RolesAllowed for authorization of our API endpoints:

    @Path("/clients")
    @RolesAllowed("User")
    @Produces(MediaType.APPLICATION_JSON)
    @ApplicationScoped
    class ClientResource(
        private val getClientsHandler: GetClientsHandler,
        private val newClientHandler: NewClientHandler,
        // ...
    )
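To make the token-verification point concrete, here is a hedged sketch of how JWT verification is typically configured with the Quarkus SmallRye JWT extension; the issuer URL and key location are placeholders, not GRAN's actual configuration:

    # application.properties (placeholder values)
    mp.jwt.verify.publickey.location=https://auth.example.com/realms/demo/protocol/openid-connect/certs
    mp.jwt.verify.issuer=https://auth.example.com/realms/demo

With this in place, endpoints annotated with @RolesAllowed (as above) only accept requests whose bearer token was signed by the configured issuer and carries the matching groups claim.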
Using JsonObject to pass our data in and out:

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    suspend fun getProfile() = db.preparedQuery(
        """select profile from "user" where email = $1""".trimIndent()
    ).execute().awaitSuspending().first().getJsonObject("profile")

Although Quarkus primarily targets the Java programming language, its Kotlin support is also quite good. We used coroutines and suspending functions, which allowed for greater performance and much simpler code compared to some of the other asynchronous programming models available. Kotlin's structured concurrency let us write code that looks sequential but is, in reality, highly performant asynchronous code. Quarkus provides excellent Kotlin extension methods built on top of existing asynchronous APIs such as Mutiny.

We execute the database migrations on application startup, which was very important for us. Fortunately, Quarkus has excellent Flyway support, so all our database migrations live in one place and are executed while our backend boots. This keeps our database schema and data transparent and reproducible.

Figure 1. Using Flyway to execute database migrations

For our deployments, we use Kubernetes. Before Quarkus, we described our application requirements with Helm packaging, but with Quarkus we opted for another approach, as it offers a great Kubernetes extension. Instead of writing any code, we describe our Kubernetes resources in the application.yaml file, keeping our complete application configuration in one place; the extension generates the Kubernetes resource files behind the scenes, which we then apply to our Kubernetes cluster (a configuration sketch along these lines follows below). This works well for us.

Figure 2. Using the Kubernetes extension to generate Kubernetes resources

For packaging our backend API, we use the Jib extension. To package the application in a container, all we had to do was set the required parameters, such as the image name, tag, and repository, in the application.yaml file. We did not have to maintain a Dockerfile ourselves, which was very convenient.

Our time tracking application needs to send emails to our users and admins on various occasions. To keep things simple, we decided against a third-party, API-driven email-sending service. Instead, we send the emails ourselves, using Qute email templates, which make composing and sending emails to our users very simple. This extension also supports Kotlin coroutines, allowing for non-blocking sending and higher throughput.

Figure 3. Using Qute email templates to send emails

DEVELOPMENT JOURNEY

The Quarkus development experience has been excellent so far. Compared to other frameworks such as Spring Boot, Quarkus has a faster startup time and a smaller memory footprint. It also provides profiles, which allow us to have slightly different configurations or behaviors between environments, so we can easily substitute hard-to-run third-party services with local mocks while leaving the application code unchanged. Quarkus is also great in terms of configuration and how easily values stored in the application.yaml file can be overridden with external environment variables. Although hot reload did not work well with Kotlin, I believe the related bugs will be resolved in upcoming releases; during development, we had to restart the running service most of the time for code changes to take effect. Building our backend API functionality took approximately a month and a half.
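To make the configuration-driven parts above more concrete, here is a hedged sketch (not the actual project file) of the kind of application.yaml the post describes. All values are illustrative, and it assumes the quarkus-config-yaml, quarkus-flyway, quarkus-kubernetes, and quarkus-container-image-jib extensions:

    quarkus:
      flyway:
        migrate-at-start: true          # run the Flyway migrations while the backend boots
      kubernetes:
        replicas: 1                     # the Kubernetes extension generates resource files from these settings
        ingress:
          expose: true
      container-image:
        registry: registry.example.com  # hypothetical registry
        group: timetracker              # hypothetical group
        name: backend-api               # hypothetical image name
        tag: 1.0.0

    # Profile-specific override: in dev mode, swap the real SMTP server for the built-in mailer mock.
    # Any of these values can also be overridden at runtime through environment variables,
    # for example QUARKUS_MAILER_MOCK=false.
    "%dev":
      quarkus:
        mailer:
          mock: true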
Considering that only two developers worked on the backend, completing it in roughly a month and a half felt like a good result. In this phase of our product lifecycle, we decided against writing automated tests, because we were constantly revisiting the requirements and our own needs; for now, we rely on manual testing. Once our time tracking application gains more active users, we plan to start writing automated tests using Quarkus's test support, including Testcontainers and other tools. Developing a full-blown API in just a month and a half of work, with JSON web token security and authorization in place, database migrations applied automatically at boot time, a flexible and maintainable code base revolving around JSON, and the ability to package and deploy the API to our Kubernetes cluster, is quite an achievement.

CONCLUSION

We are glad to share that using Quarkus, Kotlin, and Postgres as the foundation of our backend API has been a fun and productive experience. Being able to experiment quickly and leverage ready-made Quarkus components has made us confident that we made the right technological choice. Although there are some imperfections with hot reload and some quirks with Kotlin, we are waiting for the fixes and have no doubt that Quarkus is the best solution for us. We are working smart and hard to bring new features to our time tracking application, and to achieve this we will continue to use the great features provided by Quarkus, which dramatically reduce the time needed to roll out new functionality. We invite you to try our time tracker at .

18 Apr 2024 12:00am GMT

Quarkus 3.9.4 released - Maintenance release

Today, we released Quarkus 3.9.4, our third (we skipped 3.9.0) maintenance release for the 3.9 release train. This release contains bugfixes and documentation improvements. It should be a safe upgrade for anyone already using 3.9. UPDATE To update to Quarkus 3.9, we recommend updating to the latest version of the Quarkus CLI and running: quarkus update Note that quarkus update can update your applications from any version of Quarkus (including 2.x) to Quarkus 3.9. If you are not already using 3.x, please refer to the for all the details. You can also refer to for additional details. Once you have upgraded to 3.0, also have a look at the , , , , , , , , and migration guides. FULL CHANGELOG You can get . COME JOIN US We value your feedback a lot, so please report bugs, ask for improvements… Let's build something great together! If you are a Quarkus user or just curious, don't be shy and join our welcoming community: * provide feedback on ; * craft some code and ; * discuss with us on and on the ; * ask your questions on .

18 Apr 2024 12:00am GMT

17 Apr 2024

feedJBoss Blogs

How to configure Keycloak Log Level

In this brief tutorial, we will explore how to configure the log level for a Keycloak distribution powered by Quarkus. We'll walk through the process of applying this change persistently or as a startup option, providing administrators with flexibility in managing logging settings. The latest Keycloak distribution runs on top of Quarkus Runtime. If you ... The post appeared first on .
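By way of a quick, hedged illustration (the post itself walks through the details), the Quarkus-based Keycloak distribution accepts the log level either as a startup option or as a persistent entry in conf/keycloak.conf; the org.keycloak.events category below is only an example:

    # As a startup option: a root level plus optional per-category levels
    bin/kc.sh start --log-level=info,org.keycloak.events:debug

    # Persistent alternative: the equivalent line in conf/keycloak.conf, applied after a restart
    log-level=info,org.keycloak.events:debug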

17 Apr 2024 6:45am GMT

Quarkus 3.8.4 released - Maintenance release

Today, we released Quarkus 3.8.4, our third (we skipped 3.8.0) maintenance release for our 3.8 LTS release train. This release contains bugfixes and documentation improvements. It should be a safe upgrade for anyone already using 3.8. UPDATE To update to Quarkus 3.8, we recommend updating to the latest version of the Quarkus CLI and running: quarkus update --stream=3.8 Note that quarkus update can update your applications from any version of Quarkus (including 2.x) to Quarkus 3.8. If you are not already using 3.x, please refer to the for all the details. You can also refer to for additional details. Once you have upgraded to 3.0, also have a look at the , , , , , , , migration guides. FULL CHANGELOG You can get . COME JOIN US We value your feedback a lot, so please report bugs, ask for improvements… Let's build something great together! If you are a Quarkus user or just curious, don't be shy and join our welcoming community: * provide feedback on ; * craft some code and ; * discuss with us on and on the ; * ask your questions on .

17 Apr 2024 12:00am GMT