Getting data in JSON format via REST services from the backend server is common practice. In the simplest case, a JSON provider like Jackson automatically translates your Java objects into a JSON string and back into Java objects.
However, this does not cover cases where the data model (e.g., implemented as JPA entities) differs from the view model. For example, if you have BLOBs in your model, it does not make much sense to transfer them as BASE64-encoded strings, mainly because BLOBs tend to be large and may not be needed all at once.
In this article we will show how to provide different JSON “views” or dialects of the data using the same REST service.
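To sketch the basic idea (the names `DocumentEntity` and `DocumentView` are hypothetical, chosen for illustration): instead of serializing the entity directly, the REST layer can expose a view model that omits the BLOB and merely links to it, so the client can fetch the content lazily:

```java
// Hypothetical entity: holds the raw BLOB alongside metadata.
class DocumentEntity {
    String id;
    String title;
    byte[] content; // potentially large BLOB

    DocumentEntity(String id, String title, byte[] content) {
        this.id = id;
        this.title = title;
        this.content = content;
    }
}

// View model for the REST layer: no BLOB, just metadata plus
// a URL the client can follow to fetch the content on demand.
class DocumentView {
    final String id;
    final String title;
    final int contentLength;
    final String contentUrl;

    DocumentView(DocumentEntity e) {
        this.id = e.id;
        this.title = e.title;
        this.contentLength = e.content == null ? 0 : e.content.length;
        this.contentUrl = "/documents/" + e.id + "/content";
    }
}
```

A JSON provider such as Jackson would then serialize `DocumentView` instead of the entity, so the same service can offer different "views" simply by mapping the entity to different view classes.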
Usually, developers have to create and deploy different versions of their application: for local development, testing, training, production, …
Third-party and system dependencies that differ between those versions are preferably configured via the container, e.g. data sources, JMS topics, mail servers, etc. However, most applications also contain several custom application properties, such as the current version, mail addresses, images, templates, etc. Most of them may be static, but there are cases where you want to change application properties dynamically, i.e. without rebuilding the artifact.
In this article we will describe some approaches to achieving this goal with the JBoss WildFly/EAP7 application server.
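One of the simplest of these approaches is to let the container own the value: WildFly/EAP lets you define system properties in the server configuration (`<system-properties>` in standalone.xml) or change them via the CLI, so the deployed artifact never needs to be rebuilt. A minimal sketch (the property name `app.mail.sender` and its default are made up for illustration):

```java
// Reads a property owned by the container. In WildFly/EAP the value
// can be defined in standalone.xml or set via the management CLI,
// so it can change without rebuilding or redeploying the artifact.
class AppConfig {
    // "app.mail.sender" is a hypothetical property name.
    static String mailSender() {
        return System.getProperty("app.mail.sender", "noreply@example.com");
    }
}
```

The fallback default keeps local development working even when no container configuration is present.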
As explained in the blog entry Upgrading and patching the Red Hat JBoss Enterprise Application Platform, JBoss EAP offers the possibility to conveniently update the server installation with the latest patches.
However, the way this is implemented leaves all previous versions and patches of your modules behind. That is, older versions of the JAR files are no longer used, but merely waste disk space. This is desirable only if you want to be able to roll back a patch later on, or would like to keep track of the patch history.
To give you some numbers: a fresh EAP 6.4 server has an initial size of 166 MB, but grows to 509 MB when updated to version 6.4.4. In this article we’d like to show you how to remove all this unused garbage from the installation.
In this article we will try to define a classification of projects that deal in one way or another with the migration of code or data. This classification is not strictly hierarchical, since in general too many aspects overlap. However, the intent of this document is not to deliver a scientifically precise hierarchy, but to provide you with practical ideas for dealing with migration.
There are many tools to visualize or analyze databases, and you will also find lots of programs to copy databases between different vendors. However, in our experience these tools are not flexible enough for migration projects: they fail because, e.g., they cannot map the various data types between different databases correctly, or because the amount of data becomes too large. The solution we suggest is to program sophisticated data migrations using an extensible framework instead of configuring some (limited) tool. We found that this approach gives us much more flexibility when performing data migrations; migrating a database almost always requires a custom solution, since every system has its peculiarities. Another advantage of “programming” a migration is that your developers may freely combine plain copying code with computational parts. For example, it may be necessary to contact a third-party system during the migration process in order to obtain some information. In one of our projects we had to contact a GIS (Geographic Information System) server to relate the positional IDs stored in the database with those in the GIS database.
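The idea of mixing plain copying with computation can be sketched as follows. The `MigrationStep` interface and the row representation are hypothetical, and the external GIS lookup is simulated here by an in-memory map; a real framework would read rows via JDBC and call the actual service:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical building block of an extensible migration framework:
// each step transforms one source row into one target row.
// A row is modeled as a simple column-name -> value map.
interface MigrationStep {
    Map<String, Object> migrate(Map<String, Object> sourceRow);
}

// A step that combines plain copying with a computed lookup,
// e.g. resolving a legacy positional ID against a GIS service.
// The service is simulated by an in-memory map for this sketch.
class PositionalIdStep implements MigrationStep {
    private final Map<String, String> gisLookup;

    PositionalIdStep(Map<String, String> gisLookup) {
        this.gisLookup = gisLookup;
    }

    @Override
    public Map<String, Object> migrate(Map<String, Object> sourceRow) {
        // Plain copy of all columns…
        Map<String, Object> target = new HashMap<>(sourceRow);
        // …plus a computational part: replace the legacy ID
        // with the one known to the GIS system, if any.
        String legacyId = (String) sourceRow.get("pos_id");
        target.put("pos_id", gisLookup.getOrDefault(legacyId, legacyId));
        return target;
    }
}
```

Because each step is plain code, developers can chain steps, branch on data content, or call out to external systems wherever the migration requires it.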