Almost every developer knows the problem of reading large sets of data from the database: it may crash your application with an
OutOfMemoryError and cripple its performance.
What is more, filtering and processing such large results may increase memory consumption even further.
Luckily, there is a convenient remedy using Hibernate and Kotlin sequences.
Imagine a situation where your query returns a large set of data, which causes problems with memory consumption when it is processed further:
dao.loadSensorValues(date, values, keys)
    .filter(...)
    .groupBy(...)
    .map(...)
This is because each processing step may create an intermediate list, again containing large amounts of data. To help with that, Kotlin offers sequences to use instead of lists.
dao.loadSensorValues(date, values, keys).asSequence()
    .filter(...)
    .groupBy(...)
    .map(...)
Sequences, as opposed to lists, are evaluated lazily.
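A minimal, self-contained sketch (not from the original code above) illustrates the difference: with a list, every step materializes a full intermediate collection, while with a sequence, elements flow through the pipeline one by one.

```kotlin
fun main() {
    val values = (1..1_000_000).toList()

    // Eager: filter() and map() each allocate a new list of up to a million elements.
    val eager = values.filter { it % 2 == 0 }.map { it * 2 }.take(3)

    // Lazy: no intermediate lists; elements are pulled through the pipeline on demand.
    val lazy = values.asSequence()
        .filter { it % 2 == 0 }
        .map { it * 2 }
        .take(3)
        .toList()

    println(eager) // [4, 8, 12]
    println(lazy)  // [4, 8, 12]
}
```

Both variants produce the same result; only the lazy one avoids the intermediate allocations.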
The second part of the approach, in combination with sequences, is not to read all the data from the database at once, but to use a Stream instead:
fun loadSensorValues(parameters...): Stream<SystemSensorValueJPA> = configureQuery(...).resultStream
That is, JPA offers a getResultStream() method, and Hibernate maps it to its internal stream() implementation. That method reads the data in chunks using "scrollable results".
Hibernate and JPA allow us to conveniently replace reading whole result lists with result streams. Kotlin sequences offer a mechanism to process data lazily and efficiently.