Java 8 added support for parallel processing of arrays, as I'm sure most of you know. There are lots of official release documents from Oracle talking about how employing these parallel methods and classes improves the speed of your application, and, as is to be expected, there are lots of bloggers who set off to write sample code around it and analyze the performance.
I am still relatively new to Java 8, I'll confess, though I do like the fork/join framework (introduced back in Java 7) and the way this version builds parallelism support on top of it. One thing I've been looking at more closely is streams, which let you easily start delving into parallel processing.
The idea is relatively simple: you create a Stream instance for your "data" (be it an array, a collection, an I/O stream etc.) and this can then be broken down into smaller chunks under the covers and processed.
One way I thought about using this is the simple problem of finding an item in an array. So I set off to put together some sample code (find it on GitHub in the Java8ParallelFind project). As a warning, it's really nothing fancy, just enough code to give me an idea of how to use streams and lambda expressions.
The problem I encounter occasionally is this: let's say we have a (large) array of instances of a certain class which encapsulates some data. This data contains at least 2 fields, enough to justify creating a standalone bean. In the code on GitHub I've used the KeyValue class, which has 2 properties: Key and Value. The names could be misleading, as I'm not storing these instances in a Map or anything similar; instead, think of it as a naming convention, so we know all the searching and finding will be done based on the Key property, and we are interested in the Value property when we find an item. (A better analogy might be a bean encapsulating a person's first and last name: when searching for a person in a directory of names, we are used to searching first by surname, then by first name. In that case the surname would be the equivalent of the Key.)
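For reference, here is a minimal sketch of what such a bean might look like; the actual class in the Java8ParallelFind project may differ in details, the names here just follow the description above:

```java
// A minimal sketch of the KeyValue bean described above; the actual
// class in the GitHub project may differ in details.
class KeyValue {
    private final String key;
    private final String value;

    KeyValue(String key, String value) {
        this.key = key;
        this.value = value;
    }

    String getKey()   { return key; }
    String getValue() { return value; }
}
```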
Getting back to the problem I'm trying to solve, consider an array with a lot of KeyValue instances, in unsorted order. I want to find out the value corresponding to a certain key. This means traversing the array, finding the KeyValue instance which has the given key and retrieving its value. Let's make the problem simpler and just say that I want to find the KeyValue instance for a given key. (And yes, I know that if I use a Map this lookup is done in constant time for me by the implementation of the class itself, but there are lots of cases where data arrives in an array or a list, and we cannot justify creating a new Map instance just for this retrieval.)
If you look in the code, in the (ingeniously named!) App class, you will find a method called find(). This takes an array and a key, simply traverses the array, and stops when it finds the given key. Notice that I'm not even bothering to return the value; I just want to measure the time it takes the code to traverse the array and find the element. The method records the system time before and after the operation and returns the difference, thus giving me the time (in nanoseconds!) it took to execute.
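A sketch of such a method, assuming (as described above) that it returns the elapsed time rather than the found element; the real find() lives in the App class on GitHub and may differ:

```java
// Compact stand-in for the KeyValue bean, so the sketch compiles on its own.
class KeyValue {
    private final String key, value;
    KeyValue(String key, String value) { this.key = key; this.value = value; }
    String getKey()   { return key; }
    String getValue() { return value; }
}

class App {
    // Traverse the array, stop at the first element with a matching key,
    // and return the elapsed wall-clock time in nanoseconds.
    static long find(KeyValue[] arr, String key) {
        long start = System.nanoTime();
        for (KeyValue kv : arr) {
            if (key.equals(kv.getKey())) {
                break;
            }
        }
        return System.nanoTime() - start;
    }
}
```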
That is the standard approach that most Java techies are probably used to when searching for an element in an array. Please note that we are talking about a one-off search here; as such, it does not justify sorting the array first and then performing a binary search, since that would push the complexity of the method up to O(n * log(n)), as opposed to simply O(n) for a plain traversal of the array.
Now, for the next test, let's transform our array into a parallel stream and use a filter to retrieve only the first element which matches our key. This can be written very concisely in Java 8 as follows:
Arrays.stream(arr).parallel().filter(x -> key.equals(x.getKey())).findFirst();
Again, we measure the time it took the operation to execute and return it.
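Put together, the stream-based measurement might look like this (a sketch; note the explicit .parallel() call, without which Arrays.stream() gives you a sequential stream):

```java
import java.util.Arrays;

// Compact stand-in for the KeyValue bean, so the sketch compiles on its own.
class KeyValue {
    private final String key, value;
    KeyValue(String key, String value) { this.key = key; this.value = value; }
    String getKey()   { return key; }
    String getValue() { return value; }
}

class ParallelApp {
    // Same measurement idea as the sequential find(), but going through
    // a parallel stream with a filter and findFirst().
    static long parallelFind(KeyValue[] arr, String key) {
        long start = System.nanoTime();
        Arrays.stream(arr)
              .parallel()
              .filter(x -> key.equals(x.getKey()))
              .findFirst();
        return System.nanoTime() - start;
    }
}
```

One thing worth knowing here: findFirst() on a parallel stream still has to respect encounter order, which constrains the parallel execution; findAny() is the order-agnostic variant intended for parallel pipelines and can be cheaper when any match will do.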
When I run this (you can simply use mvn exec:java on the command line to run it) I get this:

Normal find: 17688000
Parallel find: 77561000
You can run it a few times and the numbers will change a bit, as the OS warms up and caches get filled, but the order of magnitude seems to stay the same: the parallel approach seems to take about 3-4 times longer than the classical one 🙁
A few words on the way I generate the array: as you can see in the generateArray() method, I simply generate random numbers for both the key and the value and convert them to their string representations. I also make sure that the array always contains exactly one element with the key I'm searching for, and that it is always added at the end of the array; this way I'm forcing (in the linear approach) a traversal of the full array. (In the parallel approach, however, you'd think this doesn't matter, as it should bail out as soon as the element is found.)
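That generation step might look roughly like this (a sketch following the description above; for brevity it doesn't guard against a random key accidentally colliding with the searched-for one, which the real code on GitHub makes sure of):

```java
import java.util.Random;

// Compact stand-in for the KeyValue bean, so the sketch compiles on its own.
class KeyValue {
    private final String key, value;
    KeyValue(String key, String value) { this.key = key; this.value = value; }
    String getKey()   { return key; }
    String getValue() { return value; }
}

class ArrayGenerator {
    // Fill the array with random key/value pairs rendered as strings,
    // then put the searched-for key in the very last slot so a linear
    // search is forced to walk the whole array.
    static KeyValue[] generateArray(int size, String searchedKey) {
        Random rnd = new Random();
        KeyValue[] arr = new KeyValue[size];
        for (int i = 0; i < size - 1; i++) {
            arr[i] = new KeyValue(String.valueOf(rnd.nextInt()),
                                  String.valueOf(rnd.nextInt()));
        }
        arr[size - 1] = new KeyValue(searchedKey, String.valueOf(rnd.nextInt()));
        return arr;
    }
}
```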
If you play around with the size of the array, it seems the smaller the array, the bigger the difference in the order of magnitude, as in, the slower the parallel approach performs compared to the linear one. This is to be expected, to be honest; I understand that going parallel creates some overhead, and that should really only pay off when the length of the array is pretty big.
However, I don't get why processing the array through a parallel stream takes about 3-4 times longer than doing it sequentially. I figured I need to investigate this issue more and will get back here with details. For now, though, I'll stick to my normal sequential traversal of the array 🙁