Java Forum Stuttgart – Part 2

It has been a while since my first post, but I finally found the time to write another one. In Java Forum Stuttgart – Part 1 I described the first talks I attended at JFS 2016. In this post I will share some more impressions of the conference.

HomeKit, Weave oder Eclipse SmartHome?

The third talk I listened to compared several smart home frameworks and their chances for the future. As a starter, Apple HomeKit and Google Weave were presented. Both systems are designed as closed systems: every vendor who would like to integrate his devices into one of them has to support the protocol of that particular system. But because there is a big zoo of protocols out there, the presenters expect that none of these systems will become the single home automation solution of the future.

After this five-minute introduction to why HomeKit and Weave will fail, Eclipse SmartHome (ESH) was presented as the system that could succeed in being that single solution. The presenters based their assumption on the basic design of ESH. ESH is not a single solution; it is more like a framework into which every vendor can integrate his devices. Developers, on the other side, can access the devices in a unified way and build their solutions on top of ESH. One of these solutions is openHAB: it is the predecessor of ESH, and its current version is built on top of it.
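
To give an idea of what this unified access looks like, here is a rough sketch of a device binding for ESH. This is hedged: the class and package names follow the ESH binding API as I remember it (BaseThingHandler, ChannelUID, Command from the org.eclipse.smarthome packages and may differ between versions), and the lamp device itself is made up.

```java
// Rough sketch of an ESH device binding, based on the org.eclipse.smarthome
// APIs as I recall them; details may vary between ESH versions.
import org.eclipse.smarthome.core.library.types.OnOffType;
import org.eclipse.smarthome.core.thing.ChannelUID;
import org.eclipse.smarthome.core.thing.Thing;
import org.eclipse.smarthome.core.thing.ThingStatus;
import org.eclipse.smarthome.core.thing.binding.BaseThingHandler;
import org.eclipse.smarthome.core.types.Command;

public class DemoLampHandler extends BaseThingHandler {

    public DemoLampHandler(Thing thing) {
        super(thing);
    }

    @Override
    public void initialize() {
        // Connect to the physical device here; once reachable, mark it online.
        updateStatus(ThingStatus.ONLINE);
    }

    @Override
    public void handleCommand(ChannelUID channelUID, Command command) {
        // ESH delivers commands in a unified way, regardless of the
        // vendor-specific protocol the binding speaks underneath.
        if (command instanceof OnOffType) {
            // A real binding would now talk the vendor protocol, e.g. switch
            // the (hypothetical) lamp on for OnOffType.ON.
        }
    }
}
```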

Über den Umgang mit Lambdas

The last talk before the lunch break was held by Michael Wiedeking. I had heard him speak at Herbst Campus in 2012 and was excited about his talk. Talks by him are usually very informative and entertaining at the same time. After an introduction to what was needed to implement method references and lambdas in Java 8, he talked about the usage of lambdas in Java 8, especially how useful lambdas and method references are. At the end of his talk he presented a first solution for handling checked exceptions inside streams.
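
A minimal sketch of one common approach to this problem – not necessarily the exact solution he presented – wraps the throwing lambda into a helper that rethrows checked exceptions as unchecked ones:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CheckedInStreams {

    // A Function variant that is allowed to throw a checked exception.
    @FunctionalInterface
    interface ThrowingFunction<T, R> {
        R apply(T t) throws Exception;
    }

    // Wraps a throwing function into a plain java.util.function.Function
    // by rethrowing checked exceptions as unchecked ones.
    static <T, R> Function<T, R> unchecked(ThrowingFunction<T, R> f) {
        return t -> {
            try {
                return f.apply(t);
            } catch (RuntimeException e) {
                throw e;
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        };
    }

    public static void main(String[] args) {
        // Files.readAllBytes throws IOException, so the lambda cannot be
        // passed directly to map(); the wrapper makes it fit.
        List<String> contents = Stream.of(Paths.get("a.txt"), Paths.get("b.txt"))
                .map(unchecked(p -> new String(Files.readAllBytes(p))))
                .collect(Collectors.toList());
        System.out.println(contents);
    }
}
```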

Top Performance Bottleneck Patterns Deep Dive

This talk was another entertaining and informative one at JFS. Andreas Grabner gave a short introduction to DevOps and how Otto – a big German retailer – improved its performance and the time to market of new features.

In the rest of the talk he showed simple metrics to measure performance in production and the common problems he often finds. One of those metrics is a click heatmap to measure user experience. This map shows how often users click on areas of your page, which can be an indicator of the responsiveness of your web page.

Afterwards he presented some widespread performance problems. Places one and two are reserved for bad database handling, like not using prepared statements to reduce parsing overhead. In third place comes bad code, especially bad control flow management: exceptions are basically a good idea, but they can hurt performance when used in the wrong way. You can find out more about this topic on his blog.
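
To make the database point concrete, here is a minimal JDBC sketch (the orders table and its columns are made up). Concatenating values into the SQL string forces the database to parse a new statement for every query, while a prepared statement with a bind variable is parsed once and its plan can be reused – and as a side effect it also protects against SQL injection.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PreparedStatements {

    // Bad: a new SQL string per id means the database has to parse
    // every single query from scratch.
    static int loadBad(Connection con, long id) throws SQLException {
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT quantity FROM orders WHERE id = " + id)) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }

    // Better: one parameterized statement; the database parses it once
    // and can reuse the execution plan for every bind value.
    static int loadGood(Connection con, long id) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT quantity FROM orders WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0;
            }
        }
    }
}
```
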
That is enough for today. The last talks of the Java Forum Stuttgart will be published – hopefully – soon.

Java Forum Stuttgart – Part 1

Some days ago I attended the Java Forum Stuttgart. After Herbst Campus in 2012, it was my second commercial conference. So I am still new to such conferences, but so far I like the format of these regional ones: big enough to meet new people.

As you can see in the program of the conference, a lot of interesting talks were given. Here is a short overview of the talks I attended.

  1. Eclipse on Steroids – Boost your Eclipse and Workspace Setup given by Frederic Ebelshäuser from Yatta Solutions GmbH
  2. Spark vs. Flink – Rumble in the (Big Data) Jungle given by Michael Pisula and Konstantin Knauf from TNG Technology Consulting GmbH
  3. HomeKit, Weave oder Eclipse SmartHome? Best Practices für erfolgreiche Smart-Home-Projekte given by Thomas Eichstädt-Engelen and Sebastian Janzen from neusta next GmbH & Co. KG and innoQ Deutschland GmbH
  4. Über den Umgang mit Lambdas given by Michael Wiedeking from MATHEMA Software GmbH
  5. Top Performance Bottleneck Patterns Deep Dive given by Andreas Grabner from Dynatrace
  6. Erhöhe i um 1 given by Michael Wiedeking from MATHEMA Software GmbH
  7. Was jeder Java-Entwickler über Strings wissen sollte given by Bernd Müller from Ostfalia Hochschule für angewandte Wissenschaften

Eclipse on Steroids

This talk covered the new Eclipse profiles developed by Yatta. Eclipse profiles give you the ability to share your Eclipse configuration between several computers or team members. For this purpose, all the needed information about your current Eclipse configuration is saved. This includes installed plug-ins, settings, repository paths, checked-out projects and working sets. The contents of your repository remain untouched; Yatta only saves the paths. The same applies to plug-ins that are only available locally.

The profiles can be shared via yatta.de, where you can also restrict the visibility of your profiles. You can make a profile visible to everyone, just a group of people, or only yourself. To install a shared profile, you can download the Yatta launcher. You only need to select the profile and specify a location for Eclipse and the workspace, and the launcher will do the rest. Every plug-in is installed automatically. After the first start, the launcher configures the repositories and checks out the code. This may take a while, but after it is finished, your workspace looks as close to the saved one as possible.

There are some other nice features, like caching of Eclipse and plug-in downloads. But the feature I miss most in Eclipse in this context is not yet supported by Yatta either: there is no (known) possibility to upgrade to a new Eclipse major version with a single click. After every major update, you have to install all plug-ins again. As mentioned, Yatta does not support this, but the speaker was interested in that topic. So maybe some day we will be able to use it.

Spark vs. Flink

As the title suggests, this talk compared the two big data frameworks Spark and Flink. They were compared by their abilities in batch and stream processing, but the main part targeted the streaming capabilities. This is also the area where the two frameworks diverge the most: Flink was written as a pure streaming framework, whereas Spark is based on batch processing and therefore only supports micro-batch processing for streams.

Flink is basically written in Java and Spark is written in Scala. For Java developers, this means that the Flink API feels more natural than the Spark one; the Java Spark API looks more like a Java wrapper around the Scala API. This goes hand in hand with the fact that new features are available in the Scala API first.
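
To illustrate that natural feel, here is a tiny Flink DataStream example in plain Java 8. It is a made-up demo job, not from the talk; the explicit type hint via returns() is one of the small prices lambdas pay for type erasure.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FlinkApiFeel {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // A bounded demo source; a real job would read from an unbounded
        // stream such as a Kafka topic or a socket.
        env.fromElements("spark", "flink", "storm")
           .map(s -> s.toUpperCase())
           .returns(String.class)        // type hint for the lambda's result
           .filter(s -> s.startsWith("F"))
           .print();

        env.execute("api-feel-demo");
    }
}
```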

Compared to MapReduce or Storm, both APIs provide a higher level of abstraction. This was not part of the talk, but the following table shows a comparison of some big data frameworks and their levels of abstraction.

|            | Batch     | Streaming    |
|------------|-----------|--------------|
| high level | Pig       | Spark, Flink |
| low level  | MapReduce | Storm        |

When both speakers were asked which framework they would use, the answer was, as always: it depends! If you have a lot of batch work and only a small amount of streaming data, Spark is the framework of your choice; the integration between batch and streaming is a bit better in Spark. If it is vice versa and you have a lot of streaming data, they recommend Flink. They used Flink in their last project and it did the job quite well. It should also be mentioned here that Google Cloud Dataflow provides support for Flink. Cloud Dataflow is a replacement for MapReduce at Google.

That is enough for today. The next part of the Java Forum Stuttgart will be published in a few days.