Java Forum Stuttgart – Part 1

A few days ago I attended Java Forum Stuttgart. After Herbst Campus in 2012, it was my second commercial conference, so I am still new to such events, but so far I like the format of these regional conferences: big enough to meet new people.

As you can see in the program of the conference, a lot of interesting talks were given. Here is a short overview of the talks I attended.

  1. Eclipse on Steroids – Boost your Eclipse and Workspace Setup given by Frederic Ebelshäuser from Yatta Solutions GmbH
  2. Spark vs. Flink – Rumble in the (Big Data) Jungle given by Michael Pisula and Konstantin Knauf from TNG Technology Consulting GmbH
  3. HomeKit, Weave oder Eclipse SmartHome? Best Practices für erfolgreiche Smart-Home-Projekte ("HomeKit, Weave, or Eclipse SmartHome? Best practices for successful smart home projects") given by Thomas Eichstädt-Engelen and Sebastian Janzen from neusta next GmbH & Co. KG and innoQ Deutschland GmbH
  4. Über den Umgang mit Lambdas ("On handling lambdas") given by Michael Wiedeking from MATHEMA Software GmbH
  5. Top Performance Bottleneck Patterns Deep Dive given by Andreas Grabner from Dynatrace
  6. Erhöhe i um 1 ("Increment i by 1") given by Michael Wiedeking from MATHEMA Software GmbH
  7. Was jeder Java-Entwickler über Strings wissen sollte ("What every Java developer should know about Strings") given by Bernd Müller from Ostfalia Hochschule für angewandte Wissenschaften

Eclipse on Steroids

This talk covered the new Eclipse profiles developed by Yatta. Eclipse profiles let you share your Eclipse configuration between several computers or team members. To do this, all relevant information about your current Eclipse configuration is saved: installed plug-ins, settings, repository paths, checked-out projects and working sets. The contents of your repositories remain untouched; Yatta only saves the paths. The same applies to plug-ins that are only available locally.

The profiles can be shared via Yatta, where you can also restrict the visibility of your profiles: you can make them visible to everyone, to a group of people, or only to yourself. To install a shared profile, you download the Yatta launcher. You only need to select the profile and specify a location for Eclipse and the workspace; the launcher does the rest. Every plug-in is installed automatically, and after the first start the launcher configures the repositories and checks out the code. This may take a while, but once it has finished, your workspace looks as close to the saved one as possible.
There are some other nice features, like caching of Eclipse and plug-in downloads. But the feature I miss most from Eclipse in this context is not yet supported by Yatta either: there is no (known) way to upgrade your Eclipse major version with a single click. After every major update, you have to install all plug-ins again. As mentioned, Yatta does not support this yet, but the speaker was interested in the topic, so maybe some day we will get it.

Spark vs. Flink

As the title suggests, this talk compared the two big data frameworks Spark and Flink. They were compared by their batch and stream processing abilities, but the main part targeted the streaming capabilities, since this is the area where the two frameworks diverge the most. Flink was written as a pure streaming framework, whereas Spark is based on batch processing and therefore only supports micro-batch processing for streams.
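The difference between the two models can be sketched in plain Java. This is a conceptual illustration only, not the actual Spark or Flink APIs; all class and method names here are invented for the example. A pure streaming engine handles each event the moment it arrives, while a micro-batch engine collects events into small slices (in Spark Streaming, by time interval) and processes each slice as a batch.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of the two processing models. The names below are
// invented for illustration and do not correspond to Spark or Flink API.
public class StreamingModels {

    // Pure streaming (Flink-style): every event is processed immediately,
    // so latency is roughly per event.
    static List<String> processPerEvent(List<String> events) {
        List<String> results = new ArrayList<>();
        for (String event : events) {
            results.add("processed:" + event);
        }
        return results;
    }

    // Micro-batching (Spark-style): events are grouped into small batches
    // (here by count for simplicity; Spark Streaming groups by time interval)
    // and each batch is processed as a unit, so latency is roughly one
    // batch interval.
    static List<List<String>> processInMicroBatches(List<String> events, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < events.size(); i += batchSize) {
            List<String> batch = new ArrayList<>(
                    events.subList(i, Math.min(i + batchSize, events.size())));
            batches.add(batch);
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> events = List.of("a", "b", "c", "d", "e");
        // Per-event processing yields one result per event: 5 results.
        System.out.println(processPerEvent(events).size());
        // Micro-batching with size 2 yields the batches [a,b], [c,d], [e].
        System.out.println(processInMicroBatches(events, 2));
    }
}
```

The practical consequence is the one the talk pointed at: a micro-batch engine can never react to a single event faster than its batch interval, which is why Spark's streaming latency has a lower bound that Flink's per-event model does not have.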

Flink is mainly written in Java, while Spark is written in Scala. For Java developers, this means that the Flink API feels more natural than the Spark one; the Java Spark API looks more like a Java wrapper around the Scala API. This goes hand in hand with the fact that new features become available in the Scala API first.

Compared with MapReduce or Storm, both APIs provide a higher level of abstraction. This was not part of the talk, but the following table shows a comparison of some big data frameworks and their level of abstraction.

            Batch              Streaming
high level  Pig, Spark, Flink  Spark, Flink
low level   MapReduce          Storm

When both speakers were asked which framework they would use, the answer was, as always: it depends! If you have a lot of batch work and only a small amount of streaming data, Spark is the framework of choice; the integration between batch and streaming is a bit better in Spark. If it is the other way around and you have a lot of streaming data, they recommend Flink. They used Flink in their last project and it did the job quite well. It should also be mentioned that Google Cloud Dataflow provides support for Flink; Cloud Dataflow is Google's replacement for MapReduce.

That is enough for today. The next part about Java Forum Stuttgart will be published in a few days.

