Java Forum Stuttgart – Part 3

This is the last post about my visit to the Java Forum Stuttgart. In Part 1 and Part 2 I described the first talks I attended at the JFS 2016. In this post I will present the remaining ones.

Erhöhe i um 1

This talk was a replacement for another one whose speaker was unable to attend the conference. Michael Wiedeking again gave an entertaining talk, this time about comments in code, especially comments like i = i + 1; // increase i by 1. He also discussed the difference between API documentation, like JavaDoc, and normal in-line comments. His conclusion was that instead of writing comments, one should invest the time in more readable names.
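
To illustrate the point (my own example, not one from the talk): a comment that merely restates the code adds nothing, while a descriptive name makes the comment redundant.

    public class CommentExample {
        public static void main(String[] args) {
            int i = 0;
            i = i + 1; // increase i by 1 -- restates the code, adds no information

            int retryCount = 0;
            retryCount = retryCount + 1; // the name already tells you what is counted
            System.out.println(retryCount);
        }
    }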

Another interesting part of the talk covered different types of interfaces. He split interfaces into three types.

  1. unchangeable public interfaces
  2. changeable public interfaces
  3. private interfaces

Type 3 is the least problematic one. This type is only used internally to encapsulate different parts of our software. Changing a type 3 interface is like changing a normal class: it is just a refactoring, because the developer can find and adapt all usages of the interface. Type 2 interfaces are used in-house or by a small number of users who are known to the developer. A change to this kind of interface is a bit more problematic, but with good reasons it is acceptable, because only a few people have to change their software. Nonetheless, it should be avoided. Type 1 interfaces are the most problematic ones, because they are published to a wide audience and used by a lot of developers worldwide. A good example of this is the JDK. Changing type 1 interfaces or their visibility is nearly impossible: every change to such an interface will break a huge number of builds and is therefore not acceptable. A sketch of the problem, and of how Java 8 softens it, follows below.
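
A minimal sketch (my own example with a made-up PaymentProvider interface, not from the talk) of why extending a type 1 interface breaks builds, and how Java 8 default methods soften the problem:

    // A published (type 1) interface: unknown implementors exist worldwide.
    interface PaymentProvider {
        void charge(long amountInCents);

        // Adding another abstract method here would break every existing
        // implementing class. Since Java 8, a default method can add new
        // behaviour without breaking source compatibility.
        default void refund(long amountInCents) {
            throw new UnsupportedOperationException("refund not supported");
        }
    }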

Was jeder Java-Entwickler über Strings wissen sollte

This talk was held in the fashion of What every Java Programmer should know about Floating Point Arithmetic and revealed some interesting insights into Strings in Java. Before presenting those insights, the speaker gave a short introduction to measuring the performance of Java programs. This part was mainly based on blog posts by Antonio Goncalves, the book Java Performance by Scott Oaks, and Quality Code by Stephen Vance. Performance in Java is best measured using the Java Microbenchmark Harness (JMH), which is developed as part of the OpenJDK. It can analyze programs at scales down to nano- and microseconds and provides support for warming up the JIT compiler.
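
For illustration, a minimal JMH benchmark might look like the following (my own sketch, not code from the talk). The @Warmup annotation takes care of warming up the JIT compiler before the measured iterations start:

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 5)       // warm up the JIT compiler first
    @Measurement(iterations = 10) // then take the actual measurements
    @State(Scope.Thread)
    public class StringConcatBenchmark {

        @Param({"10", "100"})
        int repetitions;

        @Benchmark
        public String concatWithBuilder() {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < repetitions; i++) {
                sb.append("x");
            }
            return sb.toString(); // return the result so the JIT cannot optimize it away
        }
    }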

After this introduction to measuring performance in Java, the presenter showed the impact of String#intern. This method moves the content of a String into the string table and only keeps a reference to it. Thanks to this, two Strings with the same content need the memory for the content only once, plus two references to it. Depending on the application, this can reduce the memory footprint significantly. If you want to analyze this, you can use -XX:+PrintStringTableStatistics as a command line argument. Together with the G1 garbage collector (-XX:+UseG1GC), string deduplication can be activated with -XX:+UseStringDeduplication.
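
A small demo of the effect (my own sketch): after interning, Strings with equal content share a single pooled instance.

    public class InternDemo {
        public static void main(String[] args) {
            // Two distinct heap objects with identical content.
            String a = new String("conference");
            String b = new String("conference");
            System.out.println(a == b);      // false: different objects
            System.out.println(a.equals(b)); // true: same content

            // intern() returns the canonical instance from the string table,
            // so both expressions now yield the very same object.
            System.out.println(a.intern() == b.intern()); // true
        }
    }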

This and that

Between the talks and on the way to and from the Java Forum, there were a lot of other interesting conversations. All in all it was a nice experience, and I will reserve the date for the next Java Forum in my calendar.

Java Forum Stuttgart – Part 2

After a long time since my first post, I finally found the time to write another one. In Java Forum Stuttgart – Part 1 I described the first talks I attended at the JFS 2016. In this post I will present some more impressions of the JFS.

HomeKit, Weave oder Eclipse SmartHome?

The third talk I listened to compared several smart home frameworks and their chances for the future. As an appetizer, Apple HomeKit and Google Weave were presented. Both are designed as closed systems: every vendor who wants to integrate his devices into one of them has to support the protocol of that particular system. But because there is a big zoo of protocols out there, the presenters expect that none of these systems will become the single home automation solution of the future.

After this 5 minute introduction to why HomeKit and Weave fall short, Eclipse SmartHome (ESH) was presented as the system that could succeed in becoming the single solution. They base this assumption on the basic design of ESH: it is not a single solution, but rather a framework into which every vendor can integrate his devices. Developers, on the other side, can access the devices in a unified way and build their solutions on top of ESH. One of these solutions is OpenHAB, which predates ESH and is now built on top of it.

Über den Umgang mit Lambdas

The last talk before the lunch break was held by Michael Wiedeking. I had heard him speak at Herbst Campus in 2012 and was excited about this talk; his talks are mostly very informative and entertaining at the same time. After an introduction to what was needed to implement method references and lambdas in Java 8, he talked about their usage, especially how useful lambdas and method references are. At the end of his talk he presented a first solution for handling checked exceptions inside streams.
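
One common approach to this problem is to wrap the throwing lambda and rethrow the checked exception as an unchecked one. The sketch below shows the general pattern and is not necessarily the exact solution he presented:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.function.Function;
    import java.util.stream.Stream;

    public class StreamExceptions {

        // A Function variant that is allowed to throw a checked exception.
        @FunctionalInterface
        interface ThrowingFunction<T, R> {
            R apply(T t) throws Exception;
        }

        // Adapter: rethrows any checked exception wrapped in a RuntimeException.
        static <T, R> Function<T, R> unchecked(ThrowingFunction<T, R> f) {
            return t -> {
                try {
                    return f.apply(t);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            };
        }

        public static void main(String[] args) {
            // Files::readAllLines throws IOException, which Stream#map rejects;
            // the adapter makes the method reference usable inside the stream.
            Stream.of(Paths.get("a.txt"), Paths.get("b.txt"))
                  .map(unchecked(Files::readAllLines))
                  .forEach(System.out::println);
        }
    }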

Top Performance Bottleneck Patterns Deep Dive

This talk was another entertaining and informative one at the JFS. Andreas Grabner gave a short introduction to DevOps and described how Otto – a big German retailer – improved its performance and the time to market of new features.

In the rest of the talk he showed simple metrics for measuring performance in production and the common problems he often finds. One of those metrics is a click heatmap to measure user experience. This map shows how often users click on areas of your page, which can be an indicator of the responsiveness of your web page.

Afterwards he presented some widespread performance problems. Places one and two are reserved for bad database handling, like not using prepared statements to reduce parsing overhead. In third place comes bad code, especially bad control flow management. Exceptions are basically a good idea, but they can reduce performance when used in the wrong way. You can find out more about this topic on his blog.
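
To illustrate the first point, here is a generic JDBC sketch (my own, not an example from the talk; the in-memory H2 URL and the users table are made up): the prepared statement is parsed once and then reused with different parameters, instead of being re-parsed for every query.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class PreparedStatementDemo {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection URL; adjust driver and schema to your setup.
            try (Connection con = DriverManager.getConnection("jdbc:h2:mem:test");
                 // Parsed and planned once by the database ...
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT name FROM users WHERE id = ?")) {
                for (int id = 1; id <= 1000; id++) {
                    ps.setInt(1, id); // ... then reused with new parameters
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("name"));
                        }
                    }
                }
            }
        }
    }
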
That is enough for today. The posts about the last talks of the Java Forum Stuttgart will be published – hopefully – soon.

Java Forum Stuttgart – Part 1

Some days ago I attended the Java Forum Stuttgart. After Herbst Campus in 2012, it was my second commercial conference. So I am still new to such conferences, but so far I like the format of these regional conferences: big enough to meet new people.

As you can see from the conference program, a lot of interesting talks were given. Here is a short overview of the talks I attended.

  1. Eclipse on Steroids – Boost your Eclipse and Workspace Setup given by Frederic Ebelshäuser from Yatta Solutions GmbH
  2. Spark vs. Flink – Rumble in the (Big Data) Jungle given by Michael Pisula and Konstantin Knauf from TNG Technology Consulting GmbH
  3. HomeKit, Weave oder Eclipse SmartHome? Best Practices für erfolgreiche Smart-Home-Projekte given by Thomas Eichstädt-Engelen and Sebastian Janzen from neusta next GmbH & Co. KG and innoQ Deutschland GmbH
  4. Über den Umgang mit Lambdas given by Michael Wiedeking from MATHEMA Software GmbH
  5. Top Performance Bottleneck Patterns Deep Dive given by Andreas Grabner from Dynatrace
  6. Erhöhe i um 1 given by Michael Wiedeking from MATHEMA Software GmbH
  7. Was jeder Java-Entwickler über Strings wissen sollte given by Bernd Müller from Ostfalia Hochschule für angewandte Wissenschaften

Eclipse on Steroids

This talk covered the new Eclipse profiles developed by Yatta. Eclipse profiles give you the ability to share your Eclipse configuration between several computers or team members. To achieve this, all the relevant information about your current Eclipse configuration is saved: installed plug-ins, settings, repository paths, checked-out projects and working sets. The contents of your repository remain untouched; Yatta only saves the paths. The same applies to plug-ins that are only available locally.

The profiles can be shared via yatta.de, where you can also restrict the visibility of your profiles: you can make them visible to everyone, just a group of people, or only yourself. To install a shared profile, you can download the Yatta launcher. You only need to select the profile and specify a location for Eclipse and the workspace, and the launcher does the rest: every plug-in is installed automatically. After the first start, the launcher configures the repositories and checks out the code. This may take a while, but once it is finished, your workspace looks as close to the saved one as possible.

There are some other nice features, like caching of Eclipse and plug-in downloads. But the feature I miss most in Eclipse in this context is also not yet supported by Yatta: there is no (known) possibility to upgrade your Eclipse major version with a single click. After every major update, you have to install all plug-ins again. As mentioned, Yatta does not support this yet, but the speaker was interested in the topic, so maybe some day we will get it.

Spark vs. Flink

As the title suggests, this talk compared the two big data frameworks Spark and Flink. They were compared by their abilities in batch and stream processing, but the main part targeted the streaming capabilities. This is also the area where the two frameworks diverge the most: Flink is written as a pure streaming framework, whereas Spark is based on batch processing and therefore only supports micro-batch processing for streams.

Flink is mainly written in Java, whereas Spark is written in Scala. For Java developers this means that the Flink API feels more natural than the Spark one; the Spark Java API looks more like a Java wrapper around the Scala API. This goes hand in hand with the fact that new features are available in the Scala API first.
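
To give an impression of the Java API, here is a minimal Flink batch word count (my own sketch along the lines of the standard Flink example, not code from the talk):

    import org.apache.flink.api.common.functions.FlatMapFunction;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.util.Collector;

    public class WordCount {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            DataSet<String> text = env.fromElements("to be or not to be");

            text.flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                    @Override
                    public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                        for (String word : line.toLowerCase().split("\\W+")) {
                            out.collect(new Tuple2<>(word, 1));
                        }
                    }
                })
                .groupBy(0) // group by the word ...
                .sum(1)     // ... and sum up the counts
                .print();   // print() triggers execution of the batch job
        }
    }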

Compared with MapReduce or Storm, both APIs provide a higher level of abstraction. The following table was not part of the talk, but shows a comparison of some big data frameworks and their level of abstraction.

            | Batch      | Streaming
 high level | Pig, Spark | Flink
 low level  | MapReduce  | Storm

When both lecturers were asked which framework they would use, the answer was, as always: it depends! If you have a lot of batch work and only a small amount of streaming data, Spark is the framework of your choice, as the integration between batch and streaming is a bit better in Spark. If it is the other way around and you have a lot of streaming data, they recommend Flink. They used Flink in their last project and it did the job quite well. It should also be mentioned that Google Cloud Dataflow provides support for Flink; Cloud Dataflow is the replacement for MapReduce at Google.

That is enough for today. The next part about the Java Forum Stuttgart will be published in a few days.