Maven Cobertura and JaCoCo Plugins – What Are Your Rock Bottom, Minimum Standards?

When it comes to test-driven development, the best line I’ve heard is the following.

Your clients are not a distributed test environment

These wise words were uttered by my Spring Core lecturer while covering the difference between unit and integration tests in Spring parlance. On the note of unit and integration tests, after working on a vast array of projects, it has dawned on me, with some sadness, that not a single project, organization or team I’ve been on has had non-negotiable standards when it comes to code coverage. Of course, most projects have had unit testing as a part of a checklist, but not a single one has made a lasting effort to both measure and enforce a minimum goal in terms of code coverage.

In this post we take a look at potential rock bottom configurations for the free cobertura-maven-plugin in particular, and also visit the jacoco-maven-plugin. Finally, we run into lacking JDK 8 support and start considering paying for a commercial plugin.

Before delving into code coverage tooling, it’s worth asking why it matters, to whom, and what it means. So, what does code coverage mean? Why should software engineers care? Why should project managers care? Why should project sponsors care? If a consultancy (vendor) is doing the development, why should this organisation care? These are not questions we’ll delve into here in depth, other than noting that coverage reports help detect code that has not been adequately tested by automated test suites.

No Standards? Introduce a Lower Quality Gate

So what to do in a world of little to no standards? In my mind the answer is to set one’s own personal standards, starting with defining what rock bottom is. This is a personal professional line in the sand. It’s also a great question, when considering joining a project or organization, to ask of your prospective employer. The question would be what unit, integration and system test code coverage standards the organization has and then, crucially, how they are enforced and made visible to all concerned.

In terms of motivating the need for minimum standards, the term quality gate seems apt. On a given project, even a personal project, one would have two gates: the lower gate would be enabled by default, builds would fail on developer machines if this minimum standard is not met, and a CI server would independently verify the same minimum standard. If this lower quality gate has not been met, the project manager or development manager should know about it.

The Plugins

Let’s move on to the plugins. The cobertura-maven-plugin is used to report on and check your unit test code coverage using the Cobertura code coverage utility. So we’ll first check that all tests are passing and then check that our standards have been met. Once we move on to the integration test phase, where our beans and infrastructure are tested in concert, the jacoco-maven-plugin will report on and check our integration test code coverage.

The importance of performing both unit testing (individual classes) and integration testing (incorporating a container such as the Spring context) cannot be overstated. Both plugins, and therefore both types of testing, belong in a given project, and this stands to reason: we want coverage standards for individual classes as well as for cooperating runtime services, and ordinarily we only proceed to the latter once the former has succeeded, as per the Maven Build Lifecycle.

Rock Bottom – Our Lower Quality Gate

It stands to reason that there should be some correlation between the application domain and the amount of effort one will invest in unit and integration testing. When it comes to rock bottom, however, the application domain is irrelevant, since rock bottom represents our bare minimum standard and is domain agnostic.

In terms of the merits of a rock bottom configuration for Cobertura and JaCoCo, the following statement, sourced from IBM developerWorks, supports such an approach.

The main thing to understand about coverage reports is that they’re best used to expose code that hasn’t been adequately tested.

Cobertura

Defining a minimum standard with Cobertura, as it turns out, takes some effort given the array of options on offer. For example, the configuration below is the usage example provided on the official plugin page.

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>cobertura-maven-plugin</artifactId>
        <version>2.6</version>
        <configuration>
          <check>
            <!-- Min branch coverage rate per class. 0 to 100. -->
            <branchRate>85</branchRate>
            <!-- Min line coverage rate per class. 0 to 100. -->
            <lineRate>85</lineRate>
            <haltOnFailure>true</haltOnFailure>
            <!-- Min branch coverage rate for project as a whole. -->
            <totalBranchRate>85</totalBranchRate>
            <!-- Min line coverage rate for project as a whole. -->
            <totalLineRate>85</totalLineRate>
            <!-- Min line coverage rate per package. -->
            <packageLineRate>85</packageLineRate>
            <!-- Min branch coverage rate per package. -->
            <packageBranchRate>85</packageBranchRate>
            <regexes>
              <!-- Package specific settings. -->
              <regex>
                <pattern>com.example.reallyimportant.*</pattern>
                <branchRate>90</branchRate>
                <lineRate>80</lineRate>
              </regex>
              <regex>
                <pattern>com.example.boringcode.*</pattern>
                <branchRate>40</branchRate>
                <lineRate>30</lineRate>
              </regex>
            </regexes>
          </check>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>clean</goal>
              <goal>check</goal>
            </goals>
          </execution>
        </executions>
      </plugin>

The first question that comes to mind is what the above configuration means in the first place. The main concept we need is the difference between the line rate and the branch rate, which has been neatly explained here. A potential starting point, then, would be a 50% line coverage rate on a project-wide basis as a rock bottom configuration, with branch coverage excluded. Naturally we will halt on failure as a rule, since this is our bare minimum standard and not necessarily what we aspire to achieve.
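
To make the distinction concrete, consider the contrived class below (a hypothetical example of ours, not taken from the plugin documentation). A single test calling classify(20) executes every line of the method, giving 100% line coverage, yet only ever takes the true branch of the if, leaving branch coverage at 50%.

public class Classifier {

    // One call with n > 10 executes every line of this method (100% line
    // coverage) but only takes the 'true' branch of the if statement,
    // so branch coverage for the method is only 50%.
    public String classify(int n) {
        String label = "small";
        if (n > 10) {
            label = "big";
        }
        return label;
    }
}

With that distinction in mind, our rock bottom configuration looks as follows.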

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>cobertura-maven-plugin</artifactId>
    <version>2.5.2</version>
    <configuration>
        <instrumentedDirectory>target/cobertura/instrumented-classes</instrumentedDirectory>
        <outputDirectory>target/cobertura/report</outputDirectory>
        <check>
            <haltOnFailure>true</haltOnFailure>
            <totalLineRate>50</totalLineRate>
        </check>
    </configuration>
    <executions>
        <execution>
            <id>cobertura-clean</id>
            <phase>clean</phase>
            <goals>
                <goal>clean</goal>
            </goals>
        </execution>
        <execution>
            <id>cobertura-instrument</id>
            <phase>process-classes</phase>
            <goals>
                <goal>instrument</goal>
            </goals>
        </execution>
        <execution>
            <id>cobertura-verify</id>
            <phase>verify</phase>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>
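
With this configuration in place, a plain mvn clean verify will instrument the classes, run the unit tests and fail the build whenever the project-wide line coverage drops below 50%.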

JaCoCo

When using JaCoCo to generate code coverage reports, both the jacoco-maven-plugin and the maven-failsafe-plugin must be configured, as per this excellent resource.

<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.6.3.201306030806</version>
    <executions>
        <!-- The Executions required by unit tests are omitted. -->
        <!--
            Prepares the property pointing to the JaCoCo runtime agent, which
            is passed as a VM argument when the Maven Failsafe plugin is executed.
        -->
        <execution>
            <id>pre-integration-test</id>
            <phase>pre-integration-test</phase>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
            <configuration>
                <!-- Sets the path to the file which contains the execution data. -->
                <destFile>${project.build.directory}/coverage-reports/jacoco-it.exec</destFile>
                <!--
                    Sets the name of the property containing the settings
                    for JaCoCo runtime agent.
                -->
                <propertyName>failsafeArgLine</propertyName>
            </configuration>
        </execution>
        <!--
            Ensures that the code coverage report for integration tests is
            created after the integration tests have been run.
        -->
        <execution>
            <id>post-integration-test</id>
            <phase>post-integration-test</phase>
            <goals>
                <goal>report</goal>
            </goals>
            <configuration>
                <!-- Sets the path to the file which contains the execution data. -->
                <dataFile>${project.build.directory}/coverage-reports/jacoco-it.exec</dataFile>
                <!-- Sets the output directory for the code coverage report. -->
                <outputDirectory>${project.reporting.outputDirectory}/jacoco-it</outputDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>
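
For completeness, the maven-failsafe-plugin side of the equation would look roughly like the sketch below, along the lines of the cited resource (the version number is illustrative): the failsafeArgLine property prepared above carries the JaCoCo agent settings and must be passed to the forked integration test JVM as its argLine.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.15</version>
    <configuration>
        <!-- Passes the JaCoCo agent settings prepared above to the forked JVM. -->
        <argLine>${failsafeArgLine}</argLine>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>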

JDK 8 Support Lacking – Time To Look At Atlassian Clover

While producing this post I had to abandon both cited plugins and start looking at Atlassian Clover, since the two free plugins do not support JDK 8 at present but Atlassian Clover does. The latter does come with a $300 price tag, and that should be fine; it is worth spending money on good development tools.

cobertura-maven-plugin issue log

Issue 1: cobertura-maven-plugin 2.6 gave an incessant error; downgrading to 2.5.2 made the error go away. I did not have the time to analyse the reasons for the failure.

Issue 2: Tests would not run with the mvn clean verify command; I got incessant exceptions and bytecode dumped on the console, with the reason being “Expected stackmap frame at this location.” As it turns out, this was due to JDK 8 not being supported. Downgrading to JDK 7 was not an option for me, and neither was spending time on understanding subtle new behaviours of JDK 7.


The Benefits of Dependency Injection

In this post we first describe what dependency injection is, as background, and then articulate its benefits. We provide answers to the three questions below and also consider whether dependency injection containers should be used in every project.

  1. What is dependency injection?
  2. What Java frameworks or containers provide dependency injection support?
  3. What are the benefits of dependency injection?

What Is Dependency Injection?

When a particular class has a member variable (also called a field) that is an instance of another class, that instance is a dependency. How the initialization of instance members happens, and who performs this initialization, is the crux of dependency injection, or the dependency injection pattern.

Normally, without dependency injection, the particular class would have code to initialize an instance variable in a constructor, with the two alternatives to this approach being initializer blocks and final methods.

With dependency injection, an instance of the dependency is provided to our particular class by an external party using either the constructor of the particular class or a setter of the particular class. So the external party is the who, and in terms of the how, this external party performs either constructor injection or setter injection.
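
As a hypothetical sketch (the class names below are made up for illustration), compare a class that resolves its own dependency with one that has the dependency injected via its constructor:

interface ReportRepository { }

class JdbcReportRepository implements ReportRepository { }

// Without dependency injection: the class news up its own collaborator,
// making it hard to reconfigure and to test in isolation.
class HardWiredReportService {
    private final ReportRepository repository = new JdbcReportRepository();
}

// With constructor injection: an external party (for example the Spring
// container) hands the class what it needs to work. Setter injection is
// the alternative, where the container calls a setter after construction.
class ReportService {
    private final ReportRepository repository;

    public ReportService(ReportRepository repository) {
        this.repository = repository;
    }
}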

Dependency injection frameworks alleviate the need for factories and the use of new in your Java code. Configuration instructions, via annotations or XML, replace new in your Java code. You may still write factories from time to time, but your code will not depend directly on them.

What Java Frameworks Or Containers Provide Dependency Injection Support?

The two major frameworks are the Spring Framework and Google Guice. PicoContainer is another alternative.

What Are The Benefits Of Dependency Injection?

In short, your code will be easier to change, easier to unit test and easier to reuse in other contexts. We provide explanations for these benefits below, but before we do so, it’s worth quoting Anand Rajana [1] in terms of his summary:

It facilitates the design and implementation of loosely coupled, reusable, and testable objects in your software design and implementation by removing dependencies that often inhibit reuse. Dependency injection can help you design your applications so that the architecture [container] links the components rather than the components linking themselves.

Your code will be easier to maintain and easier to reuse in other contexts.

This is because your object is handed what it needs to work and is thus freed from the burden of resolving its dependencies. When an object is handed what it needs to work, in terms of its dependencies, by a container such as Spring, this is called inversion of control (IoC). One may wonder why this, that is IoC, makes maintenance and reuse easier. It’s because the components (plain old Java objects) that make up your application are loosely coupled as a result of your objects being handed what they need to work – it’s far easier to swap out a component as a matter of configuration than it otherwise would have been in a tightly coupled architecture.

It is also because dependency injection promotes programming to interfaces which conceals the implementation details of each dependency and naturally this makes it easier to swap out implementations of a given interface. The combination of your objects being handed what they need to work by an IoC container and programming to interfaces significantly enhances loose coupling.

Finally, it is because the centralized control over the object lifecycle brings with it a host of benefits that enhance the maintainability and reusability of one’s code. For example, Spring’s BeanFactoryPostProcessors are able to modify the definition of any bean in the ApplicationContext’s BeanFactory before any objects are created. A good example of such a BeanFactoryPostProcessor is the PropertyPlaceholderConfigurer, which substitutes ${variables} in bean definitions with values from .properties files. In addition, a BeanPostProcessor may wrap your beans in a dynamic proxy and add behaviour to your application logic transparently. A good example of behaviour that can be transparently added is transaction management behaviour that has been declaratively specified using annotations.
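
As a brief sketch of the PropertyPlaceholderConfigurer case (the bean id, data source class and properties file name are illustrative assumptions of ours):

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:app.properties"/>
</bean>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <!-- ${jdbc.url} is substituted from app.properties before any beans are created. -->
    <property name="url" value="${jdbc.url}"/>
</bean>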

Your code will have significantly enhanced testability

This is because dependencies can easily be replaced with stubs or mocks in unit tests.
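
For instance, assuming Mockito and reusing the hypothetical ReportService from earlier, a unit test can hand the class a mock in place of the real repository:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verifyZeroInteractions;

import org.junit.Test;

public class ReportServiceTest {

    @Test
    public void worksAgainstAMockedDependency() {
        // The real JdbcReportRepository is never touched; the mock stands in.
        ReportRepository repository = mock(ReportRepository.class);
        ReportService service = new ReportService(repository);
        // ... exercise service and verify its interactions with the mock ...
        verifyZeroInteractions(repository);
    }
}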

Should Dependency Injection Containers Be Used in Every Project?

This is a matter of opinion, and so what follows is the author’s opinion. In short, the answer is no.

One may wonder why the answer is no. It is because one still needs to motivate why one needs all the benefits that come with using, say, Spring or Guice in one’s architecture. In short-lived prototyping projects, maintainability and unit testing may not be the top priority; it could be that the bulk of the effort lies in, say, producing a GUI prototype or experimenting with new technologies. It could also be that reusability is of no concern. Every project is different, and thought and motivation need to be applied to every technology decision.

To give an example of where the benefits of IoC may not be the top priority: the author was handed a micro project (single developer at a time) that was in trouble in terms of delivery to the client. The top priority was to show the client results in weekly sprints, and the first decision made was to speed up local development and make the existing code generally comprehensible. This involved introducing Maven, the Maven Jetty plugin, JRebel, significant refactoring and much more. Results were delivered, albeit in a tightly coupled manner with hand-rolled singletons (factories) and little unit or integration testing. The project later reached a stage where introducing an IoC container made sense in terms of priorities, in order to realise all of the cited benefits – above all, to introduce test automation that stubs out cloud services.

Although the short answer is no, it’s worth noting that the long answer is “no, but in general yes”. More often than not, prototyping projects go live, and it is in general unlikely that an application will require no maintenance after it has gone into production. So one should not set fellow maintainers or oneself up for failure, and 9 times out of 10 (if not 9.9 times out of 10) one should use IoC containers / dependency injection in order to realise the core benefits: code that is easier to change, easier to unit test and easier to reuse in other contexts.

References

  1. Dependency Injection, SlideShare, Anand Rajana
  2. Core Spring. Student Lecture Manual. Version 3.2.
  3. google-guice (it’s worth listening to the presentation; they provide good examples in their slides)
  4. Inversion of Control Containers and the Dependency Injection Pattern, Martin Fowler, 23 January 2004 (somewhat long-winded)

Spring Web Services Tomcat Compatibility

In this post we attempt to help you choose the appropriate version of Tomcat when using Spring Web Services.

As context, if you’ve developed a web service using Spring Web Services, chances are you’ve blissfully been using the Maven Jetty Plugin in your project during the development phase. When you’re getting ready to deploy to Tomcat, however, you may need to know which Servlet/JSP Spec version Spring Web Services supports, since this is how the Tomcat Which Version page guides users in their decision-making process.

Here are steps that could be used to determine the relevant Servlet/JSP Spec version and hence Tomcat version:

  1. Run mvn dependency:tree
  2. Look for the org.springframework.ws:spring-ws-core line and then for the version number of its org.springframework:spring-webmvc dependency; in our case, we have 3.1.0.RELEASE as the version
  3. Now look at the dependencies of that version of spring-webmvc, and in turn its org.apache.tomcat:tomcat-servlet-api dependency; in our case, we find that we depend on tomcat-servlet-api version 7.0.8
  4. Finally, have a look at the MANIFEST of the above-mentioned jar, where you will find the relevant Servlet Spec; in our case we find: Specification-Title: Java API for Servlets, Specification-Version: 3.0
  5. Choose Tomcat 7 as advised.


Dude, Where’s My Hibernate Second-Level Ehcache Hit?

This post, in which we assume you use Spring, shows how to confirm that your Hibernate second-level cache is working – one of the first things you will want to do after completing the necessary configuration.

Trying to use ehcache DEBUG log output to confirm your cache is working may end up wasting your time; rather, use the approach documented here.

Step 1: Tell Hibernate to collect statistics in your test Spring context

The new line you will include is the hibernate.generate_statistics line shown below.

<prop key="hibernate.query.startup_check">false</prop>
<prop key="hibernate.hbm2ddl.auto">create-drop</prop>
<!-- Enable the second-level cache  -->
<prop key="hibernate.cache.use_second_level_cache">true</prop>
<prop key="hibernate.cache.use_query_cache">true</prop>
<prop key="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.EhCacheRegionFactory</prop>
<prop key="hibernate.cache.provider_configuration_file_resource_path">ehcache.xml</prop>
<prop key="hibernate.generate_statistics">true</prop>

Step 2: Create a JUnit test case where you expect a cache hit and check for the hit

You would have, in your ehcache.xml config file, created a cache for each DBO (i.e. JPA @Entity) that you want cached. So now simply create a unit test where you create a set of objects that should be cached, and then fetch them again – hopefully from the cache.
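
For reference, such a cache region in ehcache.xml might look like the following, where the fully qualified entity class name is hypothetical and the tuning values are merely a starting point:

<cache name="com.example.model.Stuff"
       maxElementsInMemory="1000"
       eternal="false"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"
       memoryStoreEvictionPolicy="LRU"/>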

Include the following in your test class, if it’s not there already:

@Autowired
private ApplicationContext applicationContext;

Now write a test case such as the following:

@Test
public void testGenerateStuff() {
   int stuffNumber = databaseHelper.generateStuff();
   Stuff stuff = databaseHelper.getStuff(stuffNumber);
   // Pertinent lines ...
   EntityManagerFactoryInfo entityManagerFactoryInfo = (EntityManagerFactoryInfo) applicationContext.getBean("entityManagerFactory");
   EntityManagerFactory emf = entityManagerFactoryInfo.getNativeEntityManagerFactory();
   EntityManagerFactoryImpl emfImp = (EntityManagerFactoryImpl)emf;
   Statistics stats = emfImp.getSessionFactory().getStatistics();
   printStats(stats);
   assertTrue(stats.getSecondLevelCacheHitCount() > 0);
}

public static void printStats(Statistics stats) {
   System.out.println(stats.toString());
   System.out.println("Second Level Cache Put Count ==> " + stats.getSecondLevelCachePutCount());
   System.out.println("Second Level Cache Hit Count ==> " + stats.getSecondLevelCacheHitCount());
   System.out.println("Second Level Cache Miss Count ==> " + stats.getSecondLevelCacheMissCount());
}

Spring JUnit Ehcache: Hibernate Second-Level Caching: Another unnamed CacheManager already exists in the same VM.

If you are quite simply trying to introduce Hibernate second-level caching into your Spring-based application using Ehcache, you may end up banging your head against this wall when running your JUnit tests: Another unnamed CacheManager already exists in the same VM.

Even worse, you may get sidetracked by posts such as this one, which has to do with Spring 3.1 caching.

It may be that you have decided to include a version of Ehcache > 2.5. If that is the case, then before wasting more time on this, like I did, downgrade to Ehcache 2.4.7:

BEFORE

<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache-core</artifactId>
    <version>2.5.2</version>
</dependency>

AFTER

<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache-core</artifactId>
    <version>2.4.7</version>
</dependency>

One more tip: to get started with your ehcache.xml config, copy the defaults in ehcache-failsafe.xml and go from there – dump the file in the resources folder of your project and consult the Ehcache Hibernate Second-Level Cache documentation.

Be sure to perform dependency validation using the Spring container’s initialization callback mechanism

A Spring Container In The Real World

À la the JavaSpecialists newsletter, this post comes to you from the beautiful Kuils River region of greater Cape Town, also known as the Kuils Riviera. Here you are reminded that to some “life is not a beach”, in particular when passing by the local township where dismal poverty and a hand-to-mouth existence reign.

On that note, let’s dive straight into the subject matter of this post: performing the most basic validation in your Spring beans to quickly detect stupid mistakes.

Now, as a Spring user you have a number of options when it comes to configuring the dependencies you wish to have injected into a given bean. You can use autowiring, which itself has further options, such as autowiring by type or by name, or you can use XML configuration with explicit bean references. Regardless of which you use, it pays to use the Spring container’s initialization callback mechanism to avoid, or easily detect, basic mistakes that can easily eat up hours of development time. This entails validating your dependencies, at the very least in terms of whether they are null or not, in a consistent fashion in any given project.

Let’s first look at an example of such a basic mistake, in which a suspected wiring configuration error, which turned out to be a programmer error, cost precious time, and then move on to what to do, consistently, to avoid similar wastage in future.

Finally, we present two ways of detecting wiring configuration errors. The first is suboptimal in its use of Spring 3. The second uses Spring 3 in the recommended fashion.

BEFORE

Herewith a class under test; note the @Autowired bean propertyService.

public class ConfigServiceImpl implements ConfigService {

    @Autowired
    @Qualifier("propertyServiceBean")
    protected PropertyService propertyService;

    // NO INIT METHOD

...
}

Here is the stupid mistake.

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"/com/mayloom/activiti/config/spring-context.xml"})
public class ConfigServiceTest {

   @Autowired
   private ConfigService configService;

   @Test
   public void updateConfiguration() {

      ConfigService configService = new ConfigServiceImpl(); // This line must be removed

      Properties props = configService.getProperties();

The mistake is that the programmer is autowiring configService yet also instantiating it. The latter renders the autowiring of dependencies within configService useless; the entire line must be removed. It could be that this was a class that was later turned into a Spring bean, so just an oversight.

This is the result of the mistake, or at least a snippet of the Surefire report showing the offending NullPointerException.

-------------------------------------------------------------------------------
Test set: com.mayloom.config.ConfigServiceTest
-------------------------------------------------------------------------------
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.282 sec <<< FAILURE!
updateConfiguration(com.mayloom.config.ConfigServiceTest)  Time elapsed: 0.266 sec  <<< ERROR!
java.lang.NullPointerException
	at com.mayloom.config.ConfigServiceImpl.getProperties(ConfigServiceImpl.java:72)

Now, if you had been making various Spring configuration changes in your project and you saw the above, you could start thinking that a wiring configuration error is the cause and waste hours reconfiguring – basically looking in the wrong place – even though the mistake is quite obvious to us now.

AFTER

To quickly rule out wiring errors as a cause, in this case and in general, all one has to do is check that wired dependencies are not null, as shown in the Default initialization and destroy methods section of the Spring manual.

So we would end up with the pertinent section of our Spring configuration looking as shown below, with the default-init-method attribute being what we want to include.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.0.xsd"
       default-init-method="init">

Next, we’ll update ConfigServiceImpl as shown below.

public class ConfigServiceImpl implements ConfigService {

    @Autowired
    @Qualifier("propertyServiceBean")
    protected PropertyService propertyService;

    // this is (unsurprisingly) the initialization callback method
    public void init() {
        log.info("==================== ConfigServiceImpl =========================");
        if (this.propertyService == null) {
            throw new IllegalStateException("The [propertyService] property must be set.");
        }
    }

...
}

That’s pretty much it – no advanced concurrency lesson here, no generics to marvel at, but it may save you time.

SUPERIOR APPROACH – UPDATE 03 Dec 2013

The techniques listed above are suboptimal in terms of how Spring is used. A better approach is to annotate bean initialization methods with @PostConstruct, combined with a configuration instruction telling Spring to process this annotation (add <context:annotation-config/> to your Spring application context configuration file). One would then rely on the RequiredAnnotationBeanPostProcessor by annotating the setter methods of dependencies with @Required. The end result is that Spring will enforce @Required properties being set, and will do so before beans are made available for use – that is, the enforcement occurs during the bean post-processing phase of the application context initialization lifecycle.
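
A minimal sketch of ConfigServiceImpl reworked along these lines (assuming <context:annotation-config/> is present in the application context configuration) might look as follows:

import javax.annotation.PostConstruct;

import org.springframework.beans.factory.annotation.Required;

public class ConfigServiceImpl implements ConfigService {

    protected PropertyService propertyService;

    // Spring fails fast during context initialization if this setter is
    // never called, courtesy of the RequiredAnnotationBeanPostProcessor.
    @Required
    public void setPropertyService(PropertyService propertyService) {
        this.propertyService = propertyService;
    }

    // Runs after all properties have been set, before the bean is made
    // available for use; any further validation can go here.
    @PostConstruct
    public void init() {
    }
}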