Maven Cobertura and JaCoCo Plugins – What Are Your Rock Bottom, Minimum Standards?

When it comes to test driven development, the best line I’ve heard is the following.

Your clients are not a distributed test environment

These wise words were uttered by my Spring Core lecturer while covering the difference between unit and integration tests in Spring parlance. On the subject of unit and integration tests, after working on a vast array of projects, it has dawned on me, with some sadness, that not a single project, organization or team I’ve been on has had non-negotiable standards when it comes to code coverage. Of course, most projects have had unit testing on a checklist, but not a single one has made a lasting effort to both measure and enforce a minimum code coverage goal.

In this post we take a look at potential rock bottom configurations for the free cobertura-maven-plugin in particular, and also visit the jacoco-maven-plugin. Finally, we run into lacking JDK 8 support and start considering paying for a commercial plugin.

Before delving into code coverage tooling it’s worth asking why it matters, to whom, and what it means. So, what does code coverage mean? Why should software engineers care? Why should project managers care? Why should project sponsors care? If a consultancy (vendor) is doing the development, why should this organisation care? These are not questions we’ll delve into in depth here, other than noting that coverage reports help detect code that hasn’t been adequately tested by automated test suites.

No Standards? Introduce a Lower Quality Gate

So what to do in a world of little to no standards? In my mind the answer is to set one’s own personal standards, starting with defining what rock bottom is. This is a personal professional line in the sand. It’s also a great question, when considering joining a project or organization, to ask of your prospective employer: what unit, integration and system test code coverage standards does the organization have and, crucially, how are they enforced and made visible to all concerned?

In terms of motivating the need for minimum standards, the term quality gate seems apt. On a given project, even a personal project, one would have two gates. The lower gate would be enabled by default: builds would fail on developer machines if this minimum standard is not met, and a CI server would also independently verify against the minimum standard. If this lower quality gate has not been met, the project manager or development manager should know about it.

The Plugins

Let’s move on to the plugins. The cobertura-maven-plugin is used to report on and check your unit test code coverage using the Cobertura code coverage utility. So we’ll first check that all tests are passing and then check to make sure our standards have been met. Once we move on to the integration test phase, where our beans and infrastructure are tested in concert, the jacoco-maven-plugin will report on and check our integration test code coverage.

The importance of performing both unit testing (individual classes) and integration testing (incorporating a container such as the Spring context) cannot be overstated. Both plugins, and so both types of testing, must be used in a given project, and this stands to reason: we want coverage standards for individual classes as well as for cooperating runtime services, and ordinarily we only proceed to the latter once the former has succeeded, as per the Maven Build Lifecycle.

Rock Bottom – Our Lower Quality Gate

It stands to reason that there should be some correlation between the application domain and the amount of effort one invests in unit and integration testing. When it comes to rock bottom, however, the application domain is irrelevant, since rock bottom represents our bare minimum, domain agnostic standard.

In terms of the merits of a rock bottom configuration for Cobertura and JaCoCo, the following IBM developerWorks sourced statement supports such an approach.

The main thing to understand about coverage reports is that they’re best used to expose code that hasn’t been adequately tested.

Cobertura

Defining a minimum standard when it comes to Cobertura, as it turns out, takes some effort given the array of options one has to weigh up. For example, the configuration below is the usage example provided on the official plugin page.

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>cobertura-maven-plugin</artifactId>
        <version>2.6</version>
        <configuration>
          <check>
            <!-- Min branch coverage rate per class. 0 to 100. -->
            <branchRate>85</branchRate>
            <!-- Min line coverage rate per class. 0 to 100. -->
            <lineRate>85</lineRate>
            <haltOnFailure>true</haltOnFailure>
            <!-- Min branch coverage rate for project as a whole. -->
            <totalBranchRate>85</totalBranchRate>
            <!-- Min line coverage rate for project as a whole. -->
            <totalLineRate>85</totalLineRate>
            <!-- Min line coverage rate per package. -->
            <packageLineRate>85</packageLineRate>
            <!-- Min branch coverage rate per package. -->
            <packageBranchRate>85</packageBranchRate>
            <regexes>
              <!-- Package specific settings. -->
              <regex>
                <pattern>com.example.reallyimportant.*</pattern>
                <branchRate>90</branchRate>
                <lineRate>80</lineRate>
              </regex>
              <regex>
                <pattern>com.example.boringcode.*</pattern>
                <branchRate>40</branchRate>
                <lineRate>30</lineRate>
              </regex>
            </regexes>
          </check>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>clean</goal>
              <goal>check</goal>
            </goals>
          </execution>
        </executions>
      </plugin>

The first question that comes to mind is what the above configuration means in the first place. The main concept we need is the difference between the line rate and branch rate, which has been neatly explained here. So, a potential starting point for a rock bottom configuration would be a 50% line coverage rate on a project-wide basis, with branch coverage excluded. Naturally we will halt on failure as a rule, since this is our bare minimum standard and not necessarily what we aspire to achieve.

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>cobertura-maven-plugin</artifactId>
        <version>2.5.2</version>
        <configuration>
          <instrumentedDirectory>target/cobertura/instrumented-classes</instrumentedDirectory>
          <outputDirectory>target/cobertura/report</outputDirectory>
          <check>
            <haltOnFailure>true</haltOnFailure>
            <totalLineRate>50</totalLineRate>
          </check>
        </configuration>
        <executions>
          <execution>
            <id>cobertura-clean</id>
            <phase>clean</phase>
            <goals>
              <goal>clean</goal>
            </goals>
          </execution>
          <execution>
            <id>cobertura-instrument</id>
            <phase>process-classes</phase>
            <goals>
              <goal>instrument</goal>
            </goals>
          </execution>
          <execution>
            <id>cobertura-verify</id>
            <phase>verify</phase>
            <goals>
              <goal>check</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
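To make the line rate versus branch rate distinction concrete, here is a small illustrative class of my own (not from the plugin documentation). A single test input can execute every line of apply() while exercising only one of its two branches, so line coverage reads 100% while branch coverage reads only 50%.

```java
/**
 * Illustrative example of why line rate and branch rate differ.
 */
public class Discount {

    // One statement containing a branch: a test that only supplies
    // amounts >= 100 executes this line (line covered) but never takes
    // the false branch of the condition (branch coverage stays at 50%).
    public static double apply(double amount) {
        return amount >= 100 ? amount * 0.5 : amount;
    }

    public static void main(String[] args) {
        // A single input: 100% line coverage, 50% branch coverage.
        System.out.println(Discount.apply(200.0));
    }
}
```

A rock bottom gate on totalLineRate alone would pass such a test suite; this is exactly why one might later tighten the gate with a branch rate as well.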

JaCoCo

When using JaCoCo to generate code coverage reports, both the jacoco-maven-plugin and the maven-failsafe-plugin must be configured, as per this excellent resource.

<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.6.3.201306030806</version>
    <executions>
        <!-- The Executions required by unit tests are omitted. -->
        <!--
            Prepares the property pointing to the JaCoCo runtime agent,
            which is passed as a VM argument when the Maven Failsafe
            plugin is executed.
        -->
        <execution>
            <id>pre-integration-test</id>
            <phase>pre-integration-test</phase>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
            <configuration>
                <!-- Sets the path to the file which contains the execution data. -->
                <destFile>${project.build.directory}/coverage-reports/jacoco-it.exec</destFile>
                <!--
                    Sets the name of the property containing the settings
                    for JaCoCo runtime agent.
                -->
                <propertyName>failsafeArgLine</propertyName>
            </configuration>
        </execution>
        <!--
            Ensures that the code coverage report for integration tests is
            created after integration tests have been run.
        -->
        <execution>
            <id>post-integration-test</id>
            <phase>post-integration-test</phase>
            <goals>
                <goal>report</goal>
            </goals>
            <configuration>
                <!-- Sets the path to the file which contains the execution data. -->
                <dataFile>${project.build.directory}/coverage-reports/jacoco-it.exec</dataFile>
                <!-- Sets the output directory for the code coverage report. -->
                <outputDirectory>${project.reporting.outputDirectory}/jacoco-it</outputDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>

JDK 8 Support Lacking – Time To Look At Atlassian Clover

While producing this post I had to abandon both cited plugins and start looking at Atlassian Clover, since the two free plugins do not support JDK 8 at present but Atlassian Clover does. The latter does come with a $300 price tag, and that should be fine; it’s worth spending money on good development tools.

cobertura-maven-plugin issue log

Issue 1: cobertura-maven-plugin 2.6 gave an incessant error; downgrading to 2.5.2 made the error go away. I did not have the time to analyse the reasons for the failure.

Issue 2: Tests would not run with the mvn clean verify command; I got incessant exceptions and bytecode dumps on the console, with the reason being “Expected stackmap frame at this location.” As it turns out, this was due to JDK 8 not being supported. Downgrading to JDK 7 was not an option for me, and neither was spending time on understanding subtle new behaviours of JDK 7.


The Development Manager Role – An Essential Counter Weight

Project oriented development organizations have their plus points, but one should never forget this basic and fundamental point. And that is that there is

an inherent conflict of interest between the Development Manager and the Project Manager.

In my opinion the role serves as an essential counterweight, and by that I mean a counterweight to short term profit oriented actions in software development projects that end up killing long term profitability.

Automated deployment with Maven and friends

I’m busy with an automated deployment configuration task and stumbled upon the following excellent presentation by John Ferguson Smart from Wakaleo Consulting: Automated deployment with Maven and friends – Going the whole nine yards

To add some context to labelling the presentation as excellent: in essence it represents what I regard as best practices. Setting up the mentioned infrastructure takes time, but WILL save time and money in the long run. Here are some other reasons why I advocate the use of the practices outlined in the presentation:

  • I’ve had the misfortune of a three month stint as a human continuous integration / deployment “Jenkins” in the early days of a large scale enterprise project, this has left me scarred and obsessed with automated deployment and subsequent testing.
  • I’ve seen Liquibase replicated many times, at considerable expense.
  • If you don’t set it all up early in a project, you will pay for it with wasted time, one way or the other.


Search your source with wcgrep -l methodname

Having used, for years, the inferior command

$grep -rin 'methodname' .

when recursively searching through files for a method, the discovery of

wcgrep -l methodname

comes as a shock.

Looks like I’m not the only one using $grep -rin. In any case, there is a new, clever kid on the block, freshly installed in /usr/bin: wcgrep -l methodname

It is smart enough not to search Subversion’s text-base files, something that has plagued me, and perhaps you too. For an entertaining post on the topic, read this post on justinsomnia.org, published six years ago, ouch, that rubs some serious salt into the wound.

You can find the wcgrep script HERE.

Java Exception Rule Book

This post consists of a briefly outlined set of Java Exception rules, or best practices, with an accompanying look at rule compliance using specific transient network and database layer Exceptions.

The rules are based on the relevant rules from Joshua Bloch’s lauded and highly recommended Effective Java, 2nd Edition. Bloch is arguably the authority on the subject, and his legacy includes being listed as author of the Sun JDK Throwable implementation (see @author in the source code).

RULE BOOK

The author has violated these rules hundreds if not thousands of times over the last decade. As a mitigating factor it can be argued that the rules in themselves represent exception utopia, and so it is highly unlikely that they will all be consistently followed in any given code base; put differently, even with the best of intentions, no software engineer will follow all of them under, say, project timeline pressures.

Such pressure may, however, be a symptom of not asserting one’s own standards when a unit of work or project commences; to alleviate pressure, one can always clearly communicate one’s own non-negotiable engineering standards at the start of a project.

Rule 1: Never use exceptions for ordinary control flow

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

/**
 * Do not use Exceptions for flow control; this is an example of what not to do.
 *
 * @author nico
 */
public class DontUseExceptionsForFlowControl {

  public static void main(String[] args) {

    List<String> list = new ArrayList<>();

    list.add("hello world");

    // Let's do the wrong thing, and jump out of this loop with an iterator
    // related Exception...

    Iterator<String> iter = list.iterator();

    while (true) {

      try {

        System.out.println(iter.next());

      } catch (java.util.NoSuchElementException e) {

        // Using the Exception for flow control: terminate the loop here.
        // TODO Stop using Exceptions for flow control.
        break;
      }

    }

  }

}

Using Exceptions for ordinary control flow violates the Principle of Least Astonishment which states “the result of performing some operation should be obvious, consistent, and predictable, based upon the name of the operation and other clues”.
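For contrast, a sketch of the idiomatic alternative: termination is handled by the loop construct itself, with no exception in sight.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * The idiomatic version of the loop above: the enhanced for loop
 * terminates normally, so no NoSuchElementException is ever thrown.
 */
public class UseOrdinaryFlowControl {

    // Prints every element and returns how many were printed;
    // termination is ordinary control flow, not an exception.
    public static int printAll(List<String> list) {
        int count = 0;
        for (String element : list) {
            System.out.println(element);
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("hello world");
        printAll(list);
    }
}
```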

Rule 2: Never write APIs that force others to use Exceptions for ordinary control flow

/**
* Database layer API, implementation detail agnostic.
*/
public interface EmailDatabaseHelper {

 // TODO refactor, this forces the user to catch checked exception EmailDoesNotExistException, with an email
 // address not existing being in no way exceptional, and hence in ordinary control flow
 public void doesEmailExist(String email) throws EmailDoesNotExistException;
}
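A hypothetical refactoring of the interface above (the method name emailExists is my own) returns a boolean, so callers stay in ordinary control flow:

```java
/**
 * Refactored database layer API: a missing email address is not
 * exceptional, so it is reported as a plain return value rather than
 * a checked exception.
 */
public interface EmailDatabaseHelper {

    // Callers branch on the boolean; no try/catch needed for the
    // entirely ordinary "email not found" case.
    boolean emailExists(String email);
}
```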

Rule 3: Use runtime exceptions for programmer errors and checked exceptions where recovery is possible

Note that this rule does not state, as you may expect, that you must use unchecked exceptions for programmer errors, since that casts too wide a net. Unchecked throwables include runtime exceptions and errors, with the latter conventionally reserved for JVM error reporting under conditions where continued execution is impossible.

Rule 4: When uncertain as to whether a condition is recoverable or not, use an unchecked exception

The reasoning behind this is simple: if you do not know how an API user will recover from the checked exception, do not place the burden on the API user of trying to figure out how, potentially wasting time only to conclude that it’s not possible.

Rule 5: Only use checked exceptions if the API user can take action to recover from the said exception

If the only course of action you could take, when confronted with your own checked exception, is a variation of the example below, then chances are good that you should not be using a checked exception. The checked exception adds no value, so either attempt to refactor as per Rule 6, or use an unchecked exception.

} catch (CheckedExceptionWithNoUsefulActionPossible e) {
  logger.error("an error occurred");
  e.printStackTrace();
  System.exit(1); // or stopping the current thread
}

Rule 6: Refactor, where possible, checked exceptions that violate rule 5 into a state-checking method and unchecked exception

Before.

try {
  ball.kick(HARD);
} catch (BallCannotBeKickedException e) {
  // recover from exceptional conditions
}

After. A state-checking method and an unchecked exception have been introduced.

// ball will not be accessed concurrently, and so the calling sequence is safe
if (ball.isKickable()) {
  ball.kick(HARD);
} else {
  // recover from exceptional condition
}

Rule 7: Clearly document if the unrecoverability of an unchecked exception is likely to be transient in nature

Certain conditions are unrecoverable at a particular instant but not permanently; a prime example is an exception related to a lock being held on a database table on which, say, an update is being attempted. Methods that throw unchecked exceptions relating to a condition that may be transient in nature should document this fact and suggest a retry in the Javadoc comment associated with the unchecked exception.
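A hypothetical sketch of this rule in action (the class, the field standing in for a real database lock, and the method names are all mine): the Javadoc for the unchecked exception states that the condition is usually transient and suggests a retry.

```java
/**
 * Hypothetical repository illustrating Rule 7.
 */
public class AccountRepository {

    private final boolean tableLocked; // stand-in for a real database lock

    public AccountRepository(boolean tableLocked) {
        this.tableLocked = tableLocked;
    }

    /**
     * Updates the account balance.
     *
     * @throws IllegalStateException if a lock is held on the underlying
     *         table; the condition is usually transient, so callers may
     *         wish to retry after a short delay
     */
    public void updateBalance(long accountId, long newBalance) {
        if (tableLocked) {
            throw new IllegalStateException(
                "Table locked while updating account " + accountId
                + "; the lock is usually transient, consider retrying");
        }
        // ... the actual update would go here ...
    }
}
```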

Rule 8: Use the standard Java platform library unchecked exceptions where appropriate, do not re-invent the wheel

Reuse the standard Java platform library unchecked exceptions wherever possible, whilst honouring their documented semantics. A prime example is IllegalArgumentException. See the subclasses of RuntimeException for further candidates.
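A minimal sketch of the rule (the converter class is an invented example): a programmer error is reported with the platform's own IllegalArgumentException rather than a home-grown unchecked exception.

```java
/**
 * Illustrative class reusing a standard platform unchecked exception.
 */
public class TemperatureConverter {

    // A temperature below absolute zero is a programmer error, so we
    // reuse IllegalArgumentException instead of inventing a new type.
    public static double celsiusToKelvin(double celsius) {
        if (celsius < -273.15) {
            throw new IllegalArgumentException(
                "Temperature below absolute zero: " + celsius);
        }
        return celsius + 273.15;
    }
}
```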

Rule 9: When dealing with lower-level exceptions inappropriate to the higher-level abstraction, perform either exception translation or exception chaining

Exception translation.

try {
     // lower level method invocation e.g. JPA call
} catch(LowerLevelMethodInvocationException e) {
     // now translate the lower level exception into an exception that matches
     // the higher level abstraction in our current context
     throw new HigherLevelException(...);
}

Exception chaining with a chaining-aware constructor.

class HigherLevelAbstractionException extends Exception {
    HigherLevelAbstractionException(Throwable cause) {
         super(cause);
    }
}
try {
     // lower level method invocation e.g. JPA call
} catch(LowerLevelMethodInvocationException e) {
     // now translate the lower level exception into an exception that matches
     // the higher level abstraction in our current context
     throw new HigherLevelAbstractionException(e); // use chaining-aware constructor
}

Rule 10: You MUST document all exceptions, checked and unchecked, thrown by your methods

Excuse the all-caps and the repetition of “all exceptions”, but proper documentation is in no way negotiable and is every software engineer’s professional responsibility. The Javadoc @throws tag must be used to document both checked and unchecked exceptions in terms of the conditions under which they will be thrown. The only difference is that you must only declare checked exceptions with the throws keyword; do not include unchecked exceptions there.
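A small hypothetical example of the convention (the ReportService class and its behaviour are invented for illustration): both exceptions are documented with @throws, but only the checked one appears in the throws clause.

```java
import java.io.FileNotFoundException;

/**
 * Hypothetical service illustrating Rule 10's documentation convention.
 */
public class ReportService {

    /**
     * Loads a report by name.
     *
     * @throws FileNotFoundException if no report with the given name
     *         exists (checked: also declared in the throws clause)
     * @throws IllegalArgumentException if {@code name} is empty
     *         (unchecked: documented with @throws but NOT declared)
     */
    public byte[] load(String name) throws FileNotFoundException {
        if (name.isEmpty()) {
            throw new IllegalArgumentException("Report name must be non-empty");
        }
        if (!"example".equals(name)) {
            throw new FileNotFoundException("No such report: " + name);
        }
        return new byte[] {1, 2, 3};
    }
}
```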

Rule 11: Force including the values of all parameters and fields that comprise the cause of an exception in its detail message

Support personnel or fellow programmers, when faced with a stack trace in say a log file, require pertinent data in order to ascertain exactly what caused an exception. Without the values of all parameters and fields that comprised the exception, it may be impossible to reproduce an exception.

To force the inclusion of pertinent parameter and field data in the detail message of an exception, simply do not provide a constructor that takes a string detail message; rather, force the user to supply the said data in the constructor.

Consider the following checked exception, which has accessor methods since as per Rule 5 such exceptions should be used for recoverable exceptions.

/**
 * @author Nico
 */
public class ServiceOperationException extends Exception {

    private String suppliedApiKey;

    /**
     * Construct a ServiceOperationException.
     *
     * @param suppliedApiKey    the API key supplied to the service user upon successful registration
     */
    public ServiceOperationException(String suppliedApiKey) {

        // detail message
        super("Supplied API Key: " + suppliedApiKey);

        // capture for recovery purposes
        this.suppliedApiKey = suppliedApiKey;
    }

    public String getSuppliedApiKey() {
        return suppliedApiKey;
    }

    public void setSuppliedApiKey(String suppliedApiKey) {
        this.suppliedApiKey = suppliedApiKey;
    }
}

Rule 12: Ensure that your methods are failure atomic and if not document this fact in your API

Methods that are failure atomic leave an object in the state it was in prior to the method invocation in the event of failure. Either ensure parameters have their validity checked before proceeding to the actual work (and modification) in a method invoked on a mutable object, or ensure the relevant class is made immutable.
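A minimal sketch of the validity-check-first approach (the Basket class is an invented example): because the argument is validated before any state change, a failed call leaves the object exactly as it was.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical mutable class whose add() method is failure atomic.
 */
public class Basket {

    private final List<String> items = new ArrayList<>();

    public void add(String item) {
        // Validity check happens BEFORE any state change, so a rejected
        // argument leaves the basket untouched (failure atomicity).
        if (item == null || item.isEmpty()) {
            throw new IllegalArgumentException("item must be non-empty");
        }
        items.add(item);
    }

    public int size() {
        return items.size();
    }
}
```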

Rule 13: Never ignore exceptions

From time to time you may see empty catch blocks, and in the worst case, catch blocks that catch java.lang.Exception (as listed in Tim McCune’s Exception-Handling Antipatterns) or java.lang.Throwable. This is a cardinal sin, and should never be done. At the very least, the exception should be logged (that is the entire stack trace should be logged) with an appropriate log level. In some cases, it may be justifiable to take no action, but in such cases, comments must justify why no action is taken.
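As a sketch of the bare minimum (this uses java.util.logging purely for a self-contained example; your project's logging framework would take its place): the exception is caught deliberately, logged with its full stack trace, and the fallback is an explicit, commented decision rather than a silent swallow.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * Illustrative example: a caught exception is logged, never ignored.
 */
public class NeverIgnoreExceptions {

    private static final Logger LOGGER =
        Logger.getLogger(NeverIgnoreExceptions.class.getName());

    public static int parseOrDefault(String value, int fallback) {
        try {
            return Integer.parseInt(value);
        } catch (NumberFormatException e) {
            // At the very least, log the full stack trace at an appropriate
            // level and justify the recovery action; never leave the catch
            // block silently empty.
            LOGGER.log(Level.WARNING, "Could not parse '" + value + "'", e);
            return fallback;
        }
    }
}
```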

RULE COMPLIANCE – RECOVERABLE TRANSIENT CONDITIONS

Some exceptions are associated with conditions that are possibly but not necessarily transient in nature, and so, when faced with them, the appropriate course of action is to automatically retry the operation. The Spring Batch Retry mechanism is geared for exactly such exceptions.

In terms of the rules, Rules 3, 4, 5, 6 and 7 are applicable, given our definition of the conditions as transient, but not with absolute certainty. In other words, given the lack of certainty as to whether the condition is recoverable, an unchecked exception should be used, as per Rules 3, 4 and 5. If we choose not to adhere to this rule, feel the condition is indeed recoverable, and use a checked exception as per Rule 5, then as per Rule 6 one should refactor into an unchecked exception with an accompanying state-checking method.

So, in short, we should use:

  • unchecked exception (if uncertain if recoverable) or if unrecoverability is a certainty
  • checked exception if recoverable
  • even better, unchecked exception with a state-checking method if recoverable

Database Layer

Consider the following exceptions, that one may see with a Spring / JPA / Hibernate stack:

  1. UnexpectedRollbackException (Spring)
  2. OptimisticLockException (JPA)
  3. LockAcquisitionException (Hibernate)

The decision to make all of these unchecked exceptions complies with the rules, given that within the context of a single transaction, the underlying condition is not recoverable. Within the context of retries, and so multiple transactions, the condition may indeed be recoverable due to the transient nature of the underlying conditions (a lock on a table will most likely not be permanently held). So, methods that throw these exceptions should firstly document that they throw these exceptions with @throws Javadoc (as per Rule 10) and then also document the fact that the user may wish to attempt retries (as per Rule 7).
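What Spring Batch Retry automates can be sketched with a naive hand-rolled retry loop; this is my own illustration of the retry-on-transient-condition idea, not Spring's API.

```java
/**
 * Naive retry sketch: treat any RuntimeException as possibly transient
 * (e.g. a briefly held table lock) and try again, up to a limit.
 */
public class RetryTemplate {

    public static <T> T withRetries(java.util.function.Supplier<T> operation,
                                    int maxAttempts) {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.get();
            } catch (RuntimeException e) {
                last = e; // possibly transient: try the operation again
            }
        }
        throw last; // all attempts failed: re-throw the final exception
    }
}
```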

Network Layer

Another example of an exception that may point to a transient condition is a SocketException, especially when considering its subclasses, BindException, ConnectException, NoRouteToHostException and PortUnreachableException. These exceptions are all checked exceptions, and this is incorrect since at the instant an associated method was called, recovery would not necessarily be possible, that is, recovery is not a certainty, so Rule 3 is violated. Rather, as per Rule 4, the exceptions should be unchecked and the transient nature of the exceptions should be documented as per Rule 7.

Key References

  1. Chapter 9 of Joshua Bloch’s (the author of java.lang.Throwable) Effective Java, Second Edition. If you don’t own a copy, buy one, you won’t regret it.
  2. Tim McCune’s Exception-Handling Antipatterns

Empty if() statements in your code – keep them or dump them?

When using the static Java code analysis tool PMD, one of the basic rules that you can violate is the empty if statement rule, which has a critical severity level by default. Why is there such a rule in the first place, and is the critical severity level warranted? As an avid user, or in PMD terms abuser, of empty if statements, and with no clear answer, I went and trawled the web for answers.

To start off with, PMD does offer the following advice on its Best Practices page:

Generally, pick the ones you like, and ignore or suppress the warnings you don’t like. It’s just a tool.

In the context of the subject matter of this post, that may be a hint to buy the official book, PMD Applied, which is fair enough. In any case, let’s get to the first obvious example of why the rule is warranted, and motivate why it should be a critical violation:

if (firstCode != null && firstCode.getCode() != null && firstCode.getCode().equals("101")) {
    doSomething();
} else {
    // TODO: process code "102"
}

Another version of the above is of course the version without the TODO comment:

if (firstCode != null && firstCode.getCode() != null && firstCode.getCode().equals("101")) {
    doSomething();
} else {

}

Both cases are clear problem areas, as the code is blatantly incomplete, and if your unit tests don’t pick it up, you certainly want your static code analyzer doing so. But what about the following:

if (firstCode != null && firstCode.getCode() != null && firstCode.getCode().equals("101")) {
    doSomething();
} else {
    // only supporting 101 in this release, 102 onwards to follow
}

In this example, some value is being added in terms of comments, and I had regularly been taking this approach until PMD forced me to think about it. I took the approach in part because, from what I can remember of compiler implementation theory, any given compiler, such as a Java bytecode compiler, would be clever enough to detect and remove the dead code.

As it turns out, from some limited research, this is not necessarily the case, since if it was, then shrinking tools such as ProGuard would surely not list “Remove unnecessary branches” as a byte code optimisation feature.

I managed to find very few opinions on this online, although the following was of interest. In the end, I concluded that the gain provided by the documentation in the final example does not add sufficient value to justify the risk of not completing a branch, and also, it bloats the code base. So, I have resolved to do away with the practice and to move such comments either above the if statement or into the Javadoc where appropriate.
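The resulting shape can be sketched in a minimal self-contained class (the class and its behaviour are invented for illustration): the empty else branch is gone and the explanatory comment has moved above the if statement.

```java
/**
 * Hypothetical refactoring of the final example above.
 */
public class CodeProcessor {

    private boolean processed;

    // Only supporting code "101" in this release; "102" onwards to follow.
    public void process(String code) {
        if ("101".equals(code)) {
            processed = true;
        }
    }

    public boolean isProcessed() {
        return processed;
    }
}
```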

Hiring A Software Product Owner – Key Candidate Measurement Criteria

Measurable Engineering Excellence Should Matter To Your Product Owner - Not Simplistic Perceptions of Hard Work

In a software project, the product owner is responsible for representing the interests of stakeholders and the business. The importance of this role cannot be overstated, since, put in a more matter-of-fact way, the product owner is responsible for the success of the product. In this post, key candidate measurement criteria, garnered from the experiences of the author, are presented. These will help you avoid the nightmare scenario where vast sums are paid devoid of any prior measures whatsoever, with purely subjective motivations such as “the guys have been working really hard” or “I have seen the user interface demo”.

Before proceeding to the key measurement criteria, we briefly address who the product owner may be and what the interests of the stakeholders and business may be.

Who Is The Product Owner?

The product owner, an individual, can either literally be the product owner or an appointed agent in a software development project using, as an example, a Scrum management framework. In a small to medium sized enterprise, the product owner may be an individual that has additional responsibilities, and in a startup, it is likely that a founder takes on the role, in which case, it would be investors that would in effect be ‘hiring’ their product owner.

Regardless of the structure of the enterprise, if the individual with the responsibility is not held accountable for failure, or on the flip side sufficiently rewarded for success, then in the absence of extreme luck you are doomed, if not in the short run, then most probably in the long run. Naturally, with accountability comes authority, and likewise, if your product owner does not formally have sufficient authority, then in the absence of extreme luck…

What Are The Interests Of The StakeHolders and Business?

The interests of stakeholders and the business are naturally variable, but we can at the very least presume that the business has a short term or long term goal of profitability. In other words, the success of the product can generally be measured at the very least in terms of growth in profitability. It is worth noting, and this is by no means piercing insight, that without formal project parameters presented as directives by the stakeholders and the business, in the absence of extreme luck…

Key Measurement Criteria

The following criteria are in no way negotiable, and admittedly, the first, and most important, is difficult to measure.

1. Passion For Both Software Engineering and Your Business Domain

Although passion is subjective, there are ways to measure at the very least interest levels when it comes to both software engineering and your business domain. In a general sense, a person without passion, especially in a key position, is likely to demotivate the rest of the team.

2. Commercial Software Development Experience

Elementary Filtering

If the candidate does not have any commercial experience whatsoever, no matter how good they look, whether on paper, physically or in terms of reputation, by hiring them into this key senior position you are taking a risk that is not worth taking. It may be repetitive, but again, if the candidate matches either of the following, do not hire them as your software product owner:

  • No commercial experience whatsoever – please note that working at a foundation or not for profit organization does not count as commercial experience.
  • No commercial software development experience – please note that university projects simply do not count, and again, developing software at a foundation or not for profit organization does not count as commercial experience.

Secondary Filtering

The candidate should ideally have software engineering experience in an environment with mature software engineering processes, and in the first instance, the candidate should have some feel for what immature and mature processes look like. It should be remembered that the product owner must look after the interests of all stakeholders. If, for example, quality and ease of maintenance are objectives, you do not want an individual who has no idea how to measure the many different facets of software quality and who, in the worst case, motivates for or authorises payment for a project based on some variation of how hard he or she believes the team has worked, or on a simple user interface demonstration.

Although the following set of concepts is by no means all-encompassing, the candidate should understand and have at least some knowledge of:

You will notice that there is a focus on measures above, and this is key. Software development can very quickly degrade into a subjective discipline if left devoid of measures of success, and an entirely subjective process is not engineering, it's flying by the seat of your pants. In short, there is simply no way a product owner can do their job without a strong software engineering background.
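To make this concrete, one measurable and enforceable facet of software quality is test code coverage, which can be gated directly in a Maven build via the jacoco-maven-plugin's check goal. The following is a minimal sketch; the plugin version and the 80% line coverage threshold are illustrative assumptions, not recommendations:

```xml
<!-- Illustrative jacoco-maven-plugin configuration: fail the build
     if overall line coverage drops below an assumed 80% threshold. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.11</version>
  <executions>
    <!-- Attach the JaCoCo agent so coverage data is recorded during tests -->
    <execution>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <!-- Enforce the coverage rule during the verify phase -->
    <execution>
      <id>check-coverage</id>
      <goals>
        <goal>check</goal>
      </goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>LINE</counter>
                <value>COVEREDRATIO</value>
                <minimum>0.80</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With a rule like this in place, `mvn verify` fails whenever coverage falls below the gate, turning an otherwise subjective quality conversation into a binary, measurable outcome that a product owner can actually hold a team to.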

3. Suitable Personality Traits

Meticulous And Streetwise Leader With Some Charisma

You are unlikely to see this as a bullet point in a job specification as it sounds a bit ridiculous, so let's cull the ‘Some Charisma’ requirement, leaving ‘meticulous, streetwise leader’. It still doesn't sound kosher enough, but let's get to the semantics regardless. When it comes to the leader portion, you can look at the candidate's CV for a track record in this regard, so we are left with the meticulous and streetwise requirements. When we say streetwise, it again comes down to experience on the software development and product development streets: if someone tries to pull a fast one (e.g. trying to leave an intellectual property clause out of a contract), your product owner should know the move.

Respectful To Specialist Engineers (And All Stakeholders)

Although the candidate should have a strong software engineering background, over time it is likely that there will be a widening gulf between the expertise of specialist software engineers and the generalist product owner.

You do not want someone who will belittle what they do not understand, and this happens time and time again, especially when the person does not have the required software engineering qualifications and experience to begin with. In the worst case scenario, your product owner will openly show contempt for, and blame failures on, technical team members and in particular developers, rather than investigate the root causes of problems and potential remedies (e.g. seeking the counsel of an independent engineering consultant). In this worst case scenario, your engineers will leave, and the better ones will go first.

Likely To Assume Responsibility For Failure

You want to avoid hiring the type of person who is likely to hide failure and not assume responsibility for it. Here we are entering the murky waters of professional integrity, and the potential for this post to degrade, ad nauseam, into a lecture on ethics. But on a serious note, how does one avoid hiring the kind of person who is likely to be dishonest, or, in a milder form, to report only successes and not known failures in order to protect their own interests?

When it comes to this topic, while one can rely on one's gut feel, industrial psychologists and psychometrists are best placed to advise. Cost might become a consideration here, but one can apply testing to the shortlist only, and the money spent here may be some of the best dollars you have ever spent. Based on our own basic research, role-play exercises, rather than questionnaires (e.g. Myers-Briggs), appear to be the most suitable tools, but again, it is best to consult the experts in this domain.