Code coverage consists of a set of software metrics that can tell you how much of the production code is covered by a given test suite. It's purely quantitative, and does not say anything about the quality of either the production code or the test code. That said, the examination of code coverage reports will sometimes lead to the discovery of unreachable code which can be eliminated. But more importantly, such reports can be used as a guide for the discovery of missing tests. This is not only useful when creating tests for existing production code, but also when writing tests first, such as in the practice of TDD (Test Driven Development).
JMockit Coverage provides three different and complementary code coverage metrics: line coverage, path coverage, and data coverage. An example coverage report showing all metrics can be found on-line.
The line coverage metric tells us how much of the executable code in a source file has been exercised by tests. Each executable line of code can be uncovered, covered, or partially covered. In the first case, none of the executable code in it was executed at all. In the second, all of the code was fully executed at least once. In the third case, only part of the executable code in the line was executed. This can happen, for example, with lines of code containing multiple logical conditions in a complex boolean expression. JMockit Coverage identifies all three cases, computing the coverage percentage for each executable line of code accordingly: 0% for an uncovered line, 100% for a covered line, or some value in between for a partially covered line.
A branching point exists wherever the program makes a decision between two possible execution paths to follow. Any line of code containing a logical condition will be divided into at least two executable segments, each belonging to a separate branch. An executable line of source code with no branching points contains a single segment; lines with one or more branching points contain two or more executable segments, with consecutive segments in the line separated by the branching points.
Let's say that NS >= 1 is the number of executable segments in a given line. If NE is the number of segments in that line which were executed at least once during a test run (i.e., they are covered segments), then the coverage percentage for the line is 100 * NE / NS.
Similarly, the line coverage percentage for a whole source file is calculated from the total number of executable segments and the total number of covered segments, considering all executable lines of code in the file. The percentage for a package, in turn, is calculated from the total and covered numbers of segments in the whole set of source files belonging to the package. Finally, the total code coverage percentage is computed by the same formula on the totals for all packages.
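For illustration, consider the following hypothetical method (not taken from the JMockit documentation); the return line contains a logical condition, so it holds more than one executable segment:

public final class Discount
{
   public static int percentageFor(int quantity, boolean preferredCustomer)
   {
      // This line contains branching points (the "&&" and "?:" operators),
      // so it is split into multiple executable segments.
      return quantity > 10 && preferredCustomer ? 15 : 5;
   }
}

A test that only calls percentageFor(5, false) short-circuits the "&&" operator: the segments evaluating preferredCustomer and producing 15 never execute, so the line is reported as partially covered, with a percentage between 0% and 100% as given by the formula above.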
A completely different metric is path coverage, which is computed for method and constructor bodies, not for lines or segments of code. It tells us how many of the possible execution paths through a method or constructor, from entry to exit, have been executed at least once during the test run.
Note that each method or constructor has a single point of entry, but can have multiple exits. An exit occurs when a return or throw statement is executed; these are the normal exits. A method/constructor execution can also terminate abruptly, by propagating an exception (or error) thrown as a result of a method call, an attempt to access a null reference, or some other action which causes an unintended program failure. Each possible path can be either fully executed (covered) or not (uncovered). Paths that execute only partially (i.e., they were terminated abruptly) are simply considered uncovered.
The path coverage percentage for a method or constructor body is computed in a way similar to the line coverage computation. If NP is the number of possible paths through the implementation body and NPE is the number of paths executed from entry to exit, then the metric is computed as 100 * NPE / NP. As with the line coverage metric, this formula is extended to the whole source file, the whole package, and the whole set of packages touched by the test run.
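As a hypothetical illustration (not from the JMockit documentation), the following method has two possible execution paths from entry to exit; a test suite that only ever passes non-null values executes just one of them, yielding 50% path coverage for the method (NPE = 1, NP = 2):

public final class Validator
{
   public static String normalize(String value)
   {
      if (value == null) {   // branching point: creates a second possible path
         return "";          // path 1: early exit through this return
      }

      return value.trim();   // path 2: the "if" branch is not taken
   }
}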
The data coverage metric measures how many of the instance and static non-final fields were fully exercised by the test run. To be fully exercised, a field must have the last value assigned to it read by at least one test. The percentage is calculated as 100 * NFE / NF, where NF is the number of non-final fields and NFE is the number of fully exercised fields.
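For example (a hypothetical sketch, not from the official documentation), consider a class with a single non-final field:

public final class Counter
{
   private int count;   // non-final instance field considered by the data coverage metric

   public void increment() { count++; }    // assigns a new value to the field

   public int value() { return count; }    // reads the current value of the field
}

A test which calls increment() and then asserts on value() reads the last value assigned to count, so the field counts as fully exercised; a test that only calls increment() without ever reading the result would leave it unexercised for this metric.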
The JMockit Coverage tool can generate the following types of output:
HTML report: a multi-page HTML report presenting the coverage results over the actual source code. Sources are read from ".java" source files inside all directories of name "src" found directly or indirectly under the current working directory; any intermediate sub-directories between "src" and the top-level package directory, such as "src/java" for example, are also searched.
Serialized data file: a binary file of name "coverage.ser" is written under the current working directory or a specified output directory. If the file already exists, its contents are either overwritten or appended with the in-memory results of the current test run, as specified. The mockit.coverage.data.CoverageData.readDataFromFile(File) method will create a new CoverageData instance with all the coverage data available in a given serialized file. For more on this, refer to the API documentation available in jmockit-coverage.jar.
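A minimal sketch of reading such a file programmatically, assuming readDataFromFile is the static factory method suggested by the description above (the file path is illustrative):

import java.io.File;
import mockit.coverage.data.CoverageData;

public final class ReadCoverageData
{
   public static void main(String[] args) throws Exception
   {
      // Deserializes the coverage data gathered by a previous test run.
      CoverageData data = CoverageData.readDataFromFile(new File("coverage.ser"));
      System.out.println("Loaded coverage data: " + data);
   }
}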
When running a test suite with the coverage tool, there is optional "call point" information which can be gathered, as selected by the user. A call point is the point in the source test code from which a specific line of production code was exercised.
Generating coverage with this extra information takes more time and produces significantly larger output; on the other hand, it can be useful to know which lines of test code caused a given line of production code to be executed during the test run. When included in the HTML report, the list of call points appears hidden at first but can be easily viewed by clicking on each executable line of code.
To enable the JMockit Coverage tool in a JUnit/TestNG test run, add both jmockit.jar and jmockit-coverage.jar to the runtime classpath. With JUnit, make sure that jmockit.jar appears first in the classpath. (For more details on running tests with JMockit, see the corresponding section in the Tutorial.)
When not using the JMockit mocking APIs, code coverage can still be activated without adding any jar to the classpath. Instead, run with "-javaagent:<proper path>/jmockit-coverage.jar" as a JVM initialization parameter.
In most cases, the coverage tool does not require any additional configuration to be used. There are, however, several aspects of the tool's behavior which can optionally be configured for a given test run. This is done by setting one or more of several "jmockit-coverage-xyz" system properties for the JVM instance running the test suite. For convenience, the "jmockit-" prefix can be omitted, so "coverage-xyz" is also valid. Note that you should be able to easily specify these properties inside an Ant target, a Maven surefire plugin configuration, or a test run configuration for your Java IDE of choice, using either JUnit or TestNG; no JMockit-specific plugin is needed.
The available configuration properties are:
[jmockit-]coverage-output: one or more comma-separated values among html, html-nocp ("nocp" stands for "no call points"), serial, and serial-append, which select the kind of output to be generated at the end of the test run. The default, if none is specified, is to generate the basic HTML report (html-nocp). Specifying "serial" or "serial-append" causes a serialized data file of name "coverage.ser" to be generated; in the case of "serial-append", coverage data gathered by the current test run will be appended to the contents of a previously existing data file (if said file doesn't exist, it has the same effect as "serial").
[jmockit-]coverage-outputDir: absolute or relative path to the output directory, to be used for writing any "coverage.ser" or "index.html" files (plus the remaining ".html" files of the HTML report, in automatically created sub-directories). By default, the current working directory of the running JVM is used, with all ".html" files of the HTML report generated inside a "coverage-report" sub-directory.
[jmockit-]coverage-srcDirs: comma-separated list of Java source directories to be searched when generating an HTML report. (This is not relevant for the serialized data file.) Each directory is specified by an absolute or relative path. If no such directory is specified, all "src" directories under the current working directory are searched.
[jmockit-]coverage-classes: either an OS-like regular expression (with the typical "*" and "?" wildcards) or a java.util.regex-conformable regular expression. The given expression is used to select the classes (by fully qualified name) from production code which should be considered for coverage. By default, all classes in production code which get loaded during the test run and are not inside jar files are considered. For example, "some.package.*" selects all classes under some.package or any sub-package. If the special value "loaded" is specified, then all classes will be considered, but only those which get loaded by the JVM during the test run; classes that are part of the codebase but never get loaded are left out. This is very useful when the test run includes only a few tests, targeting only a subset of the codebase.
[jmockit-]coverage-excludes: the same as the previous property, but for class names which should be excluded from consideration when instrumenting classes for coverage. This property can be used together with coverage-classes or on its own. By default, none of the classes selected for coverage are excluded from consideration.
[jmockit-]coverage-metrics: one or more comma-separated values among line (the default), path, data, and all, which select the specific set of code coverage metrics to gather coverage information for.
[jmockit-]coverage-check: one or more semicolon-separated rules specifying minimum coverage checks to be performed at the end of a test run. By default, no such checks are performed. For details, see the Checking minimum coverage section.
When the coverage tool generates a report at the end of a test run, it always overwrites any previous report. Normally, the coverage data from which the report is generated reflects only what was gathered during the current test run. Now suppose you have multiple test suites or test run configurations, and you want to generate a single aggregated HTML report for the code covered by the full set of tests. Here is where the "coverage.ser" serialized data files come in.
To activate the generation of these files, we simply set the coverage-output system property to a value containing "serial" or "serial-append". As these two values suggest, there are different ways to combine multiple coverage data files. The following sub-sections provide the details for each case.
Suppose we want to gather coverage data from multiple test runs and later generate an aggregate HTML report merging together the results from all test runs. Each test run needs to generate its own coverage.ser file, so that later they can be merged together in a final step which produces the report; therefore, each test run should be configured with "coverage-output=serial". Note that, in order to preserve the original coverage.ser output files generated by each test run, they will need to be written or copied into different output directories.
Assuming that two or more coverage.ser files are available in separate directories, an aggregate report can be generated from them by executing the mockit.coverage.CodeCoverage.main method (a regular Java "main" method). To facilitate this, the jmockit-coverage.jar file is executable.
As an example, the following Ant task could be used:
<java fork="yes" dir="myBaseDir" jar="jmockit-coverage.jar">
<jvmarg line="-Djmockit-coverage-output=html"/>
<arg line="module1-outDir anotherOutDir"/>
</java>
The example above uses "myBaseDir" as the base directory where a separate JVM instance will run. Two output directories containing "coverage.ser" data files are specified as command line arguments. Other configuration parameters can be specified through the "coverage-xyz" system properties. This separate JVM instance will read each of the "coverage.ser" data files, merge the coverage data in memory, and then generate the aggregate HTML report before exiting.
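Since CodeCoverage.main is a regular Java "main" method, the same aggregation step could also be triggered from a small launcher class, provided jmockit-coverage.jar is on its classpath. A minimal sketch follows; the directory names are illustrative (each is expected to contain a coverage.ser file), and setting the output property programmatically rather than with "-D" is an assumption about when the tool reads it:

public final class AggregateCoverageReport
{
   public static void main(String[] args) throws Exception
   {
      // Intended to be equivalent to passing -Djmockit-coverage-output=html on the command line.
      System.setProperty("jmockit-coverage-output", "html");

      // Reads the data files from the given directories, merges them, and writes the HTML report.
      mockit.coverage.CodeCoverage.main(new String[] {"module1-outDir", "anotherOutDir"});
   }
}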
Another way to obtain an aggregate coverage report from the execution of multiple test runs is to accumulate coverage data from all tests into a single data file. This can be achieved by using the same working directory for all test runs, or by pointing coverage-outputDir to a shared directory, while having coverage-output=serial-append for each test run. Additionally, the last test run in the sequence should also specify html or html-nocp for the coverage-output property, together with serial-append. Naturally, the first test run must not read data from this file; therefore, either the file should be deleted before the first test run, or ignored by having the first test run use coverage-output=serial.
So, the difference between output modes "serial" and "serial-append" is that with the first we have multiple "coverage.ser" files (each in a different directory used by a separate test run), while with the second we share a single data file between all test runs.
If desired, JMockit Coverage can check that the final coverage percentages at the end of a test run satisfy arbitrary minimum values. Such checks can be specified through one or more checking rules assigned to the "coverage-check" system property (when more than one, they must be separated by ";" characters).
Each checking rule must be in the form "[scope:]min line percentage[,min path percentage[,min data percentage]]". There are three types of scopes:
Total: when no scope is specified, the minimum percentages apply to the coverage totals of the whole test run. For example, "80" specifies that the total line coverage should be at least 80%, with no minimum percentages for the other metrics. An example specifying thresholds for all three metrics could be "70,60,85". Note that a value of "0" can also be used to specify no minimum.
perFile: specifies minimum percentages that each source file must satisfy. If one or more files end up with a lower percentage, the check fails. An example: "perFile:50,0,40", meaning that each source file must have at least 50% of line coverage and at least 40% of data coverage.
Package: a package name used as the scope restricts the check to the source files under that package and its sub-packages. For example, "com.important:90,70" specifies that total line coverage for files under "com.important" should be at least 90%, while total path coverage should be at least 70%.
All checks (if any) are performed at the end of the test run (at JVM shutdown, actually).
Other forms of output (HTML report, serialized file) are not affected.
When an individual check fails, a descriptive message is printed to standard output.
If one or more checks have failed, two final actions are taken to have the fact reported: first, an empty file of name "coverage.check.failed" is created in the current working directory; second, an error (specifically, an AssertionError) is thrown. When checks are performed but they all pass, the "coverage.check.failed" file, if present in the current directory, is deleted.
The use of a file to mark the success or failure of coverage checks is meant to allow build tools to react accordingly, typically by failing the build when the file is present. For example, we can do the following in an Ant build script:
<fail message="Coverage check failed">
<condition><available file="coverage.check.failed"/></condition>
</fail>
Or the following in a Maven pom.xml file:
<plugin>
<artifactId>maven-enforcer-plugin</artifactId>
<executions>
<execution>
<id>coverage.check</id>
<goals><goal>enforce</goal></goals>
<phase>test</phase>
<configuration>
<rules>
<requireFilesDontExist>
<files><file>coverage.check.failed</file></files>
</requireFilesDontExist>
</rules>
</configuration>
</execution>
</executions>
</plugin>
If you run tests with Maven's "test" goal, you will need the following dependencies in the pom.xml file (assuming the "jmockit.version" property has been properly defined):
<dependency>
<groupId>org.jmockit</groupId>
<artifactId>jmockit</artifactId>
<version>${jmockit.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jmockit</groupId>
<artifactId>jmockit-coverage</artifactId>
<version>${jmockit.version}</version>
<scope>runtime</scope>
</dependency>
In Maven 2/3, the surefire plugin is the one usually responsible for actually running tests. To configure the coverage tool, specify values for the appropriate "coverage-xyz" system properties. For example, the output directory for generated files can be specified through the coverage-outputDir property.
<plugin>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<systemPropertyVariables>
<coverage-outputDir>target/my-coverage-report</coverage-outputDir>
<!-- other properties, if needed -->
</systemPropertyVariables>
</configuration>
</plugin>
Finally, if the tests don't actually use the JMockit mocking APIs, it's still possible to use the coverage tool. In this case, the only dependency needed is the one on "jmockit-coverage". Additionally, it's necessary to configure the surefire plugin as follows:
<plugin>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<argLine>
-javaagent:"${settings.localRepository}"/org/jmockit/jmockit-coverage/${jmockit.version}/jmockit-coverage-${jmockit.version}.jar
<!-- coverage properties, if any are needed -->
</argLine>
</configuration>
</plugin>
To have the JMockit Coverage HTML report included in the generated Maven site documentation, the src/site/site.xml descriptor file needs to be provided, with contents similar to what's shown below.
<?xml version="1.0" encoding="UTF-8"?>
<project
xmlns="http://maven.apache.org/DECORATION/1.3.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/DECORATION/1.3.0
http://maven.apache.org/xsd/decoration-1.3.0.xsd">
<body>
<menu ref="reports"/>
<menu>
<item name="Code Coverage Report" href="../../coverage-report/index.html"/>
</menu>
</body>
</project>
Sometimes we want to turn coverage output off for a particular test run, without having to remove the coverage jar from the classpath. This can be done in two different ways.
For one, we can manipulate the read-only attribute of the relevant output file, when one has already been generated. The particular file to be manipulated, always in the working directory, is "coverage.ser" for serialized output or "coverage-report/index.html" for HTML output. The file attribute is checked by JMockit at startup; when marked as read-only it cannot be overwritten, so JMockit avoids the attempt entirely.
Note that the working directory can usually be selected separately for each test run configuration in the Java IDE. Also, a Java IDE usually provides an easy mechanism to toggle the read-only status of a file in the project: in IntelliJ IDEA it is done by double clicking the status bar, with the desired file opened in the editor; in Eclipse there is a "Read only" check box in the "Properties" screen (which can be opened by typing "Alt+Enter") for the text file selected in the editor.
Another way to switch coverage off is to simply set the coverage-output system property to an unknown output format, such as "-Dcoverage-output=none".
In previous sections we described the most typical way to use the coverage tool, by enabling it during a JUnit/TestNG test run to measure test coverage. The tool can also be used in a more general context, though: the standalone mode, where it can attach to any Java 6+ process to measure line and path coverage metrics on specified classes, regardless of which code is making calls into said classes.
To activate standalone mode, the target JVM instance must be started with the "-javaagent:<proper/path/>jmockit-coverage.jar" command line argument. That's it; none of the JMockit toolkit jars need to be present in the classpath of the target process. Initial configuration settings for the coverage tool can be specified through the "coverage-xyz" system properties previously described, but this is entirely optional; the configuration properties can be modified later through a dedicated UI.
Once the target process is running with the JMockit Coverage Java agent, the user can connect to it with a JMX client which can access arbitrary "MBeans". Usually, the standard JConsole tool available in a Java 6+ JDK will be used. The JMockit Coverage MBean provides several configuration properties (the same ones which can be set with "-D" on the command line), and one operation through which the desired output can be generated.
The user interface provided by JConsole is shown below, where the process running with the coverage tool is a Tomcat 7 server instance.
The configuration properties (shown as "Attributes" of the "CoverageControl" MBean above) are as before, except in the case of "SrcDirs" (which corresponds to coverage-srcDirs). If this property is not specified, no attempt is made to find source files for the classes considered for coverage.
In case the MBean UI is not used, coverage output will be generated at JVM shutdown, according to the configuration specified through coverage properties.