Metric definitions
This section explains, by category, the code metrics used in the Sonar solution to evaluate your code.
Complexity
The table below lists the complexity metrics used in the Sonar solution.
Metric
Metric key
Definition
Cyclomatic complexity
complexity
A quantitative metric used to calculate the number of paths through the code. See below.
Cognitive complexity
cognitive_complexity
A measure of how hard the code’s control flow is to understand. See the Cognitive Complexity white paper for a complete description of the mathematical model applied to compute this measure.
Both complexity metrics can be used in a quality gate condition on overall code.
Cyclomatic complexity
Cyclomatic complexity is a quantitative metric used to calculate the number of paths through the code. For a given "function" (depending on the language: a function, a method, a subroutine, etc.), the analyzer increments the function’s Cyclomatic complexity counter by one each time the function’s control flow splits into a new conditional branch. Each function has a minimum complexity of 1. The calculation formula is as follows:
cyclomaticComplexity = 1 + numberOfConditionalBranches
The split detection is explained below by language.
The overall code’s Cyclomatic complexity is the sum of all complexity scores calculated at the function level. For some languages, complexity outside functions is also taken into account.
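The formula above can be illustrated with a minimal sketch. It assumes, for illustration only, that each if, loop, and conditional expression counts as one split; the exact split detection varies by language and is not Sonar's implementation:

```python
# Illustrative sketch, not Sonar's analyzer: counting a function's
# decision points by hand to derive its Cyclomatic complexity.

def classify(n):
    if n < 0:                  # +1: the "if" adds a conditional branch
        return "negative"
    for _ in range(n):         # +1: the loop adds a conditional branch
        pass
    return "zero" if n == 0 else "positive"  # +1: conditional expression

# cyclomaticComplexity = 1 + numberOfConditionalBranches
cyclomatic_complexity = 1 + 3  # -> 4 for classify()
```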
Coverage
The table below lists the coverage metrics used in the Sonar solution.
Metric
Metric key
Definition
Condition coverage
branch_coverage
On each line of code containing boolean expressions, condition coverage answers the following question: ‘Has each boolean expression been evaluated both to true and to false?’. This is the density of possible conditions in flow control structures that have been followed during unit test execution.
conditionCoverage = (CT + CF) / (2*B)
where:
• CT = conditions that have been evaluated to ‘true’ at least once
• CF = conditions that have been evaluated to ‘false’ at least once
• B = total number of conditions
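A minimal numeric sketch of this formula, using assumed counts for a file containing 4 boolean conditions:

```python
# Hedged numeric example of the condition-coverage formula; the counts
# below are assumed values, not output from a real analysis.
CT = 3  # conditions evaluated to 'true' at least once
CF = 2  # conditions evaluated to 'false' at least once
B = 4   # total number of conditions
condition_coverage = (CT + CF) / (2 * B)  # 5/8 = 0.625, i.e. 62.5%
```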
Condition coverage on new code
new_branch_coverage
This definition is identical to Condition coverage but is restricted to new/updated source code.
Condition coverage hits
branch_coverage_hits_data
A list of covered conditions.
Conditions by line
conditions_by_line
The number of conditions by line.
Coverage
coverage
A mix of Line coverage and Condition coverage. Its goal is to provide an even more accurate answer to the question ‘How much of the source code has been covered by the unit tests?’.
coverage = (CT + CF + LC) / (2*B + EL)
where:
• CT = conditions that have been evaluated to ‘true’ at least once
• CF = conditions that have been evaluated to ‘false’ at least once
• LC = covered lines = linesToCover - uncoveredLines
• B = total number of conditions
• EL = total number of executable lines (linesToCover)
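A minimal numeric sketch of this formula, using assumed counts consistent with the definitions above:

```python
# Hedged numeric example of the coverage formula; all counts are
# assumed values, not output from a real analysis.
CT, CF, B = 3, 2, 4                    # condition outcomes, as defined above
lines_to_cover = 10                    # executable lines
uncovered_lines = 2
LC = lines_to_cover - uncovered_lines  # covered lines = 8
EL = lines_to_cover                    # EL = 10
coverage = (CT + CF + LC) / (2 * B + EL)  # 13/18, roughly 72.2%
```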
Coverage on new code
new_coverage
This definition is identical to Coverage but is restricted to new/updated source code.
Line coverage
line_coverage
On a given line of code, Line coverage simply answers the question ‘Has this line of code been executed during the execution of the unit tests?’. It is the density of lines covered by unit tests:
Line coverage = LC / EL
where:
• LC = covered lines ( linesToCover - uncoveredLines )
• EL = total number of executable lines (Lines to cover)
Line coverage on new code
new_line_coverage
This definition is identical to Line coverage but restricted to new/updated source code.
Line coverage hits
coverage_line_hits_data
A list of covered lines.
Lines to cover
lines_to_cover
Coverable lines. The number of lines of code that could be covered by unit tests (for example, blank lines or full comment lines are not considered as lines to cover). Note that this metric is about what is possible, not what is left to do (that’s Uncovered lines).
Lines to cover on new code
new_lines_to_cover
This definition is identical to Lines to cover but restricted to new/updated source code.
Skipped unit tests
skipped_tests
The number of skipped unit tests.
Uncovered conditions
uncovered_conditions
The number of conditions that are not covered by unit tests.
Uncovered conditions on new code
new_uncovered_conditions
This definition is identical to Uncovered conditions but restricted to new/updated source code.
Uncovered lines
uncovered_lines
The number of lines of code that are not covered by unit tests.
Uncovered lines on new code
new_uncovered_lines
This definition is identical to Uncovered lines but restricted to new/updated source code.
Unit tests
tests
The number of unit tests.
Unit tests duration
test_execution_time
The time required to execute all the unit tests.
Unit test errors
test_errors
The number of unit tests that have failed.
Unit test failures
test_failures
The number of unit tests that have failed with an unexpected exception.
Unit test success density (%)
test_success_density
unitTestSuccessDensity = (unitTests - (unitTestErrors + unitTestFailures)) / (unitTests) * 100
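A minimal numeric sketch of the success density formula, using assumed counts:

```python
# Hedged numeric example of unit test success density; the counts
# below are assumed values, not output from a real test run.
unit_tests = 50
unit_test_errors = 2
unit_test_failures = 3
success_density = (unit_tests - (unit_test_errors + unit_test_failures)) / unit_tests * 100  # 90.0%
```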
Most of the coverage metrics can be used in a quality gate condition.
Duplications
The table below lists the duplication metrics used in the Sonar solution.
Metric
Metric key
Definition
Duplicated blocks
duplicated_blocks
The number of duplicated blocks of lines.
For a block of code to be considered as duplicated:
• Non-Java projects: there should be at least 100 successive and duplicated tokens, spread over at least:
  • 30 lines of code for COBOL
  • 20 lines of code for ABAP
  • 10 lines of code for other languages
• Java projects: there should be at least 10 successive and duplicated statements, whatever the number of tokens and lines. Differences in indentation and in string literals are ignored when detecting duplications.
Duplicated files
duplicated_files
The number of files involved in duplications.
Duplicated lines
duplicated_lines
The number of lines involved in duplications.
Duplicated lines (%)
duplicated_lines_density
The duplicated lines density, calculated using the following formula:
duplicated_lines / lines * 100
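A minimal numeric sketch of this formula, using assumed counts:

```python
# Hedged numeric example of duplicated lines density; the counts
# below are assumed values, not output from a real analysis.
duplicated_lines = 120
lines = 2400
duplicated_lines_density = duplicated_lines / lines * 100  # 5.0%
```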
The duplication metrics can be used in a quality gate condition.
Issues
The table below lists the issue metrics used in the Sonar solution.
Metric
Metric key
Definition
New issues
new_violations
The total number of issues raised for the first time on new code.
Issues
violations
The total number of issues in all states.
False positive issues
false_positive_issues
The total number of issues marked as False positive.
Open issues
open_issues
The total number of issues in the Open status.
Accepted issues
accepted_issues
The total number of issues marked as Accepted.
All issue metrics except New issues can be used in a quality gate condition on overall code.
Maintainability
The table below lists the maintainability metrics used in the Sonar solution.
Metric
Metric key
Definition
Issues
code_smells
The total number of issues impacting maintainability (maintainability issues).
New issues
new_code_smells
The total number of maintainability issues raised for the first time on new code.
Technical debt
sqale_index
A measure of effort to fix all maintainability issues. See below.
Technical debt on new code
new_technical_debt
A measure of effort to fix the maintainability issues raised for the first time on new code. See below.
Technical debt ratio
sqale_debt_ratio
The ratio between the cost to develop the software and the cost to fix it. See below.
Technical debt ratio on new code
new_sqale_debt_ratio
The ratio between the cost to develop the code changed on new code and the cost of the issues linked to it. See below.
Maintainability rating
sqale_rating
The rating related to the value of the technical debt ratio. See below.
Maintainability rating on new code
new_maintainability_rating
The rating related to the value of the technical debt ratio on new code. See below.
All maintainability metrics can be used in a quality gate condition.
Quality gate
The table below lists the Quality gates metrics used in the Sonar solution.
Metric
Metric key
Definition
Quality gate status
alert_status
The state of the quality gate associated with your project.
Possible values are ERROR and OK.
Quality gate details
quality_gate_details
Status (failing or not) of each condition in the quality gate.
Reliability
The table below lists the reliability metrics used in the Sonar solution.
Metric
Metric key
Definition
Issues
bugs
The total number of issues impacting reliability (reliability issues).
New issues
new_bugs
The total number of reliability issues raised for the first time on new code.
Reliability rating
reliability_rating
Rating related to reliability. The rating grid is as follows:
• A = 0 bugs
• B = at least one minor bug
• C = at least one major bug
• D = at least one critical bug
• E = at least one blocker bug
Reliability remediation effort
reliability_remediation_effort
The effort to fix all reliability issues. The remediation cost of an issue is taken over from the effort (in minutes) assigned to the rule that raised the issue (see Technical debt above).
An 8-hour day is assumed when values are shown in days.
Reliability remediation effort on new code
new_reliability_remediation_effort
The same as Reliability remediation effort but on new code.
All reliability metrics can be used in a quality gate condition.
Security
The table below lists the security metrics used in the Sonar solution.
Metric
Metric key
Definition
Issues on new code
new_vulnerabilities
The total number of vulnerabilities raised for the first time on new code.
Security rating
security_rating
Rating related to security. The rating grid is as follows:
• A = 0 vulnerabilities
• B = at least one minor vulnerability
• C = at least one major vulnerability
• D = at least one critical vulnerability
• E = at least one blocker vulnerability
Security remediation effort
security_remediation_effort
The effort to fix all vulnerabilities. The remediation cost of an issue is taken over from the effort (in minutes) assigned to the rule that raised the issue (see Technical debt above).
An 8-hour day is assumed when values are shown in days.
Security remediation effort on new code
new_security_remediation_effort
The same as Security remediation effort but on new code.
Security hotspots on new code
new_security_hotspots
The number of security hotspots on new code.
Security hotspots reviewed
security_hotspots_reviewed
The percentage of reviewed security hotspots in relation to the total number of security hotspots.
New security hotspots reviewed
new_security_hotspots_reviewed
The percentage of reviewed security hotspots on new code.
Security review rating
security_review_rating
The security review rating is a letter grade based on the percentage of reviewed security hotspots. Note that security hotspots are considered reviewed if they are marked as Acknowledged, Fixed, or Safe.
The rating grid is as follows:
• A = >= 80%
• B = >= 70% and < 80%
• C = >= 50% and < 70%
• D = >= 30% and < 50%
• E = < 30%
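The rating grid above can be sketched as a small function (an illustration of the thresholds, not Sonar's implementation):

```python
def security_review_rating(reviewed_pct):
    # Maps the percentage of reviewed security hotspots to a letter
    # grade, following the thresholds in the rating grid above.
    if reviewed_pct >= 80:
        return "A"
    if reviewed_pct >= 70:
        return "B"
    if reviewed_pct >= 50:
        return "C"
    if reviewed_pct >= 30:
        return "D"
    return "E"
```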
Security review rating on new code
new_security_review_rating
The security review rating for new code.
All security metrics can be used in a quality gate condition except the Security hotspots metrics.
Size
The table below lists the size metrics used in the Sonar solution.
Metric
Metric key
Definition
Classes
classes
The number of classes (including nested classes, interfaces, enums, and annotations).
Comment lines
comment_lines
The number of lines containing either comment or commented-out code. See below for calculation details.
Comments (%)
comment_lines_density
The comment lines density. It is calculated based on the following formula:
[commentLines / (NumberOfLinesOfCode + commentLines)] * 100
Examples:
• 50% means that the number of lines of code equals the number of comment lines.
• 100% means that the file only contains comment lines.
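A minimal numeric sketch of the density formula, using assumed counts that reproduce the 50% case above:

```python
# Hedged numeric example of comment-line density; the counts below
# are assumed values, not output from a real analysis.
lines_of_code = 200
comment_lines = 200
comment_lines_density = comment_lines / (lines_of_code + comment_lines) * 100  # 50.0%
```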
Files
files
The number of files.
Lines
lines
The number of physical lines (number of carriage returns).
Lines of code
ncloc
The number of physical lines that contain at least one character which is neither a whitespace nor a tabulation nor part of a comment.
Lines of code per language
ncloc_language_distribution
The non-commented lines of code distributed by language.
Functions
functions
The number of functions. Depending on the language, a function is defined as either a function, a method, or a paragraph.
Language-specific details:
• COBOL: It’s the number of paragraphs.
• Java: Methods in anonymous classes are ignored.
• VB.NET: Accessors are not considered to be methods.
Projects
projects
The number of projects in a portfolio.
Statements
statements
The number of statements.
Most of the size metrics can be used in a quality gate condition.
Comment lines
Non-significant comment lines (empty comment lines, comment lines containing only special characters, etc.) do not increase the number of comment lines.
The following piece of code contains 9 comment lines:
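As an illustrative sketch (assuming Python-style # comments; the counting function is a toy approximation, not Sonar's implementation), the sample source below contains 9 significant comment lines, while its bare # line is non-significant and does not count:

```python
# Toy approximation of comment-line counting: a comment line is
# treated as significant if its comment contains an alphanumeric
# character. This is NOT Sonar's actual implementation.
SAMPLE = '''\
# Compute the nth Fibonacci number.
#
# Args:
#     n: the index, starting at 0.
# Returns:
#     the nth Fibonacci number.
def fib(n):
    a, b = 0, 1          # start of the sequence
    for _ in range(n):   # iterate n times
        a, b = b, a + b  # shift the window
    return b if n else a # n == 0 is a special case
'''

def count_comment_lines(source):
    count = 0
    for line in source.splitlines():
        _, sep, comment = line.partition("#")
        if sep and any(ch.isalnum() for ch in comment):
            count += 1
    return count

print(count_comment_lines(SAMPLE))  # 9: the bare "#" line is not significant
```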
In addition:
• For COBOL: generated lines of code and pre-processing instructions (SKIP1, SKIP2, SKIP3, COPY, EJECT, REPLACE) are not counted as lines of code.
• For Java: file headers are not counted as comment lines (because they usually define the license).