
Understanding project measures and metrics


This document describes the code metrics used in the Sonar solution. You can find these metrics in the Measures tab of your project. If you are looking for SonarQube API endpoints, you will find them in SonarQube Cloud's help menu.

Project measures

A list of security, security review, and security hotspot metrics used in the Sonar solution.

Security

  • Vulnerabilities (vulnerabilities): The total number of security issues (also called vulnerabilities).
  • Vulnerabilities on new code (new_vulnerabilities): The total number of vulnerabilities raised for the first time on new code.
  • Security rating (security_rating): Rating related to security. The rating grid is as follows:
    • A = 0 vulnerabilities
    • B = at least one minor vulnerability
    • C = at least one major vulnerability
    • D = at least one critical vulnerability
    • E = at least one blocker vulnerability
  • Security rating on new code (new_security_rating): Rating related to security on new code.
  • Security remediation effort (security_remediation_effort): The effort to fix all vulnerabilities. The remediation cost of an issue is taken over from the effort (in minutes) assigned to the rule that raised the issue (see Technical debt below). An 8-hour day is assumed when values are shown in days.
  • Security remediation effort on new code (new_security_remediation_effort): The same as Security remediation effort but on new code.
  • Security hotspots (security_hotspots): The number of security hotspots.
  • Security hotspots on new code (new_security_hotspots): The number of security hotspots on new code.
  • Security hotspots reviewed (security_hotspots_reviewed): The percentage of reviewed security hotspots in relation to the total number of security hotspots.
  • New security hotspots reviewed (new_security_hotspots_reviewed): The percentage of reviewed security hotspots on new code.
  • Security review rating (security_review_rating): A letter grade based on the percentage of reviewed security hotspots. Note that security hotspots are considered reviewed if they are marked as Acknowledged, Fixed, or Safe. The rating grid is as follows:
    • A = ≥ 80%
    • B = ≥ 70% and < 80%
    • C = ≥ 50% and < 70%
    • D = ≥ 30% and < 50%
    • E = < 30%
  • Security review rating on new code (new_security_review_rating): The security review rating for new code.

All security metrics can be used in a quality gate condition except the Security Hotspots metrics.
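For illustration, the security review rating grid can be expressed as a small function. This is a hypothetical helper, not part of SonarQube:

```python
# Hypothetical helper (not part of SonarQube) expressing the
# security review rating grid above.
def security_review_rating(reviewed_pct: float) -> str:
    if reviewed_pct >= 80:
        return "A"
    if reviewed_pct >= 70:
        return "B"
    if reviewed_pct >= 50:
        return "C"
    if reviewed_pct >= 30:
        return "D"
    return "E"

print(security_review_rating(75))  # B
```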

Reliability

A list of reliability metrics used in the Sonar solution.

  • Bugs (bugs): The total number of issues impacting reliability (reliability issues).
  • Bugs on new code (new_bugs): The total number of reliability issues raised for the first time on new code.
  • Reliability rating (reliability_rating): Rating related to reliability. The rating grid is as follows:
    • A = 0 bugs
    • B = at least one minor bug
    • C = at least one major bug
    • D = at least one critical bug
    • E = at least one blocker bug
  • Reliability rating on new code (new_reliability_rating): Rating related to reliability on new code.
  • Reliability remediation effort (reliability_remediation_effort): The effort to fix all reliability issues. The remediation cost of an issue is taken over from the effort (in minutes) assigned to the rule that raised the issue (see Technical debt below). An 8-hour day is assumed when values are shown in days.
  • Reliability remediation effort on new code (new_reliability_remediation_effort): The same as Reliability remediation effort but on new code.

You can use all reliability metrics in a quality gate condition.

Maintainability

A list of maintainability metrics used in the Sonar solution.

  • Code smells (code_smells): The total number of issues impacting maintainability (maintainability issues).
  • Code smells on new code (new_code_smells): The total number of maintainability issues raised for the first time on new code.
  • Technical debt (sqale_index): A measure of the effort to fix all maintainability issues. See below.
  • Technical debt on new code (new_technical_debt): A measure of the effort to fix the maintainability issues raised for the first time on new code. See below.
  • Technical debt ratio (sqale_debt_ratio): The ratio between the technical debt (the cost to fix the software) and the cost to develop it. See below.
  • Technical debt ratio on new code (new_sqale_debt_ratio): The ratio between the cost of the issues linked to new code and the cost to develop the code changed on new code. See below.
  • Maintainability rating (sqale_rating): The rating related to the value of the technical debt ratio. See below.
  • Maintainability rating on new code (new_maintainability_rating): The rating related to the value of the technical debt ratio on new code. See below.

All maintainability metrics can be used in a quality gate condition.

Technical debt

The technical debt is the sum of the maintainability issue remediation costs. An issue remediation cost is the effort (in minutes) evaluated to fix the issue. It is taken over from the effort assigned to the rule that raised the issue.

An 8-hour day is assumed when the technical debt is shown in days.

Technical debt ratio

The technical debt ratio is the ratio between the technical debt (the cost to fix the software) and the cost to develop it. It is calculated based on the following formula:

sqale_debt_ratio = technical debt / (cost to develop one line of code * number of lines of code)

Where the cost to develop one line of code is predefined in the database (by default, 30 minutes).

Example:

  • Technical debt: 122,563 minutes
  • Number of lines of code: 63,987
  • Cost to develop one line of code: 30 minutes
  • Technical debt ratio: 6.4%
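Plugging the example numbers into the formula gives the stated ratio (a quick sanity check, not SonarQube code):

```python
# Worked example from above: all quantities in the units stated there.
technical_debt = 122_563   # total remediation effort, in minutes
ncloc = 63_987             # number of lines of code
cost_per_line = 30         # minutes to develop one line (the default)

debt_ratio = technical_debt / (cost_per_line * ncloc) * 100
print(f"{debt_ratio:.1f}%")  # 6.4%
```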

Maintainability rating

The default Maintainability rating scale (sqale_rating) is:

  • A: ≤ 5%
  • B: > 5% and < 10%
  • C: ≥ 10% and < 20%
  • D: ≥ 20% and < 50%
  • E: ≥ 50%
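The scale can be sketched as a small function. This is a hypothetical helper, not part of SonarQube:

```python
# Hypothetical helper (not part of SonarQube) expressing the default
# maintainability rating scale above.
def maintainability_rating(debt_ratio_pct: float) -> str:
    if debt_ratio_pct <= 5:
        return "A"
    if debt_ratio_pct < 10:
        return "B"
    if debt_ratio_pct < 20:
        return "C"
    if debt_ratio_pct < 50:
        return "D"
    return "E"

print(maintainability_rating(6.4))  # B: the 6.4% ratio from the example above
```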

Coverage

A list of coverage metrics used in the Sonar solution.

  • Coverage (coverage): A mix of line coverage and condition coverage. Its goal is to provide a more accurate answer to the question: How much of the source code has been covered by unit tests?

    coverage = (CT + LC) / (B + EL)

    where:
    • CT = conditions that have been evaluated to true at least once
    • LC = covered lines = lines_to_cover - uncovered_lines
    • B = total number of conditions
    • EL = total number of executable lines (lines_to_cover)

  • Coverage on new code (new_coverage): This definition is identical to Coverage but is restricted to new or updated source code.
  • Lines to cover (lines_to_cover): Coverable lines. The number of lines of code that could be covered by unit tests; for example, blank lines or full comment lines are not considered lines to cover. Note that this metric is about what is possible, not what is left to do (that is Uncovered lines).
  • Lines to cover on new code (new_lines_to_cover): This definition is identical to Lines to cover but restricted to new or updated source code.
  • Uncovered lines (uncovered_lines): The number of lines of code that are not covered by unit tests.
  • Uncovered lines on new code (new_uncovered_lines): This definition is identical to Uncovered lines but restricted to new or updated source code.
  • Line coverage (line_coverage): On a given line of code, line coverage simply answers the question: Has this line of code been executed during the execution of the unit tests? It is the density of lines covered by unit tests:

    line_coverage = LC / EL

    where:
    • LC = covered lines = lines_to_cover - uncovered_lines
    • EL = total number of executable lines (lines_to_cover)

  • Line coverage on new code (new_line_coverage): This definition is identical to Line coverage but restricted to new or updated source code.
  • Line coverage hits (coverage_line_hits_data): A list of covered lines.
  • Condition coverage (branch_coverage): Condition coverage answers the following question on each line of code containing boolean expressions: Has each boolean expression been evaluated both to true and to false? It is the density of possible conditions in flow control structures that have been followed during unit test execution:

    branch_coverage = (CT + CF) / (2 * B)

    where:
    • CT = conditions that have been evaluated to true at least once
    • CF = conditions that have been evaluated to false at least once
    • B = total number of conditions

  • Condition coverage on new code (new_branch_coverage): This definition is identical to Condition coverage but is restricted to new or updated source code.
  • Condition coverage hits (branch_coverage_hits_data): A list of covered conditions.
  • Conditions by line (conditions_by_line): The number of conditions by line.
  • Covered conditions by line (covered_conditions_by_line): The number of covered conditions by line.
  • Uncovered conditions (uncovered_conditions): The number of conditions that are not covered by unit tests.
  • Uncovered conditions on new code (new_uncovered_conditions): This definition is identical to Uncovered conditions but restricted to new or updated source code.
  • Unit tests (tests): The number of unit tests.
  • Unit test errors (test_errors): The number of unit tests that have failed.
  • Unit test failures (test_failures): The number of unit tests that have failed with an unexpected exception.
  • Skipped unit tests (skipped_tests): The number of skipped unit tests.
  • Unit tests duration (test_execution_time): The time required to execute all the unit tests.
  • Unit test success density (%) (test_success_density): test_success_density = (tests - (test_errors + test_failures)) / tests * 100
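The coverage formulas above can be sanity-checked with a short sketch. The numbers are illustrative and the helper name is hypothetical:

```python
# Illustrative numbers; helper name is hypothetical.
def coverage_metrics(lines_to_cover, uncovered_lines, b, ct, cf):
    """Line, condition, and overall coverage (percent), per the formulas above."""
    lc = lines_to_cover - uncovered_lines             # LC: covered lines
    line_cov = lc / lines_to_cover * 100              # LC / EL
    branch_cov = (ct + cf) / (2 * b) * 100            # (CT + CF) / (2 * B)
    overall = (ct + lc) / (b + lines_to_cover) * 100  # (CT + LC) / (B + EL)
    return round(line_cov, 1), round(branch_cov, 1), round(overall, 1)

# 200 executable lines with 40 uncovered; 30 conditions,
# 24 evaluated true at least once, 18 evaluated false at least once.
print(coverage_metrics(200, 40, 30, 24, 18))  # (80.0, 70.0, 80.0)
```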

You can use most of the coverage metrics in a quality gate condition.

Duplications

A list of duplication metrics used in the Sonar solution.

  • Duplicated lines density (%) (duplicated_lines_density): Calculated using the following formula:

    duplicated_lines_density = duplicated_lines / lines * 100

  • Duplicated lines density (%) on new code (new_duplicated_lines_density): The same as Duplicated lines density but on new code.
  • Duplicated lines (duplicated_lines): The number of lines involved in duplications.
  • Duplicated lines on new code (new_duplicated_lines): The number of lines involved in duplications on new code.
  • Duplicated blocks (duplicated_blocks): The number of duplicated blocks of lines. For a block of code to be considered duplicated:
    • Non-Java projects: There should be at least 100 successive and duplicated tokens, spread over at least:
      • 30 lines of code for COBOL
      • 20 lines of code for ABAP
      • 10 lines of code for other languages
    • Java projects: There should be at least 10 successive and duplicated statements, whatever the number of tokens and lines. Differences in indentation and in string literals are ignored when detecting duplications.
  • Duplicated blocks on new code (new_duplicated_blocks): The number of duplicated blocks of lines on new code.
  • Duplicated files (duplicated_files): The number of files involved in duplications.
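As a quick check of the density formula, with illustrative numbers:

```python
# Illustrative numbers for the duplicated-lines density formula above.
duplicated_lines = 150
lines = 3_000
duplicated_lines_density = duplicated_lines / lines * 100
print(f"{duplicated_lines_density:.1f}%")  # 5.0%
```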

You can use the duplication metrics in a quality gate condition.

Size

A list of size metrics used in the Sonar solution.


  • New lines (new_lines): The number of physical lines on new code (number of carriage returns).
  • Lines of code (ncloc): The number of physical lines that contain at least one character which is neither a whitespace, nor a tabulation, nor part of a comment.
  • Lines (lines): The number of physical lines (number of carriage returns).
  • Statements (statements): The number of statements.
  • Functions (functions): The number of functions. Depending on the language, a function is defined as either a function, a method, or a paragraph. Language-specific details:
    • COBOL: It is the number of paragraphs.
    • Dart: Any function expression is included, whether it is the body of a function declaration, a method, a constructor, a getter, a top-level or nested function, or a top-level or nested lambda.
    • Java: Methods in anonymous classes are ignored.
    • VB.NET: Accessors are not considered to be methods.
  • Classes (classes): The number of classes (including nested classes, interfaces, enums, annotations, mixins, extensions, and extension types).
  • Files (files): The number of files.
  • Comment lines (comment_lines): The number of lines containing either a comment or commented-out code. See below for calculation details.
  • Comments (%) (comment_lines_density): The comment lines density, calculated based on the following formula:

    comment_lines_density = comment_lines / (comment_lines + ncloc) * 100

    Examples:
    • 50% means that the number of lines of code equals the number of comment lines.
    • 100% means that the file only contains comment lines.

  • Lines of code per language (ncloc_language_distribution): The non-commented lines of code distributed by language.
  • Projects (projects): The number of projects in a portfolio.
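As a check of the comment density calculation, taking ncloc (lines of code) as the non-comment line count so that both examples hold:

```python
# Sketch of the comment density calculation; ncloc is the
# non-comment line count, so both examples above hold.
def comment_lines_density(comment_lines: int, ncloc: int) -> float:
    return comment_lines / (comment_lines + ncloc) * 100

print(comment_lines_density(120, 120))  # 50.0: as many comment lines as code
print(comment_lines_density(42, 0))     # 100.0: file contains only comments
```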
Comment lines

Non-significant comment lines (empty comment lines, comment lines containing only special characters, etc.) do not increase the number of comment lines.

The following piece of code contains 9 comment lines:

/**                                            +0 => empty comment line
 *                                             +0 => empty comment line
 * This is my documentation                    +1 => significant comment
 * although I don't                            +1 => significant comment
 * have much                                   +1 => significant comment
 * to say                                      +1 => significant comment
 *                                             +0 => empty comment line
 ***************************                   +0 => non-significant comment
 *                                             +0 => empty comment line
 * blabla...                                   +1 => significant comment
 */                                            +0 => empty comment line

/**                                            +0 => empty comment line
 * public String foo() {                       +1 => commented-out code
 *   System.out.println(message);              +1 => commented-out code
 *   return message;                           +1 => commented-out code
 * }                                           +1 => commented-out code
 */                                            +0 => empty comment line

In addition:

  • For COBOL: Generated lines of code and pre-processing instructions (SKIP1, SKIP2, SKIP3, COPY, EJECT, REPLACE) are not counted as lines of code.
  • For Java and Dart: File headers are not counted as comment lines (because they usually define the license).

You can use the size metrics in a quality gate condition.

Complexity

Complexity metrics used in the Sonar solution.

  • Cyclomatic complexity (complexity): A quantitative metric used to calculate the number of paths through the code. See below.
  • Cognitive complexity (cognitive_complexity): A qualification of how hard it is to understand the code's control flow. See the Cognitive Complexity white paper for a complete description of the mathematical model applied to compute this measure.

You can use both complexity metrics in a quality gate condition on overall code.

Cyclomatic complexity

Cyclomatic complexity is a quantitative metric used to calculate the number of paths through the code. The analyzer calculates this metric for a given function (depending on the language, it may be a function, a method, a subroutine, etc.) by incrementing the function's cyclomatic complexity counter by one each time the function's control flow splits, resulting in a new conditional branch. Each function has a minimum complexity of 1. The calculation formula is as follows:

Cyclomatic complexity = 1 + number of conditional branches

The overall code's cyclomatic complexity is the sum of all complexity scores calculated at the function level. In some languages, the complexity of external functions is additionally taken into account.
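To illustrate the counting rule, here is a rough Python sketch that approximates the formula by counting a few branch node types. It is not Sonar's analyzer, and the node list is deliberately incomplete:

```python
import ast

# Rough sketch: approximate cyclomatic complexity as
# 1 + number of conditional branches. Not Sonar's analyzer.
BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    branches = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.BoolOp):
            # 'a and b and c' contributes one branch per extra operand
            branches += len(node.values) - 1
        elif isinstance(node, BRANCH_NODES):
            branches += 1
    return 1 + branches

SRC = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""
print(cyclomatic_complexity(SRC))  # 3: one base path + two if branches
```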

Split detection by language:

ABAP

The ABAP analyzer calculates the cyclomatic complexity at the function level. It increments the cyclomatic complexity by one each time it detects one of the following keywords: 

  • AND
  • CATCH
  • DO
  • ELSEIF
  • IF
  • LOOP
  • LOOP AT
  • OR
  • PROVIDE
  • SELECT…ENDSELECT
  • TRY
  • WHEN
  • WHILE
C/C++/Objective-C

The C/C++/Objective-C analyzer calculates the cyclomatic complexity at function and coroutine levels. It increments the cyclomatic complexity by one each time it detects: 

  • A control statement such as: if, while, do while, for
  • A switch statement keyword such as: case, default
  • The && and || operators
  • The ? ternary operator 
  • A lambda expression definition
C#

The C# analyzer calculates the cyclomatic complexity at method and property levels. It increments the cyclomatic complexity by one each time it detects:

  • One of these function declarations: method, constructor, destructor, property, accessor, operator, or local function declaration
  • A conditional expression
  • A conditional access
  • A switch case or switch expression arm
  • An and/or pattern
  • One of these statements: do, for, foreach, if, while
  • One of these expressions: ??, ??=, ||, or && 
COBOL

The COBOL analyzer calculates the cyclomatic complexity at paragraph, section, and program levels. It increments the cyclomatic complexity by one each time it detects one of these commands (except when they are used in a copybook): 

  • ALSO
  • ALTER
  • AND
  • DEPENDING
  • END-OF-PAGE
  • ENTRY
  • EOP
  • EXCEPTION
  • EXEC CICS HANDLE
  • EXEC CICS LINK
  • EXEC CICS XCTL
  • EXEC CICS RETURN
  • EXIT
  • GOBACK
  • IF
  • INVALID
  • OR
  • OVERFLOW
  • SIZE
  • STOP
  • TIMES
  • UNTIL
  • USE
  • VARYING
  • WHEN
Dart

The Dart analyzer calculates the cyclomatic complexity for:

  • top-level functions
  • top-level function expressions (lambdas)
  • methods 
  • accessors (getters and setters)
  • constructors

It increments the complexity by one for each of the structures listed above. It doesn't increment the complexity for nested function declarations or expressions.

In addition, the count is incremented by one for each: 

  • short-circuit binary expression or logical pattern (&&, ||, ??)
  • if-null assignment (??=)
  • conditional expression (?:)
  • null-aware operator (?[, ?., ?.., ...?)
  • propagating cascade (a?..b..c)
  • if statement or collection if
  • loop (for, while, do, and collection for)
  • case or pattern in a switch statement or expression

Java

The Java analyzer calculates the cyclomatic complexity at the method level. It increments the Cyclomatic complexity by one each time it detects one of these keywords: 

  • if
  • for
  • while
  • case
  • &&
  • ||
  • ?
  • ->
JS/TS, PHP

The JS/TS analyzer calculates the cyclomatic complexity at the function level. The PHP analyzer calculates the cyclomatic complexity at the function and class levels. Both analyzers increment the cyclomatic complexity by one each time they detect:

  • A function (i.e., non-abstract and non-anonymous constructors, functions, procedures, or methods)
  • An if keyword
  • A short-circuit (AKA lazy) logical conjunction (&&)
  • A short-circuit (AKA lazy) logical disjunction (||)
  • A ternary conditional expression
  • A loop
  • A case clause of a switch statement
  • A throw or a catch statement
  • A goto statement (only for PHP)
PL/I

The PL/I analyzer increments the cyclomatic complexity by one each time it detects one of the following keywords: 

  • PROC
  • PROCEDURE
  • GOTO
  • GO TO
  • DO
  • IF
  • WHEN
  • |
  • !
  • |=
  • !=
  • &
  • &=
  • A DO statement with conditions (Type 1 DO statements are ignored)
PL/SQL

The PL/SQL analyzer calculates the cyclomatic complexity at the function and procedure level. It increments the cyclomatic complexity by one each time it detects:

  • The main PL/SQL anonymous block (not inner ones)
  • One of the following statements: 
    • CREATE PROCEDURE 
    • CREATE TRIGGER 
    • basic LOOP 
    • WHEN clause (the WHEN of simple CASE statement and searched CASE statement)
    • cursor FOR LOOP
    • CONTINUE / EXIT WHEN clause (The WHEN part of the CONTINUE and EXIT statements)
    • exception handler (every individual WHEN)
    • EXIT 
    • FOR LOOP
    • FORALL
    • IF
    • ELSIF
    • RAISE
    • WHILE LOOP
  • One of the following expressions:
    • AND expression (AND reserved word used within PL/SQL expressions)
    • OR expression (OR reserved word used within PL/SQL expressions)
    • WHEN clause expression (the WHEN of simple CASE expression and searched CASE expression)
VB.NET

The VB.NET analyzer calculates the cyclomatic complexity at function, procedure, and property levels. It increments the cyclomatic complexity by one each time it detects:

  • A method or constructor declaration (Sub, Function)
  • AndAlso
  • Case
  • Do
  • End
  • Error
  • Exit
  • For
  • For Each
  • GoTo
  • If
  • Loop
  • On Error
  • OrElse
  • Resume
  • Stop
  • Throw
  • Try
  • While

Issues

A list of issues metrics used in the Sonar solution.

  • Issues (violations): The number of issues in all states.
  • Issues on new code (new_violations): The number of issues raised for the first time on new code.
  • Accepted issues (accepted_issues): The number of issues marked as Accepted.
  • Open issues (open_issues): The number of issues in the Open status.
  • Accepted issues on new code (new_accepted_issues): The number of Accepted issues on new code.
  • False positive issues (false_positive_issues): The number of issues marked as False positive.
  • Blocker issues (software_quality_blocker_issues): Issues with the software quality Blocker severity level.
  • High issues (software_quality_high_issues): Issues with the software quality High severity level.
  • Medium issues (software_quality_medium_issues): Issues with the software quality Medium severity level.
  • Low issues (software_quality_low_issues): Issues with the software quality Low severity level.
  • Info issues (software_quality_info_issues): Issues with the software quality Info severity level.
  • Deprecated: Blocker issues (blocker_violations): Issues with a Blocker severity level.
  • Deprecated: Critical issues (critical_violations): Issues with a Critical severity level.
  • Deprecated: Major issues (major_violations): Issues with a Major severity level.
  • Deprecated: Minor issues (minor_violations): Issues with a Minor severity level.
  • Deprecated: Info issues (info_violations): Issues with an Info severity level.

Quality Gates

Quality gates metrics used in the Sonar solution.

  • Quality gate status (alert_status): The state of the quality gate associated with your project. Possible values are ERROR and OK.
  • Quality gate details (quality_gate_details): The status (failing or not) of each condition in the quality gate.


© 2008-2025 SonarSource SA. All rights reserved. SONAR, SONARSOURCE, SONARQUBE, and CLEAN AS YOU CODE are trademarks of SonarSource SA.
