Metric definitions
This section explains the code metrics used in the Sonar solution by category and by instance mode.
Complexity
The table below lists the complexity metrics used in the Sonar solution.
Metric | Metric key | Definition |
---|---|---|
Cyclomatic complexity | complexity | A quantitative metric used to calculate the number of paths through the code. See below. |
Cognitive complexity | cognitive_complexity | A measure of how hard it is to understand the code's control flow. See the Cognitive Complexity white paper for a complete description of the mathematical model applied to compute this measure. |
Both complexity metrics can be used in a quality gate condition on overall code.
Cyclomatic complexity
Cyclomatic complexity is a quantitative metric used to calculate the number of paths through the code. The analyzer calculates the score of this metric for a given “function” (depending on the language, it may be a function, a method, a subroutine, etc.) by incrementing the function's Cyclomatic complexity counter by one each time the control flow of the function splits, creating a new conditional branch. Each function has a minimum complexity of 1. The calculation formula is as follows:
cyclomaticComplexity = 1 + numberOfConditionalBranches
The split detection is explained below by language.
The calculation of the overall code’s Cyclomatic complexity is basically the sum of all complexity scores calculated at the function level. For some languages, complexity outside functions is taken into account additionally.
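As an illustration of the counting rule, the sketch below approximates cyclomatic complexity from a function's syntax tree. This is a simplified model for illustration only, not the exact implementation of any Sonar analyzer:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of conditional branches."""
    complexity = 1  # every function has a minimum complexity of 1
    for node in ast.walk(ast.parse(source)):
        # Each control-flow split adds one conditional branch.
        if isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp,
                             ast.ExceptHandler)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' splits the flow len(values) - 1 times
            complexity += len(node.values) - 1
    return complexity

sample = """
def classify(x):
    if x > 0 and x < 10:               # if (+1), and (+1)
        return "small"
    for i in range(3):                 # for (+1)
        pass
    return "big" if x else "zero"      # conditional expression (+1)
"""
print(cyclomatic_complexity(sample))   # 1 + 4 = 5
```

The same idea applies per language: the analyzer walks the parse tree and counts the split points listed in the sections below.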
ABAP
The ABAP analyzer calculates the Cyclomatic complexity at function level. It increments the Cyclomatic complexity by one each time it detects one of the following keywords:
- AND
- CATCH
- DO
- ELSEIF
- IF
- LOOP
- LOOP AT
- OR
- PROVIDE
- SELECT…ENDSELECT
- TRY
- WHEN
- WHILE
C/C++/Objective-C
The C/C++/Objective-C analyzer calculates the Cyclomatic complexity at function and coroutine levels. It increments the Cyclomatic complexity by one each time it detects:
- A control statement such as: if, while, do while, for
- A switch statement keyword such as: case, default
- The && and || operators
- The ? ternary operator
- A lambda expression definition
Each time the analyzer scans a header file as part of a compilation unit, it computes for this header the measures: statements, functions, classes, Cyclomatic complexity, and Cognitive complexity. That means that each measure may be computed more than once for a given header. In that case, it stores the largest value for each measure.
C#
The C# analyzer calculates the Cyclomatic complexity at method and property levels. It increments the Cyclomatic complexity by one each time it detects:
- One of these function declarations: method, constructor, destructor, property, accessor, operator, or local function
- A conditional expression
- A conditional access
- A switch case or switch expression arm
- An and/or pattern
- One of these statements: do, for, foreach, if, while
- One of these expressions: ??, ??=, ||, or &&
COBOL
The COBOL analyzer calculates the Cyclomatic complexity at paragraph, section, and program levels. It increments the Cyclomatic complexity by one each time it detects one of these commands (except when they are used in a copybook):
- ALSO
- ALTER
- AND
- DEPENDING
- END-OF-PAGE
- ENTRY
- EOP
- EXCEPTION
- EXEC CICS HANDLE
- EXEC CICS LINK
- EXEC CICS XCTL
- EXEC CICS RETURN
- EXIT
- GOBACK
- IF
- INVALID
- OR
- OVERFLOW
- SIZE
- STOP
- TIMES
- UNTIL
- USE
- VARYING
- WHEN
Dart
The Dart analyzer calculates the Cyclomatic Complexity for:
- top-level functions
- top-level function expressions (lambdas)
- methods
- accessors (getters and setters)
- constructors
It increments the complexity by one for each of the structures listed above. It doesn't increment the complexity for nested function declarations or expressions.
In addition, the count is incremented by one for each:
- Short-circuit binary expression or logical pattern (&&, ||, ??)
- If-null assignment (??=)
- Conditional expression (?:)
- Null-aware operator (?[, ?., ?.., ...?)
- Propagating cascade (a?..b..c)
- if statement or collection if element
- Loop (for, while, do, and collection for)
- case clause or pattern in a switch statement or expression
Java
The Java analyzer calculates the Cyclomatic complexity at method level. It increments the Cyclomatic complexity by one each time it detects one of these keywords:
- if
- for
- while
- case
- &&
- ||
- ?
- ->
JS/TS, PHP
The JS/TS analyzer calculates the Cyclomatic complexity at function level. The PHP analyzer calculates the Cyclomatic complexity at function and class levels. Both analyzers increment the Cyclomatic complexity by one each time they detect:
- A function (i.e., non-abstract and non-anonymous constructors, functions, procedures, or methods)
- An if keyword
- A short-circuit (AKA lazy) logical conjunction (&&)
- A short-circuit (AKA lazy) logical disjunction (||)
- A ternary conditional expression
- A loop
- A case clause of a switch statement
- A throw or a catch statement
- A goto statement (PHP only)
PL/I
The PL/I analyzer increments the Cyclomatic complexity by one each time it detects one of the following keywords:
- PROC
- PROCEDURE
- GOTO
- GO TO
- DO
- IF
- WHEN
- |
- !
- |=
- !=
- &
- &=
- A DO statement with conditions (Type 1 DO statements are ignored)
For procedures with more than one return statement, each return statement except the last increments the complexity metric by one.
PL/SQL
The PL/SQL analyzer calculates the Cyclomatic complexity at function and procedure level. It increments the Cyclomatic complexity by one each time it detects:
- The main PL/SQL anonymous block (not inner ones)
- One of the following statements:
- CREATE PROCEDURE
- CREATE TRIGGER
- basic LOOP
- WHEN clause (the “WHEN” of simple CASE statement and searched CASE statement)
- cursor FOR LOOP
- CONTINUE / EXIT WHEN clause (The “WHEN” part of the CONTINUE and EXIT statements)
- exception handler (every individual “WHEN”)
- EXIT
- FOR LOOP
- FORALL
- IF
- ELSIF
- RAISE
- WHILE LOOP
- One of the following expressions:
- AND expression (“AND” reserved word used within PL/SQL expressions)
- OR expression (“OR” reserved word used within PL/SQL expressions)
- WHEN clause expression (the “WHEN” of simple CASE expression and searched CASE expression)
VB.NET
The VB.NET analyzer calculates the Cyclomatic complexity at function, procedure, and property levels. It increments the Cyclomatic complexity by one each time it detects:
- A method or constructor declaration (Sub, Function)
- AndAlso
- Case
- Do
- End
- Error
- Exit
- For
- For Each
- GoTo
- If
- Loop
- On Error
- OrElse
- Resume
- Stop
- Throw
- Try
- While
Coverage
The table below lists the test coverage metrics used in the Sonar solution.
Metric | Metric key | Definition |
---|---|---|
Condition coverage | branch_coverage | On each line of code containing boolean expressions, the condition coverage answers the following question: 'Has each boolean expression been evaluated both to true and to false?'. It is the density of covered conditions: Condition coverage = (CT + CF) / (2 * B), where: CT = the number of conditions that have been evaluated to 'true' at least once; CF = the number of conditions that have been evaluated to 'false' at least once; B = the total number of conditions. |
Condition coverage on new code | new_branch_coverage | This definition is identical to Condition coverage but is restricted to new/updated source code. |
Condition coverage hits | branch_coverage_hits_data | A list of covered conditions. |
Conditions by line | conditions_by_line | The number of conditions by line. |
Coverage | coverage | A mix of Line coverage and Condition coverage. Its goal is to provide an even more accurate answer to the question 'How much of the source code has been covered by the unit tests?'. It is calculated as follows: Coverage = (CT + CF + LC) / (2 * B + EL), where: CT = the number of conditions that have been evaluated to 'true' at least once; CF = the number of conditions that have been evaluated to 'false' at least once; LC = the number of covered lines; B = the total number of conditions; EL = the total number of executable lines. |
Coverage on new code | new_coverage | This definition is identical to Coverage but is restricted to new/updated source code. |
Line coverage | line_coverage | On a given line of code, Line coverage simply answers the question 'Has this line of code been executed during the execution of the unit tests?'. It is the density of lines covered by unit tests: Line coverage = LC / EL, where: LC = the number of covered lines; EL = the total number of executable lines. |
Line coverage on new code | new_line_coverage | This definition is identical to Line coverage but restricted to new/updated source code. |
Line coverage hits | coverage_line_hits_data | A list of covered lines. |
Lines to cover | lines_to_cover | Coverable lines. The number of lines of code that could be covered by unit tests (for example, blank lines or full comments lines are not considered as lines to cover). Note that this metric is about what is possible, not what is left to do (that's Uncovered lines). |
Lines to cover on new code | new_lines_to_cover | This definition is identical to Lines to cover but restricted to new/updated source code. |
Skipped unit tests | skipped_tests | The number of skipped unit tests. |
Uncovered conditions | uncovered_conditions | The number of conditions that are not covered by unit tests. |
Uncovered conditions on new code | new_uncovered_conditions | This definition is identical to Uncovered conditions but restricted to new/updated source code. |
Uncovered lines | uncovered_lines | The number of lines of code that are not covered by unit tests. |
Uncovered lines on new code | new_uncovered_lines | This definition is identical to Uncovered lines but restricted to new/updated source code. |
Unit tests | tests | The number of unit tests. |
Unit tests duration | test_execution_time | The time required to execute all the unit tests. |
Unit test errors | test_errors | The number of unit tests that have failed. |
Unit test failures | test_failures | The number of unit tests that have failed with an unexpected exception. |
Unit test success density (%) | test_success_density | unitTestSuccessDensity = (unitTests - (unitTestErrors + unitTestFailures)) / (unitTests) * 100 |
Most of the coverage metrics can be used in a quality gate condition.
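To make the relationships concrete, the sketch below computes the three coverage values from hypothetical figures, using the formulas Condition coverage = (CT + CF) / (2 * B), Line coverage = LC / EL, and Coverage = (CT + CF + LC) / (2 * B + EL):

```python
# Hypothetical figures for a small project (illustrative only).
CT = 12   # conditions evaluated to 'true' at least once
CF = 9    # conditions evaluated to 'false' at least once
B = 14    # total number of conditions
LC = 180  # lines covered by unit tests
EL = 200  # total number of executable lines

condition_coverage = (CT + CF) / (2 * B)    # 21 / 28  = 75.0%
line_coverage = LC / EL                     # 180 / 200 = 90.0%
coverage = (CT + CF + LC) / (2 * B + EL)    # 201 / 228 ≈ 88.2%
print(f"{condition_coverage:.1%} {line_coverage:.1%} {coverage:.1%}")
```

Note that the combined Coverage value always falls between the condition and line coverage values: it is their weighted mean, weighted by the number of conditions (times two) and of executable lines.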
Duplications
The table below lists the duplication metrics used in the Sonar solution.
Metric | Metric key | Definition |
---|---|---|
Duplicated blocks | duplicated_blocks | The number of duplicated blocks of lines. For a block of code to be considered as duplicated: for non-Java projects, there should be at least 100 successive and duplicated tokens spread over at least 30 lines of code for COBOL, 20 lines of code for ABAP, or 10 lines of code for other languages; for Java projects, there should be at least 10 successive and duplicated statements, whatever the number of tokens and lines. Differences in indentation and in string literals are ignored. |
Duplicated files | duplicated_files | The number of files involved in duplications. |
Duplicated lines | duplicated_lines | The number of lines involved in duplications. |
Duplicated lines (%) | duplicated_lines_density | Calculated by using the following formula: duplicatedLines / lines * 100 |
The duplication metrics can be used in a quality gate condition.
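For example, the duplicated-lines density can be computed as follows (figures are hypothetical):

```python
duplicated_lines = 1_200   # lines involved in duplications
lines = 24_000             # total number of physical lines

duplicated_lines_density = duplicated_lines / lines * 100
print(f"{duplicated_lines_density:.1f}%")  # 5.0%
```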
Issues
The table below lists the issues metrics used in the Sonar solution.
Metric | Metric key | Definition |
---|---|---|
Issues on new code | new_violations | The total number of issues raised for the first time on new code. |
Issues | violations | The total number of issues in all states. |
False positive issues | false_positive_issues | The total number of issues marked as False positive. |
Open issues | open_issues | The total number of issues in the Open status. |
Accepted issues | accepted_issues | The total number of issues marked as Accepted. |
Blocker severity issues | software_quality_blocker_issues | Issues with a Blocker severity level. |
High severity issues | software_quality_high_issues | Issues with a High severity level. |
Medium severity issues | software_quality_medium_issues | Issues with a Medium severity level. |
Low severity Issues | software_quality_low_issues | Issues with a Low severity level. |
Info severity issues | software_quality_info_issues | Issues with an Info severity level. |
All issues metrics can be used in a quality gate condition (on overall code) except Issues on new code.
Issues
The table below lists the issues metrics used in the Sonar solution.
Metric | Metric key | Definition |
---|---|---|
Issues on new code | new_violations | The total number of issues raised for the first time on new code. |
Issues | violations | The total number of issues in all states. |
False positive issues | false_positive_issues | The total number of issues marked as False positive. |
Open issues | open_issues | The total number of issues in the Open status. |
Accepted issues | accepted_issues | The total number of issues marked as Accepted. |
Blocker issues | blocker_violations | Issues with a Blocker severity level. |
Critical issues | critical_violations | Issues with a Critical severity level. |
Major issues | major_violations | Issues with a Major severity level. |
Minor issues | minor_violations | Issues with a Minor severity level. |
Info issues | info_violations | Issues with an Info severity level. |
All issues metrics can be used in a quality gate condition (on overall code) except Issues on new code.
Maintainability
The table below lists the maintainability metrics used in the Sonar solution.
Metric | Metric key | Definition |
---|---|---|
Maintainability issues | software_quality_maintainability_issues | The total number of issues impacting the maintainability. |
Maintainability issues on new code | new_software_quality_maintainability_issues | The total number of maintainability issues raised for the first time on new code. |
Technical debt | software_quality_maintainability_remediation_effort | A measure of effort to fix all maintainability issues. See below. |
Technical debt on new code | new_software_quality_maintainability_remediation_effort | A measure of effort to fix the maintainability issues raised for the first time on new code. See below. |
Technical debt ratio | software_quality_maintainability_debt_ratio | The ratio between the cost to develop the software and the cost to fix it. See below. |
Technical debt ratio on new code | new_software_quality_maintainability_debt_ratio | The ratio between the cost to develop the code changed on new code and the cost of the issues linked to it. See below. |
Maintainability rating | software_quality_maintainability_rating | The rating related to the value of the technical debt ratio. See below. |
Maintainability rating on new code | new_software_quality_maintainability_rating | The rating related to the value of the technical debt ratio on new code. See below. |
All maintainability metrics can be used in a quality gate condition.
Metric | Metric key | Definition |
---|---|---|
Code smells | code_smells | The total number of issues impacting the maintainability (maintainability issues). |
Code smells on new code | new_code_smells | The total number of maintainability issues raised for the first time on new code. |
Technical debt | sqale_index | A measure of effort to fix all maintainability issues. See below. |
Technical debt on new code | new_technical_debt | A measure of effort to fix the maintainability issues raised for the first time on new code. See below. |
Technical debt ratio | sqale_debt_ratio | The ratio between the cost to develop the software and the cost to fix it. See below. |
Technical debt ratio on new code | new_sqale_debt_ratio | The ratio between the cost to develop the code changed on new code and the cost of the issues linked to it. See below. |
Maintainability rating | sqale_rating | The rating related to the value of the technical debt ratio. See below. |
Maintainability rating on new code | new_maintainability_rating | The rating related to the value of the technical debt ratio on new code. See below. |
All maintainability metrics can be used in a quality gate condition.
Technical debt
The technical debt is the sum of the maintainability issue remediation costs. An issue remediation cost is the effort (in minutes) evaluated to fix the issue. The issue remediation cost is taken over from the effort assigned to the rule that raised the issue.
An 8-hour day is assumed when the technical debt is shown in days.
Technical debt ratio
The technical debt ratio is the ratio between the cost to develop the software and the technical debt (the cost to fix it). It is calculated based on the following formula:
technicalDebtRatio = technicalDebt / (costToDevelopOneLineOfCode * numberOfLinesOfCode)
Where the cost to develop one line of code is predefined in the database (by default, 30 minutes, can be changed).
Example:
- Technical debt: 122,563 minutes
- Number of lines of code: 63,987
- Cost to develop one line of code: 30 minutes
- Technical debt ratio: 6.4%
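The example above can be reproduced directly (all effort values in minutes):

```python
technical_debt = 122_563   # total remediation effort, in minutes
lines_of_code = 63_987
cost_per_line = 30         # default cost to develop one line, in minutes

technical_debt_ratio = technical_debt / (cost_per_line * lines_of_code)
print(f"{technical_debt_ratio:.1%}")  # 6.4%
```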
Maintainability rating
The default Maintainability rating grid is:
- A: ≥ 0% to < 5%
- B: ≥ 5% to < 10%
- C: ≥ 10% to < 20%
- D: ≥ 20% to < 50%
- E: ≥ 50%
You can define another maintainability rating grid: see Changing the Maintainability rating grid.
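The default grid maps a debt ratio to a rating like this (a sketch assuming the default thresholds; an instance with a customized grid behaves differently):

```python
def maintainability_rating(debt_ratio: float) -> str:
    """Map a technical debt ratio (0.0 to 1.0) to the default rating grid."""
    if debt_ratio < 0.05:
        return "A"
    if debt_ratio < 0.10:
        return "B"
    if debt_ratio < 0.20:
        return "C"
    if debt_ratio < 0.50:
        return "D"
    return "E"

print(maintainability_rating(0.064))  # "B": the 6.4% ratio from the example above
```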
Quality gate
The table below lists the quality gate metrics used in the Sonar solution.
Metric | Metric key | Definition |
---|---|---|
Quality gate status | alert_status | The state of the quality gate associated with your project. Possible values are ERROR and OK. |
Quality gate details | quality_gate_details | Status (failing or not) of each condition in the quality gate. |
Reliability
The table below lists the reliability metrics used in the Sonar solution.
Metric | Metric key | Definition |
---|---|---|
Reliability issues | software_quality_reliability_issues | The total number of issues impacting the reliability. |
Reliability issues on new code | new_software_quality_reliability_issues | The total number of reliability issues raised for the first time on new code. |
Reliability rating | software_quality_reliability_rating | The rating related to reliability. The rating grid is as follows: A = no issues, or only info issues; B = at least one low issue; C = at least one medium issue; D = at least one high issue; E = at least one blocker issue. |
Reliability remediation effort | software_quality_reliability_remediation_effort | The effort to fix all reliability issues. The remediation cost of an issue is taken over from the effort (in minutes) assigned to the rule that raised the issue (see Technical debt above). An 8-hour day is assumed when values are shown in days. |
Reliability remediation effort on new code | new_software_quality_reliability_remediation_effort | The same as Reliability remediation effort but on new code. |
All reliability metrics can be used in a quality gate condition.
Metric | Metric key | Definition |
---|---|---|
Bugs | bugs | The total number of issues impacting the reliability (reliability issues). |
Bugs on new code | new_bugs | The total number of reliability issues raised for the first time on new code. |
Reliability rating | reliability_rating | The rating related to reliability. The rating grid is as follows: A = 0 bugs; B = at least one minor bug; C = at least one major bug; D = at least one critical bug; E = at least one blocker bug. |
Reliability remediation effort | reliability_remediation_effort | The effort to fix all reliability issues. The remediation cost of an issue is taken over from the effort (in minutes) assigned to the rule that raised the issue (see Technical debt above). An 8-hour day is assumed when values are shown in days. |
Reliability remediation effort on new code | new_reliability_remediation_effort | The same as Reliability remediation effort but on new code. |
All reliability metrics can be used in a quality gate condition.
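The rating grid above can be expressed as a lookup on the worst open bug severity (a sketch using the Standard Experience severity names; mapping info-severity bugs to "A" is an assumption, since the grid only says "A = 0 bugs"):

```python
SEVERITY_ORDER = ["INFO", "MINOR", "MAJOR", "CRITICAL", "BLOCKER"]
SEVERITY_TO_RATING = {"INFO": "A", "MINOR": "B", "MAJOR": "C",
                      "CRITICAL": "D", "BLOCKER": "E"}

def reliability_rating(bug_severities: list[str]) -> str:
    """Return the rating for the worst bug severity; 'A' if there are no bugs."""
    if not bug_severities:
        return "A"
    # Assumption: info-severity bugs do not lower the rating below A.
    worst = max(bug_severities, key=SEVERITY_ORDER.index)
    return SEVERITY_TO_RATING[worst]

print(reliability_rating(["MINOR", "MAJOR"]))  # "C": the worst bug is major
```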
Security
The table below lists the security metrics used in the Sonar solution.
Metric | Metric key | Definition |
---|---|---|
Security issues | software_quality_security_issues | The total number of security issues. |
Security issues on new code | new_software_quality_security_issues | The total number of security issues raised for the first time on new code. |
Security rating | software_quality_security_rating | The rating related to security. The rating grid is as follows: A = no issues, or only info issues; B = at least one low issue; C = at least one medium issue; D = at least one high issue; E = at least one blocker issue. |
Security remediation effort | software_quality_security_remediation_effort | The effort to fix all vulnerabilities. The remediation cost of an issue is taken over from the effort (in minutes) assigned to the rule that raised the issue (see Technical debt above). An 8-hour day is assumed when values are shown in days. |
Security remediation effort on new code | new_software_quality_security_remediation_effort | The same as Security remediation effort but on new code. |
Security hotspots | security_hotspots | The number of security hotspots. |
Security hotspots on new code | new_security_hotspots | The number of security hotspots on new code. |
Security hotspots reviewed | security_hotspots_reviewed | The percentage of reviewed security hotspots in relation to the total number of security hotspots. |
New security hotspots reviewed | new_security_hotspots_reviewed | The percentage of reviewed security hotspots on new code. |
Security review rating | security_review_rating | The security review rating is a letter grade based on the percentage of reviewed security hotspots. Note that security hotspots are considered reviewed if they are marked as Acknowledged, Fixed, or Safe. The rating grid is as follows: A = ≥ 80% of hotspots reviewed; B = ≥ 70% and < 80%; C = ≥ 50% and < 70%; D = ≥ 30% and < 50%; E = < 30%. |
Security review rating on new code | new_security_review_rating | The security review rating for new code. |
All security metrics can be used in a quality gate condition except the Security hotspots metrics.
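Assuming the default review-percentage grid (A ≥ 80%, B ≥ 70%, C ≥ 50%, D ≥ 30%, E otherwise), the security review rating can be sketched as:

```python
def security_review_rating(reviewed_percent: float) -> str:
    """Map the percentage of reviewed security hotspots to a letter grade."""
    if reviewed_percent >= 80:
        return "A"
    if reviewed_percent >= 70:
        return "B"
    if reviewed_percent >= 50:
        return "C"
    if reviewed_percent >= 30:
        return "D"
    return "E"

print(security_review_rating(75.0))  # "B"
```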
Metric | Metric key | Definition |
---|---|---|
Vulnerabilities | vulnerabilities | The total number of security issues (also called vulnerabilities). |
Vulnerabilities on new code | new_vulnerabilities | The total number of vulnerabilities raised for the first time on new code. |
Security rating | security_rating | The rating related to security. The rating grid is as follows: A = 0 vulnerabilities; B = at least one minor vulnerability; C = at least one major vulnerability; D = at least one critical vulnerability; E = at least one blocker vulnerability. |
Security remediation effort | security_remediation_effort | The effort to fix all vulnerabilities. The remediation cost of an issue is taken over from the effort (in minutes) assigned to the rule that raised the issue (see Technical debt above). An 8-hour day is assumed when values are shown in days. |
Security remediation effort on new code | new_security_remediation_effort | The same as Security remediation effort but on new code. |
Security hotspots | security_hotspots | The number of security hotspots. |
Security hotspots on new code | new_security_hotspots | The number of security hotspots on new code. |
Security hotspots reviewed | security_hotspots_reviewed | The percentage of reviewed security hotspots in relation to the total number of security hotspots. |
New security hotspots reviewed | new_security_hotspots_reviewed | The percentage of reviewed security hotspots on new code. |
Security review rating | security_review_rating | The security review rating is a letter grade based on the percentage of reviewed security hotspots. Note that security hotspots are considered reviewed if they are marked as Acknowledged, Fixed, or Safe. The rating grid is as follows: A = ≥ 80% of hotspots reviewed; B = ≥ 70% and < 80%; C = ≥ 50% and < 70%; D = ≥ 30% and < 50%; E = < 30%. |
Security review rating on new code | new_security_review_rating | The security review rating for new code. |
All security metrics can be used in a quality gate condition except the Security hotspots metrics.
Severity
The table below lists the severity metrics used in Multi-Quality Rule mode.
Severity | Definition |
---|---|
Blocker | A bug with a high probability of impacting the behavior of the application in production. |
High | Either a bug with a low probability of impacting the behavior of the application in production or an issue that represents a security flaw. |
Medium | A quality flaw that can highly impact the developer's productivity. |
Low | A quality flaw that can slightly impact the developer's productivity. |
Info | Neither a bug nor a quality flaw, just a finding. |
Users with appropriate permissions are able to set a custom severity on a rule.
The table below lists the severity metrics used in Standard Experience mode.
Severity | Definition |
---|---|
Blocker | A bug with a high probability of impacting the behavior of the application in production. For example, a memory leak or an unclosed JDBC connection is a BLOCKER that must be fixed immediately. |
Critical | Either a bug with a low probability of impacting the behavior of the application in production or an issue that represents a security flaw. An empty catch block or an SQL injection would be a CRITICAL issue. The code must be reviewed immediately. |
Major | A quality flaw that can highly impact the developer's productivity. An uncovered piece of code, duplicated blocks, or unused parameters are examples of MAJOR issues. |
Minor | A quality flaw that can slightly impact the developer's productivity. For example, "lines should not be too long" and ""switch" statements should have at least 3 cases" are both considered MINOR issues. |
Info | Neither a bug nor a quality flaw, just a finding. |
Users with appropriate permissions are able to set a custom severity on a rule.
Size
The table below lists the size metrics used in the Sonar solution.
Metric | Metric key | Definition |
---|---|---|
Classes | classes | The number of classes (including nested classes, interfaces, enums, annotations, mixins, extensions, and extension types). |
Comment lines | comment_lines | The number of lines containing either comment or commented-out code. See below for calculation details. |
Comments (%) | comment_lines_density | The comment lines density. It is calculated based on the following formula: commentLines / (linesOfCode + commentLines) * 100. Examples: with a density of 50%, the number of lines of code equals the number of comment lines; with a density of 100%, the file contains only comment lines. |
Files | files | The number of files. |
Lines | lines | The number of physical lines (number of carriage returns). |
Lines of code | ncloc | The number of physical lines that contain at least one character which is neither a whitespace nor a tabulation nor part of a comment. |
Lines of code per language | ncloc_language_distribution | The non-commented lines of code distributed by language. |
Functions | functions | The number of functions. Depending on the language, a function is defined as either a function, a method, or a paragraph. Language-specific details: for COBOL, it is the number of paragraphs; for Java, methods in anonymous classes are ignored; for VB.NET, accessors are not considered to be methods. |
Projects | projects | The number of projects in a portfolio. |
Statements | statements | The number of statements. |
Comment lines
Non-significant comment lines (empty comment lines, comment lines containing only special characters, etc.) do not increase the number of comment lines.
The following piece of code contains 9 comment lines:
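A constructed Python illustration (a hypothetical snippet, not the original sample) with 9 significant comment lines; the lone "#" and the two separator lines made only of special characters are not counted:

```python
# ==============================================
# This module demonstrates comment-line counting.
# Empty comment lines and lines made only of
# special characters are non-significant.
#
# Typical usage:
#   total = add(1, 2)
# ==============================================

def add(a, b):
    # Return the sum of the two arguments.
    # Both arguments are expected to be numbers.
    return a + b

# The two comment lines inside the function,
# plus these two, complete the nine significant lines.
```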
In addition:
- For COBOL: Generated lines of code and pre-processing instructions (SKIP1, SKIP2, SKIP3, COPY, EJECT, REPLACE) are not counted as lines of code.
- For Java and Dart: File headers are not counted as comment lines (because they usually define the license).
Most of the size metrics can be used in a quality gate condition.