Metric definitions
Complexity
Complexity (`complexity`): Complexity refers to cyclomatic complexity, a quantitative metric used to calculate the number of paths through the code. Whenever the control flow of a function splits, the complexity counter is incremented by one. Each function has a minimum complexity of 1. This calculation varies slightly by language because keywords and functionality differ.
Language-specific details
| Language | Notes |
| --- | --- |
| ABAP | The following keywords increase the complexity by one: `AND`, `CATCH`, `CONTINUE`, `DO`, `ELSEIF`, `IF`, `LOOP`, `LOOP AT`, `OR`, `PROVIDE`, `SELECT…ENDSELECT`, `TRY`, `WHEN`, `WHILE`. |
| C/C++/Objective-C | The complexity is incremented by one for: each control statement such as `if`, `while`, `do while`, `for`; switch statement keywords such as `case` and `default`; the `&&` and `||` operators; the `?` ternary operator; and lambda expression definitions. |
| COBOL | The following commands increase the complexity by one (except when they are used in a copybook): `ALSO`, `ALTER`, `AND`, `DEPENDING`, `END-OF-PAGE`, `ENTRY`, `EOP`, `EXCEPTION`, `EXIT`, `GOBACK`, `CONTINUE`, `IF`, `INVALID`, `OR`, `OVERFLOW`, `SIZE`, `STOP`, `TIMES`, `UNTIL`, `USE`, `VARYING`, `WHEN`, `EXEC CICS HANDLE`, `EXEC CICS LINK`, `EXEC CICS XCTL`, `EXEC CICS RETURN`. |
| Java | Keywords incrementing the complexity: `if`, `for`, `while`, `case`, `&&`, `||`, `?`, `->`. |
| JS/TS, PHP | Complexity is incremented by one for each: function (i.e., non-abstract and non-anonymous constructors, functions, procedures, or methods); `if` statement; short-circuit (a.k.a. lazy) logical conjunction (`&&`); short-circuit (a.k.a. lazy) logical disjunction (`||`); ternary conditional expression; loop; `case` clause of a `switch` statement; `throw` and `catch` statement; `goto` statement (PHP only). |
| PL/I | The following keywords increase the complexity by one: `PROC`, `PROCEDURE`, `GOTO`, `GO TO`, `DO`, `IF`, `WHEN`, `|`, `!`, `|=`, `!=`, `&`, `&=`. |
| PL/SQL | The complexity is incremented by one for: the main PL/SQL anonymous block (not inner ones); create procedure; create trigger; procedure_definition; basic loop statement; when_clause_statement (the "WHEN" of simple_case_statement and searched_case_statement); continue_statement; cursor_for_loop_statement; continue_exit_when_clause (the "WHEN" part of the continue and exit statements); exception_handler (every individual "WHEN"); exit_statement; for_loop_statement; forall_statement; if_statement; elsif_clause; raise_statement; return_statement; while_loop_statement; and_expression (the "and" reserved word used within PL/SQL expressions); or_expression (the "or" reserved word used within PL/SQL expressions); when_clause_expression (the "WHEN" of simple_case_expression and searched_case_expression). |
| VB.NET | The complexity is incremented by one for: method or constructor declaration (`Sub`, `Function`), `AndAlso`, `Case`, `Continue`, `End`, `Error`, `Exit`, `If`, `Loop`, `On Error`, `GoTo`, `OrElse`, `Resume`, `Stop`, `Throw`, `Try`. |
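As an illustration of the counting rules above, here is a minimal sketch (not SonarQube's actual analyzer) that approximates the Java rules by scanning a function body for decision keywords and operators with regular expressions. A real implementation works on a parse tree and ignores matches inside strings and comments; this regex version does not.

```python
import re

# Patterns approximating the Java rules listed above: if, for, while,
# case, &&, ||, the ternary ?, and lambda arrows (->).
DECISION_PATTERNS = [
    r"\bif\b", r"\bfor\b", r"\bwhile\b", r"\bcase\b",
    r"&&", r"\|\|", r"\?", r"->",
]

def cyclomatic_complexity(function_body: str) -> int:
    """Every function starts at 1; each decision point adds 1."""
    complexity = 1
    for pattern in DECISION_PATTERNS:
        complexity += len(re.findall(pattern, function_body))
    return complexity

body = """
if (a && b) { return 1; }
for (int i = 0; i < n; i++) { total += i; }
return total > 0 ? total : 0;
"""
# 1 (base) + if + && + for + ternary ? = 5
print(cyclomatic_complexity(body))  # 5
```

A straight-line function with no branches scores the minimum of 1, matching the definition above.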
Cognitive complexity (`cognitive_complexity`): How hard it is to understand the code's control flow. See the Cognitive complexity white paper for a complete description of the mathematical model applied to compute this measure.
Duplications
Duplicated blocks (`duplicated_blocks`): The number of duplicated blocks of lines.
Language-specific details
For a block of code to be considered as duplicated:
Non-Java projects:
- There should be at least 100 successive and duplicated tokens.
- Those tokens should be spread over at least:
- 30 lines of code for COBOL
- 20 lines of code for ABAP
- 10 lines of code for other languages
Java projects:
There should be at least 10 successive and duplicated statements whatever the number of tokens and lines. Differences in indentation and in string literals are ignored while detecting duplications.
Duplicated files (`duplicated_files`): The number of files involved in duplications.
Duplicated lines (`duplicated_lines`): The number of lines involved in duplications.
Duplicated lines (%) (`duplicated_lines_density`): `duplicated_lines` / (lines of code) * 100
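The density formula above can be sketched directly:

```python
def duplicated_lines_density(duplicated_lines: int, lines_of_code: int) -> float:
    """duplicated_lines / (lines of code) * 100, as defined above."""
    return duplicated_lines * 100.0 / lines_of_code

# 120 duplicated lines in a 1,500-line codebase -> 8% duplication
print(duplicated_lines_density(120, 1500))  # 8.0
```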
Issues
New issues (`new_violations`): The number of issues raised for the first time on new code.
New xxx issues (`new_xxx_violations`): The number of issues of the specified severity raised for the first time on new code, where xxx is one of: `blocker`, `critical`, `major`, `minor`, `info`.
Issues (`violations`): The total count of issues in all states.
xxx issues (`xxx_violations`): The total count of issues of the specified severity, where xxx is one of: `blocker`, `critical`, `major`, `minor`, `info`.
False positive issues (`false_positive_issues`): The total count of issues marked as false positive.
Open issues (`open_issues`): The total count of issues in the Open state.
Confirmed issues (`confirmed_issues`): The total count of issues in the Confirmed state.
Reopened issues (`reopened_issues`): The total count of issues in the Reopened state.
Maintainability
Code smells (`code_smells`): The total count of code smell issues.
New code smells (`new_code_smells`): The total count of code smell issues raised for the first time on new code.
Maintainability rating (`sqale_rating`): (Formerly the SQALE rating.) The rating given to your project based on the value of your technical debt ratio. The default maintainability rating grid is:
A=0-0.05, B=0.06-0.1, C=0.11-0.20, D=0.21-0.5, E=0.51-1
The maintainability rating scale can be alternately stated by saying that if the outstanding remediation cost is:
- <=5% of the time that has already gone into the application, the rating is A
- between 6% and 10%, the rating is a B
- between 11% and 20%, the rating is a C
- between 21% and 50%, the rating is a D
- anything over 50% is an E
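The default grid above maps directly to a threshold lookup. A minimal sketch (using the default boundaries; installations can customize the grid):

```python
def maintainability_rating(debt_ratio: float) -> str:
    """Map a technical debt ratio (0.0 to 1.0) onto the default A-E grid."""
    if debt_ratio <= 0.05:
        return "A"
    if debt_ratio <= 0.10:
        return "B"
    if debt_ratio <= 0.20:
        return "C"
    if debt_ratio <= 0.50:
        return "D"
    return "E"

# A 7% debt ratio lands in the 6-10% band
print(maintainability_rating(0.07))  # B
```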
Technical debt (`sqale_index`): A measure of the effort required to fix all code smells. The measure is stored in minutes in the database. An 8-hour day is assumed when values are shown in days.
Technical debt on new code (`new_technical_debt`): A measure of the effort required to fix all code smells raised for the first time on new code.
Technical debt ratio (`sqale_debt_ratio`): The ratio between the cost to develop the software and the cost to fix it. The technical debt ratio formula is: Remediation cost / Development cost
Which can be restated as: Remediation cost / (Cost to develop 1 line of code * Number of lines of code)
The value of the cost to develop a line of code is 0.06 days.
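Putting the formula and the 0.06-days-per-line constant together:

```python
def technical_debt_ratio(remediation_cost_days: float, lines_of_code: int,
                         cost_per_line_days: float = 0.06) -> float:
    """Remediation cost / (cost to develop one line * number of lines)."""
    development_cost = cost_per_line_days * lines_of_code
    return remediation_cost_days / development_cost

# 30 days of remediation on a 10,000-line project:
# development cost = 0.06 * 10,000 = 600 days -> ratio = 0.05 (an A rating)
print(technical_debt_ratio(30, 10_000))  # 0.05
```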
Technical debt ratio on new code (`new_sqale_debt_ratio`): The ratio between the cost to fix the issues raised on the code changed on new code and the cost to develop that code.
Quality gates
Quality gate status (`alert_status`): The state of the quality gate associated with your project. Possible values are `ERROR` and `OK`. Note: the `WARN` value has been removed since SonarQube 7.6.
Quality gate details (`quality_gate_details`): For each condition of your quality gate, shows whether the condition is passing or failing.
Reliability
Bugs (`bugs`): The total number of bug issues.
New Bugs (`new_bugs`): The number of new bug issues.
Reliability Rating (`reliability_rating`):
A = 0 Bugs
B = at least 1 Minor Bug
C = at least 1 Major Bug
D = at least 1 Critical Bug
E = at least 1 Blocker Bug
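The grid above can be read as "the most severe open bug decides the grade." A minimal sketch of that mapping (the names here are illustrative, not a SonarQube API):

```python
# Hypothetical helper: derive the reliability rating from the severities
# of open bug issues, per the grid above (A when there are no bugs).
SEVERITY_TO_RATING = {None: "A", "MINOR": "B", "MAJOR": "C",
                      "CRITICAL": "D", "BLOCKER": "E"}
SEVERITY_ORDER = ["MINOR", "MAJOR", "CRITICAL", "BLOCKER"]

def reliability_rating(bug_severities: list) -> str:
    worst = None
    for severity in bug_severities:
        if worst is None or SEVERITY_ORDER.index(severity) > SEVERITY_ORDER.index(worst):
            worst = severity
    return SEVERITY_TO_RATING[worst]

print(reliability_rating([]))                     # A: zero bugs
print(reliability_rating(["MINOR", "CRITICAL"]))  # D: worst bug is Critical
```

The Security Rating below follows the same pattern, driven by vulnerability severities instead of bug severities.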
Reliability remediation effort (`reliability_remediation_effort`): The effort required to fix all bug issues. The measure is stored in minutes in the database. An 8-hour day is assumed when values are shown in days.
Reliability remediation effort on new code (`new_reliability_remediation_effort`): The same as Reliability remediation effort but restricted to the code changed on new code.
Security
Vulnerabilities (`vulnerabilities`): The number of vulnerability issues.
Vulnerabilities on new code (`new_vulnerabilities`): The number of new vulnerability issues.
Security Rating (`security_rating`):
A = 0 Vulnerabilities
B = at least 1 Minor Vulnerability
C = at least 1 Major Vulnerability
D = at least 1 Critical Vulnerability
E = at least 1 Blocker Vulnerability
Security remediation effort (`security_remediation_effort`): The effort required to fix all vulnerability issues. The measure is stored in minutes in the database. An 8-hour day is assumed when values are shown in days.
Security remediation effort on new code (`new_security_remediation_effort`): The same as Security remediation effort but restricted to the code changed on new code.
Security hotspots (`security_hotspots`): The number of Security Hotspots.
Security hotspots on new code (`new_security_hotspots`): The number of new Security Hotspots on new code.
Security review rating (`security_review_rating`): The security review rating is a letter grade based on the percentage of reviewed Security Hotspots. Note that Security Hotspots are considered reviewed if they are marked as Acknowledged, Fixed, or Safe.
A = >= 80%
B = >= 70% and <80%
C = >= 50% and <70%
D = >= 30% and <50%
E = < 30%
Security review rating on new code (`new_security_review_rating`): The security review rating for new code.
Security hotspots reviewed (`security_hotspots_reviewed`): The percentage of reviewed Security Hotspots. Ratio formula: Number of Reviewed Hotspots x 100 / (To_Review Hotspots + Reviewed Hotspots)
New Security Hotspots Reviewed: The percentage of reviewed security hotspots on new code.
Size
Classes (`classes`): The number of classes (including nested classes, interfaces, enums, and annotations).
Comment lines (`comment_lines`): The number of lines containing either a comment or commented-out code.
Non-significant comment lines (empty comment lines, comment lines containing only special characters, etc.) do not increase the number of comment lines.
Language-specific details
| Language | Note |
| --- | --- |
| COBOL | Generated lines of code and pre-processing instructions (`SKIP1`, `SKIP2`, `SKIP3`, `COPY`, `EJECT`, `REPLACE`) are not counted as lines of code. |
| Java | File headers are not counted as comment lines (because they usually define the license). |
Comments (%) (`comment_lines_density`): The comment lines density = comment lines / (lines of code + comment lines) * 100
With such a formula:
- 50% means that the number of lines of code equals the number of comment lines
- 100% means that the file only contains comment lines
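Both boundary cases follow from the formula:

```python
def comment_lines_density(comment_lines: int, lines_of_code: int) -> float:
    """comment lines / (lines of code + comment lines) * 100"""
    return comment_lines * 100.0 / (lines_of_code + comment_lines)

print(comment_lines_density(200, 200))  # 50.0: as many comment lines as code
print(comment_lines_density(300, 0))    # 100.0: the file is all comments
```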
Files (`files`): The number of files.
Lines (`lines`): The number of physical lines (number of carriage returns).
Lines of code (`ncloc`): The number of physical lines that contain at least one character which is neither a whitespace, nor a tabulation, nor part of a comment.
Lines of code per language (`ncloc_language_distribution`): The non-commented lines of code distributed by language.
Functions (`functions`): The number of functions. Depending on the language, a function is defined as either a function, a method, or a paragraph.
Language-specific details
| Language | Note |
| --- | --- |
| COBOL | It is the number of paragraphs. |
| Java | Methods in anonymous classes are ignored. |
| VB.NET | Accessors are not considered to be methods. |
Projects (`projects`): The number of projects in a Portfolio.
Statements (`statements`): The number of statements.
Tests
Condition coverage (`branch_coverage`): On each line of code containing boolean expressions, condition coverage answers the following question: 'Has each boolean expression been evaluated both to `true` and to `false`?'. This is the density of possible conditions in flow control structures that have been followed during unit test execution.
Condition coverage = (CT + CF) / (2*B)
where:
- CT = conditions that have been evaluated to 'true' at least once
- CF = conditions that have been evaluated to 'false' at least once
- B = total number of conditions
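The formula above counts each condition twice, once per outcome:

```python
def condition_coverage(ct: int, cf: int, total_conditions: int) -> float:
    """(CT + CF) / (2 * B): each of the B conditions has two possible outcomes."""
    return (ct + cf) / (2 * total_conditions)

# 10 conditions; 8 were evaluated to 'true' at least once,
# 5 were evaluated to 'false' at least once.
print(condition_coverage(ct=8, cf=5, total_conditions=10))  # 0.65
```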
Condition coverage on new code (`new_branch_coverage`): This definition is identical to Condition coverage but is restricted to new/updated source code.
Condition coverage hits (`branch_coverage_hits_data`): A list of covered conditions.
Conditions by line (`conditions_by_line`): The number of conditions by line.
Covered conditions by line (`covered_conditions_by_line`): The number of covered conditions by line.
Coverage (`coverage`): A mix of Line coverage and Condition coverage. Its goal is to provide an even more accurate answer to the question 'How much of the source code has been covered by the unit tests?'.
Coverage = (CT + CF + LC) / (2*B + EL)
where:
- CT = conditions that have been evaluated to 'true' at least once
- CF = conditions that have been evaluated to 'false' at least once
- LC = covered lines = lines_to_cover - uncovered_lines
- B = total number of conditions
- EL = total number of executable lines (lines_to_cover)
Coverage on new code (`new_coverage`): This definition is identical to Coverage but is restricted to new/updated source code.
Line coverage (`line_coverage`): On a given line of code, Line coverage simply answers the question 'Has this line of code been executed during the execution of the unit tests?'. It is the density of lines covered by unit tests:
Line coverage = LC / EL
where:
- LC = covered lines (lines_to_cover - uncovered_lines)
- EL = total number of executable lines (lines_to_cover)
Line coverage on new code (`new_line_coverage`): This definition is identical to Line coverage but restricted to new/updated source code.
Line coverage hits (`coverage_line_hits_data`): A list of covered lines.
Lines to cover (`lines_to_cover`): The number of lines of code that could be covered by unit tests (for example, blank lines or full comment lines are not considered as lines to cover).
Lines to cover on new code (`new_lines_to_cover`): This definition is identical to Lines to cover but restricted to new/updated source code.
Skipped unit tests (`skipped_tests`): The number of skipped unit tests.
Uncovered conditions (`uncovered_conditions`): The number of conditions that are not covered by unit tests.
Uncovered conditions on new code (`new_uncovered_conditions`): This definition is identical to Uncovered conditions but restricted to new/updated source code.
Uncovered lines (`uncovered_lines`): The number of lines of code that are not covered by unit tests.
Uncovered lines on new code (`new_uncovered_lines`): This definition is identical to Uncovered lines but restricted to new/updated source code.
Unit tests (`tests`): The number of unit tests.
Unit tests duration (`test_execution_time`): The time required to execute all the unit tests.
Unit test errors (`test_errors`): The number of unit tests that have failed.
Unit test failures (`test_failures`): The number of unit tests that have failed with an unexpected exception.
Unit test success density (%) (`test_success_density`): Test success density = (Unit tests - (Unit test errors + Unit test failures)) / (Unit tests) * 100