
Troubleshooting


Checking the logs

If you're having trouble starting your server for the first time (or any subsequent time!), the first thing to do is check your server logs. You'll find them in <sonarqubeHome>/logs:

  • sonar.log: Log for the main process. Holds general information about startup and shutdown. You'll get overall status here but not details. Look to the other logs for that.
  • web.log: Information about initial connection to the database, database migration and reindexing, and the processing of HTTP requests. This includes database and search engine logs related to those requests.
  • ce.log: Information about background task processing and the database and search engine logs related to those tasks.
  • es.log: Ops information from the search engine, such as Elasticsearch startup, health status changes, cluster-, node- and index-level operations, etc.
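If the standard logs don't give you enough detail, you can raise the log verbosity. The following is a minimal sketch assuming a ZIP installation and the log-level properties documented in <sonarqubeHome>/conf/sonar.properties; double-check the commented-out examples in your own copy of that file before relying on these names:

# Global verbosity for all processes: INFO (default), DEBUG, or TRACE
# TRACE is extremely verbose; use it only temporarily
sonar.log.level=DEBUG
# Verbosity can also be tuned per process (web server, compute engine, search engine)
sonar.log.level.web=DEBUG
sonar.log.level.ce=DEBUG
sonar.log.level.es=INFO

Remember to switch the level back to INFO once you've captured what you need, as DEBUG and TRACE logs grow quickly.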

Understanding the logs

When there's an error, you'll very often find a stacktrace in the logs. If you're not familiar with stacktraces, they can be intimidatingly tall walls of incomprehensible text. As a sample, here's a fairly short one:

java.lang.IllegalStateException: Unable to blame file **/**/foo.java
    at org.sonarsource.scm.git.JGitBlameCommand.blame(JGitBlameCommand.java:128)
    at org.sonarsource.scm.git.JGitBlameCommand.access$000(JGitBlameCommand.java:44)
    at org.sonarsource.scm.git.JGitBlameCommand$1.call(JGitBlameCommand.java:112)
    at org.sonarsource.scm.git.JGitBlameCommand$1.call(JGitBlameCommand.java:109)
    at java.util.concurrent.FutureTask.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NullPointerException
    at org.eclipse.jgit.treewalk.filter.PathFilter.create(PathFilter.java:77)
    at org.eclipse.jgit.blame.BlameGenerator.<init>(BlameGenerator.java:161)
    at org.eclipse.jgit.api.BlameCommand.call(BlameCommand.java:203)
    at org.sonarsource.scm.git.JGitBlameCommand.blame(JGitBlameCommand.java:126)
    ... 7 more

Unless you wrote the code that produced this error, you really only care about:

  • the first line, which ought to have a human-readable message after the colon. In this case, it's Unable to blame file **/**/foo.java
  • and any line that starts with Caused by. There are often several Caused by lines, and because they're not indented like the lines around them, they're easy to spot as you scroll through the error. Be sure to read each of these lines. Very often one of them - the last one or next-to-last one - contains the real problem.

Recovering from Elasticsearch read-only indices

You may encounter issues with Elasticsearch (ES) indices becoming locked in read-only mode. ES requires free disk space and implements a safety mechanism to prevent the disk from being flooded with index data. This mechanism:

  • For non-DCE – locks all indices in read-only mode when the 95% disk usage watermark is reached.
  • For DCE – locks all or some indices in read-only mode when one or more nodes reach the 95% disk usage watermark.

ES logs warnings as soon as disk usage reaches 85%, and again at 90%. At 95% usage and above, the indices turn read-only, which causes errors in the web server and compute engine.

Freeing disk space will not automatically return the indices to read-write mode. To make the indices read-write again, you also need to:

  • For non-DCE – restart SonarQube.
  • For DCE – restart ALL application nodes (the first application node restarted after all have been stopped will make the indices read-write).

SonarQube's built-in resilience mechanism will eventually bring the indices back in sync with the data in the database (this process can take a while).

If you still have inconsistencies, you'll need to rebuild the indices (this operation can take a long time depending on the number of issues and components):

non-DCE:

  1. Stop SonarQube
  2. Delete the data/es8 directory
  3. Restart SonarQube

DCE:

  1. Stop the whole cluster (ES and application nodes)
  2. Delete the data/es8 directory on each ES node
  3. Restart the whole cluster

Note: See Configure and operate a cluster for information on stopping and starting a cluster.

Failed tasks during reindexing

During Elasticsearch reindexing due to disaster recovery or an upgrade, you may end up with failed tasks on your branches or pull requests. If you only have a few failed tasks, you can reanalyze the affected branches or pull requests. You may want to use web services to remove branches and pull requests that can't be reanalyzed because they have been removed from version control. If you have many failed tasks, you may want to delete your Elasticsearch directory and reindex again. To delete your Elasticsearch directory:

non-DCE:

  1. Stop SonarQube
  2. Delete the data/es8 directory
  3. Restart SonarQube

DCE:

  1. Stop the whole cluster (ES and application nodes)
  2. Delete the data/es8 directory on each ES node
  3. Restart the whole cluster

Timeout issues when setting up Database Connection Pool

In some configurations, when there is a firewall between SonarQube and the database, you may experience timeout issues. The firewall may interrupt idle DB connections after a specific timeout, which can lead to connection resets. See also Issues with MS SQL Server connection below.

To avoid timeout issues, you can customize the following HikariCP settings; their default values are shown below.

sonar.jdbc.idleTimeout=600000
sonar.jdbc.keepaliveTime=300000
sonar.jdbc.maxLifetime=1800000
sonar.jdbc.validationTimeout=5000

Additionally, it is possible to configure the HikariCP properties described in the HikariCP documentation using the following naming convention: sonar.jdbc.{HikariCP property name}.
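For example, here is a minimal sketch assuming you want to override HikariCP's connectionTimeout and leakDetectionThreshold properties in <sonarqubeHome>/conf/sonar.properties; the property names come from HikariCP's documentation and the values are purely illustrative, not recommended settings:

# HikariCP "connectionTimeout" (milliseconds) mapped onto the sonar.jdbc.* convention
sonar.jdbc.connectionTimeout=30000
# Any other HikariCP property follows the same pattern, e.g. "leakDetectionThreshold"
sonar.jdbc.leakDetectionThreshold=60000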

Issues with MS SQL Server connection

The HikariCP connection pool may run out of available connections, causing SonarQube to become unresponsive. In this case, the logs may show errors like HikariPool-1 - Connection is not available or HikariPool-1 - Cannot acquire connection from data source.

In this case, customize the HikariCP settings as follows:

sonar.jdbc.minIdle=25
sonar.jdbc.maxActive=25
sonar.jdbc.maxLifetime=0
sonar.jdbc.maxWait=30000

Performance issues

In case of performance issues, you may try the following:

  • Review the hardware recommendations for the SonarQube server, in particular those related to Elasticsearch usage.
  • Move the Elasticsearch storage to a volume with high IOPS and low latency: see Configure the Elasticsearch storage path in Installing the SonarQube server from the ZIP file.
  • Configure housekeeping with a reduced retention time to limit the database size.
  • Configure the analysis scope to reduce the number of files analyzed, leading to shorter analyses and a smaller database footprint (see the example after this list).
  • From the Enterprise Edition: increase the number of Compute Engine workers and/or configure the Compute Engine to enable parallel processing of pull requests and branch analyses for each project. See Compute engine performance.
  • For the Data Center Edition on Kubernetes: set up autoscaling.
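As an illustration of narrowing the analysis scope, here is a minimal sonar-project.properties sketch; the exclusion patterns are hypothetical and should be adapted to your project (the same settings can also be made in the UI, typically under the project's Analysis Scope settings):

# Exclude generated code and third-party assets from analysis (example patterns only)
sonar.exclusions=**/generated/**,**/vendor/**,**/*.min.js
# Exclude files from coverage computation where coverage is not meaningful (example pattern only)
sonar.coverage.exclusions=**/migrations/**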

Issues with IIS and SAML integration

If you are using an IIS reverse proxy with SAML authentication, you may encounter one of the following issues:

  • The URL redirection to the SAML Identity Provider (sonar.auth.saml.loginUrl) is not managed correctly.
  • The error "You are not authorized to access this page" is raised when logging in.

In that case, make sure that, at the IIS server level, you have performed all the configuration steps described in the section Securing the server behind a proxy > Using IIS on Windows in Operating the server.

Issue with downloading regulatory reports

If nothing happens when you try to download a regulatory report (in the SonarQube UI at Project Information > Regulatory Report) and your SonarQube server is deployed on Kubernetes, the issue could be your download speed or the report size. To fix this, increase the body size and timeout settings in your Ingress annotations as follows:

annotations:
    cert-manager.io/cluster-issuer: sectigo
    nginx.ingress.kubernetes.io/proxy-body-size: "64m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"

