Server host requirements
This section describes the general requirements, recommendations, and limitations for the machine running SonarQube Community Build in case of a ZIP, Docker, or Kubernetes installation. Additional requirements specific to an installation type may be mentioned in the respective installation section.
Limitations
Running SonarQube Community Build on environments where Elasticsearch-related Linux prerequisites can’t be met is not supported.
This concerns (the list is not exhaustive): Azure App Service, AWS App Runner, and AWS Fargate. Using these services may cause issues that will ultimately make SonarQube unreliable and unsuitable for enterprise production use.
See On Linux systems.
Supported operating systems
SonarQube Community Build can run on the following operating systems (note that z/OS is not supported):
Linux (x64, AArch64)
Windows (x64)
macOS (x64, AArch64)
Hardware requirements
In the table below:
A small-scale installation is typically a SonarQube installation that supports up to 1M lines of code.
A large-scale installation is typically a SonarQube installation that supports up to 50M lines of code.
RAM
For a small-scale installation:
• 4 GB of RAM
For a large-scale installation:
• 16 GB of RAM
Processor
64-bit system.
For a small-scale installation:
• 2 cores
For a large-scale installation:
• 8 cores
In addition, for a server installation from a Docker image:
• amd64 architecture or arm64-based Apple Silicon
Disk space
Depends on how much code you analyze with SonarQube.
For a small-scale installation:
• 30 GB
Free disk space
10% free disk space.
Note: This requirement stems from Elasticsearch's susceptibility to crashing if disk usage exceeds its high disk watermark, which is set at 90% by default. For more information, see the Elasticsearch documentation.
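As a quick sanity check (using / as a stand-in for whichever partition holds the SonarQube data directory), you can confirm that disk usage is safely below the 90% watermark:

```shell
# Report the usage percentage of the partition holding the data directory.
# "/" is an example mount point; substitute the partition you actually use.
usage=$(df -P / | awk 'NR==2 {gsub("%", "", $5); print $5}')
echo "disk usage: ${usage}%"
```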
Hardware configuration recommendations
Elasticsearch is used by SonarQube Community Build in the background. To ensure good performance of your SonarQube Community Build, follow the Elasticsearch-related recommendations below.
Disk
• Free disk space is an absolute requirement. Elasticsearch implements a safety mechanism to prevent the disk from being flooded with index data. This mechanism locks all indices in read-only mode when a 95% disk usage watermark is reached.
• Disk access can easily become the bottleneck of Elasticsearch. If you can afford SSDs, they are by far superior to any spinning media. SSD-backed nodes see boosts in both query and indexing performance. If you use spinning media, try to obtain the fastest disks possible (high-performance server disks with 15,000 RPM drives).
• Using RAID 0 is an effective way to increase disk speed, for both spinning disks and SSDs. There is no need for mirroring or parity variants of RAID, since redundancy is already provided by Elasticsearch replicas and by the database, which remains the primary storage.
• Do not use remote-mounted storage, such as NFS, SMB/CIFS, or network-attached storage (NAS). They are often slower, display larger latencies with a wider deviation in average latency, and are a single point of failure.
• You may put <sonarqubeHome>/data (where <sonarqubeHome> is the SonarQube Community Build installation directory; /opt/sonarqube is the recommended location) on a separate partition to help alleviate the single point of failure mentioned above.
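As a sketch, the data and temp locations can be pointed at a dedicated partition via the sonar.path.data and sonar.path.temp properties in <sonarqubeHome>/conf/sonar.properties; the mount point below is an example, not a required path:

```properties
# Example only: /var/sonarqube-data is a hypothetical mount point for a
# dedicated partition holding the Elasticsearch indices and temp files.
sonar.path.data=/var/sonarqube-data/data
sonar.path.temp=/var/sonarqube-data/temp
```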
RAM
It is recommended that you allocate 50% of the available memory to the Elasticsearch heap while leaving the other 50% free. Lucene (used by Elasticsearch) is designed to leverage the underlying OS for caching in-memory data structures.
• Don’t allocate more than 32 GB.
See the following Elasticsearch articles for more details:
• Elasticsearch Guide: Heap Sizing
• A Heap of Trouble
• Elasticsearch Reference: JVM heap size
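As an illustration (the values are assumptions, not prescriptions), the search-node heap can be sized through the sonar.search.javaOpts property in <sonarqubeHome>/conf/sonar.properties; on a host with 16 GB of RAM you might reserve roughly half for the Elasticsearch heap:

```properties
# Hypothetical sizing for a 16 GB host: ~8 GB heap for the search process,
# leaving the remainder to the OS file-system cache that Lucene relies on.
sonar.search.javaOpts=-Xms8g -Xmx8g
```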
CPU
If you need to choose between faster CPUs or more cores, then choose more cores. The extra concurrency that multiple cores offer will far outweigh a slightly faster clock speed.
Data is distributed across multiple nodes by nature, so execution time depends on the slowest node. It’s better to have multiple medium boxes than a mix of one fast and one slow box.
I/O scheduler for SSD
If you use SSDs, do not use the CFQ (Completely Fair Queuing) I/O scheduler (the default I/O scheduler under most Linux distributions). Use either the deadline or the NOOP scheduler instead.
When you write data to disk, the I/O scheduler decides when that data is actually sent to the disk. CFQ allocates "time slices" to each process, and then optimizes the delivery of these various queues to the disk. It is optimized for spinning media: the nature of rotating platters means it is more efficient to write data to disk based on physical layout. The deadline scheduler optimizes based on how long writes have been pending, while NOOP is just a simple FIFO queue.
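On a Linux host you can inspect the active scheduler through sysfs; this sketch simply prints what is configured (note that newer multi-queue kernels expose none and mq-deadline, which play the roles of NOOP and deadline respectively):

```shell
# Print the scheduler list for each block device; the bracketed entry is the
# one currently in use, e.g. "[mq-deadline] kyber none".
schedulers=$(cat /sys/block/*/queue/scheduler 2>/dev/null || true)
echo "${schedulers:-no scheduler entries found (not a Linux host?)}"
```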
Hard drives
They should have excellent read and write performance.
Most importantly, the "data" folder houses the Elasticsearch indices on which a huge amount of I/O will be done when the server is up and running. Read and write hard drive performance will therefore have a big impact on the overall SonarQube Community Build host performance.
Software requirements
Client web browser
• Microsoft Edge: latest version
• Mozilla Firefox: latest version
• Google Chrome: latest version
• Safari: latest version
Java
Applies only to a server installation from the ZIP file.
• Oracle JRE or OpenJDK
• Java version 17 or 21
• Recommendation: Use Java CPU (critical patch update) releases.
Note: SonarQube Community Build can analyze Java source files regardless of the version of Java they comply with.
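A quick sanity check from the shell (this assumes java on the PATH is the binary the server will actually use; adjust the path if you run a dedicated JRE):

```shell
# Report the JVM found on PATH; the ZIP installation requires Java 17 or 21.
if command -v java >/dev/null 2>&1; then
  java -version 2>&1 | head -n 1
else
  echo "java not found on PATH"
fi
```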