Customizing the Data Center Edition (DCE) Helm chart before installation
While this documentation covers only the most pressing Helm chart customizations, there are other ways to customize the chart before installing it. See the Helm chart README file and the Customize the chart before installing documentation for more information. In particular, see the recommended production use case values.
You can customize the Helm chart:
- By editing the default values in the values.yaml file.
- Or directly in the Helm chart installation command line (see Installing the Helm chart).
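As a sketch of the command-line approach, individual values can be overridden at install time with `--set` (the release name, namespace, and overridden parameter below are illustrative; adapt them to your setup):

```shell
# Add the SonarQube Helm repository and install the DCE chart,
# overriding a single value on the command line (illustrative parameter).
helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube
helm repo update
helm upgrade --install sonarqube-dce sonarqube/sonarqube-dce \
  --namespace sonarqube-dce --create-namespace \
  --set applicationNodes.replicaCount=2
```

For more than a handful of overrides, passing a custom values file with `-f my-values.yaml` is usually easier to maintain than repeated `--set` flags.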
To set up:
- Monitoring: see Setting up the monitoring of a Kubernetes deployment.
- Autoscaling: see Setting up autoscaling on Kubernetes for your Data Center Edition.
You must configure access to your database (unless you want to use SonarQube for test purposes with the embedded H2 database). See Setting access to your external database in Customizing the Helm chart (Developer and Enterprise Edition).
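A minimal values sketch for pointing the chart at an external PostgreSQL database is shown below. The `jdbcOverwrite` parameter names are taken from the chart's values.yaml; the host, database, and secret names are placeholders, so verify them against your chart version:

```yaml
# Disable the bundled PostgreSQL and point SonarQube at an external database
# (placeholder host/credentials — adapt to your environment).
postgresql:
  enabled: false
jdbcOverwrite:
  enable: true
  jdbcUrl: "jdbc:postgresql://postgres.example.com:5432/sonarqube"
  jdbcUsername: "sonarqube"
  jdbcSecretName: "sonarqube-jdbc"        # Kubernetes Secret holding the password
  jdbcSecretPasswordKey: "password"       # key within that Secret
```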
You can:
- Enable OpenShift. See Enabling OpenShift in Customizing the Helm chart (Developer and Enterprise Edition).
- Ensure a restricted security level in your OpenShift or Kubernetes installation. See Ensuring a restricted security level in Customizing the Helm chart (Developer and Enterprise Edition).
- Create an Ingress to make SonarQube accessible from outside the cluster. See Creating an Ingress in Customizing the Helm chart (Developer and Enterprise Edition).
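As an illustration of the Ingress option, a values sketch is shown below. The parameter names are assumed from the chart's ingress section and the hostname and class are placeholders, so check them against your chart version:

```yaml
# Expose SonarQube through an Ingress (placeholder host and class).
ingress:
  enabled: true
  ingressClassName: nginx
  hosts:
    - name: sonarqube.example.com
```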
If you use custom certificates for your code repository, see Using custom certificates for your code repository in Customizing the Helm chart (Developer and Enterprise Edition).
Kubernetes services automatically discover SonarQube cluster nodes, eliminating the need to specify them in a node's configuration.
Storing your JWT token
To keep user sessions alive during a restart, you need to store the JWT token you generated during the pre-installation steps. To do so, store the token in the applicationNodes.jwtSecret parameter.
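In values.yaml, this looks like the sketch below. The value is a placeholder; use the base64-encoded secret you generated during pre-installation:

```yaml
# Persist the JWT secret so user sessions survive a restart
# (placeholder value — substitute your own base64-encoded secret).
applicationNodes:
  jwtSecret: "dGhpcy1pcy1vbmx5LWEtcGxhY2Vob2xkZXI="
```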
Deploying with Istio
The DCE Helm chart can be installed in clusters that have Istio pre-installed (SonarQube Server is tested using Istio in sidecar mode).
When deploying SonarQube in an Istio service mesh environment, you need to configure fixed ports for Hazelcast communication between application nodes. This is required because Istio's sidecar proxy needs to know all ports in advance for traffic management, security policies, and observability.
By default, SonarQube's Hazelcast cluster uses dynamic port allocation, which conflicts with Istio's requirement for explicit port declarations in service definitions and network policies. To ensure that Istio can properly route traffic, apply security policies, and provide telemetry for all inter-node communication within the SonarQube cluster, configure fixed ports for the Hazelcast communication channels by setting the following parameters:
- applicationNodes.webPort: port used by the Web process for cluster communication.
- applicationNodes.cePort: port used by the Compute Engine process for cluster communication.
Example configuration:
applicationNodes:
  webPort: 4023 # Web process communication
  cePort: 4024 # Compute Engine process communication
About persistence in Elasticsearch
SonarQube Server comes with a bundled Elasticsearch. As Elasticsearch is stateful, it makes sense to persist the Elasticsearch data for Data Center Edition (DCE) clusters because the cluster will survive the loss of any single search node without index corruption. Persistence is enabled for the DCE by default and managed with the Helm chart.
Disabling persistence would result in a longer startup time until SonarQube Server is fully available, because the Elasticsearch indexes must be rebuilt; on DCE clusters, this rebuild can represent significant downtime.
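The default described above corresponds to a values sketch like the following. The parameter path is assumed from the chart's search node section, so verify it against your chart version's values.yaml:

```yaml
# Elasticsearch persistence for DCE search nodes (enabled by default;
# assumed parameter path — verify against your chart version).
searchNodes:
  persistence:
    enabled: true
```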