Customizing the Data Center Edition (DCE) Helm chart before installation
While this documentation only covers the most pressing Helm chart customizations, there are other ways to customize the chart before installing. See the Helm chart README file and the Customize the chart before installing documentation for more information. In particular, see the recommended production use case values.
You can customize the Helm chart:
- By editing the default values in the `values.yaml` file.
- Directly on the Helm chart installation command line, if installing from the Helm repository (see Installing the Helm chart from Helm repository).
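For example, both methods can override the same value. A sketch, where the key is illustrative (not a real chart parameter) and the `sonarqube` repository alias is an assumption based on a typical setup:

```yaml
# In values.yaml (illustrative key):
someSection:
  someKey: someValue
# Equivalent on the command line when installing from the Helm repository:
#   helm install sonarqube sonarqube/sonarqube-dce --set someSection.someKey=someValue
```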
To set up SonarQube Server monitoring, see Setting up the monitoring of a Kubernetes deployment.
Enabling OpenShift
If you want to install the SonarQube Server Helm chart on OpenShift:
- Set `OpenShift.enabled` to `true`.
- Set `OpenShift.createSCC` to `false`.
- If you want to make your application publicly visible with Routes, you can set `route.enabled` to `true`. Check the configuration details in the Helm chart documentation to customize the Route based on your needs.
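Put together, a minimal `values.yaml` fragment for an OpenShift install might look like this (a sketch using only the keys described above):

```yaml
OpenShift:
  enabled: true
  createSCC: false
route:
  enabled: true   # only if you want to expose the application with Routes
```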
Ensuring a restricted security level
About the Pod security level
Below is the Pod security level applied by default to each container. To apply a security level, a default `SecurityContext` is set on the container through the SonarQube Server Helm chart.
| Container | Pod security level | Note |
|---|---|---|
| SonarQube Server application containers | restricted | |
| SonarQube Server init containers | restricted | |
| postgresql containers | restricted | |
| init-sysctl | privileged | Utility software that requires root access. |
| init-fs | baseline | Utility software that requires root access. To disable the container, set `initFs.enabled` in the Helm chart to `false`. |
The `SecurityContext` below is set as default on all restricted containers.
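As an illustration, a restricted-profile `SecurityContext` typically looks like the following; this is an assumption based on the Kubernetes restricted Pod Security Standard, so check the chart's `values.yaml` for the actual default applied by your chart version:

```yaml
securityContext:
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault
```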
In a Kubernetes installation
To run the SonarQube Server Helm chart in a fully restricted namespace, you must disable the `init-sysctl` and `init-fs` containers by setting the following in the Helm chart:
- `initSysctl.enabled` to `false`.
- `initFs.enabled` to `false`.
Since these containers apply settings at the host/kernel level, disabling them may require additional configuration steps. For more information, see Elasticsearch prerequisites in the Helm chart documentation.
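The two settings above can be expressed in `values.yaml` as:

```yaml
initSysctl:
  enabled: false
initFs:
  enabled: false
```

Remember that the Elasticsearch kernel prerequisites (such as `vm.max_map_count`) then have to be applied on the cluster nodes themselves.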
In an OpenShift installation
The configuration described in Enabling OpenShift above forces the disabling of the `init-sysctl` and `init-fs` containers. These containers should not be required in the vast majority of OpenShift installations. Therefore, an OpenShift installation is compatible with restricted SCCv2 (Security Context Constraints).
Securing communication within the cluster
You can secure communication within the SonarQube Server cluster by using mutual TLS. A secure connection can be set up at two different layers:
- Between search nodes.
- Between application and search nodes.
Securing communication between search nodes
To establish a secure connection between search nodes:
Step 1: Generate the CA and certificate
You must generate a Certificate Authority together with a certificate and private key for the nodes in your cluster.
You can use the `elasticsearch-certutil` tool to generate both the Certificate Authority and the certificate (see the Elastic documentation):
- Generate the CA.
- Generate a single certificate that is valid for all the nodes:
  - Make sure you include all the search nodes' hostnames. They will then be added as DNS names in the Subject Alternative Name (SAN). See the example below.
  - Optionally, choose the password that will be assigned to `searchNodes.searchAuthentication.userPassword`.
  - As a result of the certificate creation process, you should get a file called `http.p12`. Rename it to `elastic-stack-ca.p12`.
DNS names list example
As an example, let's assume that your cluster has three search nodes, with the release name set to "sq", the chart name set to "sonarqube-dce", and the namespace set to "sonar". You will need to add the following DNS names in the SAN:

```
sq-sonarqube-dce-search-0.sq-sonarqube-dce-search.sonar.svc.cluster.local
sq-sonarqube-dce-search-1.sq-sonarqube-dce-search.sonar.svc.cluster.local
sq-sonarqube-dce-search-2.sq-sonarqube-dce-search.sonar.svc.cluster.local
sq-sonarqube-dce-search
```

Remember to add the service name to the list (in this case, `sq-sonarqube-dce-search`). Note that you can retrieve the search nodes' FQDN by running `hostname -f` within one of the pods.
Step 2: Configure the authentication in the Helm chart
To configure the search node authentication in the Helm chart:
- Set `searchNodes.searchAuthentication.enabled` to `true`.
- Create the secret that will contain the certificate and assign its name to the `searchNodes.searchAuthentication.keyStoreSecret` parameter.
- If you chose a password in the certificate generation process, set the `keyStorePassword` or `keyStorePasswordSecret` value to that password.
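The steps above map to a `values.yaml` fragment like the following sketch. The secret name is a placeholder for a secret you create beforehand (e.g. from your `elastic-stack-ca.p12` file), and it assumes the `keyStorePassword` key sits under `searchNodes.searchAuthentication` like the others:

```yaml
searchNodes:
  searchAuthentication:
    enabled: true
    keyStoreSecret: search-node-keystore   # placeholder: name of your pre-created secret
    keyStorePassword: changeit             # only if you chose a password in Step 1
```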
Securing communication between application and search nodes
To secure the communication between the application and search nodes:
- Secure the communication between search nodes as described above.
- Set `nodeEncryption.enabled` to `true`.
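In `values.yaml`:

```yaml
nodeEncryption:
  enabled: true
```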
Creating an Ingress to make SonarQube Server service accessible from outside
To make the SonarQube Server service accessible from outside your cluster, you most likely need an Ingress.
The SonarSource Helm chart has an optional dependency on the NGINX Ingress Helm chart, which installs the NGINX Ingress controller. To install the NGINX Ingress Helm chart through the SonarQube Server DCE Helm chart, set `ingress-nginx.enabled` to `true` in DCE's `values.yaml`. You should use this controller only in a test environment. In a production environment, we highly recommend using your own Ingress controller, since the controller is a critical part of the software chain.
To create an Ingress resource through the Helm chart, add the following to your DCE's `values.yaml`. In this configuration, we use the NGINX Ingress class with a body size of at least 64 MB, as recommended.
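A sketch of such an Ingress configuration; the hostname is a placeholder, the annotation applies to the NGINX Ingress controller, and the exact key names may vary between chart versions (check the chart README):

```yaml
ingress:
  enabled: true
  hosts:
    - name: sonarqube.example.com   # placeholder hostname
  ingressClassName: nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "64m"
```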
Printing logs as JSON strings
SonarQube Server prints all logs in plain text to stdout/stderr. It can print logs as JSON-String if the variable logging.jsonOutput
is set to true
. This will enable log collection tools like Loki to post-process the information provided by the application.
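In `values.yaml`:

```yaml
logging:
  jsonOutput: true
```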
LogQL example
With JSON logging enabled, you can define a LogQL query to filter only logs with the severity "ERROR" and display the name of the pod as well as the message:
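A sketch of such a query; the `app` and `pod` labels and the JSON field names (`severity`, `message`) are assumptions to adapt to your Loki stream labels and SonarQube Server's actual log fields:

```
{app="sonarqube"} | json | severity="ERROR" | line_format "{{.pod}} {{.message}}"
```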
About persistence in Elasticsearch
SonarQube Server comes with a bundled Elasticsearch. As Elasticsearch is stateful, it makes sense to persist the Elasticsearch data in Data Center Edition (DCE) clusters: with persistence, the cluster survives the loss of any single search node without index corruption. Persistence is enabled by default for the DCE and managed through the Helm chart.
Disabling persistence would result in a longer startup time until SonarQube Server is fully available, which can be a significant factor given the downtime caused by the index rebuild on DCE clusters.