
Customizing the Data Center Edition (DCE) Helm chart before installation


This page documents only the most common Helm chart customizations; many more are available before installation. See the Helm chart README file and the Customize the chart before installing documentation for more information, in particular the recommended production use case values.

You can customize the Helm chart in the following ways:

Enabling OpenShift

If you want to install the SonarQube Server Helm chart on OpenShift:

  1. Set OpenShift.enabled to true.
  2. Set OpenShift.createSCC to false.
  3. If you want to make your application publicly visible with Routes, set route.enabled to true. Check the configuration details in the Helm chart documentation to customize the Route based on your needs.
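Taken together, the steps above correspond to a values override like the following sketch (the Route setting is optional, as noted above):

```yaml
# Sketch of the OpenShift settings described above
OpenShift:
  enabled: true
  createSCC: false
route:
  enabled: true   # optional: expose the application through an OpenShift Route
```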

Ensuring a restricted security level

About the Pod security level

The list below shows the Pod security level applied by default to each container. To apply a security level, the SonarQube Server Helm chart sets a default SecurityContext on the container.

  • SonarQube Server application containers: restricted
  • SonarQube Server init containers: restricted
  • postgresql containers: restricted
  • init-sysctl: privileged (utility software that requires root access)
  • init-fs: baseline (utility software that requires root access; to disable the container, set initFs.enabled in the Helm chart to false)

The SecurityContext below is set as default on all restricted containers. 

allowPrivilegeEscalation: false
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 1000
seccompProfile:
  type: RuntimeDefault
capabilities:
  drop: ["ALL"]
In a Kubernetes installation

To run the SonarQube Server Helm chart in a full restricted namespace, you must disable the init-sysctl and init-fs containers by setting in the Helm chart:

  • initSysctl.enabled to false
  • initFs.enabled to false

Since these containers are used to apply settings at the host/kernel level, disabling them may require additional configuration steps. For more information, see Elasticsearch prerequisites in the Helm chart documentation.
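As a values override, this looks like the following sketch. Note that with init-sysctl disabled, the Elasticsearch kernel prerequisites (such as a sufficient vm.max_map_count) must already be satisfied on the cluster nodes:

```yaml
# Sketch: run the chart in a fully restricted namespace
initSysctl:
  enabled: false
initFs:
  enabled: false
```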

In an OpenShift installation

The configuration described in Enabling OpenShift above automatically disables the init-sysctl and init-fs containers. These containers should not be required in the vast majority of OpenShift installations. Therefore, an OpenShift installation is compatible with restricted SCCv2 (Security Context Constraints).

Securing communication within the cluster

You can secure communication within the SonarQube Server cluster by using mutual TLS. A secure connection can be set up at two different layers:

  • Between search nodes.
  • Between application and search nodes.

Securing communication between search nodes

To establish a secure connection between search nodes:

Step 1: Generate CA and certificate

You must generate a Certificate Authority together with a certificate and private key for the nodes in your cluster. 

You can use the elasticsearch-certutil tool to generate both the Certificate Authority and the certificate (see the Elastic documentation):

  1. Generate the CA.
  2. Generate only one certificate that is valid for all the nodes:
    • Make sure you include all the search nodes' hostnames. They will then be added as DNS names in the Subject Alternative Name. See the example below.
    • Choose the password that will be assigned to searchNodes.searchAuthentication.userPassword (optional).
    • As a result of the certificate creation process, you should get a file called http.p12. Rename it to elastic-stack-ca.p12.

DNS names list example

As an example, let's assume that your cluster has three search nodes with the release's name set to "sq", the chart's name set to "sonarqube-dce", and the namespace set to "sonar". You will need to add the following DNS names in the SAN.

sq-sonarqube-dce-search-0.sq-sonarqube-dce-search.sonar.svc.cluster.local

sq-sonarqube-dce-search-1.sq-sonarqube-dce-search.sonar.svc.cluster.local

sq-sonarqube-dce-search-2.sq-sonarqube-dce-search.sonar.svc.cluster.local

sq-sonarqube-dce-search
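The SAN entries above follow a fixed pattern built from the release name, chart name, and namespace. A minimal sketch (a hypothetical helper, not part of the chart) that derives the list:

```python
def search_node_sans(release, chart, namespace, replicas=3,
                     cluster_domain="cluster.local"):
    """Derive the SAN DNS names for the search StatefulSet pods,
    plus the headless service name itself, as in the example above."""
    service = f"{release}-{chart}-search"
    names = [
        f"{service}-{i}.{service}.{namespace}.svc.{cluster_domain}"
        for i in range(replicas)
    ]
    names.append(service)  # the headless service name is also needed
    return names

# Values from the documentation's example:
for name in search_node_sans("sq", "sonarqube-dce", "sonar"):
    print(name)
```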

Step 2: Configure the authentication in the Helm chart

To configure the search node authentication in the Helm chart:

  1. Set searchNodes.searchAuthentication.enabled to true.
  2. Create the secret that will contain the certificate and assign its name to the searchNodes.searchAuthentication.keyStoreSecret parameter.
  3. If you chose a password during certificate generation, set the keyStorePassword or keyStorePasswordSecret value to that password.
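The steps above can be sketched as a values override like the following. The secret name search-keystore is an illustrative assumption; check the chart README for the exact key the secret must contain:

```yaml
# Sketch: search-node authentication settings described above
searchNodes:
  searchAuthentication:
    enabled: true
    # Name of a pre-created secret holding elastic-stack-ca.p12, e.g.:
    #   kubectl create secret generic search-keystore \
    #     --from-file=elastic-stack-ca.p12 -n sonar
    keyStoreSecret: search-keystore
    # Only if the certificate was generated with a password:
    keyStorePassword: <your-keystore-password>
```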

Securing communication between application and search nodes

To secure the communication between the application and search nodes:

  1. Secure the communication between search nodes as described above. 
  2. Set nodeEncryption.enabled to true.
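As a values override, this adds one setting on top of the search-node authentication configured in the previous section:

```yaml
# Sketch: encrypt traffic between application and search nodes
nodeEncryption:
  enabled: true
```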

Creating an Ingress to make SonarQube Server service accessible from outside

To make the SonarQube service accessible from outside of your cluster, you most likely need an Ingress.

The SonarSource Helm chart has an optional dependency on the NGINX Ingress Helm chart, which installs the NGINX Ingress controller. To install the NGINX Ingress Helm chart through the SonarQube Server DCE Helm chart, set ingress-nginx.enabled to true in the DCE's values.yaml. Use this bundled controller only in a test environment. In a production environment, we highly recommend using your own Ingress controller, since the controller is a critical part of the software chain.

To create an Ingress resource through the Helm chart:

  • Add the following to your DCE's values.yaml. This configuration uses the NGINX Ingress class with a body size of at least 64 MB, as recommended.
ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - name: <Your Sonarqube FQDN>
      # Different clouds or configurations might need /* as the default path
      path: /
      # For additional control over serviceName and servicePort
      # serviceName: someService
      # servicePort: somePort
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "64m"

Printing logs as JSON strings

SonarQube Server prints all logs in plain text to stdout/stderr. If logging.jsonOutput is set to true, it prints logs as JSON strings instead. This enables log collection tools like Loki to post-process the information provided by the application.
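As a values override, this is simply:

```yaml
logging:
  jsonOutput: true
```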

LogQL Example

With JSON logging enabled, you can define a LogQL query like the following to keep only logs with the severity "ERROR" and display the name of the pod as well as the message:

{namespace="sonarqube-dce", app="sonarqube-dce"} | json | severity="ERROR" | line_format "{{.nodename}} {{.message}}"

About persistence in Elasticsearch

SonarQube Server comes with a bundled Elasticsearch. Since Elasticsearch is stateful, it makes sense to persist its data for Data Center Edition (DCE) clusters: with persistence, the cluster survives the loss of any single search node without index corruption. Persistence is enabled by default for the DCE and managed through the Helm chart.
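A sketch of the related values follows; the exact keys can vary between chart versions, so verify them against your chart's README before use:

```yaml
# Sketch: Elasticsearch data persistence for the search nodes
searchNodes:
  persistence:
    enabled: true                          # DCE default
    # storageClass: <your-storage-class>   # illustrative
    # size: 5Gi                            # illustrative
```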



© 2008-2024 SonarSource SA. All rights reserved. SONAR, SONARSOURCE, SONARQUBE, and CLEAN AS YOU CODE are trademarks of SonarSource SA.
