# Customizing Helm chart

While we only document the most pressing Helm chart customizations here, there are many other ways to customize the chart before installing it. See the [Helm chart README file](https://artifacthub.io/packages/helm/sonarqube/sonarqube-dce) and the [Customize the chart before installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing) documentation for more information. In particular, review the recommended production use case values.

You can customize the Helm chart:

* By editing the default values in the `values.yaml` file.
* Or directly on the Helm chart installation command line, if you are installing from the Helm repository.

{% hint style="info" %}
To set up SonarQube Server monitoring, see [introduction](https://docs.sonarsource.com/sonarqube-server/2025.2/setup-and-update/deploy-on-kubernetes/set-up-monitoring/introduction "mention") to Setting up monitoring.
{% endhint %}

## Enabling OpenShift <a href="#enabling-openshift" id="enabling-openshift"></a>

If you want to install the SonarQube Server Helm chart on OpenShift:

1. Set `OpenShift.enabled` to `true`.
2. Set `OpenShift.createSCC` to `false`.
3. If you want to make your application publicly visible with Routes, you can set `route.enabled` to `true`. Please check the [configuration details](https://artifacthub.io/packages/helm/sonarqube/sonarqube#openshift) in the Helm chart documentation to customize the Route based on your needs.
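Taken together, the steps above correspond to a `values.yaml` fragment like the following (a minimal sketch; the Route host is a placeholder, and further Route options are described in the Helm chart documentation):

```yaml
OpenShift:
  enabled: true
  createSCC: false
route:
  enabled: true
  host: <your-sonarqube-fqdn>   # placeholder; customize the Route as needed
```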

## Ensuring a restricted security level <a href="#ensuring-restricted-level" id="ensuring-restricted-level"></a>

<details>

<summary>About the Pod security level</summary>

Below is the [Pod security level](https://kubernetes.io/docs/concepts/security/pod-security-admission/#pod-security-levels) applied by default to each container. To apply a security level, a default `SecurityContext` is set on the container through the SonarQube Server Helm chart.

| **Container**                           | **Pod security level** | **Note**                                                                                                                 |
| --------------------------------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------------------ |
| SonarQube Server application containers | restricted             |                                                                                                                          |
| SonarQube Server init containers        | restricted             |                                                                                                                          |
| postgresql containers                   | restricted             |                                                                                                                          |
| init-sysctl                             | privileged             | Utility software that requires root access.                                                                              |
| init-fs                                 | baseline               | Utility software that requires root access. To disable the container, set `initFs.enabled` in the Helm chart to `false`. |

The `SecurityContext` below is set as default on all restricted containers.

```yaml
allowPrivilegeEscalation: false
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 1000
seccompProfile:
  type: RuntimeDefault
capabilities:
  drop: ["ALL"]
```

</details>

<details>

<summary>In a Kubernetes installation</summary>

To run the SonarQube Server Helm chart in a fully restricted namespace, you must disable the `init-sysctl` and `init-fs` containers by setting the following in the Helm chart:

* `initSysctl.enabled` to `false`.
* `initFs.enabled` to `false`.
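The two settings above can be set in `values.yaml` as:

```yaml
initSysctl:
  enabled: false
initFs:
  enabled: false
```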

Since these containers are used to perform some settings at the host/kernel level, disabling them may require additional configuration steps. For more information, see [Elasticsearch prerequisites](https://artifacthub.io/packages/helm/sonarqube/sonarqube#elasticsearch-prerequisites) in the Helm chart documentation.

</details>

<details>

<summary>In an OpenShift installation</summary>

The configuration described in **Enabling OpenShift** above forces the disabling of the `init-sysctl` and `init-fs` containers. These containers should not be required in the vast majority of cases for an OpenShift installation. Therefore, an OpenShift installation is compatible with restricted SCCv2 (Security Context Constraints).

</details>

## Securing communication within the cluster <a href="#securing-communication" id="securing-communication"></a>

You can secure communication within the SonarQube Server cluster by using mutual TLS. A secure connection can be set up at two different layers:

* Between search nodes.
* Between application and search nodes.

### Securing communication between search nodes <a href="#securing-communication-between-search-nodes" id="securing-communication-between-search-nodes"></a>

To establish a secure connection between search nodes:

<details>

<summary>Step 1: Generate CA and certificate</summary>

You must generate a Certificate Authority together with a certificate and private key for the nodes in your cluster.

You can use the `elasticsearch-certutil` tool to generate both the [Certificate Authority](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup.html#generate-certificates) and the [certificate](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup-https.html#encrypt-http-communication) (see [the Elastic documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup-https.html)):

1. Generate the CA.
2. Generate only one certificate that is valid for all the nodes:
   * Make sure you include all the search nodes’ hostnames. They will then be added as DNS names in the Subject Alternative Name. See the example below.
   * Optionally, choose the password that will be assigned to `searchNodes.searchAuthentication.userPassword`.
   * As a result of the certificate creation process, you should get a file called `http.p12`. Rename it to `elastic-stack-ca.p12`.
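As a sketch, the generation could look like the following (the exact `elasticsearch-certutil` prompts depend on your Elasticsearch version, and the paths are placeholders):

```bash
# 1. Generate the Certificate Authority (written to elastic-stack-ca.p12 by default).
bin/elasticsearch-certutil ca

# 2. Generate a single certificate for all search nodes, signed by that CA.
#    The interactive prompts let you list every search node hostname (they
#    become SAN DNS names) and set an optional password.
bin/elasticsearch-certutil http
```

The second command produces the `http.p12` file, which you then rename as described in the step above.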

**DNS names list example**

As an example, let’s assume that your cluster has three search nodes with the release’s name set to "sq", the chart’s name set to "sonarqube-dce", and the namespace set to "sonar". You will need to add the following DNS names in the SAN.

`sq-sonarqube-dce-search-0.sq-sonarqube-dce-search.sonar.svc.cluster.local`

`sq-sonarqube-dce-search-1.sq-sonarqube-dce-search.sonar.svc.cluster.local`

`sq-sonarqube-dce-search-2.sq-sonarqube-dce-search.sonar.svc.cluster.local`

`sq-sonarqube-dce-search`

{% hint style="info" %}
Remember to add the service name in the list (in this case, `sq-sonarqube-dce-search`). Note that you can retrieve the search nodes’ FQDN running `hostname -f` within one of the pods.
{% endhint %}

</details>

<details>

<summary>Step 2: Configure the authentication in the Helm chart</summary>

To configure the search node authentication in the Helm chart:

1. Set `searchNodes.searchAuthentication.enabled` to `true`.
2. Create the secret that will contain the certificate and assign its name to the `searchNodes.searchAuthentication.keyStoreSecret` parameter.
3. If you chose a password in the certificate generation process, set the `keyStorePassword` or `keyStorePasswordSecret` values with that password value.
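Put together, the configuration in `values.yaml` might look like this (a sketch; the secret and password names are placeholders):

```yaml
searchNodes:
  searchAuthentication:
    enabled: true
    keyStoreSecret: <secretName>   # secret containing the renamed elastic-stack-ca.p12
    # If you set a password during certificate generation, use one of:
    keyStorePassword: <password>
    # keyStorePasswordSecret: <passwordSecretName>
```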

</details>

### Securing communication between application and search nodes <a href="#securing-communication-between-application-and-search-nodes" id="securing-communication-between-application-and-search-nodes"></a>

To secure the communication between the application and search nodes:

1. Secure the communication between search nodes as described above.
2. Set `nodeEncryption.enabled` to `true`.
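Assuming search-node authentication is already configured as described above, the additional `values.yaml` setting is:

```yaml
nodeEncryption:
  enabled: true
```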

## Using custom certificates for your code repository <a href="#custom-certificates" id="custom-certificates"></a>

When you are working with your own Certificate Authority or in an environment that uses self-signed certificates for your code repository platform, you can create a secret containing this certificate and add this certificate to the Java truststore inside the SonarQube deployment.

To add a certificate to the Java truststore inside the SonarQube deployment:

1. Ask the relevant team to provide you with a PEM-format certificate or certificate chain. The commands below assume it is called `cert.pem`.
2. Generate the Kubernetes secret, for example with this command:

```bash
kubectl create secret generic <secretName> --from-file=cert.pem -n <sonarqubeNamespace>
```

3\. In SonarQube’s `values.yaml` file, add:

```yaml
caCerts:
  enabled: true
  secret: <secretName>
```

## Creating an Ingress to make SonarQube Server service accessible from outside <a href="#creating-ingress" id="creating-ingress"></a>

To make the SonarQube service accessible from outside of your cluster, you most likely need an Ingress.

The Sonar Helm chart has an optional dependency on the [NGINX Ingress Helm chart](https://kubernetes.github.io/ingress-nginx), which installs the NGINX Ingress controller. To install it through the SonarQube Server DCE Helm chart, set `ingress-nginx.enabled` to `true` in DCE’s `values.yaml`. Use this bundled controller only in a test environment. In production, we highly recommend that you use your own Ingress controller, since the controller is a critical part of the software chain.

To create an Ingress resource through the Helm chart:

* Add the following to your DCE’s `values.yaml`. This configuration uses the NGINX Ingress class and allows a request body size of at least 64 MB, which is what we recommend.

```yaml
ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - name: <Your Sonarqube FQDN>
      # Different clouds or configurations might need /* as the default path
      path: /
      # For additional control over serviceName and servicePort
      # serviceName: someService
      # servicePort: somePort
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "64m"
```

## Printing logs as JSON strings <a href="#printing-logs-as-json" id="printing-logs-as-json"></a>

SonarQube Server prints all logs in plain text to stdout/stderr. It can print logs as JSON strings instead if `logging.jsonOutput` is set to `true`. This enables log collection tools like [Loki](https://grafana.com/oss/loki/) to post-process the information provided by the application.
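For example, in `values.yaml`:

```yaml
logging:
  jsonOutput: true
```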

<details>

<summary>LogQL Example</summary>

With JSON logging enabled, you can define a LogQL query like this to keep only logs with the severity "ERROR" and display the name of the pod as well as the message:

```logql
{namespace="sonarqube-dce", app="sonarqube-dce"} | json | severity="ERROR" | line_format "{{.nodename}} {{.message}}"
```

</details>

## About persistence in Elasticsearch <a href="#about-es-persistence" id="about-es-persistence"></a>

SonarQube Server comes with a bundled Elasticsearch. As Elasticsearch is stateful, it makes sense to persist the Elasticsearch data for Data Center Edition (DCE) clusters because the cluster will survive the loss of any single search node without index corruption. Persistence is enabled for the DCE by default and managed with the Helm chart.
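As a sketch of where this is controlled, assuming the `searchNodes.persistence` block of the sonarqube-dce chart (check your chart version's `values.yaml` for the exact keys and defaults):

```yaml
searchNodes:
  persistence:
    enabled: true        # default for DCE; keeps Elasticsearch data across restarts
    storageClass: ""     # empty string uses the cluster's default StorageClass
```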

{% hint style="warning" %}
Disabling persistence results in a longer startup time until SonarQube Server is fully available, which can be significant on DCE clusters given the downtime required to rebuild the index.
{% endhint %}
