Customizing Helm chart
How to perform the most important SonarQube Helm chart customizations when working with SonarQube Server.
This documentation covers only the most important SonarQube Server Helm chart customizations; the chart offers many more options you can set before installing. See the Helm chart README file for more information, in particular the recommended production use case values.
You can customize the SonarQube Server Helm chart:
- By editing the default values in the values.yaml file.
- Or directly on the Helm chart installation command line; see Installing Helm chart.
Parameters passed on the command line take precedence over parameters set in values.yaml.
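For illustration only, here is a sketch of a command-line override, assuming the chart is installed from the SonarQube Helm repository as described in Installing Helm chart (the release name, namespace, and the ingress.enabled parameter are just examples):
helm upgrade --install sonarqube sonarqube/sonarqube \
  --namespace sonarqube \
  -f values.yaml \
  --set ingress.enabled=true
In this sketch, ingress.enabled=true passed with --set would override any ingress.enabled value set in values.yaml.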
Enabling OpenShift
If you want to install SonarQube Server on OpenShift, you must enable OpenShift in the Helm chart. In that case:
- The Helm chart will auto-configure itself to comply with the default OpenShift SCCs (Security Context Constraints). 
- Not using a default SCC in your OpenShift cluster may cause problems. 
To enable OpenShift in the Helm chart:
- Set OpenShift.enabled to true.
- Set OpenShift.createSCC to false.
- If you want to make your application publicly visible with Routes, you can set route.enabled to true. Please check the configuration details in the Helm chart documentation to customize the Route based on your needs.
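For reference, a minimal values.yaml excerpt matching the steps above could look like this (the route section is only needed if you expose SonarQube Server through a Route):
OpenShift:
  enabled: true
  createSCC: false
route:
  enabled: true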
Ensuring a restricted security level
Setting access to your external database
You must configure access to your external database (unless you are using SonarQube for test purposes with the embedded H2 database).
To do so:
- Set jdbcOverwrite.enabled to true.
- Set jdbcOverwrite.jdbcUrl to the database URL and jdbcOverwrite.jdbcUsername to the database username.
- Store the database password in a Kubernetes secret and set jdbcOverwrite.jdbcSecretName to the secret’s name.
- If you use an Oracle database:
  - Let the Helm chart download and inject the corresponding JDBC driver into SonarQube by setting jdbcOverwrite.oracleJdbcDriver.url to the URL of the Oracle JDBC driver to be downloaded.
  - If the download requires credentials, set jdbcOverwrite.oracleJdbcDriver.netrcCreds to the name of the Kubernetes secret containing the .netrc file that stores them.
 
For more information, see this section on the Helm chart’s ArtifactHub page.
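As an illustration only, the following sketch combines these settings for a PostgreSQL database; the host, database, user, and secret names are placeholders, and the key name used inside the secret (jdbc-password here) is an assumption, so check the chart README for the key your chart version expects:
# Store the database password in a Kubernetes secret (the key name is an assumption; verify against the chart README)
kubectl create secret generic sonarqube-jdbc -n <sonarqubeNamespace> \
  --from-literal=jdbc-password=<yourDatabasePassword>
Then, in values.yaml:
jdbcOverwrite:
  enabled: true
  jdbcUrl: jdbc:postgresql://<databaseHost>:5432/<databaseName>
  jdbcUsername: <databaseUsername>
  jdbcSecretName: sonarqube-jdbc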
Enabling persistency in Elasticsearch
SonarQube Server comes with a bundled Elasticsearch, and since Elasticsearch is stateful, so is SonarQube Server. There is an option to persist the Elasticsearch indexes in a Persistent Volume, but because the Kubernetes cluster regularly kills Pods, these indexes can become corrupted. By default, persistency is disabled in the Helm chart.
Enabling persistency significantly decreases the startup time of the SonarQube Server Pod, but you risk corrupting your Elasticsearch index. You can enable persistency by adding the following to values.yaml:
persistence:
  enabled: true

Leaving persistency disabled results in a longer startup time until SonarQube Server is fully available, but you won’t lose any data as SonarQube Server will persist all data in the database.
Using custom certificates for your code repository
When you are working with your own Certificate Authority or in an environment that uses self-signed certificates for your code repository platform, you can create a secret containing this certificate and add it to the Java truststore inside the SonarQube deployment.
To add a certificate to the Java truststore inside the SonarQube deployment:
- Ask the relevant team to provide you with a PEM format certificate or certificate chain. We will assume it is called cert.pem in the following commands.
- Generate the Kubernetes secret, e.g. with this command:
kubectl create secret generic <secretName> --from-file=cert.pem -n <sonarqubeNamespace>
- In SonarQube’s values.yaml file, add:
caCerts:
  enabled: true
  secret: <secretName>

Creating an Ingress to make the SonarQube Server service accessible from outside
To make the SonarQube Server service accessible from outside of your cluster, you most likely need an Ingress.
To create an Ingress resource through the Helm chart:
- Add the following to your SonarQube Server’s values.yaml. In this configuration, we use the NGINX Ingress class with a proxy body size of at least 64MB, as we recommend.
ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - name: <Your SonarQube Server's FQDN>
      # Different clouds or configurations might need /* as the default path
      path: /
      # For additional control over serviceName and servicePort
      # serviceName: someService
      # servicePort: somePort
  annotations: 
    nginx.ingress.kubernetes.io/proxy-body-size: "64m"

Related pages
- Installing Data Center Edition on Kubernetes: Installing on Kubernetes or OpenShift