
Deploying a SonarQube cluster on Kubernetes

This page applies to deploying SonarQube Data Center Edition on Kubernetes. For information on deploying Community, Developer, and Enterprise editions of SonarQube on Kubernetes, see this documentation.

Overview

You can find the SonarQube DCE Helm chart on GitHub.

Your feedback is welcome at our community forum.

Kubernetes environment recommendations

When you want to operate SonarQube on Kubernetes, consider the following recommendations.

Supported versions

The SonarQube Helm chart should only be used with the latest version of SonarQube and a supported version of Kubernetes. There is a dedicated Helm chart for the LTA (Long-Term Active) version of SonarQube that follows the same patch policy as the application, while also being compatible with the supported versions of Kubernetes.
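As an illustration, and assuming network access to the SonarSource Helm repository (the same repository used in Installing from the Helm repository below), you can list the published charts and versions to find the one matching your SonarQube version:

helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube
helm repo update
helm search repo sonarqube --versions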

Pod Security Standards

Here is the list of containers that are compatible with the Pod Security levels:

  • privileged:
    • init-sysctl
  • baseline:
    • init-fs
  • restricted:
    • SQ application containers
    • SQ init containers
    • PostgreSQL containers

This is achieved by setting this SecurityContext as default on most containers:

allowPrivilegeEscalation: false
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 1000
seccompProfile:
  type: RuntimeDefault
capabilities:
  drop: ["ALL"]

Based on that, you can run the SonarQube Helm chart in a fully restricted namespace by deactivating the initSysctl.enabled and initFs.enabled parameters, whose corresponding init containers require root access.
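As a minimal values.yaml sketch for such a restricted setup, assuming your cluster administrator has already applied the required kernel settings (for example vm.max_map_count for Elasticsearch) on the nodes:

# Disable the init containers that need elevated privileges.
# Assumption: node-level kernel settings such as vm.max_map_count
# are managed outside the chart by the cluster administrator.
initSysctl:
  enabled: false
initFs:
  enabled: false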

For more information, see the production-use-case or take a look at the values.yaml file.

Helm chart specifics

We try to provide good defaults with the Helm chart, but there are some points to consider when working with SonarQube on Kubernetes. Please read the following sections carefully to make the correct decisions for your environment.

Persistency

SonarQube comes with a bundled Elasticsearch and, as Elasticsearch is stateful, so is SonarQube. For Data Center Edition (DCE) clusters, it makes sense to persist the Elasticsearch data because the cluster will survive the loss of any single search node without index corruption. By default, persistency is enabled for the DCE, and managed with the Helm chart.

Enabling persistency decreases the project reload time so that accessing project data is much faster. Although there is no need to change the default value in DCE, you can manage persistency with the following parameter in the values.yaml:

persistence:
  enabled: true

Disabling persistency results in a longer startup time before SonarQube is fully available, since the search indexes must be rebuilt; on DCE clusters this downtime can be a very large factor.

Ingress Creation

To make the SonarQube service accessible from outside of your cluster, you most likely need an ingress. Creating a new ingress is also covered by the Helm chart. See the following section for help with creating one.

Ingress Class

The SonarSource Helm chart has an optional dependency on the NGINX-ingress Helm chart. If you already have NGINX-ingress present in your cluster, you can use it.

If you want to install NGINX as well, add the following to your values.yaml.

nginx:
  enabled: true

We recommend using the ingress-class NGINX with a body size of at least 8MB. This can be achieved with the following changes to your values.yaml:

ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - name: <Your Sonarqube FQDN>
      # Different clouds or configurations might need /* as the default path
      path: /
      # For additional control over serviceName and servicePort
      # serviceName: someService
      # servicePort: somePort
  annotations: 
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"

Monitoring

See Setting up the monitoring of a Kubernetes deployment.

Log Format

SonarQube prints all logs in plain text to stdout/stderr. It can print logs as JSON strings if the logging.jsonOutput parameter is set to true. This enables log collection tools like Loki to post-process the information provided by the application.
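For example, the following values.yaml snippet turns on JSON log output:

logging:
  # Print logs as JSON strings instead of plain text
  jsonOutput: true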

LogQL Example

With JSON logging enabled, you can define a LogQL query like this to filter only logs with the severity "ERROR" and display the name of the pod as well as the message:

{namespace="sonarqube-dce", app="sonarqube-dce"}| json | severity="ERROR" | line_format "{{.nodename}} {{.message}}"

ES Cluster Authentication

Since SonarQube 8.9, you can enable basic security for the Search Cluster in SonarQube. To benefit from this additional layer of security on Kubernetes as well, you need to provide a PKCS#12 container with the required certificates to our Helm chart. The required secret can be created like this:

kubectl create secret generic <NAME OF THE SECRET> --from-file=/PATH/TO/YOUR/PKCS12.container=elastic-stack-ca.p12 -n <NAMESPACE>
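The secret then needs to be referenced in your values.yaml. The parameter names below are a sketch based on the chart's search authentication options and may differ in your chart version; check the Helm chart README for the authoritative keys:

searchNodes:
  searchAuthentication:
    enabled: true
    # Name of the secret created with the kubectl command above
    # (assumption: key names match your chart version)
    keyStoreSecret: <NAME OF THE SECRET>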

Other Configuration Options

This documentation only contains the most important Helm chart customizations. See the Customize the chart before installing documentation and the Helm chart README for more possibilities on customizing the Helm chart.

Known limitations

Problems with Azure Fileshare PVC

Currently, there is a known limitation when working on AKS that revolves around the use of Azure Fileshare. We recommend using another storage class for persistency on AKS.

Installing from the Helm repository

Currently only Helm 3 is supported.

To install the Helm chart from the Helm repository, you can use the following commands:

helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube
helm repo update
kubectl create namespace sonarqube-dce
export JWT_SECRET=$(echo -n "your_secret" | openssl dgst -sha256 -hmac "your_key" -binary | base64)
helm upgrade --install -n sonarqube-dce sonarqube-dce --set ApplicationNodes.jwtSecret=$JWT_SECRET sonarqube/sonarqube-dce

The --set flag on the helm upgrade --install -n sonarqube-dce sonarqube-dce line allows you to customize the Helm chart values.

The echo command allows you to set the value of your Application authentication JWT token. This value must be an HS256 key encoded with base64.
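Alternatively, instead of passing individual values with --set, you can keep your customizations (including ApplicationNodes.jwtSecret) in a local values file and pass it with Helm's standard -f flag. The filename my-values.yaml below is only illustrative:

helm upgrade --install -n sonarqube-dce sonarqube-dce -f my-values.yaml sonarqube/sonarqube-dce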

Installing from the Google Cloud Platform

SonarQube DCE can be deployed on Kubernetes through the Google Marketplace, using its "Click to Deploy" feature with the following current limitations: 

  • SonarQube DCE can't be deployed into "Autopilot" clusters.
  • SonarQube DCE is not compatible with Istio.

Prerequisites

Make sure that you have kubectl configured in your environment and that your cluster has Google's Application CustomResourceDefinition installed. That definition can be obtained from this file.
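For example, assuming you have downloaded the CRD manifest linked above to a local file called app-crd.yaml (an illustrative filename), it can be installed with:

kubectl apply -f app-crd.yaml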

Pre-installation steps

  • Set the value of your Application authentication JWT Token. This value is an HS256 key encoded with base64. To do so, you may use the echo command below:
echo -n "your_secret" | openssl dgst -sha256 -hmac "your_key" -binary | base64 
  • If necessary, create the target namespace you want to install SonarQube DCE into, as shown in the example below.
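For example, assuming the target namespace is sonarqube-dce (adjust to your own naming):

kubectl create namespace sonarqube-dce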

Installing using Click to Deploy

  1. Go to the SonarQube DCE page on the Google Cloud Platform.
  2. Click Get started and follow the instructions.
  3. On the Deploy page, fill in the fields in the Click to Deploy on GKE tab: see Installation parameters below.
  4. At the bottom of the tab, click Deploy.

Installing manually

For manual installation or development purposes, SonarQube can be configured using the mpdev CLI tool provided by Google. See Installation parameters below for the supported parameters and their keys.
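As a rough sketch only, mpdev can install the application from its deployer image, passing the same keys listed under Installation parameters below. The deployer image reference and the values shown here are placeholders, not the actual registry path:

mpdev install \
  --deployer=<DEPLOYER IMAGE> \
  --parameters='{"name": "sonarqube-dce", "namespace": "sonarqube-dce", "ApplicationNodes.jwtSecret": "<JWT_SECRET>"}'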

Deleting the installation

To delete the installation of SonarQube from your cluster (example commands are sketched below):

  1. Delete the created Application resource.
  2. Delete the PersistentVolumeClaims related to the search nodes and database (if applicable).
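A sketch of what this could look like, assuming the application was installed as sonarqube-dce in the sonarqube-dce namespace; verify the actual resource and PVC names with kubectl get before deleting:

kubectl delete application sonarqube-dce -n sonarqube-dce
kubectl get pvc -n sonarqube-dce        # identify the search-node and database PVCs
kubectl delete pvc <PVC NAME> -n sonarqube-dce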

Installation parameters

Each parameter is listed below with its key and type (where applicable), followed by its description:

  • Existing Kubernetes cluster: Kubernetes cluster in which the application will be deployed.
  • Namespace (key: namespace, type: string): Target namespace to install SonarQube DCE into. The namespace must exist already; it will not be created automatically.
  • App instance name (key: name, type: string): Name of the application in your Kubernetes cluster.
  • Application authentication JWT Token (key: ApplicationNodes.jwtSecret, type: string): The HS256 key encoded with base64: see Pre-installation steps above.
  • Connection to a database - Recommended (key: jdbcOverwrite.enable, type: boolean): If enabled, SonarQube will be connected to your PostgreSQL database using the JDBC URL, username, and password parameters below. Make sure that the Embedded database option is disabled.
  • JDBC URL (key: jdbcOverwrite.jdbcUrl, type: string): The JDBC URL used to connect to the database.
  • JDBC Username (key: jdbcOverwrite.jdbcUsername, type: string): The username used to connect to the database.
  • JDBC Password (key: jdbcOverwrite.jdbcPassword, type: string): The password used to connect to the database.
  • Application nodes replicas (key: ApplicationNodes.replicaCount, type: integer): The number of replicas for the Application Nodes.
  • Search nodes replicas (key: searchNodes.replicaCount, type: integer): The number of replicas for the Search Nodes.
  • Enable initSysctl privileged initContainer to set up Elasticsearch kernel parameters (key: initSysctl.enabled, type: boolean): This should be disabled and set up by your cluster administrator. Refer to this documentation for more details.
  • Enable initFs root initContainer to set up filesystem parameters (key: initFs.enabled, type: boolean): This is generally not required on a Google Kubernetes cluster. Refer to this documentation for more details.
  • GCP Marketplace application (key: gcp_marketplace, type: boolean): This flag must be enabled in the context of the installation from GCP.
  • Embedded database - For testing purposes only (key: postgresql.enabled, type: boolean): Not recommended for production; a test PostgreSQL database will be installed.
