# Enable AI CodeFix

*AI CodeFix is only available in the SonarQube Server* [*Enterprise and Data Center editions*](https://www.sonarsource.com/plans-and-pricing/sonarqube/)*, where it provides AI-generated fixes for your issues.*

Sonar’s AI CodeFix uses a large language model (LLM) to automatically generate AI-driven code fixes for the issues discovered by SonarQube Server. As an administrator, you have flexible configuration options: you can use models hosted by SonarSource or connect to LLMs running within your own infrastructure, including models hosted by cloud hyperscalers or fully self-hosted, on-premises LLMs.

You have the option to define the LLM used to suggest fixes for a select set of rules in Java, JavaScript, TypeScript, Python, HTML, CSS, C#, and C++. Choose between:

* One of Sonar's two OpenAI models <code class="expression">space.vars.SQS\_20261\_Supported\_LLM\_version</code>.
* Bring your own LLM running on one of the following providers: [Azure OpenAI LLM Service model](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions) or [AWS Bedrock](https://aws.amazon.com/bedrock/model-choice/).
* Configure a self‑hosted LLM gateway (for example, Ollama, LiteLLM, or vLLM).

Using AI CodeFix is simple. When you request a fix, the affected code and issue description are sent to the LLM you define. AI CodeFix then proposes an edit that resolves the problem without changing the code’s functionality.

## Sharing your code with Sonar <a href="#sharing-your-code-with-sonar" id="sharing-your-code-with-sonar"></a>

For fully self-hosted SonarQube Server configurations, AI CodeFix is designed to operate without outbound internet access. All necessary prompts and rule descriptions for AI CodeFix are provided with the SonarQube product installation.

* Requirement: The SonarQube Server instance must be able to reach your configured LLM endpoint.

If you use Sonar’s AI CodeFix LLM, the affected code snippet is sent by the AI CodeFix service to the selected LLM. Sonar’s service agreements with its LLM providers prevent your code from being used to train those models.

For details about terms and conditions, please refer to the [AI CodeFix terms](https://www.sonarsource.com/legal/ai-codefix-terms/) in our [Legal Documentation](https://www.sonarsource.com/legal/).

## Enabling AI-generated fix suggestions <a href="#enabling-ai-generated-fix-suggestions" id="enabling-ai-generated-fix-suggestions"></a>

As an Instance Admin, you can enable or disable AI-generated fix suggestions on your projects. Select your provider below and follow the steps for that provider.

{% tabs %}
{% tab title="SONAR" %}
To configure AI CodeFix using Sonar’s hosted OpenAI service:

1. Go to **Administration** > **Configuration** > **General Settings** > **AI CodeFix** and select **Enable AI CodeFix**.
2. Under **Provider**, select your model:
   * <code class="expression">space.vars.SQS\_20261\_Recommended\_LLM\_version</code> (recommended)
   * GPT-4o
3. Lastly, select either **All projects** or **Only selected projects** to decide which projects have access to AI CodeFix suggestions.

When choosing **Only selected projects**, add projects individually from the list to activate the feature. New projects are not added automatically.

{% hint style="info" %}
You’ll need a connection to the internet to access SonarQube Server’s AI CodeFix service.

The service is provided via api.sonarqube.io and has these static IP addresses:

* 99.83.135.55 (CIDR: 99.83.135.55/32)
* 15.197.164.24 (CIDR: 15.197.164.24/32)
{% endhint %}
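If your egress firewall filters traffic by destination address, a minimal stdlib sketch like the following can sanity-check a rule set against the CIDR blocks published above (the `is_allowed` helper is ours, for illustration only):

```python
# Check whether an IP address falls within Sonar's published
# AI CodeFix CIDR allowlist (addresses taken from the hint above).
import ipaddress

ALLOWED_CIDRS = ["99.83.135.55/32", "15.197.164.24/32"]

def is_allowed(ip: str) -> bool:
    """Return True if `ip` belongs to one of the allowlisted CIDR blocks."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in ALLOWED_CIDRS)

print(is_allowed("99.83.135.55"))  # True: first published address
print(is_allowed("192.0.2.1"))     # False: not in the allowlist
```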
{% endtab %}

{% tab title="AZURE OPENAI" %}
To configure AI CodeFix using your own Azure OpenAI LLM Service model:

1. Go to **Administration** > **Configuration** > **General Settings** > **AI CodeFix** and select **Enable AI CodeFix**.
2. Under **Provider**, select **Azure OpenAI**, then provide the following:
   * **Endpoint** (required): the full URL to your Azure OpenAI deployment, including the `deployment-id` and `api-version` parameters. For example: `https://<YOUR-ENDPOINT>.openai.azure.com/openai/deployments/<YOUR-DEPLOYMENT-ID>/completions?api-version=<YOUR-API-VERSION>`
   * **API Key** (required): your Azure OpenAI API key. For more information, see the [Azure OpenAI Service documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/).
3. Optional: for enhanced security, go to **Administration** > **Configuration** > **General Settings** > **Security** and select **Enforce Azure OpenAI Endpoint Domain** to restrict accepted endpoint URLs to domains ending in `.openai.azure.com`. Verify that your endpoint from step 2 matches this domain before enabling this setting.
4. Lastly, select either **All projects** or **Only selected projects** to decide which projects have access to AI CodeFix suggestions.

When choosing **Only selected projects**, add projects individually from the list to activate the feature. New projects are not added automatically.
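Before saving the endpoint, it can be worth verifying that the URL has the expected shape. The sketch below, using only Python's standard library, mirrors the checks described in steps 2 and 3; the `check_azure_endpoint` helper, resource name, deployment ID, and `api-version` value are all illustrative:

```python
# Validate an Azure OpenAI endpoint URL: *.openai.azure.com domain,
# a /openai/deployments/<deployment-id>/ path, and an api-version
# query parameter. Sample values are placeholders.
from urllib.parse import urlparse, parse_qs

def check_azure_endpoint(url: str) -> bool:
    """Return True if the URL looks like a valid Azure OpenAI deployment endpoint."""
    parts = urlparse(url)
    host_ok = parts.hostname is not None and parts.hostname.endswith(".openai.azure.com")
    path_ok = "/openai/deployments/" in parts.path
    version_ok = "api-version" in parse_qs(parts.query)
    return host_ok and path_ok and version_ok

endpoint = ("https://example-resource.openai.azure.com/openai/deployments/"
            "my-deployment/completions?api-version=2024-02-01")
print(check_azure_endpoint(endpoint))  # True
```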

{% hint style="warning" %}
Sonar recommends using <code class="expression">space.vars.SQS\_20261\_Recommended\_LLM\_version</code> based on Sonar's benchmarks. If you select a different model, fix suggestion quality may vary. Sonar is not responsible for quality degradation when using models other than the recommended one.

For more information, see the [Azure documentation on service models](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions).
{% endhint %}
{% endtab %}

{% tab title="AWS BEDROCK" %}
To configure AI CodeFix using your own AWS Bedrock model:

Prerequisites: Your SonarQube Server instance must be running on Amazon EC2, Amazon ECS, or Amazon EKS, and must have permission to access AWS Bedrock.

1. Go to **Administration** > **Configuration** > **General Settings** > **AI CodeFix** and select **Enable AI CodeFix**.
2. Under **Provider**, select **AWS Bedrock**, then provide the following:
   * **Region** (required): the AWS region where your Bedrock model is deployed. For example: `us-east-1` or `eu-central-1`.
   * **Model ID** (required): the identifier of the Bedrock model to use. For a list of supported models and their IDs, see the [Amazon Bedrock documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).
3. Lastly, select either **All projects** or **Only selected projects** to decide which projects have access to AI CodeFix suggestions.

When choosing **Only selected projects**, add projects individually from the list to activate the feature. New projects are not added automatically.
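The Bedrock permission mentioned in the prerequisites can be granted through an IAM policy attached to the role your EC2, ECS, or EKS workload runs under. The sketch below shows one illustrative policy shape; the region and wildcard resource ARN are placeholders you should scope down to your own region and model:

```python
# Illustrative IAM policy granting the SonarQube Server host permission
# to invoke Bedrock foundation models. Narrow the Resource ARN to the
# specific region and model ID you configured.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            # Placeholder: restrict to your region and model in production.
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```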

{% hint style="warning" %}
Sonar recommends using <code class="expression">space.vars.SQS\_20261\_Recommended\_LLM\_version</code> based on Sonar's benchmarks. If you select a different model, fix suggestion quality may vary. Sonar is not responsible for quality degradation when using models other than the recommended one.
{% endhint %}
{% endtab %}

{% tab title="SELF-HOSTED GATEWAY" %}
To configure AI CodeFix using a self-hosted LLM gateway (for example, Ollama, LiteLLM, or vLLM):

1. Go to **Administration** > **Configuration** > **General Settings** > **AI CodeFix** and select **Enable AI CodeFix**.
2. Under **Provider**, select **Custom**, then provide the following:

   * **Endpoint** (required): the URL of your OpenAI-compatible self-hosted gateway endpoint. For example: `http://localhost:11434/v1`.
   * **Model ID** (required): the name of the model served by your gateway. For example: `llama3.2:1b`.
   * **Header name** and **Value** (optional): add any HTTP headers required by your gateway, such as authentication tokens. It's possible to add multiple headers.

   SonarQube Server only accepts gateways that expose an OpenAI-compatible API. For example, if you are using Ollama, use the OpenAI-compatible endpoint (`/v1`) rather than the native Ollama API (`/api`).
3. Lastly, select either **All projects** or **Only selected projects** to decide which projects have access to AI CodeFix suggestions.

When choosing **Only selected projects**, add projects individually from the list to activate the feature. New projects are not added automatically.
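An OpenAI-compatible gateway is one that accepts requests of the shape shown below at a `/chat/completions` route under its base URL. This stdlib-only sketch builds (but does not send) such a request; the endpoint, model name, and prompt are placeholders matching the examples above:

```python
# Build the chat-completions request an OpenAI-compatible gateway
# must accept. No network call is made here.
import json
from urllib.request import Request

base_url = "http://localhost:11434/v1"  # gateway endpoint from step 2
body = {
    "model": "llama3.2:1b",             # Model ID served by your gateway
    "messages": [{"role": "user", "content": "Say hello"}],
}
req = Request(
    base_url + "/chat/completions",
    data=json.dumps(body).encode(),
    # Add any auth headers your gateway requires (step 2, optional headers).
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.full_url)  # http://localhost:11434/v1/chat/completions
```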

{% hint style="warning" %}
Sonar recommends using <code class="expression">space.vars.SQS\_20261\_Recommended\_LLM\_version</code> based on Sonar's benchmarks. If you select a different model, fix suggestion quality may vary. Sonar is not responsible for quality degradation when using models other than the recommended one.
{% endhint %}
{% endtab %}
{% endtabs %}

Once enabled, developers can get AI-generated fix suggestions from the **Issues** page in their projects. See the [fixing](https://docs.sonarsource.com/sonarqube-server/user-guide/issues/fixing "mention") page for more details.

### Disabling AI CodeFix <a href="#disabling-ai-codefix" id="disabling-ai-codefix"></a>

To disable AI CodeFix completely in SonarQube Server and hide the feature from all users, including Instance Admins, set the system property `sonar.ai.codefix.hidden` to `true`. For more information, see the [configuration-methods](https://docs.sonarsource.com/sonarqube-server/server-installation/system-properties/configuration-methods "mention") page.
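For example, using the `sonar.properties` file (one of the configuration methods described on that page, located under `<sonarqube-home>/conf/`):

```properties
# Hide AI CodeFix from all users, including Instance Admins.
sonar.ai.codefix.hidden=true
```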

## Getting AI-generated fix suggestions <a href="#getting-ai-generated-fix-suggestions" id="getting-ai-generated-fix-suggestions"></a>

Once AI CodeFix is enabled, users will be able to select **Generate AI Fix** on eligible issues and copy/paste the fix into their IDE with the **Open in IDE** feature when using connected mode.

The easiest way to use AI CodeFix is by using AI CodeFix in your IDE. Simply open your project in SonarQube for [VS Code](https://app.gitbook.com/o/2ibCvzwZt86Nlk2zloB7/s/6LPRABg3ubAJhpfR5K0Y/ "mention") or SonarQube for [IntelliJ](https://app.gitbook.com/o/2ibCvzwZt86Nlk2zloB7/s/NvI4wotPmITyM0mnsmtp/ "mention"), and set up connected mode with SonarQube Server. In your IDE, select an issue marked with the ![$ai-icon-sparkle](https://content.gitbook.com/content/3VWSqvZ4eaBLWvA6epdv/blobs/bFdE3axsVUA94MupDyPC/4be9087a2b059c269f15df202838af7a74e71a96.svg) icon, open the **Rule description** > ![$ai-icon-sparkle](https://content.gitbook.com/content/3VWSqvZ4eaBLWvA6epdv/blobs/bFdE3axsVUA94MupDyPC/4be9087a2b059c269f15df202838af7a74e71a96.svg)**AI CodeFix** tab, and select ![$ai-icon-sparkle](https://content.gitbook.com/content/3VWSqvZ4eaBLWvA6epdv/blobs/bFdE3axsVUA94MupDyPC/4be9087a2b059c269f15df202838af7a74e71a96.svg)**Generate Fix**. A fix will be generated in the code editor and you’ll have a chance to **Apply** or **Decline** the suggestion.

* For IntelliJ, see [AI CodeFix #AI CodeFix in your IDE](https://app.gitbook.com/s/NvI4wotPmITyM0mnsmtp/ai-capabilities/ai-codefix#ai-codefix "mention").
* For VS Code, see [AI CodeFix #AI CodeFix in your IDE](https://app.gitbook.com/s/6LPRABg3ubAJhpfR5K0Y/ai-capabilities/ai-codefix#ai-codefix "mention").

For complete details about using AI CodeFix to fix your issues in SonarQube Server, see [#getting-ai-generated-fix-suggestions](https://docs.sonarsource.com/sonarqube-server/user-guide/issues/fixing#getting-ai-generated-fix-suggestions "mention"). See the [#ai-codefix-rules](https://docs.sonarsource.com/sonarqube-server/quality-standards-administration/managing-rules/rules-for-ai-codefix#ai-codefix-rules "mention") article to learn more about which rules are eligible for AI CodeFix.

## Usage limits <a href="#limits" id="limits"></a>

Limits are placed on the AI CodeFix feature to prevent abuse. Developers are notified directly when your organization's monthly allocation is reached. If the instance is blocked because the allowance has been reached, users attempting to generate a fix will see an error message. Usage quotas reset on the first day of each month.

SonarQube Server instances that are using a self-hosted LLM are not subject to Sonar’s limits; however, you may encounter rate limits from your self-hosted LLM provider.

## Marking a project as containing AI-generated code <a href="#marking-a-project-as-containing-ai-generated-code" id="marking-a-project-as-containing-ai-generated-code"></a>

Sonar recognizes that AI-generated code should be monitored with additional quality standards and offers administrators a series of project labels and custom quality gate certifications, described on the [overview](https://docs.sonarsource.com/sonarqube-server/quality-standards-administration/ai-code-assurance/overview "mention") page.

## Related pages <a href="#related-pages" id="related-pages"></a>

* [#ai-codefix-rules](https://docs.sonarsource.com/sonarqube-server/quality-standards-administration/managing-rules/rules-for-ai-codefix#ai-codefix-rules "mention")
* [overview](https://docs.sonarsource.com/sonarqube-server/ai-capabilities/overview "mention") of AI capabilities in SonarQube Server
* [connected-mode](https://docs.sonarsource.com/sonarqube-server/user-guide/connected-mode "mention")
