⚒️ Model Configuration

Large Language Model configuration

Quick switch between your models

To quickly switch between models, try the keyboard shortcut Cmd/Ctrl + K + M, or search for Model Selection: Switch Model.

Model quick switch in the command palette

Configure copilot and interactive models

Aide uses two different kinds of models:

  • Copilot (This is used for Tab-autocomplete)
  • Interactive (This is used for interacting with the editor, either via the inline-chat or the sidebar chat)

Copilot models are generally trained with a Fill-in-the-Middle (FIM) objective, which makes them well suited for auto-completing code.

Interactive models, on the other hand, are smarter and can handle a wider range of tasks. They can make in-editor changes or act as a programming partner through the sidebar chat.

For copilot models we generally recommend DeepSeekCoder 6.7B or CodeLlama 7B. Interactive models are more varied: the best hosted model on the market right now is Claude Opus, closely followed by GPT-4 (not the Turbo variant!). If you are self-hosting or using open-source models, we highly recommend DeepSeekCoder 33B, which has been very powerful in our testing and works across the variety of scenarios in the editor.

Setting up providers

You can open the provider settings page using Cmd/Ctrl + Shift + , and you should see the Model Configuration page.

Here you can choose from the providers we support. The list below covers each one, along with the specifics of setting it up.


Anthropic

The Anthropic provider requires an Anthropic API key, which you can get here (opens in a new tab). Set this API key on the Anthropic provider, which you can find by scrolling down the list of providers.
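If you want to sanity-check your key before entering it, you can call the Anthropic API directly. This is an optional sketch; the model name below is an assumption, so substitute any model your account has access to.

```shell
# Verify the Anthropic API key works before configuring the provider.
# The model name is an example; use any model available to your account.
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
        "model": "claude-3-opus-20240229",
        "max_tokens": 16,
        "messages": [{"role": "user", "content": "Say hi"}]
      }'
```

A JSON response (rather than an authentication error) confirms the key is valid.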


OpenAI

OpenAI has a helper page here (opens in a new tab). You can also set a custom API base if required (for example, when using a private deployment powered by OpenAI).
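A quick way to check both the key and a custom API base is to list the models the endpoint exposes. This is an optional sketch; the `OPENAI_API_BASE` variable name is our own convention here, not something Aide requires.

```shell
# List the models the key can access. Point OPENAI_API_BASE at your
# private deployment if you use one; it defaults to the public endpoint.
OPENAI_API_BASE="${OPENAI_API_BASE:-https://api.openai.com/v1}"
curl "$OPENAI_API_BASE/models" \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```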

Azure OpenAI

While using Azure OpenAI we require your deployment ID, URL, and API key. Once you have them, enter them on the provider configuration page.
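If you are unsure which values map to which field, it can help to see how they fit together in a raw Azure OpenAI request. This is a hedged sketch: `<resource>` and `<deployment-id>` are placeholders for your own values, and the `api-version` may differ for your deployment.

```shell
# Azure OpenAI addresses models by deployment: the resource name and
# deployment ID form the URL, and the key goes in the api-key header.
curl "https://<resource>.openai.azure.com/openai/deployments/<deployment-id>/chat/completions?api-version=2024-02-01" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Say hi"}], "max_tokens": 16}'
```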


TogetherAI

TogetherAI is pretty straightforward: you can set up your account here (opens in a new tab). You also get $25 of free credit when using TogetherAI, so we highly recommend trying out different models before deciding on one.


Ollama

Ollama requires no configuration; you should be good to go as long as Ollama is running in the background.
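For example, you could pull one of the copilot models recommended above and confirm the local server is responding. This is a sketch assuming Ollama's default port; the model tag is one of the DeepSeekCoder builds published on the Ollama registry.

```shell
# Pull a copilot-sized model and confirm the Ollama server is up.
# Ollama listens on port 11434 by default.
ollama pull deepseek-coder:6.7b
curl http://localhost:11434/api/tags   # lists the models available locally
```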


LMStudio

LMStudio allows you to set a custom port, so you just need to give us the local URL and we will talk to the API server running in LMStudio.
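To find the right URL, you can query the local server directly. This sketch assumes LM Studio's default port of 1234; adjust it if you configured a custom one.

```shell
# LM Studio exposes an OpenAI-compatible server, port 1234 by default.
# This lists the models the server is currently exposing.
curl http://localhost:1234/v1/models
```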


Fireworks AI

Fireworks AI allows you to create your API key here (opens in a new tab).

They have one of the fastest inference engines, so we recommend them as a provider for copilot models.

vLLM support

We also support vLLM; to use it, enter its details under the OpenAI-compatible provider.
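As a rough sketch of the setup, you can start vLLM's OpenAI-compatible server and point the OpenAI-compatible provider at its base URL. The model name and port below are examples, not requirements.

```shell
# Start vLLM's OpenAI-compatible server, then point Aide's
# OpenAI-compatible provider at http://localhost:8000/v1.
python -m vllm.entrypoints.openai.api_server \
  --model deepseek-ai/deepseek-coder-33b-instruct \
  --port 8000

# In another terminal, confirm the server is up:
curl http://localhost:8000/v1/models
```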

Switching a model to a provider

Since a single model can run on multiple inference providers, we allow configuring this on the fly.

Click the edit button beside the model and choose your configured provider.

Provider Configuration

Please do not hesitate to reach out on Discord if this list does not cover the provider you are using. We are happy to add support for your custom infrastructure and help you set up LLMs on it.