Continue - VS Code AI Programming Assistant Complete Configuration
An open-source AI coding assistant that supports code completion, chat, and inline editing, works with all mainstream LLMs, and is completely free.
- Smart Completion: Tab-triggered code completion
- Multi-Model Support: GPT, Claude, and local models
- Context Awareness: understands your project's code structure
- Highly Customizable: custom commands and providers
1. Installation and Initial Configuration
1. Install Plugin
- Open VS Code
- Go to the Extensions view (Ctrl/Cmd + Shift + X)
- Search for "Continue"
- Click Install on the plugin published by Continue
- Restart VS Code
2. Shortcuts
- Ctrl/Cmd + L: open the chat panel
- Ctrl/Cmd + I: edit selected code
- Tab: accept a code completion
- Ctrl/Cmd + K: generate code
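If one of these shortcuts conflicts with another extension, it can be remapped through VS Code's keybindings.json. The sketch below is only an illustration: the command ID continue.focusContinueInput is an assumption, so confirm the real IDs in the Keyboard Shortcuts editor (search for "Continue") before relying on it.

// keybindings.json (Command Palette: "Preferences: Open Keyboard Shortcuts (JSON)")
[
  {
    // Assumed command ID for focusing the Continue chat input; verify in the Keyboard Shortcuts editor
    "key": "ctrl+shift+l",
    "command": "continue.focusContinueInput",
    "when": "editorTextFocus"
  }
]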
2. Configuration File Details
config.json Complete Configuration
{
  "models": [
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "your-api-key",
      "apiBase": "https://api.n1n.ai/v1"
    },
    {
      "title": "Claude 3.5 Sonnet",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20241022",
      "apiKey": "your-api-key",
      "apiBase": "https://api.n1n.ai/v1"
    },
    {
      "title": "DeepSeek Coder",
      "provider": "openai",
      "model": "deepseek-coder",
      "apiKey": "your-api-key",
      "apiBase": "https://api.deepseek.com/v1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Codestral",
    "provider": "mistral",
    "model": "codestral-latest",
    "apiKey": "your-api-key"
  },
  "embeddingsProvider": {
    "provider": "openai",
    "model": "text-embedding-3-small",
    "apiKey": "your-api-key",
    "apiBase": "https://api.n1n.ai/v1"
  },
  "contextProviders": [
    {
      "name": "code",
      "params": {}
    },
    {
      "name": "docs",
      "params": {}
    },
    {
      "name": "terminal",
      "params": {}
    },
    {
      "name": "problems",
      "params": {}
    }
  ],
  "slashCommands": [
    {
      "name": "edit",
      "description": "Edit selected code"
    },
    {
      "name": "comment",
      "description": "Add comments to code"
    },
    {
      "name": "test",
      "description": "Generate unit tests"
    },
    {
      "name": "fix",
      "description": "Fix problems in code"
    }
  ]
}
- Chat Models: configure multiple models for selection, with quick switching between them
- Code Completion: use a dedicated code model such as Codestral
- Embedding Models: used for code search and context understanding
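If you would rather not send code to a hosted embeddings API, the embeddingsProvider can point at a local model instead. This is a minimal sketch assuming Ollama is running locally and the nomic-embed-text model has been pulled; substitute whatever embedding model you actually use.

{
  // Assumes a local Ollama instance with `ollama pull nomic-embed-text` already done
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}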
3. Code Auto-completion Optimization
VS Code Settings
// VS Code settings.json configuration
{
  // Continue code completion settings
  "continue.enableTabAutocomplete": true,
  "continue.tabAutocompleteOptions": {
    "multilineCompletions": "always",
    "maxPromptTokens": 1500,
    "debounceDelay": 350,
    "maxSuffixPercentage": 0.4,
    "prefixPercentage": 0.85,
    "template": "Please complete the following code:\n{{{prefix}}}[BLANK]{{{suffix}}}\n\nFill in the [BLANK]"
  },
  // Trigger conditions
  "continue.tabAutocompleteOptions.triggers": [
    {
      "language": "python",
      "triggerWords": ["def", "class", "if", "for", "while"]
    },
    {
      "language": "javascript",
      "triggerWords": ["function", "const", "let", "if", "for"]
    }
  ],
  // Disable auto-completion for certain files
  "continue.tabAutocompleteOptions.disableInFiles": [
    "*.md",
    "*.txt",
    "package-lock.json"
  ]
}
💡 Performance Optimization Tips
- Use faster models such as Codestral or DeepSeek Coder (see the sketch below)
- Adjust debounceDelay to reduce latency
- Limit maxPromptTokens to improve response speed
- Disable auto-completion for large files
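As a concrete example of the first tip, the completion model can be decoupled from the chat model and pointed at a small, fast model in config.json. The sketch below assumes Ollama is installed and a compact code model such as starcoder2:3b has been pulled; treat the model name and the option values as placeholders to tune for your machine.

{
  // Small local model for low-latency completions (assumes `ollama pull starcoder2:3b`)
  "tabAutocompleteModel": {
    "title": "Local Autocomplete",
    "provider": "ollama",
    "model": "starcoder2:3b"
  },
  // Tighter limits keep each completion request fast
  "tabAutocompleteOptions": {
    "maxPromptTokens": 800,
    "debounceDelay": 500
  }
}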
4. Custom Context Providers
Create Custom Providers
// Custom context provider example
// ~/.continue/contextProviders/myCustomProvider.js
const fs = require("fs");
const path = require("path");

// Placeholder loader: read Markdown files from the project's docs folder
async function loadProjectDocs(docsPath = "./docs") {
  return fs.readdirSync(docsPath)
    .filter((file) => file.endsWith(".md"))
    .map((file) => ({ title: file, summary: `Doc file: ${file}`, content: fs.readFileSync(path.join(docsPath, file), "utf8") }));
}

module.exports = {
  title: "Project Documentation",
  description: "Include project-related documentation",
  async getContext(query, extras) {
    const docs = await loadProjectDocs();
    return docs.map((doc) => ({
      name: doc.title,
      description: doc.summary,
      content: doc.content
    }));
  }
};
// Register in config.json
{
  "contextProviders": [
    {
      "name": "myCustomProvider",
      "params": {
        "docsPath": "./docs"
      }
    }
  ]
}
Built-in Providers
- code: the current code file
- docs: project documentation
- terminal: terminal output
- problems: errors and warnings
- git: Git history and diffs
Use Cases
- Include project-specific documentation
- Integrate external API documentation
- Add team coding standards
- Include test data
- Reference database schemas
5. Custom Slash Commands
Create Refactor Command
// Custom slash command
// ~/.continue/slashCommands/refactor.js
module.exports = {
  name: "refactor",
  description: "Refactor selected code",
  async run(sdk) {
    const selection = await sdk.getSelectedCode();
    const prompt = `Please refactor the following code to make it clearer and more efficient:
${selection}
Requirements:
1. Extract duplicate code into functions
2. Improve variable naming
3. Add necessary type annotations
4. Optimize performance`;
    const result = await sdk.llm.complete(prompt);
    await sdk.applyEdit(result);
  }
};
Common Command Ideas
- /optimize: optimize performance
- /security: security check
- /docs: generate documentation
- /convert: convert code
- /review: code review
- /clean: clean up code
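Simple commands like these do not necessarily need a JavaScript file: prompt-only commands can also be declared in config.json. A minimal sketch follows, assuming your Continue version supports the customCommands field (where {{{ input }}} stands for the highlighted code); the prompt text itself is just an illustration to adapt.

{
  "customCommands": [
    {
      "name": "optimize",
      "description": "Optimize performance",
      // {{{ input }}} is replaced with the currently highlighted code when /optimize runs
      "prompt": "Review the following code and suggest performance optimizations while keeping its behavior identical:\n\n{{{ input }}}"
    }
  ]
}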
6. Best Practices and Tips
🎯 Improve Efficiency
- ✅ Use the @ symbol to quickly reference files
- ✅ Configure multiple models and switch based on the task
- ✅ Create project-specific slash commands
- ✅ Use context providers to include relevant documentation
- ✅ Set appropriate token limits
⚡ Performance Optimization
- ✅ Run Ollama locally to reduce latency
- ✅ Use smaller models for completion
- ✅ Limit context window size
- ✅ Disable unnecessary context providers
- ✅ Adjust debounce delay parameter
7. FAQ Solutions
Code completion not working?
1. Check that the tabAutocompleteModel configuration is correct
2. Confirm the API key is valid
3. Check for error messages in the Output panel
4. Try restarting VS Code
How to use local models?
// Configure Ollama local models
{
  "models": [
    {
      "title": "Local Llama",
      "provider": "ollama",
      "model": "llama2:13b"
    }
  ]
}
How to reduce API costs?
- Use cheaper models such as GPT-3.5-turbo
- Reduce the maxTokens limit (see the sketch below)
- Enable the caching feature
- Use local models for simple tasks
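Combining the first two points, a budget chat model can be added alongside the premium ones so that routine tasks never hit an expensive model. A minimal sketch, assuming your Continue version supports per-model completionOptions; adjust the model name and token limit to whatever your provider actually offers.

{
  "models": [
    {
      "title": "GPT-3.5 Turbo (budget)",
      "provider": "openai",
      "model": "gpt-3.5-turbo",
      "apiKey": "your-api-key",
      // Caps output length per request to keep per-call cost predictable
      "completionOptions": {
        "maxTokens": 1024
      }
    }
  ]
}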