Dify Platform Model Configuration Tutorial

Configure custom LLM APIs in Dify to build enterprise-grade AI applications, automate workflows, and manage knowledge bases

Quick Configuration Steps

Step 1: Add Custom Model

Configure LLM API endpoints in Dify:

  • Log in to the Dify console
  • Go to 'Settings' → 'Model Providers'
  • Click 'Add Custom Model'
  • Fill in the API configuration details

Step 2: Configure API Parameters

Set the correct endpoint address and authentication information:

API Base URL: https://api.example.com/v1
API Key: YOUR_API_KEY
Model Name: gpt-4o, claude-3-5-sonnet, etc.
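
Before entering these values in Dify, it can help to confirm the endpoint really speaks the OpenAI-compatible protocol. The sketch below builds the same kind of request Dify's OpenAI-compatible provider issues (`POST {api_base}/chat/completions` with a Bearer token); the base URL and key are the placeholders from this guide, not real credentials, so the request is built and inspected but not sent.

```python
# Sketch: build (but don't send) an OpenAI-compatible chat request,
# using the placeholder endpoint and key from the steps above.
import json
import urllib.request

API_BASE = "https://api.example.com/v1"   # placeholder from this guide
API_KEY = "YOUR_API_KEY"                  # placeholder, not a real key

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request the way an OpenAI-compatible
    client would: JSON body, Bearer auth, POST to /chat/completions."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 16,
    }
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("gpt-4o", "ping")
print(req.full_url)  # https://api.example.com/v1/chat/completions
# With a real endpoint and key, urllib.request.urlopen(req) would send it.
```

A 200 response with a `choices` array confirms the endpoint is compatible and the same values can be pasted into Dify's form.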

Step 3: Test Connection

Verify that the configuration is correct:

  • Find the newly added model in the model list
  • Click the 'Test' button
  • Check that the response is normal
  • Save the configuration
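
Beyond the 'Test' button, you can verify the model end to end through a Dify application's own API. The sketch below follows the shape of Dify's app API (`POST /v1/chat-messages` with an app key as Bearer token); the host and app key are placeholders, so the request is only built and inspected here, not sent.

```python
# Sketch: build (but don't send) a request against a Dify app's
# chat-messages API to exercise the newly configured model end to end.
# Host and app key below are placeholders.
import json
import urllib.request

DIFY_HOST = "https://your-dify-host.example.com"  # placeholder
DIFY_APP_KEY = "app-YOUR_DIFY_APP_KEY"            # placeholder app API key

def build_dify_test_request(query: str) -> urllib.request.Request:
    payload = {
        "inputs": {},
        "query": query,
        "response_mode": "blocking",   # wait for the complete answer
        "user": "model-config-test",   # any stable end-user identifier
    }
    return urllib.request.Request(
        f"{DIFY_HOST}/v1/chat-messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {DIFY_APP_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_dify_test_request("Reply with OK if you can read this.")
print(req.full_url)
# With a real host and key, urllib.request.urlopen(req) would return the
# model's answer, confirming the whole provider configuration works.
```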

Platform Features

🔄 Visual Workflows

Drag-and-drop to build complex AI application flows

📚 Knowledge Base Management

Easily create and manage vectorized knowledge bases

🤖 Multi-Model Support

Use multiple LLM models in collaboration

📋 Application Templates

Rich preset templates to get started quickly

👥 Team Collaboration

Multi-user collaborative development and application management

🔌 API-First

Complete API support for secondary development

Configuration Example

Basic Configuration

Add custom OpenAI-compatible models in Dify

# Dify custom model configuration

Model provider configuration:
{
  "provider": "openai",
  "provider_name": "Custom LLM API",
  "provider_type": "custom",
  "api_base": "https://api.example.com/v1",
  "api_key": "YOUR_API_KEY",
  "models": [
    {
      "model": "gpt-4o",
      "label": "GPT-4o (Custom)",
      "model_type": "llm",
      "features": ["agent-thought", "vision"],
      "max_tokens": 128000,
      "price": {
        "input": "0.005",
        "output": "0.015",
        "unit": "0.001"
      }
    },
    {
      "model": "gpt-3.5-turbo",
      "label": "GPT-3.5 Turbo (Custom)",
      "model_type": "llm",
      "features": ["agent-thought"],
      "max_tokens": 16385,
      "price": {
        "input": "0.0005",
        "output": "0.0015",
        "unit": "0.001"
      }
    }
  ]
}
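
To sanity-check the `price` fields above, the sketch below shows one plausible reading of them: `unit` scales the quoted per-unit prices, so `unit: "0.001"` means the `input`/`output` prices are per 1,000 tokens (e.g. `0.005` per 1K input tokens is $5 per million). This interpretation is an assumption for illustration; confirm it against your Dify version.

```python
# Sketch: estimate the cost of one call from the price block above,
# assuming unit "0.001" means prices are quoted per 1,000 tokens.
from decimal import Decimal

def call_cost(input_tokens: int, output_tokens: int, price: dict) -> Decimal:
    """Cost = (input_tokens * input_price + output_tokens * output_price) * unit."""
    unit = Decimal(price["unit"])
    return (Decimal(input_tokens) * Decimal(price["input"])
            + Decimal(output_tokens) * Decimal(price["output"])) * unit

# The gpt-4o price block from the configuration example:
gpt4o_price = {"input": "0.005", "output": "0.015", "unit": "0.001"}

# A call with 1,000 input tokens and 500 output tokens costs 0.0125 (USD).
print(call_cost(1000, 500, gpt4o_price))
```

Using `Decimal` avoids the rounding drift that floating-point pricing arithmetic accumulates over many calls.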

Application Scenarios

Smart Customer Service System

24/7 customer service bot with knowledge base integration

Workflow: Knowledge retrieval → LLM understanding → Generate response

Documentation Q&A Assistant

Smart Q&A system based on enterprise documentation

Workflow: Document import → Vectorization → Semantic search → AI summarization
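
The vectorization and semantic-search steps of that workflow can be sketched with a toy bag-of-words embedding and cosine similarity. A real Dify knowledge base uses a proper embedding model and a vector store; this illustrates only the retrieval idea, and the sample documents are invented.

```python
# Sketch: Document import -> Vectorization -> Semantic search,
# with a toy bag-of-words "embedding" standing in for a real model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: term-frequency vector of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Document import": a tiny invented corpus.
docs = [
    "Dify supports custom model providers",
    "Knowledge bases store vectorized documents",
    "Workflows chain LLM nodes together",
]
# "Vectorization": embed every document once, up front.
vectors = [embed(d) for d in docs]

def search(query: str, k: int = 1) -> list[str]:
    """'Semantic search': return the k documents most similar to the query."""
    scored = [(cosine(embed(query), v), d) for v, d in zip(vectors, docs)]
    return [d for _, d in sorted(scored, reverse=True)[:k]]

print(search("vectorized knowledge documents"))
# -> ['Knowledge bases store vectorized documents']
```

In the full workflow, the retrieved passages are then passed to an LLM node for AI summarization.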

Content Generation Platform

Automated content creation and optimization

Workflow: Requirement input → Multi-model generation → Quality evaluation → Output

Data Analysis Assistant

Natural language queries and data insights

Workflow: Question understanding → SQL generation → Data query → Visualization
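
The query half of that workflow can be sketched with SQLite. The rule-based `question_to_sql` below is a stand-in for the LLM node that Dify would use to generate SQL, and the `orders` table is invented sample data.

```python
# Sketch: Question understanding -> SQL generation -> Data query,
# with a rule-based stand-in for the LLM's SQL-generation step.
import sqlite3

# Invented sample data standing in for a real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("east", 120.0), ("west", 80.0), ("east", 50.0)])

def question_to_sql(question: str) -> str:
    """Stand-in for the LLM node: map a known question shape to SQL.
    In Dify, an LLM node prompted with the schema would emit this."""
    if "total" in question and "region" in question:
        return ("SELECT region, SUM(amount) FROM orders "
                "GROUP BY region ORDER BY region")
    raise ValueError("question not understood")

rows = conn.execute(question_to_sql("total sales per region")).fetchall()
print(rows)  # [('east', 170.0), ('west', 80.0)]
```

In production, the generated SQL should be validated (e.g. read-only, allow-listed tables) before execution, and the result rows feed the visualization step.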

Usage Tips

  • Use environment variables to manage API configurations for different environments
  • Set reasonable temperature and max_tokens parameters for models
  • Utilize Dify's caching features to reduce duplicate calls
  • Use workflow conditional nodes to implement complex logic
  • Regularly back up important application configurations and knowledge base data
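
The first tip can be sketched as follows; the `LLM_*` variable names are illustrative choices, not names Dify requires.

```python
# Sketch: read API settings from environment variables so that dev,
# staging, and production differ only in their environments.
# The LLM_* variable names are hypothetical, chosen for this example.
import os

def load_llm_config() -> dict:
    return {
        "api_base": os.environ.get("LLM_API_BASE", "https://api.example.com/v1"),
        "api_key": os.environ.get("LLM_API_KEY", ""),  # never hard-code a default secret
        "model": os.environ.get("LLM_MODEL", "gpt-4o"),
    }

# e.g. a staging environment overrides only the model:
os.environ["LLM_MODEL"] = "gpt-3.5-turbo"
print(load_llm_config()["model"])  # gpt-3.5-turbo
```

The same pattern keeps API keys out of version control: only the placeholder defaults live in code, and real values come from the deployment environment.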