Error Diagnosis Tool: Never Miss an Issue
Intelligently analyzes API error causes and provides targeted solutions, helping developers quickly locate and fix issues and cut debugging time.
Common Error Types
🚫 Authentication Errors
- Invalid API key
- Insufficient permissions
- Token expired
- Invalid request signature
⚠️ Request Errors
- Invalid parameter format
- Exceeds length limit
- Unsupported model
- Invalid options
⏱️ Limit Errors
- Rate limiting
- Quota exhausted
- Concurrency limit
- Account suspended
💥 Service Errors
- Internal server error
- Gateway timeout
- Service unavailable
- Overloaded service
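The four categories above map closely onto HTTP status codes. As a rough illustration (the function and its category names are our own sketch, not part of any SDK), a response can be classified like this:

```javascript
// Classify an HTTP status code into the four error categories above
const categorize = (status) => {
  if (status === 401 || status === 403) return 'authentication'; // key, permission, token issues
  if (status === 429) return 'limit';                            // rate limit / quota
  if (status >= 500) return 'service';                           // server-side failures
  if (status >= 400) return 'request';                           // malformed parameters, bad options
  return 'unknown';
};
```

For example, `categorize(429)` returns `'limit'`, which tells you to slow down rather than change the request payload.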
Intelligent Diagnosis Example
Error Message
```
Error: 429 Too Many Requests
{
  "error": {
    "message": "Rate limit exceeded. Please retry after 2 seconds.",
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded"
  }
}
```
Diagnosis Result
Cause
Your request rate exceeded the API's rate limit. The current limit is 60 requests per minute.
Solutions
1. Wait 2 seconds before retrying
2. Implement an exponential backoff retry mechanism
3. Use batch APIs to reduce the number of requests
4. Consider upgrading to a higher quota plan
Preventive Measures
```javascript
// Simple sleep helper
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Client-side rate limiting (RateLimiter stands in for your rate-limiting library of choice)
const rateLimiter = new RateLimiter({
  requests: 60,
  per: 'minute'
});

// Automatic retry with exponential backoff: waits 2s, 4s, 8s between attempts
const retry = async (fn, retries = 3) => {
  try {
    return await fn();
  } catch (error) {
    if (error.code === 'rate_limit_exceeded' && retries > 0) {
      await sleep(2000 * 2 ** (3 - retries));
      return retry(fn, retries - 1);
    }
    throw error;
  }
};
```
Error Code Quick Reference
| Error Code | Meaning | Quick Fix |
|---|---|---|
| 401 | Unauthorized | Verify the API key is correct |
| 403 | Forbidden | Check account permissions and status |
| 429 | Too Many Requests | Reduce request frequency |
| 500 | Server Error | Retry later or contact support |
| 503 | Service Unavailable | Wait for service recovery |
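The table above can also drive automated handling. A minimal sketch (the mapping object and fallback message are illustrative, not part of any SDK):

```javascript
// Map HTTP status codes to the quick fixes from the table above
const quickFix = (status) => {
  const fixes = {
    401: 'Verify the API key is correct',
    403: 'Check account permissions and status',
    429: 'Reduce request frequency',
    500: 'Retry later or contact support',
    503: 'Wait for service recovery',
  };
  return fixes[status] ?? 'Unknown error; inspect the response body';
};
```

Calling `quickFix(429)` yields the table's advice, `"Reduce request frequency"`, which you might surface in logs or a user-facing error message.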
Debugging Tips
🔍 Request Debugging
```javascript
// Enable detailed logging (option names vary by SDK version; check your client library's docs)
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  debug: true,
  logger: {
    log: (message) => {
      console.log('[API]', message);
    }
  }
});

// Capture full error details
try {
  const response = await client.chat.completions.create({...});
} catch (error) {
  console.error('Status:', error.status);
  console.error('Headers:', error.headers);
  console.error('Body:', error.body);
}
```
📊 Performance Monitoring
```javascript
// Monitor API performance: wrap each call in monitor.track(), then read monitor.report()
const monitor = {
  requests: 0,
  errors: 0,
  totalLatency: 0,
  track: async (fn) => {
    const start = Date.now();
    monitor.requests++;
    try {
      const result = await fn();
      monitor.totalLatency += Date.now() - start;
      return result;
    } catch (error) {
      monitor.errors++;
      throw error;
    }
  },
  report: () => ({
    totalRequests: monitor.requests,
    errorRate: monitor.errors / monitor.requests,
    avgLatency: monitor.totalLatency / monitor.requests
  })
};
```
Best Practices
Error Handling Checklist
✅ Must Implement
- [ ] Retry mechanism
- [ ] Timeout handling
- [ ] Logging
- [ ] User-friendly error messages
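Of these, timeout handling is the easiest to overlook. One minimal sketch using the standard `AbortController` (the 10-second budget is an arbitrary example; tune it to your workload):

```javascript
// Abort a request that exceeds a time budget
const withTimeout = async (fn, ms = 10_000) => {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    // Pass the signal so the underlying fetch/SDK call can be cancelled
    return await fn(controller.signal);
  } finally {
    clearTimeout(timer); // always clean up, even on error
  }
};
```

Usage looks like `await withTimeout((signal) => fetch(url, { signal }))`; most HTTP clients and SDKs accept an abort signal in some form.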
🎯 Recommended
- [ ] Error monitoring and alerts
- [ ] Fallback strategies
- [ ] Error statistics and analysis
- [ ] Automatic recovery mechanisms
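A fallback strategy can be as simple as routing to a backup call when the primary one fails. A hedged sketch (both calls are placeholders for, say, a primary and a backup model or region):

```javascript
// Try the primary call; fall back to a secondary one on any failure
const withFallback = async (primary, fallback) => {
  try {
    return await primary();
  } catch (error) {
    console.warn('Primary failed, using fallback:', error.message);
    return fallback();
  }
};
```

In practice you may want to fall back only on retryable errors (timeouts, 5xx) rather than on every failure.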
Community Solutions
Developer Experience Sharing
@developer123
When encountering 429 errors, I used an exponential backoff strategy with Redis as a request queue, which perfectly solved the rate limiting issue.
Tags: Rate limiting, Retry strategy
@ai_engineer
To handle token limit issues, I developed middleware that automatically splits long text and intelligently processes over-length inputs.
Tags: Token limit, Text splitting
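The splitting approach @ai_engineer describes can be sketched roughly as follows. This is a naive character-based version with made-up defaults; a real implementation would count tokens with a proper tokenizer:

```javascript
// Split long text into overlapping chunks (character-based approximation)
const splitText = (text, maxChars = 4000, overlap = 200) => {
  const step = Math.max(1, maxChars - overlap); // guard against non-advancing loops
  const chunks = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + maxChars));
  }
  return chunks;
};
```

The overlap between consecutive chunks helps preserve context across chunk boundaries when each piece is sent to the model separately.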
Quickly Diagnose Your API Issues
Intelligent error analysis and detailed solutions make debugging straightforward.
Start Diagnosis