PHP SDK Integration Guide

Quickly integrate the LLM API into your PHP application; supports Laravel, Symfony, and other major frameworks.

  • Composer install: one command
  • Simple configuration: works out of the box
  • High performance: async concurrency support
  • Framework integration: Laravel/Symfony

1. Install & Configure

Composer install

# Install with Composer
composer require n1n/llm-api-php

# Or add to composer.json
{
    "require": {
        "n1n/llm-api-php": "^1.0"
    }
}

# Optional: install Guzzle as the HTTP client
composer require guzzlehttp/guzzle

System requirements

  • PHP >= 7.4
  • Composer 2.0+
  • ext-json extension
  • ext-curl or the Guzzle HTTP client

2. Basic Usage

Getting Started

<?php
require_once 'vendor/autoload.php';

use N1N\LLMClient;
use N1N\Models\ChatMessage;

// Initialize client
$client = new LLMClient([
    'api_key' => 'your-api-key',
    'base_url' => 'https://api.n1n.ai/v1'
]);

try {
    // Basic chat
    $response = $client->chat()->create([
        'model' => 'gpt-3.5-turbo',
        'messages' => [
            ['role' => 'system', 'content' => 'You are a helpful assistant'],
            ['role' => 'user', 'content' => 'Introduce the PHP language']
        ],
        'temperature' => 0.7,
        'max_tokens' => 500
    ]);
    
    echo $response->choices[0]->message->content;
    
    // Using message objects
    $messages = [
        new ChatMessage('system', 'You are a PHP expert'),
        new ChatMessage('user', 'How to connect to a database with PHP?')
    ];
    
    $response = $client->chat()->create([
        'model' => 'gpt-4o-mini',
        'messages' => $messages
    ]);
    
    // Get usage stats
    echo "Token usage: " . $response->usage->total_tokens . "\n";
    echo "Cost: $" . $response->usage->cost . "\n";
    
} catch (Exception $e) {
    echo "Error: " . $e->getMessage();
}

3. Streaming Responses

Real-time output

<?php
use N1N\LLMClient;

$client = new LLMClient(['api_key' => 'your-api-key']);

// Streaming response
$stream = $client->chat()->createStream([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Write a long story']
    ],
    'stream' => true
]);

// Handle streaming data
foreach ($stream as $chunk) {
    if (isset($chunk->choices[0]->delta->content)) {
        echo $chunk->choices[0]->delta->content;
        ob_flush(); // Flush immediately to browser
        flush();
    }
}

// Stream with callbacks
$client->chat()->streamWithCallback(
    [
        'model' => 'gpt-3.5-turbo',
        'messages' => [['role' => 'user', 'content' => 'Hello']]
    ],
    function($chunk) {
        // Handle each data chunk
        if ($content = $chunk->choices[0]->delta->content ?? null) {
            echo $content;
        }
    },
    function($error) {
        // Error handling
        echo "Stream error: " . $error->getMessage();
    }
);

💡 Streaming tips

  • Use ob_flush() and flush() to push output to the browser immediately
  • Call set_time_limit(0) to avoid script timeouts on long streams
  • Disable output buffering with ob_end_flush()
  • Behind Nginx, set proxy_buffering off;
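Put together, a typical setup before starting a streamed response might look like the following. The header values are standard SSE and Nginx conventions, not requirements of this SDK:

```php
<?php
// Recommended setup before emitting a streamed response
set_time_limit(0);                          // no script timeout for long streams
header('Content-Type: text/event-stream');  // SSE content type
header('Cache-Control: no-cache');          // prevent intermediary caching
header('X-Accel-Buffering: no');            // ask Nginx to disable response buffering

// Ensure at least one output buffer exists, then flush and close it
// so each echo reaches the client immediately
if (ob_get_level() === 0) {
    ob_start();
}
ob_end_flush();
```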

4. Framework Integration

Laravel integration

  • ✅ Auto-registered service provider
  • ✅ Facade support
  • ✅ Queue job integration
  • ✅ Response caching support
  • ✅ Logging integration

Symfony integration

  • ✅ Bundle configuration
  • ✅ Dependency injection container
  • ✅ Event Dispatcher
  • ✅ Monolog logging integration
  • ✅ Cache component support
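Assuming the package ships the Laravel service provider and facade listed above, usage might look like this. The `LLM` facade name, namespace, and config keys are illustrative guesses, not confirmed API:

```php
<?php
// config/llm.php (published by the package; keys are illustrative)
return [
    'api_key'  => env('LLM_API_KEY'),
    'base_url' => env('LLM_BASE_URL', 'https://api.n1n.ai/v1'),
];

// Anywhere in the app, via the hypothetical facade
use N1N\Laravel\Facades\LLM;

$response = LLM::chat()->create([
    'model'    => 'gpt-4o-mini',
    'messages' => [['role' => 'user', 'content' => 'Hello from Laravel']],
]);
```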

5. Error Handling

Authentication error (401)

Check that the API key is configured correctly

Rate limit error (429)

Implement retries and respect the Retry-After header

Request error (400)

Validate request parameter formats and the model name

Server error (500)

Use exponential backoff retries
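A minimal retry helper for the 429/500 cases above could look like this. It is plain PHP with no SDK dependency; the attempt count, jitter-free delays, and cap are arbitrary defaults:

```php
<?php
/**
 * Retry a callable with exponential backoff: 1s, 2s, 4s, 8s, ... capped at $maxDelay.
 * Sketch only; delays are in seconds.
 */
function retryWithBackoff(callable $fn, int $maxAttempts = 5, float $baseDelay = 1.0, float $maxDelay = 30.0)
{
    $attempt = 0;
    while (true) {
        try {
            return $fn();
        } catch (Exception $e) {
            $attempt++;
            if ($attempt >= $maxAttempts) {
                throw $e; // out of retries, surface the last error
            }
            $delay = min($baseDelay * (2 ** ($attempt - 1)), $maxDelay);
            usleep((int) ($delay * 1000000));
        }
    }
}
```

In real use you would retry only on 429/5xx responses and honor a Retry-After header when the server sends one.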

6. Advanced Features

Function Calling

Call custom functions to extend AI capabilities
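The sketch below follows the common OpenAI-style `tools` schema; whether this SDK accepts the exact same parameter shape is an assumption, and `get_weather` is a hypothetical local function:

```php
<?php
$response = $client->chat()->create([
    'model'    => 'gpt-4o-mini',
    'messages' => [['role' => 'user', 'content' => 'What is the weather in Berlin?']],
    'tools'    => [[
        'type' => 'function',
        'function' => [
            'name'        => 'get_weather',   // hypothetical local function
            'description' => 'Get the current weather for a city',
            'parameters'  => [
                'type'       => 'object',
                'properties' => ['city' => ['type' => 'string']],
                'required'   => ['city'],
            ],
        ],
    ]],
]);

// If the model chose to call the tool, dispatch it yourself
$toolCall = $response->choices[0]->message->tool_calls[0] ?? null;
if ($toolCall !== null) {
    $args = json_decode($toolCall->function->arguments, true);
    // ... run get_weather($args['city']) and send the result back as a tool message
}
```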

Batch processing

Process multiple requests concurrently to improve efficiency

Session management

Maintain conversation context for smarter dialogues
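Session management can be as simple as accumulating the message history and replaying it on each request. This small helper is plain PHP and independent of the SDK; the trimming threshold is an arbitrary example:

```php
<?php
/** Keeps a rolling conversation history to send as the `messages` parameter. */
class Conversation
{
    private array $messages = [];

    public function __construct(string $systemPrompt)
    {
        $this->messages[] = ['role' => 'system', 'content' => $systemPrompt];
    }

    public function addUser(string $content): void
    {
        $this->messages[] = ['role' => 'user', 'content' => $content];
    }

    public function addAssistant(string $content): void
    {
        $this->messages[] = ['role' => 'assistant', 'content' => $content];
    }

    /** Full history, trimmed to the system prompt plus the last $keep messages. */
    public function messages(int $keep = 20): array
    {
        $system = [$this->messages[0]];
        $rest   = array_slice($this->messages, 1);
        return array_merge($system, array_slice($rest, -$keep));
    }
}
```

The return value of `messages()` is passed directly as the `messages` parameter; trimming keeps the request under the model's context limit while preserving the system prompt.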

Response caching

Redis/Memcached caching optimization
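One common approach is to derive a deterministic cache key from the request payload and check the cache before calling the API. The key function below is plain PHP; the cache backend (Redis via predis, Memcached, or symfony/cache) and the `$cache`/`$client` variables are assumptions from your stack:

```php
<?php
/** Deterministic cache key for a chat request payload. */
function llmCacheKey(array $params): string
{
    ksort($params); // top-level key order must not change the hash
    return 'llm:' . sha1(json_encode($params));
}

// Sketch with a PSR-16 style cache (e.g. symfony/cache):
// $key = llmCacheKey($params);
// if (($cached = $cache->get($key)) !== null) {
//     return $cached;
// }
// $response = $client->chat()->create($params);
// $cache->set($key, $response, 3600); // cache only deterministic (temperature 0) requests
// return $response;
```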

Asynchronous processing

ReactPHP async non-blocking calls

Queued jobs

Background processing for long-running tasks

7. Best Practices

🔒 Security recommendations

  • ✅ Store API keys in environment variables
  • ✅ Implement request signature verification
  • ✅ Set IP allowlists
  • ✅ Encrypt sensitive data in transit
  • ✅ Rotate API keys regularly
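For the first recommendation, read the key from the environment instead of hardcoding it. The variable names `LLM_API_KEY` and `LLM_BASE_URL` are conventions chosen here, not names mandated by the SDK:

```php
<?php
/** Build client config from the environment; fails fast if the key is missing. */
function llmConfigFromEnv(): array
{
    $apiKey = getenv('LLM_API_KEY');
    if ($apiKey === false || $apiKey === '') {
        throw new RuntimeException('LLM_API_KEY environment variable is not set');
    }
    return [
        'api_key'  => $apiKey,
        'base_url' => getenv('LLM_BASE_URL') ?: 'https://api.n1n.ai/v1',
    ];
}

// $client = new \N1N\LLMClient(llmConfigFromEnv());
```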

⚡ Performance optimization

  • ✅ Use Redis to cache responses
  • ✅ Implement connection pooling
  • ✅ Use async concurrent requests
  • ✅ Set reasonable timeouts
  • ✅ Use queues for batch tasks

📦 Recommended packages

  • guzzlehttp/guzzle - HTTP client
  • predis/predis - Redis cache
  • react/event-loop - Async processing
  • symfony/cache - Cache component