Text Generation API

A powerful text generation interface that supports tasks ranging from simple completion to complex creative writing.

Core API · Highly Customizable · Multi-scenario Applications

API Overview

Interface Information

POST https://api.n1n.ai/v1/completions
Content-Type: application/json
- ✍️ Text Completion: automatically complete sentences and paragraphs
- 📝 Content Creation: generate articles, stories, and poems
- 🔄 Text Transformation: translation, rewriting, summarization

Request Parameters

| Parameter Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID, e.g. text-davinci-003 |
| prompt | string \| array | Yes | Input text or array of texts |
| max_tokens | integer | No | Maximum number of tokens to generate (default: 16) |
| temperature | float | No | Controls randomness (0-2, default: 1) |
| top_p | float | No | Nucleus sampling parameter (0-1, default: 1) |
| n | integer | No | Number of completions to generate (default: 1) |
| stream | boolean | No | Whether to stream output (default: false) |
| stop | string \| array | No | Stop sequences; generation stops when one is encountered |
| presence_penalty | float | No | Reduces topic repetition (-2.0 to 2.0) |
| frequency_penalty | float | No | Reduces word repetition (-2.0 to 2.0) |

Code Examples

Basic Text Generation

Python
import requests

response = requests.post(
    "https://api.n1n.ai/v1/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "text-davinci-003",
        "prompt": "Write a poem about spring: ",
        "max_tokens": 100,
        "temperature": 0.8
    }
)

result = response.json()
print(result['choices'][0]['text'])

Batch Generate Multiple Results

JavaScript
const response = await fetch('https://api.n1n.ai/v1/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'text-davinci-003',
    prompt: 'Name this product: Smart Home Control System',
    max_tokens: 20,
    temperature: 0.9,
    n: 5,  // Generate 5 different results
    stop: ['\n']
  })
});

const data = await response.json();
// Output all generated names
data.choices.forEach((choice, index) => {
  console.log(`Option ${index + 1}: ${choice.text.trim()}`);
});

Using Stop Sequences to Control Output

JavaScript
// Generate Q&A pairs, stop when encountering "Q: "
const generateQA = async () => {
  const response = await fetch('https://api.n1n.ai/v1/completions', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'text-davinci-003',
      prompt: 'Q: What is Artificial Intelligence? \nA: ',
      max_tokens: 200,
      temperature: 0.7,
      stop: ['Q: ', '\n\n']  // Stop when these sequences are encountered
    })
  });
  
  const data = await response.json();
  return data.choices[0].text;
};
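When `stream: true` is set, the response arrives as server-sent events rather than a single JSON body. The following sketch shows how the event lines could be assembled into the final text, assuming an OpenAI-style stream where each event is a `data: {json}` line and the stream ends with `data: [DONE]`; verify the exact event format against the endpoint's actual output.

```python
import json

def collect_stream_text(sse_lines):
    """Assemble completion text from server-sent-event lines.

    Each event line has the form 'data: {json}'; the stream is
    assumed to end with 'data: [DONE]'.
    """
    parts = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        parts.append(chunk["choices"][0]["text"])
    return "".join(parts)
```

In a real client you would feed this function the lines from an open HTTP response body instead of a prepared list.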

Parameter Details

Temperature Parameter

Controls randomness and creativity of output. Higher values are more random, lower values are more deterministic.

Low Temperature (0-0.3)

Precise, consistent, conservative

Suitable for: fact queries, code generation

Medium Temperature (0.4-0.7)

Balanced, natural, fluent

Suitable for: conversations, translation

High Temperature (0.8-2.0)

Creative, diverse, unexpected

Suitable for: creative writing, brainstorming
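Conceptually, temperature divides the model's raw scores (logits) before they are converted to probabilities. This illustrative sketch (the API applies the scaling internally) shows why low temperatures make the top choice dominate while high temperatures flatten the distribution:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to probabilities, scaled by temperature.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, with logits `[2.0, 1.0, 0.0]`, a temperature of 0.1 puts nearly all probability on the first token, while a temperature of 10 yields a near-uniform distribution.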

Top_p Parameter (Nucleus Sampling)

Controls the cumulative probability threshold for sampling, used in conjunction with temperature.

- top_p = 0.1: sample only from the smallest set of tokens whose cumulative probability reaches 10%
- top_p = 0.5: sample from the top 50% of probability mass
- top_p = 1.0: consider the full vocabulary
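The filtering step behind nucleus sampling can be sketched as follows. This is illustrative only (the API performs it server-side): tokens are ranked by probability, the smallest prefix whose cumulative probability reaches `top_p` is kept, and the kept probabilities are renormalized.

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize the kept probabilities.
    """
    # rank tokens by probability, descending
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())
    return {t: p / total for t, p in kept.items()}
```

With `{"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}`, a `top_p` of 0.5 keeps only `a`, while 0.8 keeps `a` and `b` and renormalizes them.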

Penalty Parameters

Presence Penalty

Penalizes tokens that have already appeared, encouraging the model to move on to new topics.

- Positive values: avoid repeating topics
- Negative values: stay focused on the same topics
- Range: -2.0 to 2.0

Frequency Penalty

Penalizes based on word frequency, reducing repetitive vocabulary.

- Positive values: reduce repetitive vocabulary
- Negative values: allow more repetition
- Range: -2.0 to 2.0
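The difference between the two penalties can be sketched as a logit adjustment, following the commonly documented OpenAI-style formula: presence penalty subtracts a flat amount from any token that has appeared at least once, while frequency penalty subtracts an amount proportional to how often it has appeared. Illustrative only; the API applies this internally.

```python
def apply_penalties(logits, generated_tokens,
                    presence_penalty=0.0, frequency_penalty=0.0):
    """Adjust candidate-token logits using the two penalty parameters.

    For each candidate token:
      logit -= presence_penalty  * (1 if it already appeared else 0)
      logit -= frequency_penalty * (number of times it appeared)
    """
    counts = {}
    for tok in generated_tokens:
        counts[tok] = counts.get(tok, 0) + 1
    adjusted = {}
    for tok, logit in logits.items():
        count = counts.get(tok, 0)
        adjusted[tok] = (logit
                         - presence_penalty * (1 if count > 0 else 0)
                         - frequency_penalty * count)
    return adjusted
```

Note that the presence term is the same whether a token appeared once or ten times, whereas the frequency term grows with each repetition.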

Application Scenarios

📝 Content Creation

{ "prompt": "Write an article opening about sustainable development: ", "max_tokens": 200, "temperature": 0.8 }

Suitable for generating blogs, articles, creative content

💼 Business Copy

{ "prompt": "Product: Smart Watch\nFeatures: Health Monitoring\nCopy: ", "max_tokens": 50, "temperature": 0.7 }

Generate marketing copy, product descriptions

🎯 Data Extraction

{ "prompt": "Extract keywords from the following text: ...", "max_tokens": 30, "temperature": 0.3 }

Information extraction, summary generation

🔄 Text Transformation

{ "prompt": "Rewrite the following content more professionally: ...", "max_tokens": 150, "temperature": 0.5 }

Style conversion, tone adjustment

Best Practices

Clear Prompts

Provide clear instructions and context, use separators to distinguish different parts

Set Parameters Appropriately

Adjust temperature and max_tokens based on task type

Use Stop Sequences

Define appropriate stop conditions to avoid generating too much irrelevant content

Error Handling

Implement a retry mechanism and error handling logic for transient failures such as rate limits and timeouts.
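A minimal sketch of such a retry wrapper, using exponential backoff with jitter. The helper name and structure are illustrative; `send_request` stands in for any zero-argument callable that performs the HTTP call and raises on transient failures (e.g. 429 or 5xx responses).

```python
import random
import time

def post_with_retry(send_request, max_retries=3, base_delay=1.0):
    """Call send_request, retrying transient failures with
    exponential backoff plus a small random jitter.
    """
    for attempt in range(max_retries + 1):
        try:
            return send_request()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted, surface the error
            # backoff: base_delay, 2x, 4x, ... plus jitter
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

In practice you would catch only the specific exception types your HTTP client raises for retryable errors, rather than a bare `Exception`.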

Important Notes

- The Text Generation API is gradually being migrated to the Chat Completions API
- Some newer models may not support the completions endpoint
- max_tokens counts toward the total token limit
- Batch requests (n > 1) consume more tokens
- Chat models are recommended for better performance