How to Use Text Generation APIs with JavaScript

Posted on 4/16/2025

Text generation APIs let you build apps that create human-like text. With JavaScript, you can easily integrate these APIs using tools like fetch and async/await. One popular option is NanoGPT, offering access to over 125 AI models for tasks like text and image generation. It uses a pay-as-you-go model and keeps data private by storing it locally.

Key Takeaways:

  • NanoGPT Features: Access multiple models like GPT-4o, Claude, and Gemini. Local data storage ensures privacy.
  • Setup: Install Node.js, npm, and a code editor. Use environment variables to secure your API key.
  • API Integration: Use JavaScript to send prompts and handle responses with error handling and retries.
  • Customization: Adjust parameters like temperature for creativity and max_tokens for response length.
  • Cost Management: Monitor API usage to control expenses.

Whether you're building a chatbot, automating content generation, or creating AI-driven tools, NanoGPT simplifies integration while keeping costs and data secure.


Setup Requirements

Get your development environment ready with the tools and configurations outlined below.

Required Tools

  • Node.js: Download and install the latest LTS version from nodejs.org.
  • npm: Bundled with Node.js for managing dependencies.
  • Code Editor: Use a modern JavaScript editor like VS Code or Sublime Text.
  • Terminal/Command Line: Essential for running npm commands and managing your project.
  • Git: While optional, it's highly recommended for version control.

Once Node.js is installed, npm will be available for handling dependencies. After this, proceed to set up your NanoGPT API access.

NanoGPT API Key Setup

To use the NanoGPT API, you'll need an API key. You can start with guest access (API usage tied to browser cookies) for testing or create an account for production use, which provides better security and persistent access.

Follow these steps:

  1. Visit the NanoGPT platform.
  2. Generate your API key from the dashboard.
  3. Store the key securely in your environment variables.

After obtaining your API key, it's time to organize your project.

Project Structure

Set up your project directory with the following structure:

project-root/  
├── src/  
│   ├── config/  
│   │   └── api.js  
│   ├── services/  
│   │   └── textGeneration.js  
│   └── index.js  
├── .env  
├── .gitignore  
└── package.json

Here’s how to create and initialize the project:

mkdir my-text-gen-project  
cd my-text-gen-project  
npm init -y  
npm install dotenv axios

Next, create a .env file in the project root and add your API key:

NANOGPT_API_KEY=your_api_key_here

Make sure to add .env to your .gitignore file to keep your API key private:

.env

If you're using Git for version control, initialize it with the following commands:

git init  
git add .  
git commit -m "Initial project setup"

Now your development environment and project structure are ready to go!
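Since every request depends on the key loading correctly, it can help to fail fast at startup rather than discover a missing key on the first API call. A minimal sketch (the `assertApiKey` name is our own, not part of NanoGPT or any library):

```javascript
// Fail fast if the API key is missing, rather than sending
// unauthenticated requests later (hypothetical helper)
const assertApiKey = (env) => {
  const key = env.NANOGPT_API_KEY;
  if (!key || key.trim() === '') {
    throw new Error('NANOGPT_API_KEY is not set - check your .env file');
  }
  return key;
};

// Usage after dotenv.config(): assertApiKey(process.env);
```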

Creating API Requests

This section explains how to structure and execute API requests using JavaScript. You'll learn the key components needed for seamless API integration.

API Request Structure

Start by creating a service file to manage API communication. For example, in src/services/textGeneration.js:

import axios from 'axios';
import dotenv from 'dotenv';

dotenv.config();

const textGenerationService = axios.create({
  baseURL: 'https://api.nano-gpt.com/v1',
  headers: {
    'Authorization': `Bearer ${process.env.NANOGPT_API_KEY}`,
    'Content-Type': 'application/json'
  }
});

export const generateText = async (prompt, parameters = {}) => {
  try {
    const response = await textGenerationService.post('/generate', {
      prompt,
      max_tokens: parameters.maxTokens || 100,
      temperature: parameters.temperature || 0.7,
      model: parameters.model || 'gpt-4o'
    });
    return response.data;
  } catch (error) {
    console.error('Text generation error:', error);
    throw error;
  }
};

This setup ensures a consistent structure for API requests, with environment variables used to protect sensitive information such as the API key.

Processing API Results

To handle API responses effectively, include proper error handling and response processing. Here's an example:

const handleGenerationResult = async (prompt) => {
  try {
    const result = await generateText(prompt);

    if (result && result.success) {
      const { text, usage } = result;
      // Process successful response
      return {
        generatedText: text,
        tokenCount: usage.total_tokens,
        cost: usage.total_cost
      };
    }

    // Surface unsuccessful responses instead of silently returning undefined
    throw new Error('Text generation request did not succeed');
  } catch (error) {
    if (error.response) {
      switch (error.response.status) {
        case 401:
          throw new Error('Invalid API key');
        case 429:
          throw new Error('Rate limit exceeded');
        default:
          throw new Error('API request failed');
      }
    }
    throw error;
  }
};

This function ensures that errors like invalid keys or rate limits are caught and handled gracefully, while successful responses are processed for use.

Async JavaScript Methods

For handling multiple API requests or retries, use efficient async patterns:

const batchProcess = async (prompts) => {
  const results = await Promise.allSettled(
    prompts.map(prompt => generateText(prompt))
  );

  return results.map((result, index) => ({
    prompt: prompts[index],
    status: result.status,
    data: result.status === 'fulfilled' ? result.value : null,
    error: result.status === 'rejected' ? result.reason : null
  }));
};

// Usage example with retry logic
const generateWithRetry = async (prompt, maxRetries = 3) => {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const result = await generateText(prompt);
      return result;
    } catch (error) {
      if (attempt === maxRetries) throw error;
      await new Promise(resolve => setTimeout(resolve, 1000 * attempt));
    }
  }
};

The batchProcess function handles multiple prompts simultaneously, while generateWithRetry adds resilience by retrying failed requests. Both approaches improve efficiency and reliability when working with APIs.

Stay mindful of API usage limits and design your requests with error handling in place. Next, we'll dive into fine-tuning your requests with core parameter settings.

API Parameter Settings

Core Parameters

Core parameters control how the API generates text, affecting aspects like randomness, length, and focus.

const parameters = {
  model: 'auto',           // Automatically selects the best model
  temperature: 0.7,        // Controls randomness (range: 0 to 1)
  max_tokens: 150,         // Limits the length of the generated text
  top_p: 0.9,              // Adjusts nucleus sampling
  frequency_penalty: 0.0,  // Reduces repetitive word usage
  presence_penalty: 0.0    // Encourages topic variety
};

Here’s a quick guide to using these parameters effectively:

Parameter   | Suggested Range | Ideal Use Case
Temperature | 0.2-0.4         | Generating factual answers or code
Temperature | 0.7-0.9         | Creative tasks like brainstorming or storytelling
Max Tokens  | 50-100          | Short responses or summaries
Max Tokens  | 500-1000        | Longer, detailed outputs such as articles

Adjust these settings based on the type of text you need.
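The table's suggestions can be captured in a small preset helper. The values below are illustrative picks from the ranges above, not defaults of the NanoGPT API:

```javascript
// Map a task type to suggested parameter presets (illustrative values)
const presetFor = (task) => {
  const presets = {
    factual:  { temperature: 0.3, max_tokens: 100 },  // facts and code
    creative: { temperature: 0.8, max_tokens: 300 },  // brainstorming, stories
    article:  { temperature: 0.7, max_tokens: 800 }   // long-form output
  };
  // Fall back to balanced defaults for unknown task types
  return presets[task] || { temperature: 0.7, max_tokens: 150 };
};
```

A preset like this can then be spread into the `parameters` argument of `generateText`.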

Writing Effective Prompts

To get the best results, structure your prompts with clear instructions and context. Here's an example:

const generateStructuredText = async () => {
  const prompt = {
    context: "You are an expert in JavaScript documentation",
    instruction: "Explain error handling techniques",
    format: "Provide code examples with detailed explanations",
    constraints: "Focus on try-catch blocks and async/await patterns"
  };

  const formattedPrompt = `${prompt.context}
Task: ${prompt.instruction}
Format: ${prompt.format}
Constraints: ${prompt.constraints}`;

  return await generateText(formattedPrompt);
};

This structure ensures the API understands the task and generates relevant, high-quality content.

Output Formatting

Once the text is generated, formatting it properly is key. Use the following code to clean and structure the output:

const formatGeneratedText = (response) => {
  const formatter = {
    // Collapse runs of spaces and tabs, but keep newlines so the
    // code-block markers below can still be matched
    removeExtraSpaces: text => text.replace(/[^\S\n]+/g, ' ').trim(),
    formatCodeBlocks: text => {
      return text.replace(/```(\w+)?\n([\s\S]*?)```/g, (_, lang, code) => {
        return `<pre><code class="${lang || ''}">${code.trim()}</code></pre>`;
      });
    },
    sanitizeOutput: text => {
      return text
        .replace(/[<>]/g, match => ({
          '<': '&lt;',
          '>': '&gt;'
        })[match])
        .trim();
    }
  };

  // Sanitize first, then convert code blocks, so the generated
  // <pre><code> markup itself is not escaped
  return {
    raw: response.text,
    formatted: formatter.formatCodeBlocks(
      formatter.removeExtraSpaces(
        formatter.sanitizeOutput(response.text)
      )
    )
  };
};

This script removes unnecessary spaces, converts code blocks into HTML-friendly formats, and sanitizes special characters for safe display. Use these steps to ensure the output is clean and ready for use.

Usage and Security

Data Privacy Features

NanoGPT ensures secure text generation by keeping conversations stored locally on your device. Here's an example of how to handle data securely:

const secureDataHandler = {
  storeLocally: (data) => {
    // Encrypt data before saving
    const encrypted = encryptData(data);
    localStorage.setItem('conversation_data', encrypted);
  },
  clearData: () => {
    localStorage.removeItem('conversation_data');
    sessionStorage.clear();
  }
};

"Conversations are saved on your device only. We strictly inform providers not to train models on your data." - NanoGPT

In addition to protecting your data, it's important to manage API usage and ensure compliance with relevant regulations.

Cost Management

Keeping track of API usage is essential for controlling expenses. Here's a simple way to monitor and manage it:

const usageTracker = {
  tokens: 0,
  requests: 0,
  // Use a regular method rather than an arrow function so that
  // `this` refers to the tracker object
  async trackRequest(prompt) {
    const tokens = await calculateTokens(prompt);
    this.tokens += tokens;
    this.requests++;
    return {
      currentTokens: this.tokens,
      totalRequests: this.requests,
      estimatedCost: calculateCost(this.tokens)
    };
  }
};
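The `calculateTokens` and `calculateCost` helpers above are app-defined. Here's one hedged sketch of `calculateCost` using a flat placeholder rate; real per-model rates should be taken from NanoGPT's pricing page:

```javascript
// Placeholder rate - actual pricing varies per model, check NanoGPT's pricing
const COST_PER_1K_TOKENS = 0.002;

// Estimate spend in dollars from a running token count
const calculateCost = (tokens) => (tokens / 1000) * COST_PER_1K_TOKENS;
```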

To optimize costs, consider these strategies:

Usage Type       | Optimization Strategy      | Impact on Costs
Short Responses  | Set max_tokens: 50-100     | Reduces token usage
Batch Processing | Group similar requests     | Lowers API call volume
Model Selection  | Use the auto model feature | Balances cost and output

These adjustments can make a noticeable difference in keeping costs manageable.

To meet data privacy laws, ensure you have user consent and proper data retention policies in place. Here's an example of how to handle these requirements:

const privacyCompliance = {
  userConsent: false,
  dataRetention: 30,

  initializePrivacySettings: () => {
    return {
      localStorageOnly: true,
      automaticDeletion: true,
      dataEncryption: true
    };
  },

  validateCompliance: () => {
    const checks = [
      validateDataStorage(),
      validateRetentionPeriod(),
      validateUserConsent()
    ];
    return Promise.all(checks);
  }
};

const handlePrivacyError = async (error) => {
  console.error('Privacy compliance error:', error);
  await notifyUser({
    type: 'privacy_alert',
    message: 'Unable to process request due to privacy requirements'
  });
  return false;
};

This approach ensures your application adheres to privacy regulations while maintaining secure and efficient operations.

Problem Solving Guide

Common Problems

JavaScript text generation APIs often run into familiar issues. Here's how you can tackle them:

const errorHandler = {
  // sleep, retryRequest, refreshAPIKey, and truncatePrompt are
  // app-level helpers assumed to be defined elsewhere
  handleAPIError: async (error, prompt, maxTokens) => {
    switch (error.code) {
      case 'RATE_LIMIT_EXCEEDED':
        await sleep(1000);
        return retryRequest();
      case 'INVALID_API_KEY':
        return refreshAPIKey();
      case 'CONTEXT_LENGTH_EXCEEDED':
        return truncatePrompt(prompt, maxTokens);
      default:
        console.error('API Error:', error);
        throw new Error('Request failed');
    }
  }
};

Another common challenge is managing token usage to avoid prompt overflows:

const tokenCounter = {
  checkTokenLimit: (prompt) => {
    const estimatedTokens = prompt.split(' ').length * 1.3; // Estimate token usage
    return estimatedTokens <= 4096;
  }
};
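The same word-based estimate can back a simple truncation helper like the `truncatePrompt` referenced in the error handler above. This is a sketch, not a real tokenizer; for accurate counts you would use the model's actual tokenizer:

```javascript
// Rough prompt truncation using the same ~1.3 tokens-per-word estimate
const truncatePrompt = (prompt, maxTokens) => {
  const maxWords = Math.floor(maxTokens / 1.3);
  const words = prompt.split(' ');
  return words.length <= maxWords ? prompt : words.slice(0, maxWords).join(' ');
};
```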

Output Quality Tips

Once errors are under control, focus on improving the quality of the output by fine-tuning parameter settings:

Parameter         | Recommended Setting | Purpose
Temperature       | 0.7                 | Balances creativity and coherence
Max Tokens        | 150-300             | Controls response length
Top P             | 0.9                 | Maintains output relevance
Frequency Penalty | 0.5                 | Reduces repetitive phrases

To ensure responses meet expectations, use a validation approach:

const outputValidator = {
  validateResponse: (response, prompt, minLength = 50) => {
    return {
      isComplete: response.trim().endsWith('.'),       // ends on a full sentence
      meetsLength: response.length >= minLength,
      isRelevant: checkRelevance(response, prompt)     // app-defined relevance check
    };
  }
};

Complex Implementation

Beyond the basics, advanced implementations require more comprehensive strategies. Here's an example of handling streamed responses:

const streamHandler = {
  processStream: async (response) => {
    const reader = response.body.getReader();
    const textDecoder = new TextDecoder();

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      const chunk = textDecoder.decode(value);
      updateUI(chunk);
    }
  }
};

For handling multiple prompts simultaneously:

const complexHandler = {
  handleMultipleRequests: async (prompts) => {
    const results = await Promise.all(
      prompts.map(async (prompt) => {
        const response = await generateText(prompt);
        return validateAndProcess(response);
      })
    );
    return aggregateResults(results);
  }
};

Retrying requests with exponential backoff can also be helpful:

const requestOptimizer = {
  retryWithBackoff: async (request, maxAttempts = 3) => {
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return await request();
      } catch (error) {
        if (attempt === maxAttempts) throw error;
        await sleep(Math.pow(2, attempt) * 1000);
      }
    }
  }
};

Finally, wrap requests in an error boundary to ensure graceful handling of failures:

const errorBoundary = {
  wrapRequest: async (fn) => {
    try {
      return await fn();
    } catch (error) {
      logError(error);
      notifyMonitoring(error);
      return fallbackResponse();
    }
  }
};

These strategies help maintain reliable text generation, ensuring both quality and resilience in your code.

Summary

Implementation Steps

Here’s how to integrate NanoGPT into your JavaScript project:

1. Initial Setup

Start by setting up Node.js and structuring your project. Make sure to securely store your NanoGPT API key in environment variables.

2. API Integration

Use modern JavaScript to make API requests. Here’s a sample implementation:

const textGenerator = {
  async generate(prompt) {
    const response = await fetch('https://api.nano-gpt.com/v1/generate', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.NANOGPT_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ prompt })
    });
    return await response.json();
  }
};

3. Error Handling

Include error handling and retry logic for a smoother experience:

const errorHandler = async (request) => {
  try {
    return await request();
  } catch (error) {
    if (error.code === 'RATE_LIMIT') {
      await delay(1000);
      return retry(request);
    }
    throw error;
  }
};
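The `delay` and `retry` helpers used above are app-level utilities, not part of any library. A minimal sketch (names assumed):

```javascript
// Promise-based pause, used before retrying after a rate limit
const delay = (ms) => new Promise(resolve => setTimeout(resolve, ms));

// Simplest possible retry: invoke the request function again once
const retry = (request) => request();
```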

These steps simplify the process of integrating NanoGPT into your JavaScript application.

NanoGPT Benefits

NanoGPT brings several perks to JavaScript developers looking to implement text generation:

Feature             | Benefit                   | Implementation Impact
Pay-as-you-go Model | Avoids subscription fees  | Manage costs based on actual usage
Local Data Storage  | Keeps data private        | Securely handle sensitive information
Multiple AI Models  | Access to over 125 models | Flexibility for various text generation tasks

"We believe AI should be accessible to anyone. Therefore we enable you to only pay for what you use on NanoGPT, since a large part of the world does not have the possibility to pay for subscriptions."

NanoGPT stands out by storing all data locally on the user’s device, ensuring privacy. Additionally, it offers instant access to new AI models as they are released. This makes it a strong choice for JavaScript projects that require powerful text generation with scalable cost control.

For easier implementation, developers can use NanoGPT’s auto-model feature. This tool automatically selects the best model for a given task, simplifying development while delivering high-quality results without the need for complex configurations.