Installation

npm install caistro
# or
yarn add caistro
# or
bun add caistro

Quick Start

import { Caistro } from "caistro";

const caistro = new Caistro({
  apiKey: process.env.CAISTRO_API_KEY,
});

async function main() {
  const response = await caistro.chat({
    messages: [
      { role: "user", content: "How do I scale Meta ads?" }
    ],
  });

  console.log(response.choices[0].message.content);
}

main().catch(console.error);

Configuration

const caistro = new Caistro({
  apiKey: "cai_...",           // Required
  baseUrl: "https://api.caistrolabs.com", // Optional, defaults to production
});

Methods

chat(request)

Create a chat completion.
const response = await caistro.chat({
  messages: [
    { role: "system", content: "You are a marketing expert." },
    { role: "user", content: "How do I improve my CTR?" }
  ],
  temperature: 0.7,
  max_tokens: 500,
});
Parameters:
Name          Type        Required  Description
messages      Message[]   Yes       Conversation history
model         string      No        Model ID (default: Nous-20B)
temperature   number      No        Sampling temperature (default: 0.7)
max_tokens    number      No        Max tokens to generate (default: 512)
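
Any of the optional parameters can be overridden per request; a minimal sketch that sets the model explicitly (using the documented default ID, "Nous-20B") and a tighter token limit:

const response = await caistro.chat({
  model: "Nous-20B", // Documented default ID; any available model ID can go here
  messages: [{ role: "user", content: "Summarize this week's ad performance." }],
  max_tokens: 256,
});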

listModels()

List available models.
const models = await caistro.listModels();
console.log(models.data);
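
Model IDs from the list can be passed straight to chat() via the model parameter. The id field below is an assumption about the shape of each entry in models.data; inspect the actual response if it differs:

const models = await caistro.listModels();
const modelId = models.data[0].id; // "id" is an assumed field name, not confirmed above

const response = await caistro.chat({
  model: modelId,
  messages: [{ role: "user", content: "How do I scale Meta ads?" }],
});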

Error Handling

import { Caistro, CaistroError } from "caistro";

try {
  const response = await caistro.chat({ messages: [...] });
} catch (error) {
  if (error instanceof CaistroError) {
    // API errors carry an HTTP status code and a message
    console.error(`API Error ${error.status}: ${error.message}`);
  } else {
    throw error; // Re-throw anything that is not an API error
  }
}
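
For transient failures such as rate limits, the status code on CaistroError makes a simple retry wrapper possible. This is a sketch, not part of the SDK: the withRetry helper is hypothetical, and treating HTTP 429 as a rate limit is an assumption about the API's behavior.

import { Caistro, CaistroError } from "caistro";

const caistro = new Caistro({ apiKey: process.env.CAISTRO_API_KEY });

// Hypothetical helper: retries the call with exponential backoff on HTTP 429.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      const rateLimited = error instanceof CaistroError && error.status === 429;
      if (!rateLimited || i === attempts - 1) throw error;
      await new Promise((resolve) => setTimeout(resolve, 2 ** i * 1000));
    }
  }
  throw new Error("unreachable");
}

const response = await withRetry(() =>
  caistro.chat({ messages: [{ role: "user", content: "How do I improve my CTR?" }] })
);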

TypeScript Types

interface Message {
  role: "user" | "assistant" | "system";
  content: string;
}

interface ChatRequest {
  messages: Message[];
  model?: string;
  temperature?: number;
  max_tokens?: number;
}

interface ChatResponse {
  id: string;
  object: string;
  model: string;
  choices: ChatChoice[];
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}
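
ChatChoice is referenced above but not defined in this section. Based on the access pattern in Quick Start (response.choices[0].message.content), a minimal sketch would be the following; the index and finish_reason fields are assumptions, not confirmed by the documented API:

interface ChatChoice {
  index: number;          // Assumed: position of the choice in the choices array
  message: Message;       // Implied by response.choices[0].message.content
  finish_reason: string;  // Assumed: reason generation stopped
}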