AI API Tutorial
Building tools for AI interactions doesn't have to be complex. In this tutorial, we'll create a streamlined Node.js tool that lets you interact with your AI directly from your terminal. Using modern JavaScript (ESM) and minimal dependencies, we'll build a tool that sends your prompts to the AppDirect.ai service and streams the responses back in real-time. Let's get started!
Prerequisites
Before we begin, ensure you have the following:
- Node.js installed on your system
- Basic knowledge of JavaScript/Node.js
- API credentials (for more information, see Using APIs)
- npm (Node Package Manager)
Project Setup
Create a new project folder and generate the npm configuration.
mkdir ai-cli
cd ai-cli
npm init -y
npm pkg set type="module"
Install Required Dependencies
npm install commander chalk axios dotenv ora sse.js xhr2
| Package | Description | Purpose |
| --- | --- | --- |
| commander | A complete solution for Node.js command-line interfaces | Handles command-line argument parsing and provides a clean interface for building CLI commands |
| chalk | Terminal string styling | Adds colors and styling to our console output, making the CLI more readable and user-friendly |
| axios | Promise-based HTTP client | Manages API requests and supports streaming responses from the AI service |
| dotenv | Zero-dependency module for loading environment variables | Securely manages API keys and configuration by loading them from a .env file |
| ora | Elegant terminal spinner | Provides visual feedback while waiting for API responses with an animated loading indicator |
| sse.js | Server-Sent Events library | Manages server streaming of the AI responses |
| xhr2 | XMLHttpRequest emulation for Node.js | Shim for sse.js using XMLHttpRequest. This library is not needed in the browser. |
Each dependency was chosen carefully to provide specific functionality while keeping the tool lightweight and easy to maintain. They're all well-maintained packages with strong community support and regular updates.
Set up Environment Variables
Before calling AppDirect.ai APIs, we must set up the API key in our environment variables. Storing sensitive information like API keys and endpoints in a .env file helps keep credentials secure and separate from the code, preventing accidental exposure in version control systems.
Create a `.env` file:
API_KEY=your_api_key_here
If you haven't already, generate an API key by following the Using APIs guide. Copy and paste your API key into this `.env` file. For this tutorial, ensure your key has the following scopes:
- ai.read.self
- ai.write.self
- chats.read.self
- chats.write.self
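Before wiring the key into the CLI, it can be worth confirming that dotenv picks it up. Here is a quick, throwaway sanity check (a hypothetical `check-env.js` helper that is not part of the final tool):
// check-env.js: verify that the .env file loads correctly
import dotenv from "dotenv";

dotenv.config();

if (!process.env.API_KEY) {
  console.error("API_KEY is missing. Add it to your .env file.");
  process.exit(1);
}
console.log("API_KEY loaded");
Run it with node check-env.js and delete it once you have confirmed the key loads.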
List my AIs
Let's now add a command to list all your AIs. The output will look something like:
❯ node index.js list
✔ Your AIs:
- LLAMA 3
621b966f-d1e2-46ea-b55e-75b6d1b7365b
Standard LLAMA 3 model.
- Blog Assistant
f331d9f0-5935-4d09-b854-e7dd6d82a42d
AI that assists you in writing your blog
Add an API handler module that will include all the API interaction logic. Create a file called `api-handler.js`:
import axios from "axios";
import { SSE } from "sse.js";
export const listAIs = async () => {
try {
const response = await axios({
method: "get",
url: "https://appdirect.ai/api/v1/me/ai?scope=OWNED",
headers: {
"X-Authorization": `Bearer ${process.env.API_KEY}`,
"Content-Type": "application/json",
},
});
return response.data;
} catch (error) {
throw new Error(`API Error: ${error.message}`);
}
};
This function makes a GET request to the list AIs endpoint (scoped to OWNED), retrieving detailed information about each AI that you own.
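Based on the sample output above and the fields we read in the next step, the returned data is a list of AI objects. A simplified illustration of the shape (placeholder values; real responses may include additional fields):
[
  {
    id: "621b966f-d1e2-46ea-b55e-75b6d1b7365b",
    name: "LLAMA 3",
    description: "Standard LLAMA 3 model."
  }
]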
Create index.js
Next, let's create the main entry point for the program. We use the `commander` library to handle all command-line interactions. The `list` command references the logic created in the previous procedure: it calls the `listAIs` function to list all the AIs that you own.
Create `index.js`:
import { program } from "commander";
import chalk from "chalk";
import ora from "ora";
import dotenv from "dotenv";
import { listAIs, createAI, createChat, sendMessage } from "./api-handler.js";
import xhr from "xhr2";
// sse.js relies on XMLHttpRequest, which Node.js does not provide; this shim is not required in the browser
global.XMLHttpRequest = xhr;
// Load API_KEY from the .env file into process.env
dotenv.config();
program.version("1.0.0").description("AI CLI Tool");
program
.command("list")
.description("List all my AIs")
.action(async () => {
const spinner = ora("Fetching available AIs...").start();
try {
const models = await listAIs();
spinner.succeed("Your AIs:");
models.forEach((model) => {
console.log(chalk.white(`- ${model.name}`));
console.log(chalk.white(` ${model.id}`));
console.log(chalk.gray(` ${model.description}`));
});
} catch (error) {
spinner.fail(chalk.red(`Error: ${error.message}`));
}
});
program.parse();
Now you can list your AIs using:
node index.js list
If you have created an AI before on AppDirect.ai, this will display a formatted list of your AIs, their ids, and their descriptions. If you haven't created any AIs yet, the command output will be empty.
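If you would rather print a friendly message than nothing at all, you could guard the loop in the `list` action with something like the following (an optional tweak, not required by the API):
if (!models || models.length === 0) {
  console.log(chalk.gray("  (no AIs yet)"));
  return;
}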
Create AI
Let's now add a command to create a new AI. The output will look something like:
❯ node index.js create-ai -n "Bob" -d "Your personal CLI helper"
✔ AI created successfully:
- Bob
c89b7bee-dbf7-480b-b6c5-779875f63a89
Your personal CLI helper
We will add a new function to `api-handler.js` to call the API that creates our new AI:
export const createAI = async (name, description, instructions, src) => {
if (!instructions) {
instructions = description;
}
if (!src) {
src = "https://res.cloudinary.com/dwyqkq1hq/image/upload/ro0duemq1iium4odnx6n.webp";
}
try {
const response = await axios({
method: "post",
url: "https://appdirect.ai/api/v1/ai",
data: { name, description, instructions, src },
headers: {
"X-Authorization": `Bearer ${process.env.API_KEY}`,
"Content-Type": "application/json",
},
});
return response.data;
} catch (error) {
throw new Error(`API Error: ${error.message}`);
}
};
Just like the `listAIs` function, we call the `/api/v1/ai` API to create an AI instance.
Add the following command to the CLI:
program
.command("create-ai")
.description("Create a new AI")
.requiredOption('-n, --name <name>', 'name of the AI')
.requiredOption('-d, --description <description>', 'description of the AI')
.action(async (options) => {
const spinner = ora("Creating a new AI...").start();
try {
const result = await createAI(options.name, options.description);
spinner.succeed('AI created successfully:');
console.log(chalk.white(`- ${result.name}`));
console.log(chalk.white(` ${result.id}`));
console.log(chalk.gray(` ${result.description}`));
} catch (error) {
spinner.fail(chalk.red(`Error: ${error.message}`));
}
});
Now you can create a new AI from the command line:
node index.js create-ai -n "AI Name" -d "AI Description"
You should see the name and id of your AI in the output.
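The `createAI` function also accepts `instructions` and an avatar image URL (`src`), falling back to defaults when they are omitted. If you want to expose them from the command line too, you could extend the `create-ai` command with optional flags, for example like this (a variant of the command above; the flag letters are arbitrary choices):
program
  .command("create-ai")
  .description("Create a new AI")
  .requiredOption("-n, --name <name>", "name of the AI")
  .requiredOption("-d, --description <description>", "description of the AI")
  .option("-x, --instructions <instructions>", "instructions for the AI (defaults to the description)")
  .option("-s, --src <url>", "avatar image URL for the AI")
  .action(async (options) => {
    const spinner = ora("Creating a new AI...").start();
    try {
      // Forward the optional fields; createAI falls back to its defaults when they are undefined
      const result = await createAI(options.name, options.description, options.instructions, options.src);
      spinner.succeed("AI created successfully:");
      console.log(chalk.white(`- ${result.name}`));
      console.log(chalk.white(`  ${result.id}`));
      console.log(chalk.gray(`  ${result.description}`));
    } catch (error) {
      spinner.fail(chalk.red(`Error: ${error.message}`));
    }
  });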
Create a Chat Thread
In order to talk to the AI, we must first create a chat session. The chat session maintains the entire conversation history, so the AI always has the proper context for its responses.
Add a new function to `api-handler.js`:
export const createChat = async (aiId) => {
try {
const response = await axios({
method: "post",
url: `https://appdirect.ai/api/v1/ai/${aiId}/chats`,
headers: {
"X-Authorization": `Bearer ${process.env.API_KEY}`,
"Content-Type": "application/json",
},
});
return response.data;
} catch (error) {
throw new Error(`API Error: ${error.message}`);
}
};
Expose this command in the CLI by adding the following code to `index.js`, just before `program.parse();`:
program
.command("create-chat")
.description("Create a new chat session")
.requiredOption("-i, --id <id>", "id of the AI")
.action(async (options) => {
const spinner = ora("Creating a new chat thread...").start();
try {
const result = await createChat(options.id);
spinner.succeed("Chat created successfully:");
console.log(chalk.white(`- ${result.id}`));
} catch (error) {
spinner.fail(chalk.red(`Error: ${error.message}`));
}
});
You can now start a new chat session from the command line by passing the id of the AI that we created earlier:
node index.js create-chat -i your_ai_id
The id of the chat session is returned in the response.
❯ node index.js create-chat -i c89b7bee-dbf7-480b-b6c5-779875f63a89
✔ Chat created successfully:
- cm6tusnoe0001s30yf83xs4bb
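Keep that chat id handy; it is what you will pass to the messaging command in the next step. One convenient option is to stash it in a shell variable, for example:
CHAT_ID=cm6tusnoe0001s30yf83xs4bb
node index.js msg -i "$CHAT_ID" "Tell me a joke"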
Talk to your AI
To start talking to your own AI, add a command that sends a message to the chat thread you created in the previous procedure.
Add a new function to `api-handler.js`:
export const sendMessage = async (chatId, message) => {
try {
const eventSource = new SSE(
`https://appdirect.ai/api/v1/chats/${chatId}`,
{
start: false,
method: "POST",
withCredentials: true,
payload: JSON.stringify({
prompt: message,
date: new Date(),
}),
headers: {
"Content-Type": "application/json; charset=utf-8",
"X-Authorization": `Bearer ${process.env.API_KEY}`,
},
}
);
return eventSource;
} catch (error) {
throw new Error(`API Error: ${error.message}`);
}
};
Streaming plays a vital role in creating a responsive and efficient AI tool. Unlike traditional request-response patterns where you wait for the complete response before seeing any output, streaming receives and processes data in small chunks as they arrive. This approach is particularly crucial for AI applications where responses can be lengthy and generation times may vary.
When you send a message via the prompt, the CLI tool establishes a streaming connection with the AppDirect AI service, allowing it to display the response in real-time, which is similar to watching someone type a response. This creates a more interactive and dynamic experience, providing immediate feedback as the AI generates its response.
Let's use the sse.js library to simplify the parsing of the streaming events.
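For reference, each `message.delta` event carries a JSON payload whose `content.text` fragment we append to the terminal as it arrives. Conceptually, a single delta looks something like this (simplified; the real events may include additional fields):
{ "content": { "text": " everything!" } }
A `message.complete` event signals that the response has finished streaming.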
Let's expose this command to the CLI. Add this code to `index.js`, just before `program.parse();`:
program
.command("msg")
.requiredOption("-i, --id <id>", "id of the chat")
.argument("<message>", "message to send to AI")
.action(async (message, options) => {
const spinner = ora("Processing your message...").start();
try {
const eventSource = await sendMessage(options.id, message);
spinner.succeed("AI is responding");
eventSource.addEventListener("message.delta", (e) => {
const data = JSON.parse(e.data);
process.stdout.write(data.content.text);
});
eventSource.addEventListener("message.complete", () => {
console.log(chalk.green("\nResponse completed"));
});
eventSource.stream();
} catch (error) {
spinner.fail(chalk.red(`Error: ${error.message}`));
}
});
You can now send messages to your AI by passing the id of the chat session we created earlier:
node index.js msg -i your_chat_id "Tell me a joke"
The AI's response is streamed back to your terminal as it is generated.
❯ node index.js msg -i cm6tusnoe0001s30yf83xs4bb "Tell me a joke"
✔ AI is responding
Why don't scientists trust atoms?
Because they make up everything!
Response completed
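That's it. Putting it all together, a typical session with the finished tool looks like this (substitute the ids that your own create-ai and create-chat commands return):
node index.js list
node index.js create-ai -n "Bob" -d "Your personal CLI helper"
node index.js create-chat -i your_ai_id
node index.js msg -i your_chat_id "Tell me a joke"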