A powerful CLI tool for GPT prompts written in JavaScript, supporting both OpenAI and DeepInfra models. Requires Node.js 20+.

Install globally with npm:

```sh
npm i -g node-gpt-cli
```
The tool supports both OpenAI and DeepInfra models. On install, default parameters are written to `~/.node-gpt-config.json`, where you can modify them. For OpenAI, the default configuration looks like this:
```json
{
  "url": "https://api.openai.com/v1/chat/completions",
  "model": "gpt-4-turbo-preview",
  "max_tokens": 1024,
  "temperature": 0.3,
  "top_p": 0.5,
  "frequency_penalty": 0,
  "presence_penalty": 0
}
```
For DeepInfra, the configuration looks like this:

```json
{
  "url": "https://api.deepinfra.com/v1/inference/meta-llama/Llama-2-70b-chat-hf",
  "model": "meta-llama/Llama-2-70b-chat-hf",
  "max_tokens": 1024,
  "temperature": 0.3,
  "top_p": 0.5
}
```
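For reference, here is a minimal sketch of how a config like the above can be turned into a chat-completions request using Node 20's built-in `fetch`. The config fields mirror the defaults shown; the message shape, response parsing, and lack of error handling are illustrative assumptions, not the tool's actual internals, and the DeepInfra inference endpoint expects a different payload.

```js
// Sketch only: load the config and send one OpenAI-style request.
// Assumes an ESM context (Node 20+) and OPENAI_API_KEY in the environment.
import { readFile } from "node:fs/promises";
import { homedir } from "node:os";
import { join } from "node:path";

const config = JSON.parse(
  await readFile(join(homedir(), ".node-gpt-config.json"), "utf8")
);

const res = await fetch(config.url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: config.model,
    max_tokens: config.max_tokens,
    temperature: config.temperature,
    top_p: config.top_p,
    messages: [{ role: "user", content: "what is best in life?" }],
  }),
});

const data = await res.json();
console.log(data.choices?.[0]?.message?.content);
```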
Create an OpenAI API key and assign it to the `OPENAI_API_KEY` environment variable:

```sh
export OPENAI_API_KEY=<YOUR-API-KEY>
```
Create a DeepInfra API key and assign it to the `DEEPINFRA_API_KEY` environment variable:

```sh
export DEEPINFRA_API_KEY=<YOUR-API-KEY>
```
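One plausible way a tool like this can resolve the right key is by matching it to the configured endpoint, as sketched below. This is an assumption for illustration, not the CLI's actual logic.

```js
// Sketch: choose the API key that matches the configured URL.
// The deepinfra.com check is an assumed heuristic, not documented behavior.
function apiKeyFor(url) {
  const key = url.includes("deepinfra.com")
    ? process.env.DEEPINFRA_API_KEY
    : process.env.OPENAI_API_KEY;
  if (!key) throw new Error(`No API key set for ${new URL(url).hostname}`);
  return key;
}

// Example: apiKeyFor("https://api.openai.com/v1/chat/completions")
```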
Run prompts directly or pipe input from other tools:

```sh
gpt4 what is best in life?
echo "what is my name?" | gpt4
```
The tool will automatically use the appropriate API based on your configuration. To switch between providers:

- Edit your config file with `nano ~/.node-gpt-config.json` (or script the change, as sketched after this list).
- Update the URL and model name:
  - For OpenAI: use `https://api.openai.com/v1/chat/completions`
  - For DeepInfra: use `https://api.deepinfra.com/v1/inference/<model-name>`
- Set the appropriate API key environment variable.
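As an alternative to editing the file by hand, the same switch can be scripted. The sketch below rewrites the config to point at one of the DeepInfra models listed further down; the script is illustrative and not part of the CLI itself.

```js
// Sketch: point ~/.node-gpt-config.json at a DeepInfra model.
import { readFile, writeFile } from "node:fs/promises";
import { homedir } from "node:os";
import { join } from "node:path";

const path = join(homedir(), ".node-gpt-config.json");
const config = JSON.parse(await readFile(path, "utf8"));

config.url =
  "https://api.deepinfra.com/v1/inference/mistralai/Mixtral-8x7B-Instruct-v0.1";
config.model = "mistralai/Mixtral-8x7B-Instruct-v0.1";

await writeFile(path, JSON.stringify(config, null, 2) + "\n");
// Remember to export DEEPINFRA_API_KEY before the next run.
```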
Supported OpenAI models:

- `gpt-4-turbo-preview` (fastest, recommended)
- `gpt-4`
- `gpt-3.5-turbo`

Supported DeepInfra models:

- `meta-llama/Llama-2-70b-chat-hf`
- `mistralai/Mixtral-8x7B-Instruct-v0.1`
- `google/gemma-7b-it`
- Many more available at DeepInfra Models
Configuration parameters:

- `url`: API endpoint URL
- `model`: Model identifier
- `max_tokens`: Maximum response length (affects speed)
- `temperature`: Controls randomness (0 = focused, 1 = creative)
- `top_p`: Controls diversity (lower = faster)
- `frequency_penalty`: Reduces repetition (0 = faster)
- `presence_penalty`: Encourages new topics (0 = faster)
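As an illustration of these knobs, a config tuned for longer and more creative answers might look like the following. The values are examples only; the file format is the same as the defaults shown above.

```json
{
  "url": "https://api.openai.com/v1/chat/completions",
  "model": "gpt-4-turbo-preview",
  "max_tokens": 2048,
  "temperature": 0.9,
  "top_p": 0.9,
  "frequency_penalty": 0.2,
  "presence_penalty": 0.2
}
```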
Key features:

- Supports both OpenAI and DeepInfra APIs
- Fast response times with optimized settings
- Easy model switching
- Pipe support for integration with other tools
- Modern Node.js features and best practices
Run with debug logging:

```sh
DEBUG=true gpt4 "your prompt"
```
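As a point of reference, a `DEBUG=true` style flag in Node is commonly read from the environment as shown below. This is a generic pattern, not necessarily how node-gpt-cli implements its logging.

```js
// Sketch: gate verbose logging on DEBUG=true.
const DEBUG = process.env.DEBUG === "true";

function debugLog(...args) {
  if (DEBUG) console.error("[debug]", ...args);
}

debugLog("POST", "https://api.openai.com/v1/chat/completions");
```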
Test the configuration:

```sh
npm run test
```