Deep Logic is a unified TypeScript CLI for deep research and AI reasoning chat capabilities, powered by Bun. It combines autonomous research agent functionality with interactive reasoning model support in a single, cohesive tool.
> [!NOTE]
> This project consolidates the former "Deep Reasoning Suite" (Python TUI + Node.js research CLI) into a single, fast TypeScript/Bun application.
Prerequisites: Bun v1.0+
# Clone the repository
git clone https://github.com/NocturnLabs/deep-logic.git
cd deep-logic
# Install dependencies
bun install
Launch an interactive chat session with reasoning models:
bun start chat
Select a provider (IO Intelligence or Perplexity) and start asking questions; responses stream in real time.
| Command | Description |
|---|---|
| `/model` | Switch between available models |
| `/clear` | Clear conversation history |
| `/quit` | Exit the application |
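Both providers are reached through the OpenAI-compatible `openai` client, so streaming reduces to iterating over response chunks. A minimal sketch, assuming the Perplexity endpoint and a Sonar model; error handling is omitted:

```typescript
import OpenAI from "openai";

// Perplexity exposes an OpenAI-compatible endpoint; IO Intelligence works the
// same way with its own base URL and API key.
const client = new OpenAI({
  apiKey: process.env.PERPLEXITY_API_KEY,
  baseURL: "https://api.perplexity.ai",
});

const stream = await client.chat.completions.create({
  model: "sonar-reasoning",
  messages: [{ role: "user", content: "Explain chain-of-thought prompting." }],
  stream: true,
});

// Print tokens as they arrive for the real-time streaming effect.
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```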
Perform autonomous deep research on any topic:
# Single query
bun start research "Impact of autonomous coding agents on software jobs"
# Interactive follow-ups
bun start research "History of the Roman Senate" --interactive
# Different output formats
bun start research "Quantum computing" --format markdown
Options for `bun start research`:

- `--format`: Output format (`text`, `json`, `markdown`)
- `--interactive`: Enable follow-up questions after the initial research
- `--session-name`: Attach to a named persistent session

Process multiple research queries from a file:
bun start batch data/questions/research.txt --delay 2000 --resume
> [!TIP]
> The `--resume` flag checks for a `.progress.json` file and skips already completed queries, making it safe to restart long-running jobs.
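A sketch of how such a resume check can work; the progress-file shape (a `completed` array) is an assumption:

```typescript
import { existsSync } from "node:fs";

// Hypothetical progress-file shape: the queries already answered.
type Progress = { completed: string[] };

async function pendingQueries(file: string): Promise<string[]> {
  const queries = (await Bun.file(file).text())
    .split("\n")
    .map((q) => q.trim())
    .filter(Boolean);

  const progressPath = `${file}.progress.json`;
  if (!existsSync(progressPath)) return queries;

  // Skip anything recorded as completed in a previous run.
  const progress: Progress = await Bun.file(progressPath).json();
  return queries.filter((q) => !progress.completed.includes(q));
}
```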
View usage costs and token statistics:
bun start analytics
Example Output:
=== Analytics Dashboard ===
Usage Statistics:
Total Queries: 142
Total Tokens: 4,251,000
Error Rate: 1.40%
Token Analytics:
Average Tokens per Query: 29,936
Cost Breakdown:
Total Cost: $4.25 USD
Cost per Query: $0.0299 USD
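Figures like these can be produced with one aggregate query over the logged records. A sketch using Bun's built-in SQLite driver; the database path, table, and column names are assumptions:

```typescript
import { Database } from "bun:sqlite";

// Hypothetical schema: one row per query with token and cost columns.
const db = new Database("data/chats.db", { readonly: true });

const stats = db
  .query(
    `SELECT COUNT(*)          AS totalQueries,
            SUM(total_tokens) AS totalTokens,
            SUM(cost_usd)     AS totalCost
     FROM chats`,
  )
  .get() as { totalQueries: number; totalTokens: number; totalCost: number };

console.log(`Average Tokens per Query: ${Math.round(stats.totalTokens / stats.totalQueries)}`);
console.log(`Cost per Query: $${(stats.totalCost / stats.totalQueries).toFixed(4)} USD`);
```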
Manage the semantic search database:
# Convert chat history to vectors
bun start vectors convert
# Stats
bun start vectors stats
# Query
bun start vectors query "your search term"
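Under the hood, a `vectors query` pairs an Ollama-generated embedding with a LanceDB nearest-neighbor search. A minimal sketch, assuming a `chat_history` table and the `nomic-embed-text` embedding model:

```typescript
import * as lancedb from "@lancedb/lancedb";
import ollama from "ollama";

// Embed the search term locally via Ollama (model choice is illustrative).
const { embedding } = await ollama.embeddings({
  model: "nomic-embed-text",
  prompt: "your search term",
});

// Nearest-neighbor search over the stored chat vectors.
const db = await lancedb.connect("data/lancedb");
const table = await db.openTable("chat_history");
const hits = await table.search(embedding).limit(5).toArray();

for (const hit of hits) console.log(hit);
```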
Deep Logic includes an MCP server that exposes the curated vector database to AI agents. SQLite is used only as a staging layer; the LanceDB vector store is the source of truth for semantic search.
bun run mcp
| Tool | Description |
|---|---|
| `search_chat_history` | Semantic search using vector embeddings |
| `get_recent_chats` | Get the most recent chat records |
| `get_chat_stats` | Get token usage and statistics |
| `get_chat_by_id` | Get a specific chat record |
| `get_performance_stats` | Get server uptime and performance metrics |
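A sketch of how a tool such as `search_chat_history` can be registered with `@modelcontextprotocol/sdk`; the `searchVectors` helper and the `zod` schemas here are illustrative:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "deep-logic-chat-history", version: "1.0.0" });

server.tool(
  "search_chat_history",
  "Semantic search using vector embeddings",
  { query: z.string(), limit: z.number().optional() },
  async ({ query, limit }) => {
    // Illustrative: embed the query and search the vector store (see above).
    const results = await searchVectors(query, limit ?? 5); // hypothetical helper
    return { content: [{ type: "text", text: JSON.stringify(results) }] };
  },
);

// Serve over stdio so desktop MCP clients can spawn the process directly.
await server.connect(new StdioServerTransport());
```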
Add to your `claude_desktop_config.json`:
{
"mcpServers": {
"deep-logic-chat-history": {
"command": "bun",
"args": ["run", "/path/to/deep-logic/src/mcp-server.ts"]
}
}
}
Create `config/config.yaml`:
api:
# Gemini API key (or use GEMINI_API_KEY env var)
# apiKey: "your-gemini-api-key"
# Model to use for research
model: "gemini-2.0-flash"
# Generation parameters
temperature: 0.7
maxOutputTokens: 8192
searchEnabled: true
providers:
io:
apiKey: ${IO_INTELLIGENCE_API_KEY}
models:
- deepseek-ai/DeepSeek-V3.2
- deepseek-ai/DeepSeek-R1-0528
perplexity:
apiKey: ${PERPLEXITY_API_KEY}
models:
- sonar-reasoning
- sonar-reasoning-pro
logging:
directory: ./data
format: json
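The `${VAR}` placeholders imply env-var interpolation when the config is loaded. A sketch of one way to do that, assuming the `yaml` package for parsing:

```typescript
import { parse } from "yaml";

// Replace ${VAR} placeholders with environment values before parsing.
async function loadConfig(path: string) {
  const raw = await Bun.file(path).text();
  const expanded = raw.replace(
    /\$\{(\w+)\}/g,
    (_, name) => process.env[name] ?? "",
  );
  return parse(expanded);
}

const config = await loadConfig("config/config.yaml");
console.log(config.providers.io.models);
```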
| Variable | Description |
|---|---|
| `GEMINI_API_KEY` | Google Gemini API key for deep research |
| `IO_INTELLIGENCE_API_KEY` | IO Intelligence API key for DeepSeek models |
| `PERPLEXITY_API_KEY` | Perplexity API key for Sonar models |
deep-logic/
├── src/
│ ├── index.ts # CLI entry point (Commander.js)
│ ├── mcp-server.ts # MCP Server entry point
│ ├── commands/ # CLI command handlers
│ │ ├── chat.ts # Interactive chat command
│ │ ├── research.ts # Deep research command
│ │ ├── batch.ts # Batch processing command
│ │ └── analytics.ts # Analytics dashboard
│ ├── services/ # Business logic
│ │ ├── chatService.ts # Chat orchestration
│ │ └── researchService.ts # Research orchestration
│ ├── providers/ # LLM provider configurations
│ ├── ui/ # Terminal UI components (ora, boxen, chalk)
│ └── database/ # SQLite operations
├── config/ # Configuration files
├── data/ # Databases and runtime data
├── tests/ # Test files (Bun test)
├── package.json
└── tsconfig.json
# Run in watch mode
bun dev
# Run tests
bun test
# Run tests with coverage
bun test:coverage
# Lint
bun lint
# Format
bun format
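Tests use Bun's built-in runner (`bun:test`). A minimal, hypothetical example of a file under `tests/`:

```typescript
import { describe, expect, test } from "bun:test";

describe("env interpolation", () => {
  test("replaces ${VAR} placeholders", () => {
    process.env.DEMO_KEY = "abc123";
    const raw = "apiKey: ${DEMO_KEY}";
    const expanded = raw.replace(/\$\{(\w+)\}/g, (_, n) => process.env[n] ?? "");
    expect(expanded).toBe("apiKey: abc123");
  });
});
```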
| Package | Purpose |
|---|---|
| `@google/generative-ai` | Google Gemini API client |
| `openai` | OpenAI-compatible API client (IO Intelligence, Perplexity) |
| `commander` | CLI framework |
| `@inquirer/prompts` | Interactive prompts |
| `chalk` | Terminal styling |
| `ora` | Spinners |
| `boxen` | Styled boxes |
| `marked` + `marked-terminal` | Markdown rendering |
| `cli-table3` | Table formatting |
| `duck-duck-scrape` | DuckDuckGo search integration |
| `@lancedb/lancedb` | Vector database for semantic search |
| `ollama` | Embedding generation client |
| `@modelcontextprotocol/sdk` | MCP server implementation |
MIT