Deep Logic

Deep Logic is a unified TypeScript CLI for deep research and AI reasoning chat capabilities, powered by Bun. It combines autonomous research agent functionality with interactive reasoning model support in a single, cohesive tool.

Note: This project consolidates the former "Deep Reasoning Suite" (Python TUI + Node.js research CLI) into a single, fast TypeScript/Bun application.


Features

  • Deep Research — Autonomous research agent using Google Gemini with DuckDuckGo web search integration
  • Reasoning Chat — Interactive terminal interface for IO Intelligence and Perplexity reasoning models
  • Session Management — Named sessions with persistence and analytics
  • Batch Processing — Queue multiple queries with progress tracking and resume capability
  • Analytics Dashboard — Token usage statistics and cost tracking
  • Vector Database — Semantic search over chat history using LanceDB
  • MCP Server — Expose curated knowledge to AI agents via Model Context Protocol
  • Markdown Rendering — Beautiful terminal output with syntax highlighting

Installation

Prerequisites: Bun v1.0+

# Clone the repository
git clone https://github.com/NocturnLabs/deep-logic.git
cd deep-logic

# Install dependencies
bun install

Quick Start

Interactive Chat Mode

Launch an interactive chat session with reasoning models:

bun start chat

Select a provider (IO Intelligence, Perplexity) and start asking questions with real-time streaming responses.
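
Under the hood, both providers expose OpenAI-compatible endpoints, so streaming reduces to iterating over chunks from the openai client. A minimal sketch, assuming the Perplexity endpoint and a Sonar model (not the project's actual chat service, which lives in src/services/chatService.ts):

import OpenAI from "openai";

// Streaming sketch against an OpenAI-compatible endpoint.
// The base URL and model here are illustrative assumptions.
const client = new OpenAI({
  apiKey: process.env.PERPLEXITY_API_KEY,
  baseURL: "https://api.perplexity.ai",
});

const stream = await client.chat.completions.create({
  model: "sonar-reasoning",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
  stream: true,
});

// Print tokens as they arrive for real-time output.
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}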

In-Session Commands

Command   Description
/model    Switch between available models
/clear    Clear conversation history
/quit     Exit the application

Deep Research

Perform autonomous deep research on any topic:

# Single query
bun start research "Impact of autonomous coding agents on software jobs"

# Interactive follow-ups
bun start research "History of the Roman Senate" --interactive

# Different output formats
bun start research "Quantum computing" --format markdown

Options:

  • --format: Output format (text, json, markdown)
  • --interactive: Enable follow-up questions after initial research
  • --session-name: Attach to a named persistent session
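
Conceptually, one research iteration amounts to searching the web and grounding a Gemini prompt in the results. A minimal sketch using @google/generative-ai and duck-duck-scrape (the prompt shape and helper are illustrative assumptions, not the project's actual internals):

import { GoogleGenerativeAI } from "@google/generative-ai";
import { search } from "duck-duck-scrape";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");
const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });

// One research pass: search DuckDuckGo, then ground Gemini in the snippets.
async function researchOnce(query: string): Promise<string> {
  const { results } = await search(query);
  const sources = results
    .slice(0, 5)
    .map((r) => `- ${r.title}: ${r.description} (${r.url})`)
    .join("\n");
  const prompt = `Answer using these sources.\n\nQuestion: ${query}\n\nSources:\n${sources}`;
  const result = await model.generateContent(prompt);
  return result.response.text();
}

console.log(await researchOnce("Impact of autonomous coding agents on software jobs"));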

Batch Processing

Process multiple research queries from a file:

bun start batch data/questions/research.txt --delay 2000 --resume

Tip: The --resume flag checks for a .progress.json file and skips already completed queries, making it safe to restart long-running jobs.
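
As a sketch of how resume can work (the .progress.json shape, a plain JSON array of completed queries, is an assumption):

// Resume sketch: record each completed query, skip it on restart.
const progressPath = "data/questions/research.txt.progress.json";

const progressFile = Bun.file(progressPath);
const done: string[] = (await progressFile.exists()) ? await progressFile.json() : [];

const queries = (await Bun.file("data/questions/research.txt").text())
  .split("\n")
  .filter((q) => q.trim().length > 0);

// Placeholder for the per-query research call.
async function runResearch(query: string): Promise<void> {}

for (const query of queries) {
  if (done.includes(query)) continue; // already completed on a previous run
  await runResearch(query);
  done.push(query);
  await Bun.write(progressPath, JSON.stringify(done, null, 2));
  await Bun.sleep(2000); // respects the --delay between queries
}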

Analytics Dashboard

View usage costs and token statistics:

bun start analytics

Example Output:

=== Analytics Dashboard ===

Usage Statistics:
  Total Queries: 142
  Total Tokens: 4,251,000
  Error Rate: 1.40%

Token Analytics:
  Average Tokens per Query: 29,936

Cost Breakdown:
  Total Cost: $4.25 USD
  Cost per Query: $0.0299 USD
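
The headline figures are simple aggregates over logged queries; in the example above, 4,251,000 tokens / 142 queries ≈ 29,936 tokens per query and $4.25 / 142 ≈ $0.0299 per query. A rough sketch of the computation, assuming a record shape (not the project's actual log format):

// Aggregation sketch. The QueryRecord fields are assumptions.
interface QueryRecord {
  tokens: number;
  costUsd: number;
  error: boolean;
}

function summarize(records: QueryRecord[]) {
  const totalQueries = records.length;
  const totalTokens = records.reduce((sum, r) => sum + r.tokens, 0);
  const totalCost = records.reduce((sum, r) => sum + r.costUsd, 0);
  const errors = records.filter((r) => r.error).length;
  return {
    totalQueries,
    totalTokens,
    errorRate: ((100 * errors) / totalQueries).toFixed(2) + "%",
    avgTokensPerQuery: Math.round(totalTokens / totalQueries),
    costPerQuery: "$" + (totalCost / totalQueries).toFixed(4),
  };
}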

Vector Database

Manage the semantic search database:

# Convert chat history to vectors
bun start vectors convert

# Show database statistics
bun start vectors stats

# Run a semantic search
bun start vectors query "your search term"
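
A query amounts to embedding the search term and running a nearest-neighbour lookup. A minimal sketch with @lancedb/lancedb and ollama (the database path, table name, and embedding model are illustrative assumptions):

import * as lancedb from "@lancedb/lancedb";
import ollama from "ollama";

// Connect to the vector store; path and table name are assumptions.
const db = await lancedb.connect("./data/lancedb");
const table = await db.openTable("chat_history");

// Embed the search term locally via Ollama.
const { embedding } = await ollama.embeddings({
  model: "nomic-embed-text",
  prompt: "your search term",
});

// Nearest-neighbour search over the stored chat embeddings.
const hits = await table.search(embedding).limit(5).toArray();
for (const hit of hits) console.log(hit);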

MCP Server

Deep Logic includes an MCP server that exposes the curated vector database to AI agents. SQLite is used only as a staging area; the LanceDB vector database is the source of truth for semantic search.

Running the Server

bun run mcp

Tools Provided

Tool                    Description
search_chat_history     Semantic search using vector embeddings
get_recent_chats        Get most recent chat records
get_chat_stats          Get token usage and statistics
get_chat_by_id          Get a specific chat record
get_performance_stats   Get server uptime and performance metrics
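
For reference, registering a tool like search_chat_history with @modelcontextprotocol/sdk looks roughly like this (the handler body is a placeholder assumption; the real server backs it with the vector database):

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "deep-logic-chat-history", version: "1.0.0" });

// Expose one tool; the handler would normally query LanceDB.
server.tool(
  "search_chat_history",
  "Semantic search using vector embeddings",
  { query: z.string(), limit: z.number().optional() },
  async ({ query }) => {
    const results = `Placeholder results for "${query}"`;
    return { content: [{ type: "text", text: results }] };
  }
);

// Serve over stdio so MCP clients like Claude Desktop can connect.
await server.connect(new StdioServerTransport());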

Configuration for Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "deep-logic-chat-history": {
      "command": "bun",
      "args": ["run", "/path/to/deep-logic/src/mcp-server.ts"]
    }
  }
}

Configuration

Create config/config.yaml:

api:
  # Gemini API key (or use GEMINI_API_KEY env var)
  # apiKey: "your-gemini-api-key"
  
  # Model to use for research
  model: "gemini-2.0-flash"
  
  # Generation parameters
  temperature: 0.7
  maxOutputTokens: 8192
  searchEnabled: true

providers:
  io:
    apiKey: ${IO_INTELLIGENCE_API_KEY}
    models:
      - deepseek-ai/DeepSeek-V3.2
      - deepseek-ai/DeepSeek-R1-0528
  perplexity:
    apiKey: ${PERPLEXITY_API_KEY}
    models:
      - sonar-reasoning
      - sonar-reasoning-pro

logging:
  directory: ./data
  format: json
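
The ${VAR} placeholders imply environment-variable substitution at load time. A minimal sketch of that expansion, assuming the yaml package as the parser (it is not listed in the dependencies below, so treat it as an illustration):

import { parse } from "yaml";

// Load the raw config and expand ${VAR} placeholders from the environment.
const raw = await Bun.file("config/config.yaml").text();
const expanded = raw.replace(/\$\{(\w+)\}/g, (_, name) => process.env[name] ?? "");

const config = parse(expanded);
console.log(config.providers.io.models);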

Environment Variables

Variable                  Description
GEMINI_API_KEY            Google Gemini API key for deep research
IO_INTELLIGENCE_API_KEY   IO Intelligence API key for DeepSeek models
PERPLEXITY_API_KEY        Perplexity API key for Sonar models

Project Structure

deep-logic/
├── src/
│   ├── index.ts           # CLI entry point (Commander.js)
│   ├── mcp-server.ts      # MCP Server entry point
│   ├── commands/          # CLI command handlers
│   │   ├── chat.ts        # Interactive chat command
│   │   ├── research.ts    # Deep research command
│   │   ├── batch.ts       # Batch processing command
│   │   └── analytics.ts   # Analytics dashboard
│   ├── services/          # Business logic
│   │   ├── chatService.ts # Chat orchestration
│   │   └── researchService.ts # Research orchestration
│   ├── providers/         # LLM provider configurations
│   ├── ui/                # Terminal UI components (ora, boxen, chalk)
│   └── database/          # SQLite operations
├── config/                # Configuration files
├── data/                  # Databases and runtime data
├── tests/                 # Test files (Bun test)
├── package.json
└── tsconfig.json

Development

# Run in watch mode
bun dev

# Run tests
bun test

# Run tests with coverage
bun test:coverage

# Lint
bun lint

# Format
bun format

Technical Details

Dependencies

Package                     Purpose
@google/generative-ai       Google Gemini API client
openai                      OpenAI-compatible API client (IO Intelligence, Perplexity)
commander                   CLI framework
@inquirer/prompts           Interactive prompts
chalk                       Terminal styling
ora                         Spinners
boxen                       Styled boxes
marked + marked-terminal    Markdown rendering
cli-table3                  Table formatting
duck-duck-scrape            DuckDuckGo search integration
@lancedb/lancedb            Vector database for semantic search
ollama                      Embedding generation client
@modelcontextprotocol/sdk   MCP server implementation

Security

  • Input Validation: Strict regex sanitization blocks XSS and command-injection patterns (see the sketch after this list)
  • Rate Limiting: Configurable delays in batch mode prevent API bans
  • Query Length Limits: Maximum query length enforced to prevent abuse
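
A minimal sketch of what such validation can look like (the patterns and the length limit are illustrative assumptions, not the project's actual rules):

// Input validation sketch: reject over-long queries and common
// injection markers. Patterns and the 2000-char cap are assumptions.
const MAX_QUERY_LENGTH = 2000;

const BLOCKED_PATTERNS = [
  /<script\b/i, // XSS
  /[;&|`$]/,    // shell command-injection metacharacters
  /\.\.\//,     // path traversal
];

function validateQuery(query: string): string {
  if (query.length > MAX_QUERY_LENGTH) {
    throw new Error(`Query exceeds ${MAX_QUERY_LENGTH} characters`);
  }
  for (const pattern of BLOCKED_PATTERNS) {
    if (pattern.test(query)) {
      throw new Error("Query contains a blocked pattern");
    }
  }
  return query.trim();
}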

License

MIT