<div align="center">
  <h1>🧠 NeuroLink</h1>
  <p><strong>The Enterprise AI SDK for Production Applications</strong></p>
  <p>13 Providers | 58+ MCP Tools | HITL Security | Redis Persistence</p>
</div>

Enterprise AI development platform with unified provider access, production-ready tooling, and an opinionated factory architecture. NeuroLink ships as both a TypeScript SDK and a professional CLI, so teams can build, operate, and iterate on AI features quickly.

## What is NeuroLink?

**NeuroLink is the universal AI integration platform that unifies 13 major AI providers and 100+ models under one consistent API.**

Extracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you're building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 13 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.

**Why NeuroLink?** Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via the professional CLI or the TypeScript SDK, whichever fits your workflow.

**Where we're headed:** We're building for the future of AI: edge-first execution and continuous streaming architectures that make AI practically free and universally available.
**[Read our vision →](docs/about/vision.md)**

**[Get Started in <5 Minutes →](docs/getting-started/quick-start.md)**

---

## What's New (Q1 2026)

| Feature | Version | Description | Guide |
| --- | --- | --- | --- |
| **Context Window Management** | v9.2.0 | 4-stage compaction pipeline with auto-detection, budget gate at 80% usage, per-provider token estimation | [Context Compaction Guide](docs/features/context-compaction.md) |
| **Tool Execution Control** | v9.3.0 | `prepareStep` and `toolChoice` support for per-step tool enforcement in multi-step agentic loops. API-level control over tool calls. | [API Reference](docs/api/type-aliases/GenerateOptions.md#preparestep) |
| **File Processor System** | v9.1.0 | 17+ file type processors with ProcessorRegistry, security sanitization, SVG text injection | [File Processors Guide](docs/features/file-processors.md) |
| **RAG with generate()/stream()** | v9.2.0 | Pass `rag: { files }` to generate/stream for automatic document chunking, embedding, and AI-powered search. 10 chunking strategies, hybrid search, reranking. | [RAG Guide](docs/features/rag.md) |
| **External TracerProvider Support** | v8.43.0 | Integrate NeuroLink with existing OpenTelemetry instrumentation. Prevents duplicate registration conflicts. | [Observability Guide](docs/features/observability.md) |
| **Server Adapters** | v8.43.0 | Multi-framework HTTP server with Hono, Express, Fastify, and Koa support. Full CLI for server management with foreground/background modes. | [Server Adapters Guide](docs/guides/server-adapters/index.md) |
| **Title Generation Events** | v8.38.0 | Emits a `conversation:titleGenerated` event when a conversation title is generated. Supports custom title prompts via `NEUROLINK_TITLE_PROMPT`. | [Conversation Memory Guide](docs/conversation-memory.md) |
| **Video Generation with Veo** | v8.32.0 | Realistic video generation using Veo 3.1 (`veo-3.1`) with extensive parameter options | [Video Generation Guide](docs/features/video-generation.md) |
| **Image Generation with Gemini** | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (`imagen-3.0-generate-002`). High-quality image synthesis directly from Google AI. | [Image Generation Guide](docs/image-generation-streaming.md) |
| **HTTP/Streamable HTTP Transport** | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting. | [HTTP Transport Guide](docs/mcp-http-transport.md) |

- **External TracerProvider Support**: Integrate NeuroLink with applications that already have OpenTelemetry instrumentation. Supports auto-detection and manual configuration. → [Observability Guide](docs/features/observability.md)
- **Server Adapters**: Deploy NeuroLink as an HTTP API server with your framework of choice (Hono, Express, Fastify, Koa). Full CLI support with `serve` and `server` commands for foreground/background modes, route management, and OpenAPI generation. → [Server Adapters Guide](docs/guides/server-adapters/index.md)
- **Title Generation Events**: Emit real-time events when conversation titles are auto-generated. Listen to `conversation:titleGenerated` for session tracking. → [Conversation Memory Guide](docs/conversation-memory.md#title-generation-events)
- **Custom Title Prompts**: Customize conversation title generation with the `NEUROLINK_TITLE_PROMPT` environment variable. Use the `${userMessage}` placeholder for dynamic prompts.
→ [Conversation Memory Guide](docs/conversation-memory.md#customizing-the-title-prompt)
- **Video Generation**: Transform images into 8-second videos with synchronized audio using Google Veo 3.1 via Vertex AI. Supports 720p/1080p resolutions and portrait/landscape aspect ratios. → [Video Generation Guide](docs/features/video-generation.md)
- **Image Generation**: Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. Supports streaming mode with automatic file saving. → [Image Generation Guide](docs/image-generation-streaming.md)
- **RAG with generate()/stream()**: Just pass `rag: { files: ["./docs/guide.md"] }` to `generate()` or `stream()`. NeuroLink auto-chunks, embeds, and creates a search tool the AI can invoke. 10 chunking strategies, hybrid search, 5 reranker types. → [RAG Guide](docs/features/rag.md)
- **HTTP/Streamable HTTP Transport for MCP**: Connect to remote MCP servers via HTTP with authentication headers, retry logic, and rate limiting. → [HTTP Transport Guide](docs/mcp-http-transport.md)
- **Gemini 3 Preview Support**: Full support for `gemini-3-flash-preview` and `gemini-3-pro-preview` with extended thinking capabilities
- **Tool Execution Control**: Use `prepareStep` to enforce specific tool calls or change the LLM model per step in multi-step agentic executions; this prevents LLMs from skipping required tools. Use `toolChoice` for static control, or `prepareStep` for dynamic per-step logic. → [GenerateOptions Reference](docs/api/type-aliases/GenerateOptions.md#preparestep)
- **Structured Output with Zod Schemas**: Type-safe JSON generation with automatic validation using `schema` + `output.format: "json"` in `generate()`. → [Structured Output Guide](docs/features/structured-output.md)
- **CSV File Support**: Attach CSV files to prompts for AI-powered data analysis with auto-detection.
→ [CSV Guide](docs/features/multimodal-chat.md#csv-file-support)
- **PDF File Support**: Process PDF documents with native visual analysis on Vertex AI, Anthropic, Bedrock, and AI Studio. → [PDF Guide](docs/features/pdf-support.md)
- **50+ File Types**: Process Excel, Word, RTF, JSON, YAML, XML, HTML, SVG, Markdown, and 50+ code languages with intelligent content extraction. → [File Processors Guide](docs/features/file-processors.md)
- **LiteLLM Integration**: Access 100+ AI models from all major providers through a unified interface. → [Setup Guide](docs/litellm-integration.md)
- **SageMaker Integration**: Deploy and use custom-trained models on AWS infrastructure. → [Setup Guide](docs/sagemaker-integration.md)
- **OpenRouter Integration**: Access 300+ models from OpenAI, Anthropic, Google, Meta, and more through a single unified API. → [Setup Guide](docs/getting-started/providers/openrouter.md)
- **Human-in-the-loop workflows**: Pause generation for user approval/input before tool execution. → [HITL Guide](docs/features/hitl.md)
- **Guardrails middleware**: Block PII, profanity, and unsafe content with built-in filtering. → [Guardrails Guide](docs/features/guardrails.md)
- **Context summarization**: Automatic conversation compression for long-running sessions. → [Summarization Guide](docs/context-summarization.md)
- **Redis conversation export**: Export full session history as JSON for analytics and debugging.
→ [History Guide](docs/features/conversation-history.md)

```typescript
// Image Generation with Gemini (v8.31.0)
const image = await neurolink.generateImage({
  prompt: "A futuristic cityscape",
  provider: "google-ai",
  model: "imagen-3.0-generate-002",
});

// HTTP Transport for Remote MCP (v8.29.0)
await neurolink.addExternalMCPServer("remote-tools", {
  transport: "http",
  url: "https://mcp.example.com/v1",
  headers: { Authorization: "Bearer token" },
  retries: 3,
  timeout: 15000,
});
```

---

<details>
<summary><strong>Previous Updates (Q4 2025)</strong></summary>

- **Image Generation**: Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. → [Guide](docs/image-generation-streaming.md)
- **Gemini 3 Preview Support**: Full support for `gemini-3-flash-preview` and `gemini-3-pro-preview` with extended thinking
- **Structured Output with Zod Schemas**: Type-safe JSON generation with automatic validation. → [Guide](docs/features/structured-output.md)
- **CSV & PDF File Support**: Attach CSV/PDF files to prompts with auto-detection. → [CSV](docs/features/multimodal-chat.md#csv-file-support) | [PDF](docs/features/pdf-support.md)
- **LiteLLM & SageMaker**: Access 100+ models via LiteLLM, deploy custom models on SageMaker. → [LiteLLM](docs/litellm-integration.md) | [SageMaker](docs/sagemaker-integration.md)
- **OpenRouter Integration**: Access 300+ models through a single unified API. → [Guide](docs/getting-started/providers/openrouter.md)
- **HITL & Guardrails**: Human-in-the-loop approval workflows and content-filtering middleware. → [HITL](docs/features/hitl.md) | [Guardrails](docs/features/guardrails.md)
- **Redis & Context Management**: Session export, conversation history, and automatic summarization.
→ [History](docs/features/conversation-history.md)

</details>

## Enterprise Security: Human-in-the-Loop (HITL)

NeuroLink includes a **production-ready HITL system** for regulated industries and high-stakes AI operations:

| Capability | Description | Use Case |
| --- | --- | --- |
| **Tool Approval Workflows** | Require human approval before the AI executes sensitive tools | Financial transactions, data modifications |
| **Output Validation** | Route AI outputs through human review pipelines | Medical diagnosis, legal documents |
| **Confidence Thresholds** | Automatically trigger human review below a confidence level | Critical business decisions |
| **Complete Audit Trail** | Full audit logging for compliance (HIPAA, SOC2, GDPR) | Regulated industries |

```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  hitl: {
    enabled: true,
    requireApproval: ["writeFile", "executeCode", "sendEmail"],
    confidenceThreshold: 0.85,
    reviewCallback: async (action, context) => {
      // Custom review logic - integrate with your approval system
      return await yourApprovalSystem.requestReview(action);
    },
  },
});

// AI pauses for human approval before executing sensitive tools
const result = await neurolink.generate({
  input: { text: "Send quarterly report to stakeholders" },
});
```

**[Enterprise HITL Guide](docs/features/enterprise-hitl.md)** | **[Quick Start](docs/features/hitl.md)**

## Get Started in Two Steps

```bash
# 1. Run the interactive setup wizard (select providers, validate keys)
pnpm dlx @juspay/neurolink setup

# 2. Start generating with automatic provider selection
npx @juspay/neurolink generate "Write a launch plan for multimodal chat"
```

Need a persistent workspace?
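The HTTP MCP transport shown earlier exposes `retries` and `timeout` options and retries with exponential backoff. As a minimal sketch of how such a retry schedule works (the helper name and default delays here are illustrative assumptions, not the SDK's internals):

```typescript
// Illustrative backoff schedule: the delay doubles on each retry attempt,
// capped at a maximum. NeuroLink derives this internally from `retries`.
function backoffDelays(retries: number, baseMs = 500, maxMs = 8000): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < retries; attempt++) {
    delays.push(Math.min(baseMs * 2 ** attempt, maxMs)); // 500, 1000, 2000, ...
  }
  return delays;
}
```

With `retries: 3` as in the earlier example, a client following this schedule would wait roughly 500 ms, 1 s, and 2 s between attempts.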
Launch loop mode with `npx @juspay/neurolink loop` - [Learn more →](docs/features/cli-loop-sessions.md)

## Complete Feature Set

NeuroLink is a comprehensive AI development platform. Every feature below is production-ready and fully documented.

### AI Provider Integration

**13 providers unified under one API** – switch providers with a single parameter change.

| Provider | Models | Free Tier | Tool Support | Status | Documentation |
| --- | --- | --- | --- | --- | --- |
| **OpenAI** | GPT-4o, GPT-4o-mini, o1 | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#openai) |
| **Anthropic** | Claude 4.5 Opus/Sonnet/Haiku, Claude 4 Opus/Sonnet | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#anthropic) |
| **Google AI Studio** | Gemini 3 Flash/Pro, Gemini 2.5 Flash/Pro | ✅ Free Tier | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#google-ai) |
| **AWS Bedrock** | Claude, Titan, Llama, Nova | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#bedrock) |
| **Google Vertex** | Gemini 3/2.5 (gemini-3-\*-preview) | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#vertex) |
| **Azure OpenAI** | GPT-4, GPT-4o, o1 | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#azure) |
| **LiteLLM** | 100+ models unified | Varies | ✅ Full | ✅ Production | [Setup Guide](docs/litellm-integration.md) |
| **AWS SageMaker** | Custom deployed models | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/sagemaker-integration.md) |
| **Mistral AI** | Mistral Large, Small | ✅ Free Tier | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#mistral) |
| **Hugging Face** | 100,000+ models | ✅ Free | ⚠️ Partial | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#huggingface) |
| **Ollama** | Local models (Llama, Mistral) | ✅ Free (Local) | ⚠️ Partial | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#ollama) |
| **OpenAI Compatible** | Any OpenAI-compatible endpoint | Varies | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#openai-compatible) |
| **OpenRouter** | 300+ models via OpenRouter | Varies | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/providers/openrouter.md) |

**[Provider Comparison Guide](docs/reference/provider-comparison.md)** – Detailed feature matrix and selection criteria
**[Provider Feature Compatibility](docs/reference/provider-feature-compatibility.md)** – Test-based compatibility reference for all 19 features across 13 providers

---

### Built-in Tools & MCP Integration

**6 Core Tools** (work across all providers, zero configuration):

| Tool | Purpose | Auto-Available | Documentation |
| --- | --- | --- | --- |
| `getCurrentTime` | Real-time clock access | ✅ | [Tool Reference](docs/sdk/custom-tools.md) |
| `readFile` | File system reading | ✅ | [Tool Reference](docs/sdk/custom-tools.md) |
| `writeFile` | File system writing | ✅ | [Tool Reference](docs/sdk/custom-tools.md) |
| `listDirectory` | Directory listing | ✅ | [Tool Reference](docs/sdk/custom-tools.md) |
| `calculateMath` | Mathematical operations | ✅ | [Tool Reference](docs/sdk/custom-tools.md) |
| `websearchGrounding` | Google Vertex web search | ⚠️ Requires credentials | [Tool Reference](docs/sdk/custom-tools.md) |

**58+ External MCP Servers** supported (GitHub, PostgreSQL, Google Drive, Slack, and more):

```typescript
// stdio transport - local MCP servers via command execution
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// HTTP transport - remote MCP servers via URL
await neurolink.addExternalMCPServer("github-copilot", {
  transport: "http",
  url: "https://api.githubcopilot.com/mcp",
  headers: { Authorization: "Bearer YOUR_COPILOT_TOKEN" },
  timeout: 15000,
  retries: 5,
});

// Tools automatically available to AI
const result = await neurolink.generate({
  input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
});
```

**MCP Transport Options:**

| Transport | Use Case | Key Features |
| --- | --- | --- |
| `stdio` | Local servers | Command execution, environment variables |
| `http` | Remote servers | URL-based, auth headers, retries, rate limiting |
| `sse` | Event streams | Server-Sent Events, real-time updates |
| `websocket` | Bi-directional | Full-duplex communication |

**[MCP Integration Guide](docs/advanced/mcp-integration.md)** – Set up external servers
**[HTTP Transport Guide](docs/mcp-http-transport.md)** – Remote MCP server configuration

---

### Developer Experience Features

**SDK-first design** with TypeScript, IntelliSense, and type safety:

| Feature | Description | Documentation |
| --- | --- | --- |
| **Auto Provider Selection** | Intelligent provider fallback | [SDK Guide](docs/sdk/index.md#auto-selection) |
| **Streaming Responses** | Real-time token streaming | [Streaming Guide](docs/advanced/streaming.md) |
| **Conversation Memory** | Automatic context management | [Memory Guide](docs/sdk/index.md#memory) |
| **Full Type Safety** | Complete TypeScript types | [Type Reference](docs/sdk/api-reference.md) |
| **Error Handling** | Graceful provider fallback | [Error Guide](docs/reference/troubleshooting.md) |
| **Analytics & Evaluation** | Usage tracking, quality scores | [Analytics Guide](docs/advanced/analytics.md) |
| **Middleware System** | Request/response hooks | [Middleware Guide](docs/custom-middleware-guide.md) |
| **Framework Integration** | Next.js, SvelteKit, Express | [Framework Guides](docs/sdk/framework-integration.md) |
| **Extended Thinking** | Native thinking/reasoning mode for Gemini 3 and Claude models | [Thinking Guide](docs/features/thinking-configuration.md) |
| **RAG Document Processing** | `rag: { files }` on generate/stream with 10 chunking strategies and hybrid search | [RAG Guide](docs/features/rag.md) |

---

### Multimodal & File Processing

**17+ file categories supported** (50+ total file types, including code languages) with intelligent content extraction and provider-agnostic processing:

| Category | Supported Types | Processing |
| --- | --- | --- |
| **Documents** | Excel (`.xlsx`, `.xls`), Word (`.docx`), RTF, OpenDocument | Sheet extraction, text extraction |
| **Data** | JSON, YAML, XML | Validation, syntax highlighting |
| **Markup** | HTML, SVG, Markdown, Text | OWASP-compliant sanitization |
| **Code** | 50+ languages (TypeScript, Python, Java, Go, etc.) | Language detection, syntax metadata |
| **Config** | `.env`, `.ini`, `.toml`, `.cfg` | Secure parsing |
| **Media** | Images (PNG, JPEG, WebP, GIF), PDFs, CSV | Provider-specific formatting |

```typescript
// Process any supported file type
const result = await neurolink.generate({
  input: {
    text: "Analyze this data and code",
    files: [
      "./data.xlsx", // Excel spreadsheet
      "./config.yaml", // YAML configuration
      "./diagram.svg", // SVG (injected as sanitized text)
      "./main.py", // Python source code
    ],
  },
});

// CLI: Use --file for any supported type
// neurolink generate "Analyze this" --file ./report.xlsx --file ./config.json
```

**Key Features:**

- **ProcessorRegistry** – Priority-based processor selection with fallback
- **OWASP Security** – HTML/SVG sanitization prevents XSS attacks
- **Auto-detection** – FileDetector identifies file types by extension and content
- **Provider-agnostic** – All processors work across all 13 AI providers

**[File Processors Guide](docs/features/file-processors.md)** – Complete reference for all file types

---

### Enterprise & Production Features

**Production-ready capabilities for regulated industries:**

| Feature | Description | Use Case | Documentation |
| --- | --- | --- | --- |
| **Enterprise Proxy** | Corporate proxy support | Behind firewalls | [Proxy Setup](docs/enterprise-proxy-setup.md) |
| **Redis Memory** | Distributed conversation state | Multi-instance deployment | [Redis Guide](docs/getting-started/provider-setup.md#redis) |
| **Cost Optimization** | Automatic cheapest-model selection | Budget control | [Cost Guide](docs/advanced/index.md) |
| **Multi-Provider Failover** | Automatic provider switching | High availability | [Failover Guide](docs/advanced/index.md) |
| **Telemetry & Monitoring** | OpenTelemetry integration | Observability | [Telemetry Guide](docs/telemetry-guide.md) |
| **Security Hardening** | Credential management, auditing | Compliance | [Security Guide](docs/advanced/enterprise.md) |
| **Custom Model Hosting** | SageMaker integration | Private models | [SageMaker Guide](docs/sagemaker-integration.md) |
| **Load Balancing** | LiteLLM proxy integration | Scale & routing | [Load Balancing](docs/litellm-integration.md) |

**Security & Compliance:**

- ✅ SOC2 Type II compliant deployments
- ✅ ISO 27001 certified infrastructure compatible
- ✅ GDPR-compliant data handling (EU providers available)
- ✅ HIPAA compatible (with proper configuration)
- ✅ Hardened OS verified (SELinux, AppArmor)
- ✅ Zero credential logging
- ✅ Encrypted configuration storage
- ✅ Automatic context window management with a 4-stage compaction pipeline and 80% budget gate

**[Enterprise Deployment Guide](docs/advanced/enterprise.md)** – Complete production checklist

---

## Enterprise Persistence: Redis Memory

Production-ready distributed conversation state for multi-instance deployments:

### Capabilities

| Feature | Description | Benefit |
| --- | --- | --- |
| **Distributed Memory** | Share conversation context across instances | Horizontal scaling |
| **Session Export** | Export full history as JSON | Analytics, debugging, audit |
| **Auto-Detection** | Automatic Redis discovery from environment | Zero-config in containers |
| **Graceful Failover** | Falls back to in-memory if Redis is unavailable | High availability |
| **TTL Management** | Configurable session expiration | Memory management |

### Quick Setup

```typescript
import { NeuroLink } from "@juspay/neurolink";

// Auto-detect Redis from the REDIS_URL environment variable
const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis", // Automatically uses REDIS_URL
    ttl: 86400, // 24-hour session expiration
  },
});

// Or explicit configuration
const neurolinkExplicit = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
    redis: {
      host: "redis.example.com",
      port: 6379,
      password: process.env.REDIS_PASSWORD,
      tls: true, // Enable for production
    },
  },
});

// Export conversation for analytics
const history = await neurolink.exportConversation({ format: "json" });
await saveToDataWarehouse(history);
```

### Docker Quick Start

```bash
# Start Redis
docker run -d --name neurolink-redis -p 6379:6379 redis:7-alpine

# Configure NeuroLink
export REDIS_URL=redis://localhost:6379

# Start your application
node your-app.js
```

**[Redis Setup Guide](docs/getting-started/redis-quickstart.md)** | **[Production Configuration](docs/guides/redis-configuration.md)** | **[Migration Patterns](docs/guides/redis-migration.md)**

---

### Professional CLI

**15+ commands** for every workflow:

| Command | Purpose | Example | Documentation |
| --- | --- | --- | --- |
| `setup` | Interactive provider configuration | `neurolink setup` | [Setup Guide](docs/cli/index.md) |
| `generate` | Text generation | `neurolink gen "Hello"` | [Generate](docs/cli/commands.md#generate) |
| `stream` | Streaming generation | `neurolink stream "Story"` | [Stream](docs/cli/commands.md#stream) |
| `status` | Provider health check | `neurolink status` | [Status](docs/cli/commands.md#status) |
| `loop` | Interactive session | `neurolink loop` | [Loop](docs/cli/commands.md#loop) |
| `mcp` | MCP server management | `neurolink mcp discover` | [MCP CLI](docs/cli/commands.md#mcp) |
| `models` | Model listing | `neurolink models` | [Models](docs/cli/commands.md#models) |
| `eval` | Model evaluation | `neurolink eval` | [Eval](docs/cli/commands.md#eval) |
| `serve` | Start HTTP server in foreground mode | `neurolink serve` | [Serve](docs/cli/commands.md#serve) |
| `server start` | Start HTTP server in background mode | `neurolink server start` | [Server](docs/cli/commands.md#server) |
| `server stop` | Stop running background server | `neurolink server stop` | [Server](docs/cli/commands.md#server) |
| `server status` | Show server status information | `neurolink server status` | [Server](docs/cli/commands.md#server) |
| `server routes` | List all registered API routes | `neurolink server routes` | [Server](docs/cli/commands.md#server) |
| `server config` | View or modify server configuration | `neurolink server config` | [Server](docs/cli/commands.md#server) |
| `server openapi` | Generate OpenAPI specification | `neurolink server openapi` | [Server](docs/cli/commands.md#server) |
| `rag chunk` | Chunk documents for RAG | `neurolink rag chunk f.md` | [RAG CLI](docs/cli/commands.md#rag) |

**RAG flags** are available on `generate` and `stream`: `--rag-files`, `--rag-strategy`, `--rag-chunk-size`, `--rag-chunk-overlap`, `--rag-top-k`

**[Complete CLI Reference](docs/cli/commands.md)** – All commands and options

---

### GitHub Action

Run AI-powered workflows directly in GitHub Actions, with support for all 13 providers and automatic PR/issue commenting.

```yaml
- uses: juspay/neurolink@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    prompt: "Review this PR for security issues and code quality"
    post_comment: true
```

| Feature | Description |
| --- | --- |
| **Multi-Provider** | 13 providers with unified interface |
| **PR/Issue Comments** | Auto-post AI responses with intelligent updates |
| **Multimodal Support** | Attach images, PDFs, CSVs, Excel, Word, JSON, YAML, XML, HTML, SVG, and code files to prompts |
| **Cost Tracking** | Built-in analytics and quality evaluation |
| **Extended Thinking** | Deep reasoning with thinking tokens |

**[GitHub Action Guide](docs/guides/github-action.md)** – Complete setup and examples

---

## Smart Model Selection

NeuroLink features intelligent model selection and cost optimization:

### Cost Optimization Features

- **Automatic Cost Optimization**: Selects the cheapest model for simple tasks
- **LiteLLM Model Routing**: Access 100+ models with automatic load balancing
- **Capability-Based Selection**: Find models with specific features (vision, function calling)
- **Intelligent Fallback**: Seamless switching when providers fail

```bash
# Cost optimization - automatically use the cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# LiteLLM specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider
```

## Revolutionary Interactive CLI

NeuroLink's CLI goes beyond simple commands: it's a **full AI development environment**.

### Why Interactive Mode Changes Everything

| Feature | Traditional CLI | NeuroLink Interactive |
| --- | --- | --- |
| Session State | None | Full persistence |
| Memory | Per-command | Conversation-aware |
| Configuration | Flags per command | `/set` persists across session |
| Tool Testing | Manual per tool | Live discovery & testing |
| Streaming | Optional | Real-time default |

### Live Demo: Development Session

```bash
$ npx @juspay/neurolink loop --enable-conversation-memory

neurolink > /set provider vertex
✅ provider set to vertex (Gemini 3 support enabled)

neurolink > /set model gemini-3-flash-preview
✅ model set to gemini-3-flash-preview

neurolink > Analyze my project architecture and suggest improvements

Analyzing your project structure...
[AI provides detailed analysis, remembering context]

neurolink > Now implement the first suggestion
[AI remembers previous context and implements suggestion]

neurolink > /mcp discover
✅ Discovered 58 MCP tools:
  GitHub: create_issue, list_repos, create_pr...
  PostgreSQL: query, insert, update...
  [full list]

neurolink > Use the GitHub tool to create an issue for this improvement
✅ Creating issue... (requires HITL approval if configured)

neurolink > /export json > session-2026-01-01.json
✅ Exported 15 messages to session-2026-01-01.json

neurolink > exit
Session saved. Resume with: neurolink loop --session session-2026-01-01.json
```

### Session Commands Reference

| Command | Purpose |
| --- | --- |
| `/set <key> <value>` | Persist configuration (provider, model, temperature) |
| `/mcp discover` | List all available MCP tools |
| `/export json` | Export conversation to JSON |
| `/history` | View conversation history |
| `/clear` | Clear context while keeping settings |

**[Interactive CLI Guide](docs/features/interactive-cli.md)** | **[CLI Reference](docs/cli/commands.md)**

Skip the wizard and configure manually?
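The "80% budget gate" behind automatic context window management can be illustrated with a simplified sketch. The helper names and the 4-characters-per-token estimate below are assumptions for illustration; NeuroLink's real pipeline is per-provider and multi-stage:

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

// Rough token estimate (~4 characters per token is a common heuristic).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Drop the oldest non-system messages until estimated usage is at or below
// the budget gate (80% of the model's context window by default).
function enforceBudgetGate(
  messages: Message[],
  contextWindow: number,
  gate = 0.8,
): Message[] {
  const budget = Math.floor(contextWindow * gate);
  const kept = [...messages];
  const used = () => kept.reduce((n, m) => n + estimateTokens(m.content), 0);
  while (used() > budget) {
    const oldest = kept.findIndex((m) => m.role !== "system");
    if (oldest === -1) break; // only system prompts remain; nothing to drop
    kept.splice(oldest, 1);
  }
  return kept;
}
```

In practice the SDK also summarizes dropped history rather than discarding it outright; this sketch shows only the gating arithmetic.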
## CLI & SDK Essentials

The `neurolink` CLI mirrors the SDK, so teams can script experiments and codify them later.

```bash
# Discover available providers and models
npx @juspay/neurolink status
npx @juspay/neurolink models list --provider google-ai

# Route to a specific provider/model
npx @juspay/neurolink generate "Summarize customer feedback" \
  --provider azure --model gpt-4o-mini

# Turn on analytics + evaluation for observability
npx @juspay/neurolink generate "Draft release notes" \
  --enable-analytics --enable-evaluation --format json

# RAG: Ask questions about your docs (auto-chunks, embeds, searches)
npx @juspay/neurolink generate "What are the key features?" \
  --rag-files ./docs/guide.md ./docs/api.md --rag-strategy markdown
```

```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
  },
  enableOrchestration: true,
});

const result = await neurolink.generate({
  input: {
    text: "Create a comprehensive analysis",
    files: [
      "./sales_data.csv", // Auto-detected as CSV
      "examples/data/invoice.pdf", // Auto-detected as PDF
      "./diagrams/architecture.png", // Auto-detected as image
      "./report.xlsx", // Auto-detected as Excel
      "./config.json", // Auto-detected as JSON
      "./diagram.svg", // Auto-detected as SVG (injected as text)
      "./app.ts", // Auto-detected as TypeScript code
    ],
  },
  provider: "vertex", // PDF-capable provider (see docs/features/pdf-support.md)
  enableEvaluation: true,
  region: "us-east-1",
});

console.log(result.content);
console.log(result.evaluation?.overallScore);

// RAG: Ask questions about your documents
const answer = await neurolink.generate({
  prompt: "What are the main architectural decisions?",
  rag: {
    files: ["./docs/architecture.md", "./docs/decisions.md"],
    strategy: "markdown",
    topK: 5,
  },
});
console.log(answer.content); // AI searches your docs and answers
```

### Gemini 3 with Extended Thinking

```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Use Gemini 3 with extended thinking for complex reasoning
const result = await neurolink.generate({
  input: {
    text: "Solve this step by step: What is the optimal strategy for...",
  },
  provider: "vertex",
  model: "gemini-3-flash-preview",
  thinkingLevel: "medium", // Options: "minimal", "low", "medium", "high"
});

console.log(result.content);
```

The full command and API breakdown lives in [`docs/cli/commands.md`](docs/cli/commands.md) and [`docs/sdk/api-reference.md`](docs/sdk/api-reference.md).

## Platform Capabilities at a Glance

| Capability               | Highlights                                                                                                                 |
| ------------------------ | -------------------------------------------------------------------------------------------------------------------------- |
| **Provider unification** | 13+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3).                                     |
| **Multimodal pipeline**  | Stream images, CSV data, and PDF documents across providers with local/remote assets. Auto-detection for mixed file types.  |
| **Quality & governance** | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging.                                |
| **Memory & context**     | Conversation memory, Mem0 integration, Redis history export (Q4), context summarization (Q4).                               |
| **CLI tooling**          | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output.                                        |
| **Enterprise ops**       | Proxy support, regional routing (Q3), telemetry hooks, configuration management.                                            |
| **Tool ecosystem**       | MCP auto-discovery, HTTP/stdio/SSE/WebSocket transports, LiteLLM hub access, SageMaker custom deployment, web search.       |
## Documentation Map

| Area            | When to Use                                               | Link                                                             |
| --------------- | --------------------------------------------------------- | ---------------------------------------------------------------- |
| Getting started | Install, configure, run first prompt                      | [`docs/getting-started/index.md`](docs/getting-started/index.md) |
| Feature guides  | Understand new functionality front-to-back                | [`docs/features/index.md`](docs/features/index.md)               |
| CLI reference   | Command syntax, flags, loop sessions                      | [`docs/cli/index.md`](docs/cli/index.md)                         |
| SDK reference   | Classes, methods, options                                 | [`docs/sdk/index.md`](docs/sdk/index.md)                         |
| RAG             | Document chunking, hybrid search, reranking, `rag:{}` API | [`docs/features/rag.md`](docs/features/rag.md)                   |
| Integrations    | LiteLLM, SageMaker, MCP, Mem0                             | [`docs/litellm-integration.md`](docs/litellm-integration.md)     |
| Advanced        | Middleware, architecture, streaming patterns              | [`docs/advanced/index.md`](docs/advanced/index.md)               |
| Cookbook        | Practical recipes for common patterns                     | [`docs/cookbook/index.md`](docs/cookbook/index.md)               |
| Guides          | Migration, Redis, troubleshooting, provider selection     | [`docs/guides/index.md`](docs/guides/index.md)                   |
| Operations      | Configuration, troubleshooting, provider matrix           | [`docs/reference/index.md`](docs/reference/index.md)             |

### New in 2026: Enhanced Documentation

**Enterprise Features:**

- [Enterprise HITL Guide](docs/features/enterprise-hitl.md) - Production-ready approval workflows
- [Interactive CLI Guide](docs/features/interactive-cli.md) - AI development environment
- [MCP Tools Showcase](docs/features/mcp-tools-showcase.md) - 58+ external tools & 6 built-in tools

**Provider Intelligence:**

- [Provider Capabilities Audit](docs/reference/provider-capabilities-audit.md) - Technical capabilities matrix
- [Provider Selection Guide](docs/guides/provider-selection.md) - Interactive decision wizard
- [Provider Comparison](docs/reference/provider-comparison.md) - Feature & cost comparison

**Middleware System:**

- [Middleware Architecture](docs/advanced/middleware-architecture.md) - Complete lifecycle & patterns
- [Built-in Middleware](docs/advanced/builtin-middleware.md) - Analytics, Guardrails, Evaluation
- [Custom Middleware Guide](docs/custom-middleware-guide.md) - Build your own

**Redis & Persistence:**

- [Redis Quick Start](docs/getting-started/redis-quickstart.md) - 5-minute setup
- [Redis Configuration](docs/guides/redis-configuration.md) - Production-ready setup
- [Redis Migration](docs/guides/redis-migration.md) - Migration patterns

**Migration Guides:**

- [From LangChain](docs/guides/migration/from-langchain.md) - Complete migration guide
- [From Vercel AI SDK](docs/guides/migration/from-vercel-ai-sdk.md) - Next.js focused

**Developer Experience:**

- [Cookbook](docs/cookbook/index.md) - 10 practical recipes
- [Troubleshooting Guide](docs/guides/troubleshooting.md) - Common issues & solutions

## Integrations

- **LiteLLM 100+ model hub** - Unified access to third-party models via LiteLLM routing. → [`docs/litellm-integration.md`](docs/litellm-integration.md)
- **Amazon SageMaker** - Deploy and call custom endpoints directly from the NeuroLink CLI/SDK. → [`docs/sagemaker-integration.md`](docs/sagemaker-integration.md)
- **Mem0 conversational memory** - Persistent semantic memory with vector-store support. → [`docs/mem0-integration.md`](docs/mem0-integration.md)
- **Enterprise proxy & security** - Configure outbound policies and compliance posture. → [`docs/enterprise-proxy-setup.md`](docs/enterprise-proxy-setup.md)
- **Configuration automation** - Manage environments, regions, and credentials safely. → [`docs/configuration-management.md`](docs/configuration-management.md)
- **MCP tool ecosystem** - Auto-discover Model Context Protocol tools and extend workflows.
→ [`docs/advanced/mcp-integration.md`](docs/advanced/mcp-integration.md)
- **Remote MCP via HTTP** - Connect to HTTP-based MCP servers with authentication, retries, and rate limiting. → [`docs/mcp-http-transport.md`](docs/mcp-http-transport.md)

## Contributing & Support

- Bug reports and feature requests → [GitHub Issues](https://github.com/juspay/neurolink/issues)
- Development workflow, testing, and pull-request guidelines → [`docs/development/contributing.md`](docs/development/contributing.md)
- Documentation improvements → open a PR referencing the [documentation matrix](docs/tracking/FEATURE-DOC-MATRIX.md)

---

NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.