Description
A deep research engine for LM Studio, built around a Kimi-style Agent Swarm. Dynamically spawned specialised worker agents run in parallel, coordinating through shared state to produce a comprehensive, cited research report with contradiction detection — in a single tool call. https://github.com/imezx/deep-swarm-research-plugin
Autonomous deep research for LM Studio. A swarm of specialized workers searches your local libraries and the web, then synthesizes everything into a structured report - one tool call, no API keys.
The main tool. Give it a topic, get back a full Markdown report with AI-written analysis, citations, contradiction detection, and a coverage breakdown across 12 research dimensions. When local document sources are enabled, workers search your RAG libraries progressively - proprietary first, web to fill gaps.
Parameters:
- `topic` - what to research (be specific)
- `focusAreas` - optional angles to emphasize, e.g. `["side effects", "FDA status"]`
- `depthOverride` - `"shallow"` / `"standard"` / `"deep"` / `"deeper"` / `"exhaustive"`
- `contentLimitOverride` - chars per page (1K-20K, auto-scales with depth)

Scored DuckDuckGo results with domain authority tiers and snippet extraction.
Fetch and extract a single URL. Handles PDFs automatically.
Batch-fetch up to 10 URLs concurrently.
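As an illustration of the batching behavior, a concurrency cap like this can be sketched with an `asyncio` semaphore. This is a generic sketch, not the plugin's implementation; `fetch_one` is a hypothetical stand-in for real URL fetching and extraction:

```python
import asyncio

MAX_CONCURRENT = 10  # the batch tool's documented ceiling

async def fetch_one(url: str) -> str:
    # Placeholder for real fetching + content extraction
    await asyncio.sleep(0.01)
    return f"content of {url}"

async def batch_fetch(urls: list[str]) -> list[str]:
    # Semaphore keeps at most 10 requests in flight at once
    sem = asyncio.Semaphore(MAX_CONCURRENT)

    async def bounded(url: str) -> str:
        async with sem:
            return await fetch_one(url)

    # gather preserves input order in its results
    return await asyncio.gather(*(bounded(u) for u in urls))

results = asyncio.run(batch_fetch([f"https://example.com/{i}" for i in range(10)]))
```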
Index a local folder into a searchable library with full metadata.
Parameters:
- `name` - descriptive name (e.g. "Company Policies", "Research Papers")
- `folderPath` - absolute path to the document folder
- `priority` - `"proprietary"` / `"internal"` / `"reference"` / `"general"` (default: general)
- `tags` - array of routing tags: `["legal"]`, `["academic", "technical"]`, `["financial", "reports"]`, etc.
- `description` - optional description

Example usage:
```
RAG Add Library(
  name: "Client Contracts",
  folderPath: "/home/user/documents/contracts",
  priority: "proprietary",
  tags: ["legal"],
  description: "All active client contracts and SLAs"
)
```
Show all indexed libraries sorted by priority, with file counts, chunk counts, word totals, tags, and file type breakdown.
Remove a library by its UUID (id).
Search across libraries with hybrid BM25 + fuzzy scoring.
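As an illustration, a hybrid scorer can blend a per-term BM25 score with a character-level fuzzy ratio. This is a generic sketch of the technique; the plugin's actual weighting and fuzzy matcher are not documented here:

```python
import math
import difflib

def bm25_term_score(tf: int, df: int, n_docs: int, doc_len: int, avg_len: float,
                    k1: float = 1.5, b: float = 0.75) -> float:
    # Standard BM25 contribution of one query term in one document
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    return idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doc_len / avg_len))

def fuzzy_score(query: str, text: str) -> float:
    # Character-level similarity in [0, 1]; tolerates typos and near-matches
    return difflib.SequenceMatcher(None, query.lower(), text.lower()).ratio()

def hybrid_score(bm25: float, fuzzy: float, alpha: float = 0.7) -> float:
    # Weighted blend: exact term matches dominate, fuzzy matching fills in
    return alpha * bm25 + (1 - alpha) * fuzzy
```

The fuzzy component lets a query like "contract" still rank documents mentioning "contracts" even when exact-term BM25 misses them.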
Change a library's name, description, priority, or tags without re-indexing.
Detect modified, deleted, and newly added files since last indexing.
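One plausible way to implement such a check is to compare each file's size and modification time on disk against what was recorded at indexing time. This is an illustrative sketch, not the plugin's actual logic; the shape of the stored metadata is an assumption:

```python
import os

def detect_changes(indexed: dict[str, tuple[float, int]], folder: str):
    """indexed maps relative path -> (mtime, size) recorded at index time."""
    current: dict[str, tuple[float, int]] = {}
    for root, _dirs, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            st = os.stat(path)
            current[os.path.relpath(path, folder)] = (st.st_mtime, st.st_size)

    added = sorted(set(current) - set(indexed))       # on disk, not in index
    deleted = sorted(set(indexed) - set(current))     # in index, gone from disk
    modified = sorted(p for p in current
                      if p in indexed and current[p] != indexed[p])
    return added, deleted, modified
```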
Persist the entire RAG index to a JSON file on disk.
Restore a previously saved index - instant library access without re-scanning.
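A minimal sketch of what save/load amounts to: serializing the in-memory index to one JSON document and restoring it without re-scanning folders. The plugin's actual on-disk schema is not documented here; the function and field names below are illustrative:

```python
import json
import os

def save_index(index: dict, file_path: str) -> None:
    # Expand ~ and write the whole index as a single JSON document
    with open(os.path.expanduser(file_path), "w", encoding="utf-8") as f:
        json.dump(index, f)

def load_index(file_path: str) -> dict:
    # Restore the index straight from disk - no folder re-scan needed
    with open(os.path.expanduser(file_path), "r", encoding="utf-8") as f:
        return json.load(f)
```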
The previous Local Docs Add/List/Remove/Search Collection tools still work as aliases for backward compatibility.
Supported file types: .txt, .md, .html, .csv, .json, .xml, .log, and many more.
This is the key innovation for organizations with large proprietary data lakes. Instead of treating all sources equally, the plugin searches in priority order:
```
┌─────────────────────┐
│ 1. PROPRIETARY      │  < Your confidential data (contracts, internal memos, trade secrets)
│    Searched first   │
├─────────────────────┤
│ 2. INTERNAL         │  < Shared team knowledge (wikis, documentation, reports)
│    Searched second  │
├─────────────────────┤
│ 3. REFERENCE        │  < Curated reference materials (papers, standards, regulations)
│    Searched third   │
├─────────────────────┤
│ 4. GENERAL          │  < Miscellaneous local documents
│    Searched fourth  │
├─────────────────────┤
│ 5. WEB              │  < Public internet (fills remaining gaps)
│    Searched last    │
└─────────────────────┘
```
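The priority order above can be sketched as a simple loop: walk the local tiers from highest to lowest priority, and fall through to the web only for whatever the local tiers didn't cover. Tier names come from the ladder; the search results themselves are stubbed:

```python
PRIORITY_ORDER = ["proprietary", "internal", "reference", "general"]

def progressive_search(query: str, libraries: dict[str, list[str]],
                       needed: int = 5) -> list[str]:
    """libraries maps priority tier -> matching snippets (stubbed here)."""
    results: list[str] = []
    # Tiers 1-4: local libraries, highest priority first
    for tier in PRIORITY_ORDER:
        for hit in libraries.get(tier, []):
            results.append(f"[{tier}] {hit}")
            if len(results) >= needed:
                return results
    # Tier 5: the web fills whatever gaps remain
    gap = needed - len(results)
    results.extend(f"[web] result {i + 1} for {query!r}" for i in range(gap))
    return results
```

Confidential material is surfaced first, and web searches only spend budget on what local sources could not answer.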
Workers also auto-route to the right library by tag:
- academic and technical tagged libraries
- legal and policy tagged libraries
- technical and code tagged libraries
- financial and reports tagged libraries

1. Index your document libraries:
```
RAG Add Library(name: "Research Papers", folderPath: "/papers", priority: "reference", tags: ["academic"])
RAG Add Library(name: "Internal Docs", folderPath: "/company/docs", priority: "internal", tags: ["reports"])
RAG Add Library(name: "Legal", folderPath: "/legal", priority: "proprietary", tags: ["legal", "policy"])
```
2. Enable local sources in plugin settings (Local Document Sources → On)
3. Run Deep Research as usual - workers will search your libraries progressively
4. Save your index so you don't need to re-index next session:
RAG Save Index(filePath: "~/.lmstudio/rag-index.json")
5. Next session, load it back:
RAG Load Index(filePath: "~/.lmstudio/rag-index.json")
| | Shallow | Standard | Deep | Deeper | Exhaustive |
|---|---|---|---|---|---|
| Rounds | 1 | 3 | 5 | 10 | 15 |
| Worker roles | 5 | 5 | 8 | 10 | 10 |
| Pages/worker | 5 | 8 | 12 | 18 | 25 |
| Search engines | 1 | 2 | 3 | 4 | 5 |
| Link depth | 1 | 1 | 2 | 2 | 3 |
| Fan-out | x1 | x1 | x2 | x2 | x3 |
| Content/page | 5K | 6K | 8K | 12K | 16K |
| Sources (up to) | ~25-50 | ~40-80 | ~80-150 | ~150-250+ | ~250-400+ |
No hard source cap - collection is fully adaptive. Local sources are additional - they don't eat into the web budget shown above.
| Setting | Description |
|---|---|
| Research Depth | Shallow → Exhaustive (scales everything) |
| Content Per Page | Chars extracted per page (auto-scales, up to 20K) |
| Link Following | Follow in-page citations and references |
| AI Query Planning | Use loaded model for query generation and synthesis |
| Safe Search | DuckDuckGo safe search level |
| Local Document Sources | Search indexed local/RAG libraries alongside the web |
When the user asks for research or wants to understand a topic in depth, use the "Deep Research" tool. After receiving the report:

1. Lead with the AI Research Analysis - it's the main synthesis.
2. Check the Contradictions section for disagreements between sources.
3. Cite sources by index: [1], [2], etc.
4. Note any coverage gaps and offer to dig deeper.
5. Present both sides where sources conflict.
6. Distinguish between local and web sources when relevant.
7. Prioritise findings from proprietary/internal sources when they're available.
MIT License