Podcast Generation: "Generate a podcast about this" creates a dialogue script between two AI hosts.
Prerequisites
LM Studio 0.3.x or higher.
Embedding Model: Download nomic-embed-text-v1.5.Q4_K_M.gguf (or similar) in LM Studio.
Local Whisper (Optional, for YouTube):
Windows: Download a whisper.cpp release (e.g., whisper-bin-x64.zip) and extract it. Set whisperBinaryPath to the full path of the extracted main.exe.
Fedora/Linux: Install yt-dlp and whisper-cpp via your package manager:
sudo dnf install yt-dlp whisper-cpp
Or build whisper.cpp from source to get the main binary.
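If your distribution does not package whisper-cpp, a source build is a sketch like the following (the repository URL and helper script are the standard whisper.cpp ones; the base.en model is just an example choice):

```shell
# Clone and build whisper.cpp; the main binary is produced by make.
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
make
# Fetch a speech model with the bundled helper script (base.en as an example).
./models/download-ggml-model.sh base.en
```

Point whisperBinaryPath at the resulting binary inside the checkout.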
In LM Studio, go to the Plugins tab → Local OpenBook → Settings.
Set whisperBinaryPath to the path of your whisper-cpp or main binary (e.g., /usr/bin/whisper-cpp). If the binary is on your PATH, the plugin will try to find it automatically.
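With whisperBinaryPath set, the plugin runs the binary on audio it downloads; roughly like this (a sketch: the exact flags the plugin passes are an assumption, though -f and -otxt are standard whisper.cpp options):

```shell
# Hypothetical invocation assembled from the plugin's settings.
WHISPER_BIN=/usr/bin/whisper-cpp   # value of the whisperBinaryPath setting
AUDIO=/tmp/audio.wav               # audio extracted from the YouTube video
CMD="$WHISPER_BIN -f $AUDIO -otxt" # -otxt writes the transcript as plain text
echo "$CMD"
```

You can run the same command by hand to confirm the binary and model work before pointing the plugin at them.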
The plugin will fetch the text and add it to the chat.
Ask Questions: "What does the text say about X?"
The plugin will search your documents and cite sources.
Create a Podcast:
User: "Generate a podcast script about these documents."
The model will produce a dialogue script.
Configuration
Reranker Enabled: Toggles the advanced reranking step. Disable for speed, enable for accuracy.
Retrieval Limit: The number of text chunks retrieved per query (default: 5).
Known Limitations
Web/YouTube Source Indexing: Currently, content fetched from the web/YouTube is added directly to the chat context. It is not indexed for RAG retrieval unless you copy the text into a .txt file and drag it back into LM Studio.
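For YouTube sources, one way to get a .txt you can drag back into LM Studio is to pull the transcript yourself with yt-dlp (the flags below are standard yt-dlp options; the output name and URL are placeholders):

```shell
# Download only the auto-generated subtitles, not the video itself.
yt-dlp --skip-download --write-auto-subs --sub-format vtt \
       -o transcript "https://www.youtube.com/watch?v=VIDEO_ID"
# Strip the VTT cue timestamps (by hand or with a small script),
# save the result as transcript.txt, and drag it into LM Studio.
```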
Audio Generation: This plugin generates the script. You need an external TTS tool (or a future update) to convert the script to audio.
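For instance, a local TTS engine such as espeak-ng can read the saved script aloud (one option among many; the file names are placeholders):

```shell
# Convert the generated podcast script to a WAV file.
espeak-ng -f podcast_script.txt -w podcast.wav
```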