LOCAL-FIRST CHAT COMPARISON WORKSPACE

Test the same prompt against different chat setups.

llm-chat-lab is a first runnable shell for comparing prompt presets, model presets, memory modes, and workflow styles side by side, without pretending to be a full agent platform.

Highlights: 2-panel compare view · preset-driven · mock responses · local-first
Compare target: prompt + model + memory
Focus: inspection over chat novelty
Starter prompt: "Summarize the tradeoffs of draft-first automation for an internal tool team."

One shared input, two different setups.

Shortcuts: Ctrl+K focus · Ctrl+Enter run · Ctrl+S export · Esc blur

Left panel: "Concise operator" (fast feedback). Right panel: "Structured analyst" (more context). Each panel reports latency, tokens, and cost per run; the fields stay empty until a run completes.
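The compare interaction is simple to state: feed one shared input to two different setups and record per-run metrics. A minimal TypeScript sketch, with every name (ChatSetup, RunResult, mockComplete, runCompare) hypothetical and the token/cost math a stand-in, not the project's actual implementation:

```typescript
// Hypothetical preset shape; the real project's types may differ.
type ChatSetup = {
  name: string; // e.g. "Concise operator"
  systemPrompt: string;
  memory: "none" | "summary" | "full";
};

// One row of the compare panel: reply plus the visible metrics.
type RunResult = {
  setup: string;
  reply: string;
  latencyMs: number; // shown as "Latency"
  tokens: number;    // shown as "Tokens"
  costUsd: number;   // shown as "Cost"
};

// Mocked completion: deterministic and local, no provider calls.
// The ~4-chars-per-token estimate and flat cost are illustrative only.
function mockComplete(setup: ChatSetup, prompt: string): RunResult {
  const tokens = Math.ceil((setup.systemPrompt.length + prompt.length) / 4);
  return {
    setup: setup.name,
    reply: `[${setup.name}] mock reply to: ${prompt.slice(0, 40)}`,
    latencyMs: 5,
    tokens,
    costUsd: tokens * 1e-6,
  };
}

// One shared input, N setups (two in the current UI).
function runCompare(prompt: string, setups: ChatSetup[]): RunResult[] {
  return setups.map((s) => mockComplete(s, prompt));
}
```

Because the completion is mocked, the same prompt always yields the same metrics, which keeps side-by-side runs reproducible while the interaction is being proven out.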

Most chat UIs optimize for one conversation thread. This shell is biased toward inspection: same input, different setup, visible tradeoffs.

The current build is intentionally local and mocked. The point is to prove the compare interaction and project shape before wiring real providers.

Real provider adapters, saved runs, import/export, and screenshot-friendly result states are the next obvious upgrades.
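One way the mocked build could grow into real providers is a small adapter seam: the shell talks to an interface, and a mock and a real adapter are interchangeable behind it. A hedged sketch, with every name (Provider, complete, makeHttpProvider) an assumption; completion is kept synchronous here purely for brevity:

```typescript
// Hypothetical adapter seam. The UI would depend only on this interface.
interface Provider {
  name: string;
  complete(prompt: string, systemPrompt: string): string;
}

// Local mock, mirroring the current build's behavior.
const mockProvider: Provider = {
  name: "mock",
  complete: (prompt, systemPrompt) =>
    `mock reply (${systemPrompt.length + prompt.length} chars of context)`,
};

// A real adapter would wrap an HTTP client; the network call is
// deliberately elided in this sketch.
function makeHttpProvider(name: string, endpoint: string): Provider {
  return {
    name,
    complete: (_prompt, _systemPrompt) =>
      `stub reply from ${name} (${endpoint} not called in this sketch)`,
  };
}
```

With this shape, "real provider adapters" becomes a matter of adding implementations, and saved runs can serialize the provider name alongside each result.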