An open-source toolkit for end-to-end LLM-based dialogue systems.
SDialog is an MIT-licensed open-source toolkit for building, simulating, and evaluating LLM-based conversational agents end-to-end. It bridges agent construction → dialog generation → evaluation → (optionally) interpretability in a single reproducible workflow, so you can build reliable, controllable dialog systems and generate dialog data at scale.
A standardized dialog schema with JSON import/export to unify dataset formats.
Persona-driven simulation with contexts, tools, and thoughts.
Precise control over agent behavior and dialogue flow.
Use built-in metrics and LLM-as-a-judge for comparison and iteration.
Inspect and steer model activations for analysis and intervention.
Works with OpenAI, Hugging Face, Ollama, AWS Bedrock, Google GenAI, and more.
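The standardized dialog schema mentioned above can be pictured as plain JSON: a dialog is a list of speaker-attributed turns plus metadata. The field names below are illustrative only, not SDialog's exact schema; `Dialog.to_file` and `Dialog.from_file` define the authoritative format.

```python
import json

# Illustrative sketch only: the exact field names are defined by SDialog's
# Dialog class, not by this example.
dialog_record = {
    "dialogId": 1,
    "turns": [
        {"speaker": "Ava", "text": "Hello! How can I help you today?"},
        {"speaker": "Riley", "text": "I was double charged last month."},
    ],
}

# Round-trip through JSON, as a dataset import/export would do
serialized = json.dumps(dialog_record, indent=2)
restored = json.loads(serialized)
print(restored["turns"][0]["speaker"])
```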
The core of SDialog is built upon a few key concepts that enable flexible and powerful conversational AI simulations. These include programmable Personas, modular Agents that encapsulate them, and broad support for various LLM backends.
Define the characteristics, background, and communication style of a conversational participant. Use built-in personas or create your own.
The main actors in a dialogue. Agents are built around a persona and can be equipped with tools, thoughts, and orchestrators.
Connect to a wide range of LLM providers including OpenAI, Hugging Face, Ollama, AWS Bedrock, and Google GenAI.
Simulate conversations between two or more agents to generate realistic dialogue data for testing and training.
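The persona idea above is essentially structured attributes rendered into a system prompt. SDialog's own Persona classes provide this (and much more) out of the box; the stdlib dataclass below is purely an illustration of the concept, not the library's API.

```python
from dataclasses import dataclass, fields

# Stdlib-only sketch of a persona: structured attributes rendered into a
# system prompt. Illustrative only; SDialog's Persona classes differ.
@dataclass
class MiniPersona:
    name: str
    role: str
    communication_style: str

    def to_system_prompt(self) -> str:
        lines = [f"{f.name.replace('_', ' ')}: {getattr(self, f.name)}"
                 for f in fields(self)]
        return "You are role-playing the following character:\n" + "\n".join(lines)

ava = MiniPersona(name="Ava", role="customer support agent",
                  communication_style="polite and concise")
print(ava.to_system_prompt())
```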
pip install sdialog
# Optional audio support (requires sox and ffmpeg)
sudo apt-get install sox ffmpeg
pip install sdialog[audio]
import sdialog
from sdialog import Context
from sdialog.agents import Agent
from sdialog.personas import SupportAgent, Customer
from sdialog.orchestrators import SimpleReflexOrchestrator
# Configure your preferred LLM backend
sdialog.config.llm("openai:gpt-4.1", temperature=1)
# Define personas
support_persona = SupportAgent(name="Ava", politeness="high")
customer_persona = Customer(name="Riley", issue="double charge")
# Define a simple rule-based orchestrator
react_refund = SimpleReflexOrchestrator(
    condition=lambda utt: "refund" in utt.lower(),
    instruction="Follow refund policy; verify account, apologize, refund.",
)
# Create agents and attach the orchestrator
support_agent = Agent(persona=support_persona) | react_refund
simulated_customer = Agent(persona=customer_persona, first_utterance="Hi!")
# Generate a dialogue
dialog = simulated_customer.dialog_with(support_agent)
dialog.print(all=True)
dialog.to_file("dialog.json")
# Serve the agent via an OpenAI-compatible API
support_agent.serve(port=1333)
Explore SDialog's specialized modules for evaluation, interpretability, and audio generation.
SDialog provides a rich evaluation module to assess the quality of generated dialogues: score dialogs with built-in metrics and LLM judges, and compare datasets with aggregators and plots.
from sdialog.evaluation import (
    LLMJudgeRealDialog, LinguisticFeatureScore,
    FrequencyEvaluator, MeanEvaluator,
    DatasetComparator
)
reference_dialogs = [...]  # e.g. dialogs loaded from a real dataset
candidate_dialogs = [...]  # e.g. dialogs generated with SDialog
comparator = DatasetComparator([
    FrequencyEvaluator(LLMJudgeRealDialog(), name="Realistic dialog rate"),
    MeanEvaluator(LinguisticFeatureScore("flesch-reading-ease")),
])
results = comparator({
    "reference": reference_dialogs,
    "candidate": candidate_dialogs
})
comparator.plot()
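Conceptually, a frequency evaluator reports the fraction of dialogs a binary judge accepts, while a mean evaluator averages a per-dialog numeric score across a dataset. A stdlib sketch of that aggregation, with toy stand-ins (the judge and metric functions below are hypothetical, not SDialog components):

```python
# Hypothetical stand-ins for a binary LLM judge and a numeric metric
def judge_realistic(dialog: str) -> bool:
    return len(dialog.split()) > 3          # toy stand-in for an LLM judge

def readability_score(dialog: str) -> float:
    return 100.0 - len(dialog) / 2          # toy stand-in for a real metric

def frequency(dialogs, judge):
    """Fraction of dialogs the judge accepts (frequency-style aggregation)."""
    return sum(judge(d) for d in dialogs) / len(dialogs)

def mean(dialogs, metric):
    """Average per-dialog score (mean-style aggregation)."""
    return sum(metric(d) for d in dialogs) / len(dialogs)

dialogs = ["Hi! I was double charged.", "Refund issued, sorry!"]
print(frequency(dialogs, judge_realistic))  # fraction judged realistic
print(mean(dialogs, readability_score))     # mean readability score
```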
Attach Inspectors to capture per-token activations and optionally steer the model (adding or ablating directions) for analysis and intervention.
from sdialog.interpretability import Inspector
from sdialog.agents import Agent
import torch
# Attach an inspector to a specific layer
agent = Agent(name="Bob")
inspector = Inspector(target="model.layers.16.post_attention_layernorm")
agent_with_inspector = agent | inspector
agent_with_inspector("How are you?")
# Read captured activations (last response, first generated token)
activations = inspector[-1][0].act
# Steer the model by subtracting a learned direction
anger_direction = torch.load("anger_direction.pt")
agent_steered = agent | inspector - anger_direction
agent_steered("You are an extremely upset assistant")
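The steering step above boils down to simple vector arithmetic on a hidden state: subtracting a scaled direction, or projecting the activation onto a direction and removing that component (ablation). A minimal stdlib sketch of that arithmetic, independent of SDialog and PyTorch:

```python
import math

def subtract_direction(activation, direction, scale=1.0):
    """Steer by subtracting a scaled direction from an activation vector."""
    return [a - scale * d for a, d in zip(activation, direction)]

def ablate_direction(activation, direction):
    """Remove the component of the activation along a direction."""
    norm = math.sqrt(sum(d * d for d in direction))
    unit = [d / norm for d in direction]
    proj = sum(a * u for a, u in zip(activation, unit))  # scalar projection
    return [a - proj * u for a, u in zip(activation, unit)]

act = [2.0, 1.0]
anger = [1.0, 0.0]
print(subtract_direction(act, anger))  # [1.0, 1.0]
print(ablate_direction(act, anger))    # [0.0, 1.0]
```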
Convert text dialogs to realistic audio conversations with speech synthesis, voice assignment, and acoustic simulation. SDialog's audio module supports multiple TTS engines, voice databases, and microphone simulation for high-fidelity audio output.
from sdialog import Dialog
dialog = Dialog.from_file("my_dialog.json")
# Convert to audio with default settings
audio_dialog = dialog.to_audio(
    perform_room_acoustics=True
)
# Customize the audio generation
audio_dialog = dialog.to_audio(
    perform_room_acoustics=True,
    audio_file_format="mp3",
    re_sampling_rate=16000,
)
audio_dialog.display()
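Re-sampling, as in `re_sampling_rate=16000` above, conceptually maps each output sample time back onto the input signal and interpolates. Real pipelines use properly filtered resamplers; the linear-interpolation sketch below is a toy stdlib illustration of the idea only:

```python
def resample_linear(samples, src_rate, dst_rate):
    """Toy linear-interpolation resampler (real resamplers low-pass filter)."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        t = i * src_rate / dst_rate          # position in input samples
        j = int(t)
        frac = t - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        out.append(samples[j] * (1 - frac) + nxt * frac)
    return out

wave = [0.0, 1.0, 0.0, -1.0] * 100           # toy 400-sample signal
down = resample_linear(wave, 44100, 16000)   # 44.1 kHz -> 16 kHz
print(len(down))
```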
SDialog is a community-driven project. We welcome contributions of all kinds, from standardizing datasets and developing new components to improving documentation and reporting bugs.
Contribute on GitHub