lmd 0.1.0

To use this package, run the following command in your project's root directory:

    dub add lmd

For manual usage, add this package to your project's dependencies section.
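Assuming a standard dub setup, the manual dependency entry would look like this in `dub.json` (the `~>0.1.0` constraint simply tracks the release version above):

```json
{
    "dependencies": {
        "lmd": "~>0.1.0"
    }
}
```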
# LMD

LMD is a library for interfacing with LLM APIs. Connect to OpenAI, LMStudio, or any compatible endpoint. Send messages, get responses, manage conversations.

The system is modular: new endpoints can be implemented through the `IEndpoint` interface, and common APIs are supported out of the box.

Streaming is designed to be intuitive and threadable via `ResponseStream`, which can be consumed either blocking (waiting for each iterative `Response`) or classically, with a callback invoked for incoming responses.
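As a sketch of the blocking style, something like the following should apply; note that the exact iteration API of `ResponseStream` is an assumption here (only the type name appears in this README), so treat this as illustrative rather than definitive:

```d
import std.stdio : write;
import lmd;

void main()
{
    IEndpoint ep = openai!("http", "127.0.0.1", 1234);
    Model model = ep.load();

    // Hypothetical blocking consumption: each iteration waits for
    // the next partial Response from the server before continuing.
    foreach (Response r; model.stream("Tell me a story"))
        write(r.choices[0].content);
}
```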
> [!WARNING]
> This README is likely severely outdated, as this library is rapidly changing and I simply cannot be bothered to update this README frequently.
## Use

Import the library:

```d
import lmd;

// Or for specifically OpenAI-style endpoints:
import lmd.common.openai;
```
Create an endpoint and load a model:

```d
// Connect to LMStudio running locally
IEndpoint ep = openai!("http", "127.0.0.1", 1234);
Model model = ep.load();

// Or connect to OpenAI
IEndpoint oai = openai!("https", "api.openai.com", 443);
Model gpt4 = oai.load("gpt-4", "your-api-key");
```
Send messages and get responses:

```d
// Simple completion
Response resp = model.send("What is 2+2?");
string answer = resp.choices[0].content;

// With a system prompt
model.context.clear();
model.context.add("system", "You are a helpful assistant.");
resp = model.send("Explain quantum computing briefly.");
```
```d
// Streaming responses
model.stream((StreamChunk chunk) {
    foreach (choice; chunk.choices) {
        write(choice.content);
    }
})("Tell me a story");
```
Handle tool calls:

```d
// Define tools
Tool[] tools = [
    Tool("function", "get_weather", "Get current weather", /* parameters */)
];

Options opts;
opts.tools = tools;

Model model = ep.load("model-name", "owner", opts);
Response resp = model.send("What's the weather like?");

// Check for tool calls
if (resp.choices[0].toolCalls.length > 0) {
    // Handle tool execution; `result` and `toolCallId`
    // come from executing the requested tool yourself
    model.context.add("tool", result, toolCallId);
}
```
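A full round trip might look like the sketch below. `runGetWeather` is a hypothetical user-side helper, and the field names on each tool-call entry (`arguments`, `id`) are assumptions, not confirmed library API:

```d
Response resp = model.send("What's the weather like?");

foreach (call; resp.choices[0].toolCalls)
{
    // Execute the requested tool locally (hypothetical helper).
    string result = runGetWeather(call.arguments);

    // Feed the result back into the conversation under the tool role.
    model.context.add("tool", result, call.id);
}

// With tool results in context, the model can now answer the
// original question on the next send.
```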
Get embeddings:

```d
// Get embeddings for text
Response resp = model.embeddings("Hello, world!");

// Access the embedding vector
if (resp.choices.length > 0 && resp.choices[0].embedding.length > 0) {
    float[] embedding = resp.choices[0].embedding;
    writeln("Embedding dimensions: ", embedding.length);
}
```
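Once you have vectors, a common next step is comparing them. The helper below uses only the D standard library and assumes two embeddings of equal length (e.g. from two calls to `embeddings` against the same model):

```d
import std.math : sqrt;
import std.numeric : dotProduct;

/// Cosine similarity between two embedding vectors
/// (1.0 means identical direction, 0.0 means orthogonal).
float cosineSimilarity(float[] a, float[] b)
{
    float dot = dotProduct(a, b);
    float normA = sqrt(dotProduct(a, a));
    float normB = sqrt(dotProduct(b, b));
    return dot / (normA * normB);
}

// float[] e1 = model.embeddings("cat").choices[0].embedding;
// float[] e2 = model.embeddings("kitten").choices[0].embedding;
// writeln(cosineSimilarity(e1, e2));
```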
## Endpoints

Currently supports OpenAI-compatible endpoints:

- `/v1/chat/completions` - Main chat interface
- `/v1/completions` - Legacy completion endpoint
- `/v1/models` - List available models
- `/v1/embeddings` - Text embeddings
Works with:
- OpenAI API
- LMStudio
- Ollama
- Any OpenAI-compatible server
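For example, Ollama's OpenAI-compatible server listens on port 11434 by default, so connecting should differ only in host and port; the model name `llama3` below is just an illustration:

```d
IEndpoint ollama = openai!("http", "127.0.0.1", 11434);
Model llama = ollama.load("llama3");
```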
## Roadmap

- [x] /v1/models
- [x] /v1/completions
- [x] /v1/chat/completions
- [x] /v1/embeddings
- [ ] /v1/audio
- [ ] /v1/image
- [x] Streaming
- [ ] Standardize no-think and support OpenAI-specific Options keys
- [x] IOptions and IContext
- [ ] Claude and Gemini support
- [ ] Document all code with DDoc formatting
- [ ] Support schemes other than HTTP
## Contributing

Please do not contribute. I don't want to review your code.
## License

LMD is licensed under AGPL-3.0.