Show HN: LLMOne – Deploy LLMs from bare metal to production in hours
I spent days trying to deploy DeepSeek on a server this year: install Ubuntu, NVIDIA drivers, CUDA, Docker, configure vLLM, debug memory issues, tune performance settings. Every deployment was different, and every server had its own quirks. Worse still, these problems are even more pronounced on non-NVIDIA accelerators such as Huawei Ascend or Intel NPUs.
So we built LLMOne to automate this. You point it at bare metal (via BMC) or at an existing server over SSH (coming soon), select your models, and it handles everything: OS installation, driver setup, inference engine configuration, model deployment, and deploying applications such as Open WebUI or Dify.
The code is open source (Mulan PSL v2, similar to Apache 2.0). No vendor lock-in.
There's a user tutorial video: https://youtu.be/P4MgIPW5K70
How it works:
1. Uses the BMC (Redfish) to remotely install the OS on bare metal, without PXE, so there's no DHCP server to configure
2. Installs the appropriate drivers (NVIDIA, Huawei Ascend, etc.)
3. Sets up containers and inference engines (vLLM, MindIE, or OpenVINO, picking the right one for the hardware)
4. Deploys models and runs performance benchmarks
5. Can also deploy apps like Open WebUI and Dify alongside the models
The whole process runs unattended. What used to take me 2-3 days of tweaking now finishes in 1-2 hours.
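For the curious, step 1 mostly comes down to standard Redfish calls. Here's a minimal sketch in Python (not LLMOne's actual code) of mounting an installer ISO as virtual media, setting a one-time boot from it, and power-cycling the box; the BMC address, credentials, ISO URL, and the Manager/System resource IDs are placeholders, and the exact paths differ per vendor:

    import requests

    BMC = "https://BMC_HOST"          # BMC address (placeholder)
    AUTH = ("admin", "password")      # BMC credentials (placeholder)
    KW = dict(auth=AUTH, verify=False, timeout=30)

    # 1. Mount the OS installer ISO as virtual media
    requests.post(
        f"{BMC}/redfish/v1/Managers/1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia",
        json={"Image": "http://fileserver/ubuntu-autoinstall.iso"}, **KW)

    # 2. Set a one-time boot from the virtual CD
    requests.patch(
        f"{BMC}/redfish/v1/Systems/1",
        json={"Boot": {"BootSourceOverrideEnabled": "Once",
                       "BootSourceOverrideTarget": "Cd"}}, **KW)

    # 3. Power-cycle the machine so the unattended install starts
    requests.post(
        f"{BMC}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
        json={"ResetType": "ForceRestart"}, **KW)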
Technical bits:
We avoid Kubernetes entirely; we found it adds complexity without much benefit for single-node LLM deployments. Everything runs in Docker containers with custom orchestration.
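To give a flavor of what that orchestration does, here's a hedged sketch using the Docker SDK for Python to start vLLM's OpenAI-compatible server container with the GPUs attached. The image and flags are standard vLLM, but the model name, port, and cache path are placeholders, and LLMOne layers backend selection, health checks, and benchmarking on top:

    import docker

    client = docker.from_env()

    # Start a vLLM OpenAI-compatible server container with all GPUs attached
    container = client.containers.run(
        "vllm/vllm-openai:latest",
        command=["--model", "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",  # placeholder model
                 "--gpu-memory-utilization", "0.90"],
        device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
        ports={"8000/tcp": 8000},   # expose the OpenAI-compatible API
        volumes={"/srv/models": {"bind": "/root/.cache/huggingface", "mode": "rw"}},
        detach=True,
        name="llm-serving",
    )
    print(container.status)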
The BMC integration was tricky. Different servers expose different Redfish capabilities, so we built per-vendor adapters, currently for Dell iDRAC and Huawei iBMC.
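Conceptually the adapter layer is just a small common interface with per-vendor implementations that know the right Redfish resource paths and workarounds. The class names and exact paths below are illustrative, not the actual code in the repo:

    from abc import ABC, abstractmethod

    class BMCAdapter(ABC):
        """Common interface; each vendor hides its Redfish quirks behind it."""

        @abstractmethod
        def virtual_media_path(self) -> str: ...

        @abstractmethod
        def system_path(self) -> str: ...

    class IdracAdapter(BMCAdapter):
        # Dell iDRAC uses named resource IDs like "iDRAC.Embedded.1"
        def virtual_media_path(self) -> str:
            return "/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD"

        def system_path(self) -> str:
            return "/redfish/v1/Systems/System.Embedded.1"

    class IbmcAdapter(BMCAdapter):
        # Huawei iBMC typically uses numeric resource IDs
        def virtual_media_path(self) -> str:
            return "/redfish/v1/Managers/1/VirtualMedia/CD"

        def system_path(self) -> str:
            return "/redfish/v1/Systems/1"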
Performance varies by hardware, but we've seen ~2200 tokens/sec on an RTX 4090 with the TensorRT-LLM backend and ~1900 tokens/sec with vLLM. The system runs Evalscope benchmarks automatically, so you know what you're getting.
Why this exists:
We work with chip vendors and AI server resellers. When servers arrive at customer sites, instead of a multi-person support team spending days on deployment and debugging, one person can use this tool to get everything running.
While we focus on LLM deployment, the stack can deploy anything from a bare OS to complex software stacks; the automation layer is generic enough for other workloads.
Current limitations:
For BMC support, we currently cover only Dell iDRAC and Huawei iBMC; Supermicro support is in progress. We'd love to expand to other server vendors, but we need hardware access or a Redfish mock for testing and development.
SSH mode and Apple Silicon support are coming soon.
Looking for feedback. Also, if you're a server vendor and can provide BMC access for testing, we'd appreciate the help expanding hardware support.