
Architecture pattern

Use VictoriaLogs behind Grafana's Loki datasource

This is the end-to-end pattern teams adopt when they want VictoriaLogs as the backend but do not want to rewrite Grafana around a non-Loki datasource. The key is to keep compatibility server-side, where it can be measured and controlled.

Three-part path: Grafana, Loki-VL-proxy, VictoriaLogs. Simple enough to reason about.

Helm-ready: the chart supports image selection, stateful cache, and peer cache. Useful in Kubernetes.

Read-only surface: the proxy owns query and metadata compatibility, not ingest. Keeps ownership boundaries clear.

Observability built in: operational resources, cache layers, and per-route latency stay visible before and after cutover.
Data path: Grafana (Loki datasource) → Loki-VL-proxy → VictoriaLogs, plus optional vmalert.
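The three-part path can be sketched as a minimal Compose file. The proxy image name and its environment variable are placeholders invented for illustration; `grafana/grafana` and `victoriametrics/victoria-logs` are the upstream images, 9428 is VictoriaLogs' default HTTP port, and 3100 is the Loki API convention.

```yaml
# Sketch of the three-part path. The loki-vl-proxy image and its
# settings are hypothetical; consult the project docs for real names.
services:
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
  loki-vl-proxy:
    image: example/loki-vl-proxy:latest       # hypothetical image name
    environment:
      # hypothetical setting: where the proxy reaches VictoriaLogs
      VICTORIALOGS_URL: http://victorialogs:9428
    ports:
      - "3100:3100"                           # Loki-compatible API port
  victorialogs:
    image: victoriametrics/victoria-logs:latest
    command: ["-storageDataPath=/victoria-logs-data"]
    volumes:
      - vl-data:/victoria-logs-data
volumes:
  vl-data:
```

Grafana only ever talks to the proxy's Loki-compatible port; ingest goes to VictoriaLogs directly, which is what keeps the proxy's surface read-only.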

Why keep Grafana on Loki

Grafana's native Loki datasource already powers the query builders, Explore, and Drilldown workflows your users know. Preserving that contract keeps the migration smaller and easier to validate.
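Because the datasource stays `type: loki`, provisioning only changes the URL. A minimal sketch using Grafana's standard datasource provisioning format; the service name `loki-vl-proxy` and port 3100 are assumptions about your deployment:

```yaml
# grafana/provisioning/datasources/victorialogs-via-loki.yaml
apiVersion: 1
datasources:
  - name: Logs (VictoriaLogs via Loki API)
    type: loki                      # unchanged: Grafana still speaks Loki
    access: proxy
    url: http://loki-vl-proxy:3100  # hypothetical service name and port
    isDefault: false
```

Dashboards, Explore, and Drilldown keep working unmodified because nothing about the datasource contract changes on the Grafana side.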

Why keep compatibility server-side

The server-side layer can be observed, rate-limited, cached, and rolled out progressively. Those controls are much harder to apply when the compatibility work is spread across clients and dashboards.
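One concrete way to roll out progressively is to provision the proxy-backed datasource alongside the existing Loki one and compare them on the same dashboards before flipping the default. A sketch in Grafana's provisioning format; both service URLs are assumptions:

```yaml
apiVersion: 1
datasources:
  - name: Loki (current)
    type: loki
    access: proxy
    url: http://loki-gateway:3100    # existing Loki; name is an assumption
    isDefault: true
  - name: Loki (VictoriaLogs candidate)
    type: loki
    access: proxy
    url: http://loki-vl-proxy:3100   # hypothetical proxy service
    isDefault: false                 # flip to true at cutover
```

Because both datasources expose the same Loki API, users can switch between them in Explore and verify results side by side, which is exactly the kind of validation a client-side shim cannot offer.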

How caching fits the pattern

Compatibility-edge cache, memory cache, disk cache, and optional peer cache all exist to make expensive metadata and query routes more predictable when many Grafana users hit the same paths.
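In Helm terms the memory and disk tiers might look like the fragment below. Every key name here is hypothetical, sketched only to show the layering idea; the chart's own values schema is authoritative:

```yaml
# Hypothetical values.yaml fragment; consult the chart for real key names.
cache:
  memory:
    enabled: true
    maxSizeBytes: 268435456   # e.g. 256 MiB hot tier for repeated metadata hits
  disk:
    enabled: true
    persistence:
      size: 20Gi              # warm tier that survives pod restarts
```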

How to scale it

Start with a basic deployment, then add persistent disk cache or a peer-cache fleet when the workload justifies it. The docs already cover both patterns.
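When a single replica's local cache no longer absorbs the load, a peer-cache fleet spreads entries across replicas. A hypothetical values fragment, with key names invented to illustrate the shape of such a configuration:

```yaml
# Hypothetical values.yaml fragment for the peer-cache pattern.
replicaCount: 3
cache:
  peer:
    enabled: true
    # hypothetical: headless service the replicas use for peer discovery
    discoveryService: loki-vl-proxy-peers
```

The usual progression is local memory cache first, persistent disk cache when restarts hurt hit rates, and a peer fleet only when the working set outgrows a single replica.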