# Comparison

Compare native Loki and VictoriaLogs-routed Grafana query workflows.
This comparison is intentionally narrow: it is about Grafana read and query workflows, not a generic benchmark or ingestion comparison. The decision point is whether you want to keep Grafana on Loki semantics while routing those reads to VictoriaLogs through an explicit proxy layer.
| Area | Native Loki backend | VictoriaLogs via Loki-VL-proxy |
|---|---|---|
| Grafana datasource type | Native Loki datasource pointed directly at the Loki backend. | Native Loki datasource on the client side, Loki-VL-proxy in the middle, VictoriaLogs as the backend. |
| Indexing and search model | Loki docs: labels index the stream, while the content of each log line is not indexed. | VictoriaLogs docs: all log fields are indexed and full-text search runs across all fields; the proxy preserves Loki read workflows on top. |
| High-cardinality posture | Loki docs: labels should stay low-cardinality and high cardinality reduces performance and cost-effectiveness. | VictoriaLogs docs: high-cardinality values such as `trace_id`, `user_id`, and `ip` work fine as fields as long as they are not promoted to stream fields. |
| Deployment and control surface | Loki can run single-binary, but the scalable architecture is explicitly multi-component and query-frontend based. | VictoriaLogs docs position the backend as a single zero-config executable; the proxy adds one small observable compatibility layer in front. |
| Field semantics at the Grafana edge | Loki-native labels and field expectations. | Configurable label and metadata translation so dotted OTel fields can coexist with Loki-safe label surfaces. |
| Caching levers on the read path | Backend-specific cache model and operational knobs inside Loki itself. | Tier0 response cache plus L1/L2/L3 cache reuse, long-range query window cache, and optional peer fleet reuse. |
| Patterns and Drilldown compatibility | Native Loki or Grafana app behavior. | Handled as explicit contracts and compatibility tracks, including the Loki-compatible patterns endpoint. |
| Migration control | No translation layer to tune or phase in. | Progressive rollout is possible because Grafana can be cut over through a controlled proxy layer first. |
| Proxy-only latency visibility | No separate proxy decomposition because there is no extra compatibility layer. | Metrics split client-visible latency from upstream latency, and logs add per-request `proxy.overhead_ms` decomposition. |
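The per-request decomposition in the last row can be sketched as simple timestamp arithmetic. Only `proxy.overhead_ms` appears in the text above; the function name, parameters, and other field names here are illustrative assumptions, not the proxy's actual log schema.

```python
# Sketch of per-request latency decomposition, assuming the proxy records
# wall-clock timestamps around its upstream call. Only `proxy.overhead_ms`
# comes from the project docs; the other names are hypothetical.

def decompose_latency(request_start_ms: float,
                      upstream_start_ms: float,
                      upstream_end_ms: float,
                      response_end_ms: float) -> dict:
    total_ms = response_end_ms - request_start_ms      # client-visible latency
    upstream_ms = upstream_end_ms - upstream_start_ms  # time spent in VictoriaLogs
    return {
        "total_ms": total_ms,
        "upstream_ms": upstream_ms,
        # Everything the proxy itself added: translation, caching, serialization.
        "proxy.overhead_ms": total_ms - upstream_ms,
    }
```

Splitting client-visible latency from upstream latency is what lets a regression be attributed to the compatibility layer rather than to the backend.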
## When the proxy path is attractive
Choose the proxy path when the user-facing contract is already Loki and you want to preserve that contract while making VictoriaLogs the data backend. It is especially useful when migration control and observability matter more than pretending the systems are identical.
## What changes operationally
You gain a real translation and cache layer that needs to be monitored. The upside is that you also gain a controlled place to tune translation modes, protect the backend, and see route-specific regressions.
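One of those tunable translation modes is mapping dotted OTel field names onto Loki-safe label surfaces. The sketch below shows the shape of that problem under the standard Loki/Prometheus label-name rules; the function and regex are hypothetical illustrations, not the proxy's actual implementation.

```python
import re

# Minimal sketch of one possible translation mode: sanitizing dotted OTel
# attribute names (e.g. "http.request.method") into Loki-safe label names,
# which must match the Prometheus-style pattern [a-zA-Z_][a-zA-Z0-9_]*.

LOKI_UNSAFE = re.compile(r"[^a-zA-Z0-9_]")

def to_loki_label(otel_field: str) -> str:
    label = LOKI_UNSAFE.sub("_", otel_field)  # dots and dashes become underscores
    if label and label[0].isdigit():          # labels may not start with a digit
        label = "_" + label
    return label

safe = to_loki_label("http.request.method")   # "http_request_method"
```

Keeping this mapping in a dedicated layer is what allows dotted OTel fields and Loki-safe label surfaces to coexist, as the table above describes.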
## Where the proxy path can be more efficient
The strongest efficiency case is repeated read traffic. Tier0, local cache, disk cache, peer cache, and long-range window reuse can remove repeated VictoriaLogs work on hot routes instead of making every dashboard refresh look like a fresh backend request.
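The tiered lookup described above can be sketched as checking each cache in order before falling through to the backend. The tier names follow the text; the dict-based lookup and fill-on-miss behavior are illustrative assumptions, not the proxy's actual code.

```python
# Sketch of a tiered read-path lookup, assuming each tier is a dict-like
# cache checked in order (Tier0, local, disk, peer, ...) before the backend.

def tiered_get(key, tiers, fetch_backend):
    for name, tier in tiers:
        if key in tier:
            return tier[key], name       # hot route: no backend work at all
    value = fetch_backend(key)           # cold path: exactly one backend query
    for _, tier in tiers:
        tier[key] = value                # populate tiers for repeated reads
    return value, "backend"
```

On a dashboard that refreshes the same query window, only the first request reaches VictoriaLogs; every subsequent refresh is absorbed by the first tier that holds the result.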
## Where native Loki stays simpler
If you do not need VictoriaLogs in the backend and do not want a translation layer, native Loki is operationally simpler because it removes an entire compatibility component from the path.
## What the official docs actually say
- Loki docs: only labels index streams; the content of each log line is not indexed.
- Loki docs: high-cardinality labels reduce performance and cost-effectiveness.
- VictoriaLogs docs: all fields are indexed, full-text search runs across all fields, and high-cardinality values work as ordinary fields.
- VictoriaLogs docs: the backend is a single zero-config executable, with published figures of up to `30x` less RAM and up to `15x` less disk usage than Loki or Elasticsearch.
## What published measurements actually show
- TrueFoundry reported `≈40%` less storage and materially lower CPU and RAM than Loki on its `500 GB / 7 day` test.
- Project benchmarks show `query_range` warm hits at `0.64-0.67 us`, versus `4.58 ms` for cold delayed-path requests.
- Project benchmarks show peer-cache warm shadow-copy hits at `52 ns` after the first owner fetch.
- Project benchmarks show long-range prefiltering cut backend query calls by about `81.6%` on the published benchmark shape.
## Current boundaries that still matter
- The proxy path is still read-focused; standard Loki push is blocked.
- Tail remains single-tenant and browser tailing still needs origin allowlisting.
- `X-Scope-OrgID: *` is proxy-specific convenience, not native Loki all-tenants semantics.
- Some field and Drilldown browse surfaces still use approximate merged cardinality in multi-tenant views.
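The tenant-header boundary above can be made concrete with a minimal request builder. `/loki/api/v1/query_range` is the standard Loki read endpoint that the proxy preserves; the function and its parameters are hypothetical illustrations.

```python
from urllib.parse import urlencode

# Sketch of building a Loki-compatible read request against the proxy.
# "X-Scope-OrgID" is the standard Loki multi-tenancy header; the "*" value
# is a convenience of this proxy, not native Loki all-tenants semantics.

def query_range_request(base_url: str, tenant: str, logql: str,
                        start_ns: int, end_ns: int):
    url = f"{base_url}/loki/api/v1/query_range?" + urlencode({
        "query": logql,
        "start": start_ns,
        "end": end_ns,
    })
    return url, {"X-Scope-OrgID": tenant}

# Single-tenant read works the same against native Loki or the proxy;
# tenant="*" is meaningful only behind the proxy.
url, headers = query_range_request("http://proxy:3100", "*",
                                   '{job="app"}', 0, 1_000_000_000)
```

A request built this way is indistinguishable from a native Loki read until the `*` tenant value is used, which is exactly why it is listed as a boundary rather than a compatibility guarantee.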
## Why those boundaries exist
The project is explicit about where compatibility lives. Where VictoriaLogs has a clean native path, the proxy prefers it. Where Grafana or Loki-facing contracts need shaping, the proxy keeps that work visible instead of pretending the backend is identical.