

Use Grafana's native Loki datasource with VictoriaLogs

If the question is how to keep Grafana's built-in Loki datasource while moving the backend to VictoriaLogs, the answer is to point Grafana at Loki-VL-proxy. Grafana still thinks it is talking to Loki.

  • Datasource type stays loki: Grafana configuration stays on the built-in Loki datasource; only the URL changes.
  • No custom plugin: the proxy preserves the Grafana-side integration point, which means less operational churn.
  • Field translation controls: you can stay Loki-first, hybrid, or OTel-native on field surfaces; useful for dotted fields.
  • Route-aware telemetry: client, proxy, cache, and upstream paths stay visible; useful before and after cutover.

Minimal datasource shape

Grafana stays on the standard Loki datasource type. The proxy URL becomes the datasource target.

datasources:
  - name: Loki (via VL proxy)
    type: loki
    access: proxy
    url: http://loki-vl-proxy:3100

Why this matters

  • No UI-side datasource fork to maintain.
  • No extra Grafana plugin lifecycle to secure or upgrade.
  • Existing dashboards and Explore entry points keep the Loki contract.
  • Migration work moves into a controllable server-side layer.

1. Bring the proxy up first

Do not debug Grafana and backend reachability at the same time. Stand up the proxy, make sure `/ready` is healthy, and prove a simple labels call before you touch Grafana.

curl -sS http://127.0.0.1:3100/ready
curl -sS http://127.0.0.1:3100/loki/api/v1/labels
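The same two checks can be wrapped into a small gating script; a sketch, assuming the proxy serves the standard Loki HTTP API on 127.0.0.1:3100 (adjust BASE for your deployment):

```shell
BASE="${BASE:-http://127.0.0.1:3100}"

probe() {
  # --fail maps any non-2xx response to a non-zero exit code,
  # so an unhealthy endpoint is reported instead of ignored.
  if curl -sS --fail --max-time 5 "${BASE}$1" > /dev/null 2>&1; then
    echo "ok   $1"
  else
    echo "FAIL $1"
    return 1
  fi
}

if probe /ready && probe /loki/api/v1/labels; then
  echo "proxy looks healthy; safe to point Grafana at it"
fi
```

Only reconfigure the Grafana datasource once both probes report ok.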

2. Pick translation behavior deliberately

If VictoriaLogs stores dotted OTel fields, choose the label and metadata mode that matches how your users browse data in Grafana. This is where most migration surprises originate.
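One way to see the difference is to run the same selection through both field surfaces. A hypothetical example, assuming the proxy keeps underscore names as Loki-safe stream labels and accepts dotted OTel names in field filters; service_name / service.name and the checkout value are placeholder names, not anything defined by the proxy:

```shell
BASE="${BASE:-http://127.0.0.1:3100}"

# Underscore form as a Loki-safe stream label.
curl -sS --max-time 5 --get "${BASE}/loki/api/v1/query_range" \
  --data-urlencode 'query={service_name="checkout"}' \
  --data-urlencode 'limit=5'

# Dotted OTel form as a field filter on the same data
# (assumption: the proxy surfaces dotted names this way).
curl -sS --max-time 5 --get "${BASE}/loki/api/v1/query_range" \
  --data-urlencode 'query={service_name="checkout"} | service.name="checkout"' \
  --data-urlencode 'limit=5'
```

If the two return different result sets, the translation mode and the data do not agree, and that mismatch will show up in Grafana's query builder.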

3. Validate more than query_range

Users feel label browsing, detected fields, patterns, and service buckets just as strongly as they feel line queries. Treat those as first-class acceptance checks.
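A sketch of such an acceptance sweep over the metadata surfaces. The paths follow recent Loki API versions; several of them take query and time-range parameters in real use, service_name is a placeholder label, and the list should be trimmed to whatever your proxy actually implements:

```shell
BASE="${BASE:-http://127.0.0.1:3100}"

# Report an HTTP status code per metadata endpoint.
for path in \
  /loki/api/v1/labels \
  /loki/api/v1/label/service_name/values \
  /loki/api/v1/series \
  /loki/api/v1/detected_fields \
  /loki/api/v1/patterns
do
  code=$(curl -sS --max-time 5 -o /dev/null -w '%{http_code}' "${BASE}${path}")
  echo "${code}  ${path}"
done
```

Anything that is not a 2xx here will surface in Grafana as an empty label browser, missing detected fields, or a broken patterns view.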

4. Watch route-aware telemetry during cutover

Downstream latency, upstream latency, status codes, and cache hit ratio by route are the fastest way to see whether the new datasource path is actually safe for users.
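A rough way to eyeball those signals during cutover, assuming the proxy exports Prometheus-style metrics on a /metrics endpoint; the grep patterns below are placeholders, not the proxy's actual metric names:

```shell
METRICS="${METRICS:-http://127.0.0.1:3100/metrics}"

# Pull a quick slice of latency, cache, and upstream series.
curl -sS --max-time 5 "$METRICS" \
  | grep -E 'latency|cache_(hit|miss)|upstream' \
  | head -20
```

For an actual cutover, chart these by route in your metrics backend rather than grepping; the point is that the per-route split must be visible before Grafana traffic moves.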

What to validate when you point Grafana at the proxy

The risky parts are not the datasource form itself. They are field semantics, metadata lookups, and the latency or cache behavior of the translation layer under real dashboards and user queries.

Label and field semantics

Choose the translation profile that matches your Grafana builder and field-exploration needs. Loki-safe underscore labels can coexist with dotted field metadata for OTel-backed schemas.

Metadata endpoints

Labels, label values, detected fields, detected field values, and patterns are all part of the user experience. They need to be validated as first-class surfaces, not afterthoughts.

Latency split

Watch downstream client latency, upstream VictoriaLogs latency, and measured proxy overhead separately. The observability model is built for exactly that split.

Cache efficiency by route

High-cardinality label browsing and repeated query_range traffic are the places where cache hit or miss behavior changes user experience and backend cost fastest.
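A crude cache-warmth probe, under the assumption that the proxy caches metadata responses: issue the same labels request twice and compare wall time; a much faster second call suggests a cache hit.

```shell
BASE="${BASE:-http://127.0.0.1:3100}"

# %{time_total} is curl's measured wall time for the request.
for i in 1 2; do
  t=$(curl -sS --max-time 5 -o /dev/null -w '%{time_total}' \
        "${BASE}/loki/api/v1/labels")
  echo "attempt ${i}: ${t}s"
done
```

This only hints at cache behavior; the per-route hit-ratio metrics above are the authoritative signal.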