Migrating out of InfluxDB
As part of the DBOD service offering, we only support InfluxDB version 1.8.3. This version was released in May 2020 and became available in DBOD around November 2020. InfluxData officially ended upstream maintenance and support for it on January 1, 2022.
This version of InfluxDB has always worked well for typical DevOps workloads. However, for more intensive data streams with high cardinality it has never scaled well: memory usage grows with the number of unique series.
In the meantime, two alternatives for storing time-series data have been made available as part of the IT services offering:
- Grafana Mimir, provided through the CERN IT Monitoring (MONIT) service
- TimescaleDB, provided as a PostgreSQL extension in the DBOD service
Considering this, existing InfluxDB users, or new users looking for a solution to store time-series data, may be interested in these alternatives.
This document is intended as a guideline for users interested in migrating out of InfluxDB in favour of any of these other services.
Suggested path toward sustainable services
Since both Grafana Mimir and TimescaleDB can ingest and store time series, the choice between them comes down to use case, data ownership, and integration needs.
We recommend using Grafana Mimir if your use case is monitoring oriented (metrics/monitoring of IT systems or experiments):
- Your database is primarily used to collect metrics about services, infrastructure, or applications (like host stats, service uptime, latency, errors).
- You want direct integration with Grafana for visualization and alerting.
- Your data model aligns well with Prometheus metrics (labels, metric names, no complex relational queries).
- You don’t want to manage your own DB but instead leverage a central, scalable monitoring platform.
- You’re OK with defined retention policies (e.g., 40 days by default, longer possible if agreed).
- Queries are mostly aggregate functions, filters, and PromQL.
In other words: for monitoring data, use the CERN IT Monitoring Service.
We recommend using TimescaleDB if your use case is more analytical (time series data that is part of a scientific workflow, analysis, or application schema):
- Your time series is part of a broader application schema, involving relational tables (users, jobs, metadata).
- You need SQL features: joins, window functions, subqueries, advanced aggregations that PromQL can’t handle easily.
- You need to define your own retention/compression policies beyond what the Monitoring service provides.
- You need full control of the schema and database.
- Your application already uses PostgreSQL, which TimescaleDB extends natively.
In other words: for scientific or operational data with SQL needs, use TimescaleDB.
Common ingestion methods in InfluxDB instances
In general, InfluxDB data in DBOD gets written mainly through these paths:
- Telegraf (the most common “agent” approach)
  - Telegraf collects from hundreds of inputs (system, apps, MQTT, SNMP, etc.) and writes to InfluxDB via the outputs.influxdb plugin. It supports HTTP (the default) and UDP for v1.
- HTTP API + Line Protocol (direct writes from apps/scripts)
  - Anything that can POST text can write points to the /write endpoint in Line Protocol; people often use curl or application code (see the sketch after this list).
- Language client libraries
  - For example, the influxdb-client library in Python; similar v1 clients exist for Go, Java, JS, etc.
- Protocol compatibility listeners (Graphite, collectd, OpenTSDB, UDP)
  - InfluxDB can ingest the Graphite, collectd, OpenTSDB and UDP protocols directly. Prometheus endpoints are also supported.
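For illustration, here is a minimal sketch in Python of a direct v1 write over the HTTP API. The host, port, database and credentials are placeholders for your own instance details:

import time
import requests

# One point in InfluxDB Line Protocol: measurement,tags fields timestamp
line = f"cpu,host=myhost usage_idle=97.3 {time.time_ns()}"

resp = requests.post(
    "https://dbod-myinflux.cern.ch:8080/write",   # placeholder instance
    params={"db": "mydb", "precision": "ns"},
    data=line,
    auth=("myuser", "mypassword"),                # placeholder credentials
    timeout=10,
)
resp.raise_for_status()  # InfluxDB replies 204 No Content on success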
Migrating to the IT Monitoring Service
Ideally, if you only need to keep data for a few weeks or months, you can write in parallel to InfluxDB and the MONIT infrastructure for one retention period and then simply switch from one service to the other.
If you would like to start writing to the Monitoring Service, please consult the official CERN IT Monitoring Guide.
If your host is Puppet-managed, you can easily send your metrics via the Monit Agent, as explained at: https://monit.docs.cern.ch/puppet_monit_agent/description/
If your application/host generating the data points is in Kubernetes, the CERN IT Monitoring team provides a Helm chart that includes all the necessary components to collect both metrics and logs. More information at: https://monit.docs.cern.ch/kubernetes_helm_chart/description/
For custom producers (non-standard apps), CERN Monitoring also supports direct HTTP/OTLP ingestion. Apps that cannot expose Prometheus metrics can post OTLP directly to monit-otlp.cern.ch, or send metrics in JSON format to monit-metrics.cern.ch. For more information, read: https://monit.docs.cern.ch/other/metrics/
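As a rough sketch of the JSON path, the snippet below POSTs a single metric document from Python. The document schema, port, path and authentication shown here are placeholders; the authoritative format is on the MONIT page linked above:

import json
import time
import requests

# All field names and connection details are placeholders; check the
# MONIT documentation for the exact schema required by your tenant.
doc = {
    "producer": "mytenant",                 # your MONIT tenant
    "type": "mymetrics",                    # metric category
    "timestamp": int(time.time() * 1000),   # epoch milliseconds
    "value": 42.0,
}

resp = requests.post(
    "https://monit-metrics.cern.ch:10014/mymetrics",  # placeholder port/path
    data=json.dumps([doc]),                 # documents sent as a JSON list
    headers={"Content-Type": "application/json"},
    auth=("mytenant", "password"),          # placeholder credentials
    timeout=10,
)
resp.raise_for_status()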
If your host is not Puppet-managed and you were previously sending data via Telegraf, as a general rule you can just replace or complement [[outputs.influxdb]] with an [[outputs.opentelemetry]] output and the corresponding service_address, as mentioned in the previous link. For example:
[[outputs.opentelemetry]]
  # MONIT OTLP endpoint (gRPC, port 4316)
  service_address = "monit-otlp.cern.ch:4316"

  # Optional but recommended: compress payloads
  compression = "gzip"

  # If your Telegraf host has the CERN CA trust chain, leave the TLS defaults.
  # Otherwise, point tls_ca at a PEM bundle:
  # tls_ca = "/etc/ssl/certs/ca-certificates.crt"

  # Auth header: your MONIT tenant as the user and its password.
  # Best practice: inject a pre-built value via an environment variable so
  # credentials are not stored in the file.
  [outputs.opentelemetry.headers]
    Authorization = "${OTEL_AUTH}"  # e.g. "Basic BASE64(tenant:password)"

  # Optional: resource attributes (labels) that will appear in Grafana/Mimir
  [outputs.opentelemetry.attributes]
    "service.name" = "my-host-or-app"
    "deployment.environment" = "prod"
    "host.name" = "${HOSTNAME}"
and to set the Authorization header once:
# Build a Basic auth header: base64("TENANT:PASSWORD")
export OTEL_AUTH="Basic $(printf '%s:%s' 'TENANT' 'PASSWORD' | base64 -w0)"
On the other hand, if your host is not Puppet-managed and you were previously sending data via the HTTP API + Line Protocol, or via language client libraries (Python, Go, etc. writing Line Protocol), you can keep writing in InfluxDB Line Protocol but use Telegraf as a proxy. To configure one, define an [[inputs.influxdb_listener]] input (which understands the InfluxDB 1.x write API) and forward the metrics to MONIT via [[outputs.opentelemetry]], as explained above. For example:
# Telegraf listens locally and pretends to be an InfluxDB 1.x /write endpoint.
# Point your existing senders at: http://<this-host>:8186/write?db=<yourdb>&precision=ns
[[inputs.influxdb_listener]]
  # Listen on HTTP (switch to :443 plus the TLS options below for HTTPS)
  service_address = ":8186"

  # Timeouts (tune for your traffic)
  read_timeout = "10s"
  write_timeout = "10s"

  # Size limits: 0 means Telegraf's built-in defaults; set explicit caps
  # if you expect large batches
  max_body_size = 0
  max_line_size = 0

  # Capture the ?db=<name> from the URL as a metric tag.
  # (Telegraf ignores the 'db' query parameter for routing, so tagging preserves it.)
  database_tag = "influxdb_db"

  # Use the faster Line Protocol parser if you ingest big bursts (Telegraf ≥1.22)
  parser_type = "upstream"

  # --- Optional: serve HTTPS and/or require client certificates (mTLS) ---
  # tls_cert = "/etc/telegraf/certs/telegraf.crt"
  # tls_key = "/etc/telegraf/certs/telegraf.key"
  # tls_allowed_cacerts = ["/etc/telegraf/certs/clients-ca.pem"]

[[outputs.opentelemetry]]
  service_address = "monit-otlp.cern.ch:4316"  # gRPC
  compression = "gzip"

  [outputs.opentelemetry.headers]
    Authorization = "${OTEL_AUTH}"  # Basic <base64(tenant:password)>
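With that proxy in place, existing senders only need to point at the local listener instead of the DBOD instance; nothing else in their write path changes. A minimal Python sketch (host and database are placeholders):

import time
import requests

line = f"cpu,host=myhost usage_idle=97.3 {time.time_ns()}"

resp = requests.post(
    "http://localhost:8186/write",                # the Telegraf listener above
    params={"db": "mydb", "precision": "ns"},     # 'db' kept as the influxdb_db tag
    data=line,
    timeout=10,
)
resp.raise_for_status()  # the listener replies 204, like a real InfluxDB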
Finally, if your host is not Puppet-managed and you were previously sending data via the protocol compatibility listeners (Graphite, collectd, OpenTSDB, UDP), these are not supported natively by MONIT. You can, however, front them with Telegraf listener inputs (for example, [[inputs.socket_listener]] accepts TCP/UDP traffic and can parse the Graphite and collectd formats via its data_format option) and then output via OpenTelemetry, as explained above. Alternatively, you could migrate to Prometheus exporters (e.g., Node Exporter).
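For instance, a legacy Graphite-style sender can keep emitting plaintext lines unchanged. The sketch below assumes a Telegraf [[inputs.socket_listener]] on tcp://localhost:2003 with data_format = "graphite" (both placeholders):

import socket
import time

# Graphite plaintext protocol: "<metric.path> <value> <unix_timestamp>\n"
line = f"myapp.queue.depth 17 {int(time.time())}\n"

with socket.create_connection(("localhost", 2003), timeout=5) as sock:
    sock.sendall(line.encode("ascii"))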
Once your metrics are in MONIT, you can:
- build dashboards and alerts directly in monit-grafana.cern.ch
- use PromQL instead of InfluxQL for querying
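Any Prometheus-compatible client can then read the data back with PromQL over the standard HTTP API. In the sketch below, both the base URL and the metric name are placeholders; use the datasource/endpoint the MONIT team assigns to your tenant:

import requests

resp = requests.get(
    "https://example-mimir-endpoint.cern.ch/prometheus/api/v1/query",  # placeholder
    params={"query": 'avg_over_time(cpu_usage_idle{host="myhost"}[5m])'},
    timeout=10,
)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    print(result["metric"], result["value"])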
Migrating to TimescaleDB
As before, if you don’t need to migrate data from InfluxDB, you can simply request a PostgreSQL database in the DBOD service and mention in the description that you need the TimescaleDB extension enabled. Once you receive the connection details via a SNOW ticket, you’re ready to go. For more information, refer to the Getting started with TimescaleDB document of the DBOD User Guide.
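Once connected, turning a regular table into a hypertable is a single function call. A minimal sketch, assuming the extension is enabled and using placeholder connection details from your SNOW ticket:

import psycopg2

conn = psycopg2.connect(
    host="dbod-mydb.cern.ch", port=6600,  # placeholder host/port
    dbname="mydb", user="admin", password="secret",
)
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS cpu (
            time        TIMESTAMPTZ NOT NULL,
            host        TEXT        NOT NULL,
            usage_idle  DOUBLE PRECISION
        );
    """)
    # create_hypertable() partitions the table by the time column
    cur.execute("SELECT create_hypertable('cpu', 'time', if_not_exists => TRUE);")
conn.close()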
If you do need to migrate data from InfluxDB to TimescaleDB, the influx_to_timescale_migration tool we developed can help you dump the contents of InfluxDB measurements into CSV files and insert them into TimescaleDB tables.
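If you prefer to script it yourself, the same dump-and-load idea looks roughly like the sketch below; hosts, credentials and the measurement/table names are placeholders, and the tool above automates these steps for you:

import csv
import io

import psycopg2
from influxdb import InfluxDBClient  # v1 client: pip install influxdb

influx = InfluxDBClient(host="dbod-myinflux.cern.ch", port=8080,
                        username="myuser", password="mypassword",
                        database="mydb", ssl=True)

# Dump the measurement to an in-memory CSV (a file works just as well)
buf = io.StringIO()
writer = csv.writer(buf)
for point in influx.query('SELECT host, usage_idle FROM "cpu"').get_points():
    writer.writerow([point["time"], point["host"], point["usage_idle"]])
buf.seek(0)

conn = psycopg2.connect(host="dbod-mydb.cern.ch", port=6600,
                        dbname="mydb", user="admin", password="secret")
with conn, conn.cursor() as cur:
    # COPY streams the CSV straight into the hypertable created earlier
    cur.copy_expert('COPY cpu ("time", host, usage_idle) FROM STDIN WITH CSV', buf)
conn.close()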
For more information, please open a SNOW ticket with the DBOD team — we’ll be happy to help you.