After 1.5.0 earlier in the year, Prometheus 1.6.1 is now out. There’s a plethora of changes, so let’s dive in.
The biggest change is to how memory is managed. The -storage.local.memory-chunks and -storage.local.max-chunks-to-persist flags have been replaced by -storage.local.target-heap-size. Prometheus will attempt to keep the heap at the given size in bytes. For various technical reasons, actual memory usage will be higher, so leave a buffer on top of this. Setting this flag to 2/3 of how much RAM you’d like to use should be safe.
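As a quick sizing sketch of the 2/3 rule (the 6GiB budget here is purely an illustrative assumption, not a recommendation):

```shell
# Suppose we want Prometheus to stay within roughly 6GiB of RAM.
# Per the 2/3 guidance above, derive the heap target in bytes.
ram_bytes=$((6 * 1024 * 1024 * 1024))
target_heap=$((ram_bytes * 2 / 3))
echo "$target_heap"   # 4294967296
# The derived value would then be passed on startup, e.g.:
# ./prometheus -storage.local.target-heap-size=$target_heap ...
```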
The GOGC environment variable has been defaulted to 40, rather than its default of 100. This will reduce memory usage, at the cost of some additional CPU.
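If CPU matters more than memory in a given deployment, the old behaviour can be restored by setting GOGC in the environment before starting Prometheus (a sketch; the invocation below is illustrative):

```shell
# Prometheus 1.6 defaults GOGC to 40 internally; exporting GOGC=100
# restores Go's usual garbage-collection target, trading memory for CPU.
export GOGC=100
echo "$GOGC"   # 100
# ./prometheus -config.file=/etc/prometheus.yaml
```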
A feature of major note is that experimental remote read support has been added, allowing the read back of data from long term storage and other systems. The previous built-in experimental support for writing to Graphite/OpenTSDB/InfluxDB has been removed in favour of the experimental remote write interface. These are now available via the example remote storage adapter, which can also read from InfluxDB via remote read.
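As a sketch of how the two interfaces are wired up, a configuration fragment along these lines points both experimental endpoints at a remote storage adapter. The adapter address and URL paths are assumptions for illustration, and the exact stanza shape may differ between 1.x releases:

```shell
# Write a hypothetical prometheus.yml fragment enabling remote write
# and remote read against an adapter assumed to listen on :9201.
cat > /tmp/remote-storage.yml <<'EOF'
remote_write:
  - url: "http://localhost:9201/write"
remote_read:
  - url: "http://localhost:9201/read"
EOF
grep -c 'url:' /tmp/remote-storage.yml   # 2
```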
In terms of general features and improvements, there are a few highlights:
- Joyent Triton discovery has been added.
- Promtool has a linter for /metrics pages.
- There are new storage-, alerting-, and evaluation-related metrics.
- Checkpoint and timeseries maintenance impact has been reduced.
- There have been numerous UI improvements.
In terms of bug fixes, federation now exposes an empty instance label if one is not set, so if you are using honor_labels you’ll no longer pick up the instance label of the Prometheus itself.
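For context, a typical federation scrape job looks roughly like the fragment below (the hostnames are placeholders). With honor_labels: true, the labels exposed by the federated server take precedence over the scraper's own, which is why the now-explicit empty instance label matters:

```shell
# Hypothetical scrape config federating from another Prometheus;
# honor_labels keeps the source's labels rather than overwriting them.
cat > /tmp/federate.yml <<'EOF'
scrape_configs:
  - job_name: 'federate'
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="prometheus"}'
    static_configs:
      - targets: ['source-prometheus:9090']
EOF
grep -c 'honor_labels: true' /tmp/federate.yml   # 1
```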
For a full list of changes see the release notes.
*Above blog originally published by Brian Brazil on RobustPerception.io.
In July 2016 Prometheus reached a big milestone with its 1.0 release. Since then, plenty of new features like new service discovery integrations and our experimental remote APIs have been added. We also realized that new developments in the infrastructure space, in particular Kubernetes, allowed monitored environments to become significantly more dynamic. Unsurprisingly, this also brings new challenges to Prometheus and we identified performance bottlenecks in its storage layer.
Over the past few months we have been designing and implementing a new storage concept that addresses those bottlenecks and shows considerable performance improvements overall. It also paves the way to add features such as hot backups.
The changes are so fundamental that merging them will trigger a new major release: Prometheus 2.0.
Important features and changes beyond storage are planned before its stable release. However, today we are releasing an early alpha of Prometheus 2.0 to kick off the stabilization process of the new storage.
Release tarballs and Docker containers are now available. If you are interested in the new mechanics of the storage, make sure to read the deep-dive blog post looking under the hood.
This version does not work with old storage data and should not replace existing production deployments. To run it, the data directory must be empty and all existing storage flags except for -storage.local.retention have to be removed.
For example, before:
./prometheus -storage.local.retention=200h -storage.local.memory-chunks=1000000 -storage.local.max-chunks-to-persist=500000 -storage.local.chunk-encoding=2 -config.file=/etc/prometheus.yaml
and after:
./prometheus -storage.local.retention=200h -config.file=/etc/prometheus.yaml
This is a very early version; crashes, data corruption, and bugs in general should be expected. Help us move towards a stable release by reporting them on our issue tracker.
The experimental remote storage APIs are disabled in this alpha release. Scraping targets that expose timestamps, such as federated Prometheus servers, does not yet work. The storage format will see further breaking changes between subsequent alpha releases. We plan to document an upgrade path from 1.0 to 2.0 once we are approaching a stable release.
*Above blog originally published by Fabian Reinartz on Prometheus.io.
Prometheus has had significant community and project growth in the last 2 years and the core team is always looking toward the future. Here are three high-level roadmap goals for Prometheus:
- Implement and evaluate the read path of the generic remote storage interface. In combination with the already existing generic write path, this will allow anyone to build their own remote storage to use behind Prometheus, including querying data back via PromQL through Prometheus.
- Improve time series indexing such that Prometheus will be able to handle larger numbers of time series over a longer amount of time more efficiently.
- Aim to make Prometheus’s metrics exchange format an IETF standard. There is early work going on around this, but no clear outcome yet.
For technical and case study presentations about Prometheus, check out the Prometheus playlist on the CNCF YouTube channel.
Participate in technical sessions on the monitoring tool, hear case studies, and learn how Prometheus integrates with Kubernetes and other open source technologies by attending PromCon 2017, August 17-18 at Google Munich. Speaking submissions close May 31st. Submit here.