Jan Akerman

Grafana Jsonnet Workflow

May 02, 2021 - 4 min read

Working at a company that’s currently undergoing massive growth, I’ve recently taken an interest in the topic of developer productivity and the idea of providing engineers with golden paths to success. Inspired by suggestions from some of my colleagues, I started looking into Jsonnet + Grafonnet as a way of providing consistent and reusable Grafana dashboards.

An easy development workflow can be the difference between small issues being fixed on discovery and issues stacking up on a forever-growing team backlog. Scripting Grafana dashboards is one of the signs of dashboard maturity, but it comes at the cost of no longer being able to use Grafana’s rich UI to edit dashboards, plus the additional complexity of the scripting tools. Since a clunky development workflow can significantly impact the quality of a project over time, I was keen to see what could be done to improve this.

I happened to come across Grizzly - a tool that looks to solve some of the toil around managing Jsonnet dashboards. Below I describe a simple development workflow that uses Grizzly’s watch command to automatically render and apply dashboards to a local Grafana instance as they are edited. When you’re happy, a Make target renders the final JSON dashboards.

This post assumes the following repository structure, separating the Jsonnet files from the rendered JSON.

dashboards/
    jsonnet/
        ...
        main.jsonnet
    json/

A Docker Compose file sets up a local Grafana instance and runs Grizzly, first to build and upload the rendered dashboards, then to watch for file changes.

version: "2"
services:
  grafana:
    image: grafana/grafana:7.5.5
    ports:
    - 3000:3000
    user: "104"
    environment:
      - GF_AUTH_DISABLE_LOGIN_FORM=true
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_NAME=Main Org.
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_USERS_ALLOW_SIGN_UP=false
  grizzly:
    image: grafana/grizzly:0.1.0
    volumes:
    - ".:/src/"
    depends_on:
      - grafana
    environment:
      - GRAFANA_URL=http://grafana:3000
    working_dir: /src/dashboards/
    entrypoint: >
      ash -c "grr apply jsonnet/main.jsonnet &&
        grr watch jsonnet/ jsonnet/main.jsonnet"

Any edits to the Jsonnet files under dashboards/jsonnet/ will be visible after refreshing Grafana. When you’re happy, all you need to do is render and commit the JSON files. Wrapping this up in a Makefile keeps the workflow memorable and self-documenting.

watch:
	@docker compose up

render:
	@docker run -v "$(shell pwd):/src/" grafana/grizzly:0.1.0 export dashboards/jsonnet/main.jsonnet dashboards/json/
	@cp dashboards/json/Dashboard/* dashboards/json
	@rm -rf dashboards/json/Dashboard

The render target is using Grizzly to render the JSON and copy them into dashboards/json, cleaning up the intermediary files. The workflow is now (1) make watch (2) edit Jsonnet files (3) make render. A pre-commit hook that runs make render simplifies this workflow even further.
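As a sketch of that last step, a one-off installer script could drop the hook into place (the hook path is standard Git; staging the rendered output with `git add` is my assumption about the desired behaviour):

```shell
# Run once from the repository root to install a pre-commit hook that
# re-renders the dashboards and stages the output before every commit.
mkdir -p .git/hooks
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
set -e
make render
git add dashboards/json/
EOF
chmod +x .git/hooks/pre-commit
```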

Ideally, you’d be applying these rendered dashboard JSON files as part of your CI/CD automation. I’d recommend Terraform (check out the Grafana provider), but you could just use Grizzly (grr apply).
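For the Grizzly route, a CI deploy step might look something like the sketch below (the URL is a placeholder, and I’m assuming your CI system injects it as a secret):

```shell
# Hypothetical CI deploy step: point Grizzly at the production instance
# and apply the dashboards, mirroring the entrypoint in the Compose file.
export GRAFANA_URL="https://grafana.example.com"   # placeholder, from CI secrets
grr apply dashboards/jsonnet/main.jsonnet
```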

Keep it local

Whilst I’d strongly recommend not previewing edits on your production Grafana instance, it’s not necessary to use a local Grafana instance. The downside of using a local Grafana instance is that you don’t have real data when viewing your changes. Hopefully, you’re using an IaC (Infrastructure as Code) tool to configure your Grafana instances data sources, so you could simply point it at your local instance to solve that.

Failing that, you can provision data sources as YAML files and mount them into the Grafana container. If your setup is relatively simple, a bash script will probably do.

#!/bin/bash

GRAFANA_URL="http://localhost:3000"

if [ -z "$PROM_URL" ]; then echo '$PROM_URL not set' && exit 1; fi
if [ -z "$PROM_USER" ]; then echo '$PROM_USER not set' && exit 1; fi
if [ -z "$PROM_API_KEY" ]; then echo '$PROM_API_KEY not set' && exit 1; fi

# Create a Prometheus data source via the Grafana HTTP API.
curl -X POST "$GRAFANA_URL/api/datasources" -H 'Content-Type: application/json' --data "{
  \"name\":\"prometheus\",
  \"type\":\"prometheus\",
  \"access\":\"direct\",
  \"url\":\"$PROM_URL\",
  \"basicAuth\":true,
  \"basicAuthUser\":\"$PROM_USER\",
  \"secureJsonData\":{
    \"basicAuthPassword\":\"$PROM_API_KEY\"
  }
}"
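Alternatively, the provisioning-file route mentioned above might look like this (the file path, URL, and credentials are placeholders; mount the file under `/etc/grafana/provisioning/datasources/` in the Grafana container):

```yaml
# Hypothetical provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: prometheus
    type: prometheus
    access: proxy
    url: https://prometheus.example.com   # placeholder
    basicAuth: true
    basicAuthUser: my-user                # placeholder
    secureJsonData:
      basicAuthPassword: my-api-key       # placeholder
```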

Gotchas

Since Grizzly currently accepts only a single Jsonnet file as input, it’s best to define all of the dashboards in a single Jsonnet file. Multiple JSON files can be rendered by nesting the dashboards within a JSON object as below.

local grafana = import 'vendor/grafonnet/grafana.libsonnet';

{
	grafanaDashboards+:: {
		'my-service-dashboard.json': {
			uuid: 'service-dashboard',
			title: 'service',
			timezone: 'browser',
			schemaVersion: 16,
		},
		'my-database-dashboard.json': {
			uuid: 'database-dashboard',
			title: 'Database',
			timezone: 'browser',
			schemaVersion: 16,
		},
	},
}

Jsonnet supports importing other Jsonnet files, so it’s clean enough to maintain a single entry point that imports the individual dashboards.
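For instance, the example above could be split into one file per dashboard (the file names here are hypothetical):

```jsonnet
// main.jsonnet: single entry point, one import per dashboard
{
	grafanaDashboards+:: {
		'my-service-dashboard.json': import 'service-dashboard.jsonnet',
		'my-database-dashboard.json': import 'database-dashboard.jsonnet',
	},
}
```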

Note that not only is the uuid field important for Grizzly to work correctly, it’s also used as the name for your rendered JSON file.

Taking it further

Grizzly also seems to provide a handy command to generate Grafana snapshots. I think a nice improvement would be to generate snapshots of the modified dashboards and automatically add them to a PR, but that might be going overboard.

Now I’ve got a smooth workflow set up, my next task is to actually learn Jsonnet, instead of copy + pasting examples!


Jan Akerman

Engineer @ Form3. UK.

Github: @janakerman