Today, I wanted to try out ListenBrainz in conjunction with my self-hosted Navidrome music server. I didn’t want to spend several days collecting new data before getting interesting suggestions from it, so I had to import some historical data. However, I haven’t been using any of the services that ListenBrainz can import from, so I needed another way to get that data. Unfortunately, Navidrome doesn’t record every song-play event in its database, and if none of the scrobbling functions are enabled they aren’t sent anywhere else either.
Or are they?
It turns out Navidrome does log when it would have scrobbled a playback:
time="2024-09-20T20:55:31+02:00" level=info msg=Scrobbled artist="Rick Astley" requestId=sol/pnfNbNn14e-000682 timestamp="2024-09-20 20:55:31.014047855 +0200 CEST m=+12536.584492352" title="Never Gonna Give You Up" user=linus
and while the logs on the machine running Navidrome are rotated pretty quickly, they also end up in my Loki instance.
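Both halves of that are easy to confirm: the most recent scrobble lines are still in the local journal, and Loki knows about the unit label (logcli reads the server address from LOKI_ADDR):
$ journalctl -u navidrome.service | grep Scrobbled
$ logcli labels unit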
So I can ask Loki what I’ve been listening to:
$ logcli query '{unit="navidrome.service"} |= `Scrobbled` | json | line_format `{{.MESSAGE}}` | logfmt'
...
2024-09-20T20:31:06+02:00 {MESSAGE="time=\"2024-09-20T20:31:06+02:00\" level=info msg=Scrobbled artist="Rick Astley" requestId=sol/pnfNbNn14e-000581 timestamp=\"2024-09-20 20:31:06.183307102 +0200 CEST m=+11071.753751639\" title="Never Gonna Give You Up" user=linus", artist="Rick Astley", requestId="sol/pnfNbNn14e-000581", time="2024-09-20T20:31:06+02:00", timestamp="2024-09-20 20:31:06.183307102 +0200 CEST m=+11071.753751639", title=""Never Gonna Give You Up""} time="2024-09-20T20:31:06+02:00" level=info msg=Scrobbled artist="Rick Astley" requestId=sol/pnfNbNn14e-000581 timestamp="2024-09-20 20:31:06.183307102 +0200 CEST m=+11071.753751639" title="Never Gonna Give You Up" user=linus
...
and then to only give me the data I care about, as JSON:
$ logcli query '{unit="navidrome.service"} |= `Scrobbled` | json | line_format `{{.MESSAGE}}` | logfmt | line_format ""' --include-label artist --include-label title -o jsonl
...
{"labels":{"artist":"Rick Astley","title":"Never Gonna Give You Up"},"line":"","timestamp":"2024-09-20T21:10:42.321605+02:00"}
Now we have JSON which we can reshape into the format expected by ListenBrainz.
And my favourite tool for that is jq.
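For reference, the submit-listens endpoint expects an “import” payload: a list of listens, each with a Unix timestamp and some track metadata. A hand-rolled submission with illustrative values (and $token holding a ListenBrainz user token) looks roughly like this:
# listened_at is seconds since the Unix epoch
curl 'https://api.listenbrainz.org/1/submit-listens' \
  --json '{
    "listen_type": "import",
    "payload": [
      {
        "listened_at": 1726857066,
        "track_metadata": {
          "artist_name": "Rick Astley",
          "track_name": "Never Gonna Give You Up"
        }
      }
    ]
  }' \
  -H "Authorization: Bearer $token"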
The jq script to do that reshaping turned out a bit bigger than what I’d want to put on the command line:
{
listen_type: "import",
payload: map({
listened_at:
.timestamp
| scan("(....-..-..T..:..:..)") # cut off the microseconds and timezone
| first # scan gives back a list, and we're expecting one element
| strptime("%Y-%m-%dT%H:%M:%S")
| strftime("%s"),
track_metadata: {
track_name: .labels.title,
artist_name: .labels.artist
}
})
}
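One detail that matters here: logcli’s jsonl output is one JSON object per line, while the map call in the script expects its whole input to be an array, so jq needs --slurp (-s) to gather the lines up first. A quick sanity check against the line captured earlier:
# Feed the script the single captured line; the result should be an
# import payload containing exactly one listen.
jq -s -f listen.jq <<'EOF'
{"labels":{"artist":"Rick Astley","title":"Never Gonna Give You Up"},"line":"","timestamp":"2024-09-20T21:10:42.321605+02:00"}
EOF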
Now we just need to plug the pieces together…
logcli query \
  --since=10000h --limit=0 --include-label artist --include-label title -o jsonl \
  '{unit="navidrome.service"} |= `Scrobbled` | json | line_format `{{.MESSAGE}}` | logfmt | line_format ""' \
  | jq -s -f listen.jq \
  | curl 'https://api.listenbrainz.org/1/submit-listens' -X POST --json @- -H @<(echo "Authorization: Bearer $token")
I was quite thrilled when this worked on the first attempt. I had inspected jq’s output along the way, but I wasn’t expecting the first try at throwing it at the API to be an immediate success!
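If you want a second opinion beyond the HTTP response, the API can also report how many listens it has on record for an account ($username here being the ListenBrainz user the token belongs to):
$ curl -s "https://api.listenbrainz.org/1/user/$username/listen-count"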
Not sure what the moral of the story is. Maybe that saving logs is helpful and JSON is a convenient lingua franca for structured data.