Status January 2021
Lots of things happened this month, the biggest being that I started to blog a bit more, thanks to the #100DaysToOffload challenge. Since I started, it has become easier to write my ideas down. I spend less time reviewing (for better or worse) and reevaluating whether a post is worth publishing. At first I aimed for one post per day; I had enough ideas to put in writing, but I was pressuring myself to release something every day.
Self-Hosting and Risk Assessment
Someone reached out to me for advice on the best password manager out there. From their perspective, the two competing options were Bitwarden and KeeWeb, and either one would be self-hosted. This sparked an interesting discussion that I wanted to turn into a blog post.
A password manager is very important: it holds the secrets to your accounts, and maybe some other information as well.
The case against Signal
Session sharing with tmux
One thing I’ve been using forever, but even more since I started working from home, is tmux session sharing.
tmux -S /tmp/shared
The -S flag tells tmux to use the given path as its socket.
To share it with another user, you’ll need to change the permissions on that socket. The easy solution is just to open it up to everyone:
chmod 777 /tmp/shared
Though it would be wiser to selectively allow another user to attach to your session.
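Putting the pieces together, a minimal sharing flow could look like this (the session name `pairing` and the user `alice` are placeholders of mine):

```shell
# Host: start a session on a dedicated socket path
tmux -S /tmp/shared new -s pairing

# Host, from another terminal: open the socket to everyone...
chmod 777 /tmp/shared
# ...or grant a single user instead, using a POSIX ACL
setfacl -m u:alice:rw /tmp/shared

# Guest: attach to the shared session
tmux -S /tmp/shared attach -t pairing
```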
Full Disk Encryption with Proxmox and ZFS
I’ve recently bought a MicroServer Gen10+ from HP to use as a lab and as a NAS. I wanted to install Proxmox on it and use the opportunity to discover ZFS.
In the server, I’ve got:
- 4x 4TB HDD for data storage
- 2x 256GB NVMe for the root system and the ZFS cache/log volumes

I used the two NVMes in a standard software RAID 1 and created a RAIDZ2 on the 4 HDDs.
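For reference, a rough sketch of the pool creation — the pool name `tank`, the device names, and the NVMe partition numbers are all placeholders, not the actual layout from my setup:

```shell
# Create a RAIDZ2 pool over the four 4TB drives
# (tolerates the loss of any two disks)
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Add log (SLOG) and cache (L2ARC) devices on spare NVMe partitions
zpool add tank log mirror /dev/nvme0n1p3 /dev/nvme1n1p3
zpool add tank cache /dev/nvme0n1p4

# Check pool health and layout
zpool status tank
```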
Getting some stats out of this blog
Since I started the #100DaysToOffload challenge, I’ve been looking at my logs using goaccess in my terminal. It works well, though I wanted to look at alternatives. One obvious option is Matomo, but it felt too complicated for my usage and maybe too intrusive for my taste.
I started looking into alternatives and stumbled upon Plausible, an open-source analytics suite that advertises itself as GDPR-compliant and privacy-friendly. The metrics it tracks are simple and enough for my use cases, so I decided to give it a go.
As you progress through life, you make decisions. These decisions have a context that explains why you made them. Time passes, that context starts to vanish, and your memory has a hard time reconstructing it the way it was.
This is a common problem within teams: you make a decision and, 12 months later, your team is reevaluating that decision after having lost the context that led to it.
Prometheus metrics on Caddy
Continuing with my Caddy experimentation, I wanted to get some metrics out of it. Caddy provides metrics out of the box on its admin API:
https://localhost:2019/metrics
This works well if you have Prometheus running on the same server; in my case, though, Prometheus runs on a different VM, on a private network.
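One way around this is to bind Caddy's admin endpoint to the private network instead of loopback, using the `admin` global option in the Caddyfile (`10.0.0.2` is a placeholder for the VM's private address):

```caddyfile
{
	admin 10.0.0.2:2019
}
```

Prometheus can then scrape the metrics endpoint on that private address. Keep in mind the admin API does more than expose metrics, so it should only ever be reachable from a trusted network.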
Caddy enforces HTTPS on every vhost by default. For public domains it will try to get a Let’s Encrypt certificate, while for IP addresses and localhost it will use an internal CA to sign the certificates it serves.
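That behavior can also be forced per vhost: the `tls internal` directive makes Caddy use its internal CA for a name it would otherwise request a public certificate for. A sketch, with a placeholder hostname and upstream:

```caddyfile
stats.home.arpa {
	tls internal
	reverse_proxy localhost:8000
}
```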
Caddy as a reverse proxy to Dendrite
I use Dendrite as I experiment with Matrix. It consumes far fewer resources than Synapse, though it is in beta and doesn’t support the full specification. The trade-off suits me.
I had wanted to try Caddy for a long time and I finally did, switching this blog and a few other services over to it. The last one to be migrated was Dendrite. The repository gives an example configuration for nginx but not for Caddy.
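As a starting point, a minimal Caddyfile equivalent of such an nginx setup might look like this, assuming a monolith Dendrite listening on its default port 8008 (the domain is a placeholder):

```caddyfile
matrix.example.com {
	reverse_proxy /_matrix/* localhost:8008
}
```

Caddy obtains and renews the certificate for the domain automatically, so no separate certbot setup is needed.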
Kafka crashed __consumer_offsets compaction
Disclaimer: this post is about an ancient (0.10.0.1) version of Kafka and the bugs described have since been fixed.
I’ve got a (legacy) Kafka cluster that is still being used, pending an upgrade that is taking a bit more time than expected. This cluster still serves many workloads, we have plenty of experience with it, and it runs flawlessly. Or so we thought.
At some point, we lost a broker.