How to make a selfhostable service that plays nice


This is some random idiot's guide on how to make a selfhostable service that plays nice with most people's setups.

Most people being a sample size of just one.

This is really more me ranting about bad "selfhostable" things that do not work with my setups than actual battle-tested advice, but follow it anyway, or you'll make me slightly annoyed before I decide I'm not using your thing.

If you are making a service where you are the only person reasonably expected to host it, selfish prick (/s), you can skip this, or read it for entertainment.

Do not be too heavy on resources

If your service cannot reasonably be hosted on half the resources of a modern $5/mo cloud server, especially when alternatives prove the same use case can be served with those resources, your service does not play nice.

Especially looking at you, GitLab and Mastodon.

(I would've also included Nextcloud if this were written a few years ago, but they seem to have gotten their shit together and work reasonably well, if still a bit slow, on my $5/mo server)

Do not force deployment methods

Docker, Podman, Ansible, Kubernetes: they're all good tech (well, at least three of them are). However, please, for the love of the deity of your choice, do not force me to use one.

Your choice will most likely be incompatible with the setup I have been running for however many months (before I get tired and reformat my server).

This also means do not force installer scripts. Hear that, Pi-hole?

Oh, and I almost forgot curl | bash was a thing. If you take only one single piece of advice from this, it should be: do not do that.
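If you absolutely must ship a script, at least let people download and inspect it before anything runs. A runnable sketch of that flow, with a local file standing in for the real download (every name and URL here is a placeholder):

```shell
# In place of: curl -fsSL https://example.com/install.sh | bash

# 1. Download to a file instead of piping straight into a shell.
#    (simulated with a local file so this example actually runs)
printf '#!/bin/sh\necho "pretend install"\n' > install.sh

# 2. Publish a checksum next to the script so it cannot be silently swapped out.
sha256sum install.sh > install.sh.sha256
sha256sum -c install.sh.sha256 || exit 1

# 3. Read it (less install.sh). Then, and only then, run it.
sh install.sh
```

Piping a download into bash gives the reader no chance to do steps 2 and 3, and a truncated download can even execute a half-written command.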

Now, I will not deny that these options can be good to have for a lot of people (except curl | bash). But they should be just that: options. If you force people to use one specific method, you will most likely miss out on a larger user base, because you will be locking out a certain number of people.

If you don't have the manpower to port to every single deployment platform out there (understandable), offer people a manual installation option, and users of other deployment platforms will make it work, assuming you're friendly with those communities and your codebase is not complete spaghetti.
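A "manual installation option" can boil down to very little: a binary or tarball, a config file, and documented defaults, with nothing that assumes a particular deployment tool. A tiny runnable sketch, using a scratch directory and a fake binary (all names here are hypothetical):

```shell
# Simulated manual install into a scratch prefix instead of a real /usr/local.
prefix="$(mktemp -d)"

# Stand-in for a release binary you would normally download and verify.
printf '#!/bin/sh\necho "myservice listening on ${LISTEN_ADDR:-127.0.0.1:8080}"\n' > myservice.bin
install -m 755 myservice.bin "$prefix/myservice"

# The admin wires it into their own setup (systemd, runit, a container, ...);
# the service itself only needs its configuration, here a single env variable.
LISTEN_ADDR=127.0.0.1:9090 "$prefix/myservice"
```

Because the "binary" takes its settings from the environment and has a sane default, it slots into whatever init system or container runtime the server already uses.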

Do not force dependencies

Reverse proxies, Linux distributions, even Linux the kernel itself. Do not force people to use specific software, as you will never know what the server your service gets installed on is running.

A big offender here is, again, Pi-hole: it forces lighttpd among various other things I can't recall, and only supports five Linux distributions, three of them Debian derivatives and two Red Hat derivatives.

This is again something not every developer will have the manpower for, but as with the previous point: give manual installation steps, accept that your code will probably need changes, and be friendly to those communities. They'll make it work.

(Database servers might be considered dependencies too, but with everyone and their pet ants using Postgres today I am not exactly sure if this matters as much)

Let better software take care of things

Your service should not do everything. Let better software take care of Let's Encrypt, let better software take care of HTTPS termination, let better software take care of being bound on ports 80 and 443.

Your service should do what it needs to, and nothing more. Things like CORS or CSP setup can and probably should be done by your service instead of the reverse proxy, but anything that would need to be done for other services too (HTTPS, binding port 80, etc.) is best left to whatever software the server owner prefers.

If the server owner runs multiple services, they will frequently already have these set up, so why bother reimplementing them yourself?
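Concretely, a service that binds plainly to a local port can leave all of TLS, certificates, and port 443 to whatever reverse proxy is already there. A sketch of what that looks like on the nginx side (the hostname, port, and paths are made up for illustration):

```nginx
# Hypothetical vhost for a service listening on plain HTTP at 127.0.0.1:8080.
server {
    listen 443 ssl;
    server_name myservice.example.com;

    # Certificates obtained by certbot/acme.sh/etc., not by the service itself.
    ssl_certificate     /etc/letsencrypt/live/myservice.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myservice.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

All your service needs to do to cooperate with this is listen on a configurable local address and trust the X-Forwarded-* headers when told to.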

Do not bundle other software

I especially see this in container setups, where one container bundles everything next to the service: a reverse proxy, a database, Redis, all of which could be installed once and reused by a lot of services.

If you want to have a one-click container setup, consider making a docker-compose file with all your dependencies and configuration instead. That is a bit easier to "un-tangle" and adapt.
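As a sketch, that compose route could look something like this (the image names and credentials are placeholders), with the database as its own service that an admin can point elsewhere or strip out entirely:

```yaml
# docker-compose.yml -- hypothetical; to reuse an existing Postgres instance,
# change DATABASE_URL and delete the db service below.
services:
  app:
    image: example/myservice:latest
    environment:
      DATABASE_URL: postgres://myservice:secret@db/myservice
    ports:
      - "127.0.0.1:8080:8080"   # plain HTTP; TLS stays with the reverse proxy
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: myservice
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myservice
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

Each dependency being its own top-level service is exactly what makes this "un-tangleable": deleting a block and changing one URL replaces a bundled component with an existing one.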

An example of this is Wallabag's official Docker image, which contains Nginx, PHP, and RabbitMQ, but also Ansible, which runs on every start of the container and takes ages to get it up and running.

Do not assume you will be the only thing on a server

This is more of a catch-all piece of advice that covers just about everything said above.

Do not be too heavy on resources, because there will be multiple services all competing for them.

Do not force deployment methods, because multiple services will be deployed on this server.

Do not force dependencies, because multiple services will need different dependencies, which, taken too far, violates "Do not be too heavy on resources".

Let better software take care of things, because most things will be needed by multiple services; yours is not special.

Do not bundle other software, because that "other software" is likely to be used by multiple services.

Consider scaling down, instead of scaling up

The vast majority of your service's users will use it on a personal, perhaps family, scale. Overcomplicating your installation process just so you can scale up to trillions of users is, except in special instances, useless.

Services that both federate and "scale up" are especially weird to me, because federated services can inherently scale just by hosting more instances and connecting them together. Instead of taking advantage of this property, some federated services (Mastodon comes to mind) will try to scale one instance up instead. Why?

In my personal experience, services that do not bother with "scaling" are easier to understand and easier to install. Is making these worse a good tradeoff for you?

(Note, this does not mean you shouldn't make your software efficient. Some software manages scaling by making itself even more efficient, and those efforts actually make sense for the majority of use cases)

Buying more hardware is a bullshit non-solution to any of these problems

A lot of these problems come down to using the underlying server as efficiently as possible, and there will always be people offering the ""solution"" of "buy more hardware resources", because it is "cheap".

The problem with this is that it's a very "rich" and/or "developed-country" centric take on the problem. "Of course I can buy hardware for cheap, so this means you should be able to, too."

This solution completely ignores people who cannot buy more hardware for use in a server, for any number of reasons.

Or in my case, all of them.

This "might not be a big deal" for big or even medium-sized organizations, but for individuals it is frequently not a solution, only an excuse to keep software inefficient.

Especially for software that claims to be decentralized in any way, this should never be an excuse. This is how decentralization fails, with people falling back to centralized hosts like matrix.org and mastodon.social+mastodon.online.

(Buying more hardware, in this specific context, also includes upgrading cloud "VPS" servers and similar; this is not limited to physical hardware)