On Monday, June 26th, 2023 at 07:44:07 UTC, our esteemed host @self posted:

wanna see some code? c’mere

Federation came a few weeks later on 17 July. Our subs now have regulars from across Lemmy and Mastodon.

It’s been a wild party and we’re not stopping any time soon. Go wedgie a nerd today.

  • @self
    36 days ago

    the actual Lemmy service running awful.systems isn’t clustered yet – all of Lemmy, its Postgres database, and pict-rs run on a single Hetzner Cloud CPX31 with quite a bit of CPU and memory to spare. some services (mostly static hosting for a couple of things, plus the staging environment for upgrades and configuration changes) are offloaded to a CX21 that’s definitely overprovisioned for the usage it gets. the CPX31 hosting us is behind a LB11 load balancer for future expansion – if I need to stand up another instance of the Lemmy frontend, I can live reconfigure the LB11 to round robin onto that host without any downtime. there are downsides to using Hetzner’s LBs – they’re extremely inflexible and basically can’t be configured outside of a typical “round robin and terminate TLS” use case, though they’re very nicely automated (they’ll even manage Let’s Encrypt for you) for that use case.
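    for reference, the "stand up another frontend and live-reconfigure the LB" step looks roughly like this with the hcloud CLI — a sketch only, and the server and load balancer names here are made up:

    ```shell
    # create a second frontend node (names are hypothetical)
    hcloud server create --name lemmy-web-2 --type cpx31 --image debian-12

    # add it as a target behind the existing load balancer;
    # the LB starts round-robining to it without downtime
    hcloud load-balancer add-target lemmy-lb --server lemmy-web-2
    ```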

    as with all federated services, a Lemmy server that’s being used in any capacity will slowly fill its available disk space with posts and associated data. currently we’re still rolling with the storage included with the CPX31, but Postgres and especially the rather inefficient image cache are gradually filling that disk. part of the plan for the deployment is to either offload the image cache to object storage (which can be extremely cheap, but definitely do the math on egress charges) or, more likely (because it helps keep us portable between cloud vendors), I’ll expand the LVM for the node’s disk onto a Hetzner volume when we get to around 75% capacity.
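    the volume expansion path is the standard LVM dance, sketched below — device path and volume group/logical volume names are hypothetical and will differ per node, so check `lsblk` and `vgs` first:

    ```shell
    # attach a Hetzner volume to the node, then grow the LVM onto it
    pvcreate /dev/sdb                      # initialise the new volume as a physical volume
    vgextend vg0 /dev/sdb                  # add it to the existing volume group
    lvextend -l +100%FREE /dev/vg0/root    # grow the logical volume over the new space
    resize2fs /dev/vg0/root                # grow the ext4 filesystem online
    ```

    the nice part is this is all doable live, and because it's plain LVM rather than a vendor API it stays portable if we ever move off Hetzner.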

    if you’re looking at establishing an instance along these lines, make sure you look at rate limiting first if you run into any performance issues. before considering any upgrades, check your access logs to make sure you’re not seeing a spike due to malicious traffic. ActivityPub is a unique challenge to rate limit properly since some of your endpoints will always have a ton of repeated, automated traffic from other instances, but there are a few guides out there that have good defaults, and getting this right before you have a ton of users will save you time later.
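    as a sketch of the kind of defaults those guides land on, assuming nginx in front of Lemmy — the zone names, rates, and upstream name here are made up, tune them to your own traffic:

    ```nginx
    # separate buckets: federation traffic is bursty and automated,
    # so it shouldn't share a limit with human-facing API traffic
    limit_req_zone $binary_remote_addr zone=inbox:10m rate=30r/s;
    limit_req_zone $binary_remote_addr zone=api:10m   rate=5r/s;

    server {
        # ActivityPub inbox: high limit with a generous burst,
        # since other instances retry deliveries aggressively
        location /inbox {
            limit_req zone=inbox burst=60 nodelay;
            proxy_pass http://lemmy;
        }

        location /api/ {
            limit_req zone=api burst=20 nodelay;
            proxy_pass http://lemmy;
        }
    }
    ```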

    other than the above, we have an external email service and backups that I can provide more detail on if you need recommendations as you get closer to rolling out your deployment.

    obligatory: I don’t recommend Hetzner as a company, but David and I have yet to find a host with comparable pricing that isn’t somebody’s hobby or Oracle, a company neither of us will deal with due to personal experience and industry reputation. the above runs about $25/month at the prices I get for Hetzner’s resources (I think on Cloud they lock you into whatever rate you were at when you joined, so mine are cheaper than the ones on their main site), but you may get better value from a server auction or other host depending on your needs.

    • @self
      36 days ago

      and reading this back, it feels like there’s so much free space on that CX21 that maybe I should do a best-effort WriteFreely on there, just to justify the budget for the node

    • David Gerard (OP)
      6 days ago

      on Hetzner - actually looked into moving my personal box recently, and Hetzner are still it. the only comparable service for price is OVH, and at least Hetzner can probably work a computer. there’s a reason large swathes of Mastodon are hosted on Hetzner or OVH, and the reason has a dollar sign on the front.

      • @self
        26 days ago

        god, OVH was fucking terrible when I tried them a few years ago. a lot of fediverse admins swear by them now though, so either something changed or the bar for usable hosting (and global outages) has lowered