Ask HN: How Do You Deploy?
12 points by canto 4 days ago | 19 comments
If you're just starting out, or you're a YC startup or similar, where and how do you deploy/ship your apps?

Say you have your GitHub repo and you've set up some pipelines for CI (or not ;), but how about deployment?

You'll need some storage, maybe a DB of some sort, and some compute or serverless.

Do you use AWS Lambda, Beanstalk, EKS/AKS/etc., raw VMs, API Gateway, or do you run on Railway or Heroku on your own?

Or do you hire a DevOps or product engineer with some cloud experience to handle this?

No right or wrong answers here :-) I appreciate all input!






Just starting out? For 95% of products I recommend a Heroku-alike. Don’t worry about scaling or scalability until you’ve figured out your PMF. Don’t invest in the infrastructure involved in affordable scaling. Just build. At most, containerize for portability. Outsource your infrastructure maintenance and management and config and upgrades and security and and and.

Your job at this stage isn’t to build something technically awesome and beautifully architected or even production-ready by most standards. It’s to figure out what people will pay for. Focus your energy on that.

Once you’ve got your first hundred customers/thousand users and are serving >100k requests/day, think about what it will take to scale infra affordably.

(YMMV if your product is exceptionally compute-heavy, such as a custom ML model, or involves complex data pipelines out of the gate or other edge cases.)


100% agree! I really appreciate your input. It validates my theory of how things should be done - IMHO. I'm just genuinely curious how people around the world, with different backgrounds, deploy their stuff :)

Various ways in various companies, but in my last few companies pretty much always “CI runs tests and builds/uploads a Docker image, CD deploys it onto a K8s cluster somewhere.”

In the case of my major current personal project, I do it with a GitHub Actions workflow: https://github.com/JonLatane/jonline/blob/main/.github/workf...

A deploy looks like this: https://github.com/JonLatane/jonline/actions/runs/1346474905...

Here, I do a canary deploy to a dev server (jonline.io), then cut a GitHub release and deploy to production servers (bullcity.social, oakcity.social). (All really live on the same single-box dinky K8s cluster.)
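
For anyone who hasn't seen that pattern spelled out, a stripped-down GitHub Actions job for "build an image, point the K8s deployment at it" looks roughly like this - the registry, secret and deployment names below are placeholders, not my actual workflow:

    # .github/workflows/deploy.yml (sketch; all names are placeholders)
    name: deploy
    on:
      push:
        branches: [main]
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build and push image
            run: |
              echo "${{ secrets.REGISTRY_TOKEN }}" | docker login registry.example.com -u ci --password-stdin
              docker build -t registry.example.com/myapp:${{ github.sha }} .
              docker push registry.example.com/myapp:${{ github.sha }}
          - name: Roll the deployment
            run: |
              echo "${{ secrets.KUBECONFIG_B64 }}" | base64 -d > kubeconfig
              kubectl --kubeconfig kubeconfig set image deployment/myapp app=registry.example.com/myapp:${{ github.sha }}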


1. Ansible playbook to provision the server.

2. Upload a container archive to the server.

3. A systemd path unit will extract the archive and load it into the container runtime.

4. An orchestration tool like docker compose runs or re-runs the services. The compose file was already set up in step 1.
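
Steps 3 and 4 can be as small as a path/service unit pair like this (unit names and paths are placeholders, not what the repo linked below actually uses):

    # /etc/systemd/system/app-image.path -- fires when a new archive lands
    [Path]
    PathChanged=/srv/uploads/app-image.tar

    [Install]
    WantedBy=multi-user.target

    # /etc/systemd/system/app-image.service -- triggered by the path unit above
    [Service]
    Type=oneshot
    ExecStart=/usr/bin/docker load -i /srv/uploads/app-image.tar
    ExecStartPost=/usr/bin/docker compose -f /srv/app/docker-compose.yml up -d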

The DB is usually SQLite in WAL mode with Litestream.
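
If you haven't seen Litestream before, the replication config is about this small (the bucket and paths here are hypothetical):

    # litestream.yml (sketch; bucket/path are made up)
    dbs:
      - path: /srv/app/db.sqlite3
        replicas:
          - url: s3://my-backup-bucket/app-db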

Check out the repo at https://github.com/confuzeus/simple-django for a full example.


Not everything needs K8s. Over the years I have worked with many different deployment approaches; how I deploy depends hugely on the kind of work I'm doing, but my rule is to keep it as simple as possible.

In my own projects I have stayed the longest with Ansible: once the scripts are built, you can use them to deploy most web apps in the same way, and stuff rarely breaks.

For websites I have switched away from Ansible to simple shell scripts ("npm run build && scp ..."). I have also done this for web apps, but it starts getting a bit more complex once health checks/rollbacks are involved.
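
For reference, the kind of script I mean is only a few lines, and a crude health check is where it starts to grow (host, paths and URL below are made up):

    #!/bin/sh
    # sketch of a "build && scp" deploy; every name here is a placeholder
    set -e
    rel=$(git rev-parse --short HEAD)
    npm run build
    scp -r dist deploy@myhost:/srv/site/releases/$rel
    ssh deploy@myhost "ln -sfn /srv/site/releases/$rel /srv/site/current && sudo systemctl reload nginx"
    sleep 3
    curl -fsS https://example.com/ >/dev/null || echo "health check failed - re-point the symlink to roll back"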

In general, most of my work involves web apps, and I start with this and grow from there:

- Monolith backend + Postgres + same language for backend and frontend with shared code.

- A small Linux server on a cloud with fixed pricing (like DigitalOcean), with backups enabled.

- When the project allows it, postgres is installed in the VM (backups help to recover data and keep the price small).

- Use nginx as the entrypoint to the app. It's very flexible once you are used to it; for example, you can do caching + rate limiting with simple configuration (see the sketch after this list).

- Use certbot to get the SSL certificate.

- Use systemd to keep the app running.

- A cheap monitoring service to keep pinging my app.

- Deploys are triggered from my computer unless it is justified to delegate this to the CI.
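
As an example of the nginx point above, caching plus rate limiting really is just a handful of lines (zone names, sizes and the upstream port are illustrative):

    # /etc/nginx/conf.d/app.conf (sketch; names and sizes are illustrative)
    proxy_cache_path /var/cache/nginx/app keys_zone=appcache:10m max_size=1g;
    limit_req_zone $binary_remote_addr zone=applimit:10m rate=10r/s;

    server {
        listen 80;
        server_name example.com;

        location / {
            limit_req zone=applimit burst=20 nodelay;
            proxy_cache appcache;
            proxy_cache_valid 200 10m;
            proxy_pass http://127.0.0.1:8000;
        }
    }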

For a while now I have found Ansible too slow, and I have been meaning to finish building my own general-purpose tool for deploying web apps this way, but I have no idea if I'll ever be done with it.

Perhaps the most important project I used to run with this approach was a custom block-explorer API that indexed Bitcoin plus a few other cryptocurrencies. It scaled well on a single VM (aggressive nginx caching for immutable data helped a lot), even though the Postgres storage took up more than 1TB.


I’m also a DigitalOcean user, but I prefer managed K8s and don’t think there will ever be a reason to go back to having to deal with host OS things again. I’d rather just pay for my CPU/RAM, and give it Docker images to run, than worry about all that. And DOKS (DigitalOcean K8s) doesn’t cost any more than bare DigitalOcean boxes.

Cert-Manager is the K8s counterpart to Certbot and “just works” with deployed services. Nginx ingresses are a pretty standard thing there too. Monitoring is built-in. And with a few API keys, it’s easy to do things like deploy from GitHub Actions when you push a commit to main, after running tests.

And perhaps most importantly, managed Kubernetes services let you attach storage for DB and clusters with standard K8s APIs (the only thing provider-/DigitalOcean-specific is the names of the storage service tiers). Also the same price as standard DigitalOcean storage with all their standard backups… but again, easier to set up, and standardized so that if DigitalOcean ever gets predatory, it’s easy enough to migrate to any of a dozen other managed K8s services.
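
Concretely, TLS on a managed cluster with ingress-nginx and cert-manager installed is just an annotation on the Ingress; the hostname, issuer and service names here are placeholders:

    # ingress.yaml (sketch; names are placeholders)
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
    spec:
      ingressClassName: nginx
      tls:
        - hosts: [myapp.example.com]
          secretName: myapp-tls
      rules:
        - host: myapp.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80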


My flow is more or less this:

git merge -> CI -> OCI artifact -> CD -> cloud.

Every deployable is packaged as an image, and can be deployed to serverless runtimes available on many clouds, VMs, and k8s (I assume other orchestrators too, but haven't tried).

My goal is to commoditize my cloud provider while minimizing my costs. Everything is configured through Terraform, so standing up an equivalent environment on a new cloud is pretty trivial.

I've tried to be very mindful about what I depend on from the provider (e.g., using provider-specific SDKs). I have had mixed results at sticking to this. I would like to improve this to the point where I could automatically fail over to other providers.
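
For illustration, one way to keep the provider swappable is a thin Terraform module boundary like this (module and variable names are made up):

    # main.tf (sketch; names are hypothetical)
    module "runtime" {
      source = "./modules/digitalocean"   # swap for ./modules/aws, ./modules/gcp, ...
      image  = "registry.example.com/app:${var.release}"
      region = var.region
    }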


My project is just a website. It has mostly content and a small API for form processing.

I take a "power pack" approach. Everything - content, code, infra - is packed together and deployed together. It runs in docker compose on DigitalOcean.

To deploy, I just push to GitHub. A service on the server side rebuilds whenever it sees new commits. It's also part of the power pack. I don't like having things spread across multiple services. A pre-push hook lints and builds the static site before deployments, so failed builds are very rare.
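
That server-side service is essentially a small poll loop (it could just as well be webhook-driven; the paths and branch here are made up):

    #!/bin/sh
    # sketch: watch the repo and rebuild the whole "power pack" on new commits
    cd /srv/site
    while true; do
        git fetch origin main
        if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/main)" ]; then
            git reset --hard origin/main
            docker compose up -d --build
        fi
        sleep 60
    done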

This works well for me because I often work offline or with bad internet. It's important for me to run everything locally. I can also run just the static site generator if all I do is edit the content.


Thanks for your input! This seems super cool! Have you considered Heroku and the like? Or do you just stick to DO for the additional remote compute that you use?

I like that my VPS is just a generic Linux box, and not a PaaS. If DigitalOcean doesn't work, I can redeploy anywhere else in a few minutes.

I also find this a lot easier to reason about than a bunch of scattered services talking to each other through APIs. In the end, it's just docker plus a bunch of scripts.


The end result tends to be similar across a lot of apps: some CI (e.g., GitHub Actions) step that does some combination of: uploading Docker images to a registry, ssh'ing into boxes and running some restart/kick script(s), uploading some release binary. Which action it is depends on the app, obviously - but I've seen a common theme of that "action" being the last step of some CI pipeline.
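
That last "action" is often nothing fancier than this (host, image and paths are placeholders):

    # final CI step, roughly: push the image, then kick the box
    docker push registry.example.com/app:$GIT_SHA
    ssh deploy@prod-box "cd /srv/app && IMAGE_TAG=$GIT_SHA docker compose pull && IMAGE_TAG=$GIT_SHA docker compose up -d"
    # assumes the compose file references ${IMAGE_TAG} in its image field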

It depends on the product, of course. Currently I typically recommend fly.io with a Dockerfile for getting something online quickly and cheaply, including the Dockerfile and the fly config in the git repo.

If the product gains traction, you can either scale up at Fly, choose to move to a different cloud offering, or even decide to self-host.
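
For the initial "get it online" step, Fly really is just the Dockerfile plus a couple of commands (the app name here is made up):

    # one-time setup: generates fly.toml from the Dockerfile
    fly launch --name my-product --no-deploy
    # every release afterwards
    fly deploy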


Thanks :)

GitLab CI/CD pipeline, with a runner hosted on my hardware, to a VPS. App deployed via Docker container, files/config rsync-ed to the VPS.
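
In .gitlab-ci.yml terms that's roughly the following (runner tag, host and paths are placeholders):

    # .gitlab-ci.yml (sketch; names are placeholders)
    deploy:
      stage: deploy
      tags: [my-home-runner]
      rules:
        - if: $CI_COMMIT_BRANCH == "main"
      script:
        - docker build -t myapp:latest .
        - docker save myapp:latest | ssh deploy@vps docker load
        - rsync -az config/ deploy@vps:/srv/app/config/
        - ssh deploy@vps "docker compose -f /srv/app/docker-compose.yml up -d"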

Too many people overcomplicate everything with elastic-cloud-distributed-load-balancer setups on Vercel and end up overpaying and not controlling their infrastructure.


That's true, but these usually come with a LB and a bit of redundancy. Granted, it's not crucial at the very initial stage, but it also gives you ease of deployment. My take is that not many devs are capable of standing up their own infra, even clicking it up in AWS, which is why services like Heroku or fly.io exist and do well, I presume.

Load balancers and redundancy are tech-creep, IMHO. At least in the initial stages. But I think people in tech, and in general, underestimate how performant modern hardware is. They run their service on half a vCPU carved from some modern Intel Xeon / AMD Epyc, and they are afraid that their little startup will eat all the CPU.

You can achieve redundancy by spinning up two docker instances of the same container and putting a Caddy reverse proxy in front of them. You don't need k8s for that.
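
A minimal version of that (image names and ports are made up):

    # docker-compose.yml (sketch)
    services:
      app1:
        image: myapp:latest
      app2:
        image: myapp:latest
      caddy:
        image: caddy:2
        ports: ["80:80", "443:443"]
        volumes: ["./Caddyfile:/etc/caddy/Caddyfile"]

    # Caddyfile -- Caddy spreads requests across both upstreams and handles TLS
    example.com {
        reverse_proxy app1:8000 app2:8000
    }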


I use GitHub + Vercel + Cloudflare to achieve the lowest-cost deployment. Basically the only expense is the cost of the domain name. Once I have enough traffic, I can start using the paid features of Cloudflare and Vercel.

Docker compose

git push heroku


