Hacker News
Ruby on Jets: Like Rails but serverless (github.com/boltops-tools)
155 points by aa_memon on Jan 22, 2023 | hide | past | favorite | 84 comments



Does it really make sense to map each controller method to a lambda? Wouldn’t it make more sense to have gateway route to a single function per controller and then do the method routing inside that function? You’re going to get a lot of lambda functions otherwise.


I've built some nice small frameworks for projects on both CloudFlare Workers and GCP Functions that do exactly this. They are so simple to build that I can't imagine needing to even open source them... but maybe I should write up a blog post on the details at some point. So many people seem to miss this detail for some reason.

The main benefit is that it sure is a lot easier/faster to deploy one function than one for each route (CI/CD goes so much faster). Cold starts benefit as well.


Nothing prevents you from routing everything to a single function, no need to even split by controller.

The disadvantage of having a huge number of Lambdas is the ratio of cold starts. The advantage is that you can granularly tweak resource utilization on a per-operation basis.
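
A hypothetical sketch of that single-function approach in Ruby (the route table and event keys here are illustrative, modeled on API Gateway's v1 event shape - not Jets' actual routing):

```ruby
# One Lambda handler that does its own method/path routing,
# instead of mapping each controller action to a separate function.
ROUTES = {
  ["GET",  "/posts"] => ->(event) { { status: 200, body: "all posts" } },
  ["POST", "/posts"] => ->(event) { { status: 201, body: "created" } },
}

def handler(event:, context: nil)
  # API Gateway (v1-style) events carry "httpMethod" and "path" keys.
  key = [event["httpMethod"], event["path"]]
  route = ROUTES[key]
  return { status: 404, body: "not found" } unless route

  route.call(event)
end
```

One deploy updates every route at once, which is where the CI/CD speedup mentioned above comes from.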


> granularly tweak resource utilization on a per-operation basis

This is a really good point. Either I took a thorny path learning Lambda or they don't make it terribly obvious that when you pre-allocate larger amounts of RAM (and thus pay more in GB-second cost), you also get substantially better CPU performance, since Lambda allocates CPU in proportion to memory.

So when you have a compute intensive task with a low memory footprint and want to improve latency, you may actually want to allocate lots of unused RAM.
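
The memory/CPU coupling makes for some back-of-envelope math (the per-GB-second price below is a placeholder, not a quote): if a CPU-bound task finishes roughly twice as fast with twice the memory, the GB-second bill stays flat while latency halves.

```ruby
# Illustrative only: Lambda allocates CPU roughly in proportion to memory,
# so doubling memory can halve the duration of a CPU-bound task.
PRICE_PER_GB_SECOND = 0.0000167  # made-up round number, not an AWS rate

def invocation_cost(memory_gb:, duration_s:)
  memory_gb * duration_s * PRICE_PER_GB_SECOND
end

small = invocation_cost(memory_gb: 0.5, duration_s: 2.0)  # slower, "cheap-looking"
large = invocation_cost(memory_gb: 1.0, duration_s: 1.0)  # 2x CPU, ~half the time
# small == large: same GB-seconds billed, but the larger allocation returns in 1s.
```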


I don’t know how this works. But I’m curious:

Wouldn’t it be more cost effective to have one big address space than a lot of fine tuned ones? Even if you’re stateless, you could benefit from the fact that utilization spikes can be handled with less overall redundancy. Assuming the dollar cost is proportional per allocation.


provisioned concurrency pretty much nukes the concern.


This is not as trivial as you make it sound. You would need to know the number of functions you want to have provisioned at all times; if you need more, cold starts will occur. In addition, this creates base costs which would not exist otherwise.


I would argue that if you needed n number of functions, where n is unknown, or n cannot be set to a reasonable number that can handle traffic, and cold starts are unacceptable, you would be using the wrong bit of infrastructure.


I agree. Usually it is a trade-off between cold starts and the costs of `n` warm functions.


It's typically fine - AWS bills by GB-seconds (metered in 1 ms increments), not by the number of functions in an account.

There are limits to the number of functions per account, and number of resources per CloudFormation stack (etc); but within those parameters it's usually a good idea to use one function for a specific controller or action. This allows you to limit IAM permissions, configure memory/CPU, and define associated resources at a per-controller level.


You will definitely run into more cold starts though, if that is a consideration for your project


A serverless function that never cold-starts is just a container.


That doesn’t mean that more cold starts aren’t bad.


Sure but if you can’t tolerate the added latency of a cold start, serverless might not be right for you.

Conversely, if you’re doing something like processing message queues and you don’t care about latency at all, serverless is perfect.


I'm guessing it also helps keep deploys small (atomic?) when a controller method changes. The unmodified lambdas stay untouched.


this makes logging much more difficult to search through


maybe this will eventually have an algorithm for deciding when to deploy separately and when not to, based on cost, performance, etc.


I used Ruby on Jets for a small project a while ago. It worked really well, however I got the impression that the project was somewhat stalled and maybe losing support. Has that changed?

edit: Yeah, the GitHub activity is still pretty concerning. But Rails seems to be having a bit of a renaissance, so maybe it will pick up here also.


Because people don't think it's Sinatra like, but omasake like Rails.


I had to look up "omasake", but I admit I still don't understand its use in this context. ELI5?


It's "omakase" (in the words of Wikipedia):

> Omakase is a Japanese phrase, used when ordering food in restaurants, that means 'I'll leave it up to you' (from Japanese 'to entrust' (任せる, makaseru))

See the article:

"Rails is Omakase" by David Heinemeier Hansson on Dec 27, 2012

https://dhh.dk/2012/rails-is-omakase.html

> Rails is omakase. A team of chefs picked out the ingredients, designed the APIs, and arranged the order of consumption on your behalf according to their idea of what would make for a tasty full-stack framework.


Wondering why this framework isn't more talked about, especially amongst Rails devs who want to take advantage of AWS Lambda.


Given how trivial it is to set up serverless Rails using something like ECS Fargate, the benefit of running specifically on Lambda doesn't outweigh the cost of migrating off a tried and tested technology.


With lambda you pay for what you use. It scales endlessly and is quite cheap. Depending on your typical usage and load it can be a huge cost saver (also mentally) as lambda will scale.

With ECS Fargate it costs money even without any traffic and you are responsible for correctly implementing auto scaling.

I'm coming from 10y rails dev background and now full stack typescript. Thinking in pure lambda functions is a breath of fresh air for me.
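
The cost argument is easy to sketch under assumed prices (both numbers below are illustrative placeholders, not actual AWS rates):

```ruby
# Back-of-envelope: Lambda cost grows with traffic, a Fargate task is flat.
LAMBDA_PER_GB_S = 0.0000167  # assumed per-GB-second price
FARGATE_MONTHLY = 9.0        # assumed cost of a small task running 24/7

def lambda_monthly_cost(requests:, memory_gb: 0.5, avg_duration_s: 0.1)
  requests * memory_gb * avg_duration_s * LAMBDA_PER_GB_S
end

lambda_monthly_cost(requests: 1_000_000)    # ~ $0.84 - far below the flat Fargate cost
lambda_monthly_cost(requests: 100_000_000)  # ~ $83   - steady heavy traffic flips it
```

Which side wins depends entirely on where your traffic sits relative to the break-even point, which is the nuance the comments above are circling.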


if this is powering your primary API and you have 24/7 traffic, fargate can actually be cheaper in some circumstances. there's nuance in lambda config, including concurrency, temp space, and memory. pay for usage is nice until it's not.


> ... is nice until it's not.

Like everything, no? This is akin to saying "All tautologies are tautological."

Your sentence preceding this one laid out some issues; you probably could have left it at that <shrug>.


could have done without the editorial critique as well.


cause we fat - got all the good stuff


Why though? what benefits does that give us?


I’m not entirely sure; I wanted to see what HN thought. I was thinking, if one’s already using Rails+AWS, are there workloads that are not core app functionality and not frequent, but that need to serve a large number of requests when they do run, where Jets might be the right tool for the job?


I've been using Jets for a small project, and it's mostly been great. Coming from a Rails background, it was a very quick way to get a bunch of stuff running without any ongoing costs or babysitting. The documentation is pretty good, though there have been a few times where I've gotten a little stuck and have had to dig through the source to figure it out. Overall, I'm grateful for its existence.

Ultimately though I don't think I'd use it beyond a proof of concept or without some sort of clear idea for how it will sustain itself.

https://github.com/boltops-tools/serverlessgems/issues/6


Has anyone investigated what latency is like? Looks amazing for a low cost deployment, but I’d guess there could be significant startup latency for each controller.


There's a Django-ish version of this out there, and when I looked into using it briefly some time ago they had instructions on how to keep things "warm".

All that to say: I guess it's been considered, but I don't have measurements handy. It's not unknown tho.


There is some support for prewarming https://rubyonjets.com/docs/prewarming/
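
The generic shape of prewarming (not necessarily Jets' exact mechanism - see the linked docs) is simple: a scheduled event carries a marker, and the handler short-circuits on it so the instance stays warm without doing real work. The `_prewarm` key below is hypothetical:

```ruby
# Prewarming pattern: a scheduled rule invokes the function with a marker
# event; the handler returns early instead of doing real work.
def handler(event:, context: nil)
  return { "prewarmed" => true } if event["_prewarm"]

  { "statusCode" => 200, "body" => "real work" }
end
```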


former CTO @ Arist (YC S20). We were able to get all the way through series A using Ruby on Jets and it continues to be a major driver behind Arist's growth. Tung, the guy behind it, is simply fantastic.


Having used it what are your thoughts on whether Jets replaces rails vs using alongside rails for certain types of workloads?


We leaned really heavily into Jets and found it to be a drop-in replacement for most things we take for granted in Rails. There was some tinkering required to get OAuth and a few other things to work properly, but Arist has now paved the path for future orgs to do this easily, with patches to Jets and some additional tooling that is easy to find from issues in the Jets issue tracker.

The whole abstraction around "ApplicationJob" for spawning trigger-based lambdas in addition to the regular web-serving lambdas is much better and more powerful than having to deal with things like Sidekiq. The fact that every resource gets its own lambda is also very powerful.


It looks like Ruby on Jets makes you 100% dependent on 1 company: AWS/Amazon; not an evolution I'd welcome.


Yeah I'd agree, this was the instant turnoff for me, I don't like Amazon. But I totally understand people trying to build a more modern Ruby framework.


Which cloud mega-corp do you like out of Amazon, Cloudflare, MS, Oracle, IBM, and Google?

I like digital ocean, and I’d consider Hetzner, but AWS is pretty top notch outside of cost and their parent’s general biz practices.


Maybe any of Cloudflare, Oracle, IBM, since they don't have as much data (yet)


Is that like, weird? How many companies are realistically hosting their compute/data across multiple providers for the sake of diversifying that risk? Certainly some, but a tiny minority right?


It's not only about hosting across multiple providers for the sake of diversifying that risk.

It's an unhealthy situation to be fully dependent on 1 single provider. It creates monopolies (instead of diversity and choice) which means less freedom, and/or more corruption and unhealthy conditions for all involved.


No I get the concern. I’m asking how many companies are realistically mitigating that risk, today. Like why is RoJ being called out for it here, when building systems that are tightly tied to various cloud providers is a pretty common thing (and dare I say the risk is even a bit overblown).


It is part of some regulatory frameworks. I’ve met someone from a European insurance company once who told me they have to be able to switch to a different hyperscaler in 24 hours.

I also think it’s generally a good business practice to keep the number of companies you depend upon as small as possible. Only being able to deploy to one cloud provider is like only having one customer. It can work, but it is risky.


I think it's more than you realize because it was more than I realized. It only takes getting burned by a cloud provider once to get the execs to notice, then they talk about lock-in, and the good ones try to avoid it. Plus, remember people move companies and take their knowledge with them. It's the kind of thing that gets talked about in CIO circles [0], so it's more than a tiny minority.

0. https://www.protocol.com/enterprise/target-cio-mike-mcnamara... (Disclaimer: I work for Target but not currently on cloud stuff, though did previously)


In contrast to provider-native serverless solutions, cloud-agnostic solutions carry a high price tag across the entire application life cycle in the form of operations - those people petting your k8s.

A common misconception regarding serverless applications is focusing on compute alone; in that context an agnostic solution may seem lucrative once traffic reaches a certain threshold, justifying running a cluster 24/7. However, many scalable application architectures benefit from asynchronous processing and event-driven models, which require reliable messaging infrastructure with considerable operational overhead. This is where serverless applications utilizing managed services shine, making it possible for small teams to deliver very impressive things by outsourcing the undifferentiated ops work to AWS.

On the other hand, if the compute layer is the only lock-in-inducing component in your architecture, a properly architected application is relatively easy to migrate to a serverful model. As a crude simplification, just replace the API Gateway with Express.


It’s not so much about actively using multiple providers, as building your application to be able to switch if you wanted.


Every startup I've worked for has had at least 3 data centers in different cities.


maybe it's just me but serverless seems ill-suited for these heavy garbage collected languages. Since you're just spinning up a process and shutting it down once it does its job, you're really wasting a lot of cpu cycles on building up the garbage collector. A lot of the overhead is fine when you're using a persistent instance where you can "warm the cache" so to speak. nodejs would be the edge of what I consider good for serverless. go is probably ok if you turned off the GC and rust would be ideal.

Then again, I've managed to completely avoid serverless as I find it to be a pita when you want to do anything more complex than a trivial crud app.


> you're really wasting a lot of cpu cycles on building up the garbage collector.

Are you sure you aren't thinking of JIT compiling? The context where I think of "warmup" is (say) Java, where the runtime profiles hot paths to emit optimized machine code.

What would it even mean to "build up" a GC?


Serverless runtimes don't shut down the instance immediately after the invocation is complete but keep it up for an undetermined time depending on many factors (usually ~15 minutes) in order to process a possible new event without the start-up overhead (cold start). This means that in a system with constant traffic some instances may have a surprisingly long uptime. The instance gets destroyed when it throws an error though so a function with a disabled GC would clean the state when it runs out of memory. Slower processing done by memory-starved instances would probably eliminate any minimal performance gain you could get by disabling the GC though. (EDIT: I just realized that with no GC desperately spinning in place the starvation would not have much performance impact so this would actually be a very interesting approach if you can tolerate some invocations erroring out on out of memory errors, sorry!)
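
The instance reuse described above is easy to picture: anything initialized outside the handler persists across warm invocations on the same instance. A minimal stand-in sketch, no AWS involved:

```ruby
# Code outside the handler runs once per cold start; its state survives
# between invocations as long as the instance stays warm.
EXPENSIVE_INIT = begin
  # e.g. loading config, opening connections - paid only on cold start
  { booted_at: Time.now }
end

$invocations = 0

def handler(event:, context: nil)
  $invocations += 1
  { invocation: $invocations, booted_at: EXPENSIVE_INIT[:booted_at] }
end
```

Two calls on the same warm instance share `EXPENSIVE_INIT` and see the invocation counter climb; a cold start would reset both.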

In regards to performance optimization of Lambdas, start-up time is the most important factor, as cold starts also happen for concurrent requests when no warm instances are available. This means that Go, Node and Python are pretty sweet for serverless. Thanks to the recently released Lambda SnapStart feature, which launches new instances from snapshots taken after application initialization, Java (even with DI frameworks) is a plausible option if you're inclined that way. Previously you had to use a custom runtime and AOT compilation (which rules out any runtime DI stuff) to achieve tolerable performance with Java on Lambda.


Couldn't you just disable the garbage collector in those languages? I know you can select a "none" GC in the JVM, you should be able to do the same with other runtimes too.


Yeah Instagram has some blog posts outlining how they run their python web servers with the GC disabled

Easy enough to restart the processes every once in a while
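
A rough Ruby translation of that pattern (the threshold and names are made up, and a real worker would exit and be restarted rather than re-enable the GC in place):

```ruby
# "Disable the GC, recycle the process" pattern, sketched in Ruby.
MAX_INVOCATIONS_BEFORE_RECYCLE = 1_000

GC.disable  # trade memory growth for steadier request latency
$count = 0

def handle_with_recycling(event)
  $count += 1
  result = { ok: true, count: $count }
  if $count >= MAX_INVOCATIONS_BEFORE_RECYCLE
    GC.enable  # stand-in for the restart; a real worker would exit here
    GC.start   # let the collector catch up before carrying on
    $count = 0
  end
  result
end
```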


Related:

Jets: Ruby Serverless Framework - https://news.ycombinator.com/item?id=19893796 - May 2019 (39 comments)


I use Dokku on a powerful Hetzner server for my Rails apps. Am I missing out on something? My stuff doesn't have too many users (< 1,000), and those are mainly from the EU and US. How would you host this?


Another Rails on Lambda framework is Lamby: https://lamby.custominktech.com/


I wonder if this would help Mastodon (the project, not the generic term) since I thought the recent Twitter flight caused them scaling woes, but I don't know enough of the details to readily know if one lambda per route would help them or not


It'll certainly help AWS to get a lot more profit :)

Mastodon is not a single service, but open source software groups/individuals/organisations run. So it's not really accurate to say Mastodon itself had issues with the huge influx of users, but rather some popular instances of Mastodon had issues.

Mastodon is supposed to be more decentralized (federation being the way of achieving that) than currently existing social media, so it'll benefit more from a heavier focus on performance optimizations and more experience gained around running it on dedicated instances, at least in the long run and considering the core idea behind Mastodon.


> Mastodon is not a single service, but open source software groups/individuals/organisations run

Yes, as I tried to specify by qualifying "the project, not the noun" but I was on my phone so linking to github.com/mastodon/mastodon was more trouble than I thought necessary. Evidently not

> So it's not really accurate to say Mastodon itself had issues with the huge influx of users, but rather some popular instances of Mastodon had issues.

Understood, and the rails server running on some person's droplet is one trending link away from becoming a "popular instance." I believe Kris Nova's situation fell into that category. I'm not saying the mastodon/mastodon should change their deployment model to require Jets, but if Jets is mostly compatible with Rails, and mastodon/mastodon could write their Rails in a way that allows either deployment model, that seems like a win


Am I the only one who is kinda irritated with how AWS has effectively created a piecemeal proprietary operating system and slowly started displacing good server ops? All while "supporting" open-source development.

I really don't want to be an AWS engineer. I like being able to host my own tools when I want.

Sorry for the rant, serverless stuff in general is cool.

As a followup... I'm curious if anyone can recommend solid opensource serverless runtime projects with decent management interfaces (CLIs are nice)? It would be neat if projects like Ruby on Jets could become "lambda-agnostic" for lack of a better tagline, if they're not already.


They started it, but now there are many services providing this. You haven't had to go with AWS for a few years now if you want to go serverless (even if they are the most popular). For self-hosting, there's https://fnproject.io/ for example.


You might be interested to check out something like wintercg which aims to make serverless environments interoperable (for JavaScript).

https://wintercg.org/


It’s really not that complex to move off of Lambda. The only lambda specific part your application needs to know about is how lambda arguments are passed. There are plenty of middleware providers out there to convert lambda function invocations to HTTP method calls in your application (mangum and others).

Lambda has supported docker images (OCI) for a while now, so you can build your application into an OCI, set the entry point to a lambda middleware and call it a day.

If you want to move off lambda you only need to change the entry-point to an ASGI server instead of the middleware and change the base image layer to be a standard one instead of the lambda specific one which has the lambda runtime installed. You can even build both images and deploy to both lambda as well as a non-serverless system such as k8s, ECS, swarm, systemd, …
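
The middleware idea can be sketched in plain Ruby: translate an API Gateway-style event into a Rack env and call an ordinary Rack app. Real adapters such as Lamby handle far more fields; everything here is illustrative:

```ruby
require "stringio"

# Translate a (simplified) API Gateway event hash into a minimal Rack env.
def lambda_event_to_rack_env(event)
  {
    "REQUEST_METHOD" => event["httpMethod"] || "GET",
    "PATH_INFO"      => event["path"] || "/",
    "QUERY_STRING"   => event["queryStringParameters"].to_a.map { |k, v| "#{k}=#{v}" }.join("&"),
    "rack.input"     => StringIO.new(event["body"].to_s),
  }
end

# A Rack app is just an object responding to #call(env) -> [status, headers, body].
app = ->(env) { [200, { "content-type" => "text/plain" }, ["hello from #{env["PATH_INFO"]}"]] }

# Lambda-facing entry point: adapt the event, call the app, adapt the response.
def invoke(app, event)
  status, headers, body = app.call(lambda_event_to_rack_env(event))
  { "statusCode" => status, "headers" => headers, "body" => body.join }
end
```

Swapping the entry point for a normal app server (Puma, etc.) leaves the Rack app untouched, which is the portability argument above.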


Looking at Ruby on Jets, it seems the entire routing layer is also in AWS. Entry points are kinda a big deal, just like calling conventions.


I have found the Serverless framework to be great. Never really had to deal with AWS console at any point

https://www.serverless.com/


Sure, it's a layer of abstraction, but a super leaky one if you want to do anything meaningful.


Well for some values of "meaningful" OK, sure.

But lots of times a plastic bucket, even with a leak, is a better tool for the job than an extruder, mold, and warehouse full of polymer.

I had to build a parallel CI pipeline once that was like: when this, run a FaaS function, then when this that or whipesnicket, run one of these other functions, then run this thing that moves some things from here to there and posts the results to zulip chat.

I literally spent 2 hours in AWS docs, got to "ugh, don't wanna know this, unlikely to use again" and then spent one hour in the serverless docs building it that way, and then it ran for 4 years. (Last year, I finally replaced it with GitHub Actions.)

Admittedly this wasn't a profound and moving tale of heroic ops or anything, but the Serverless framework proved to be pretty nifty for my use case. (Haven't used it since, but would again in a similar circumstance.)


I stopped using the Serverless framework when CDK came out. For simple cases it's still fine I guess, but a lot of the time I found myself falling back to plain CloudFormation (ugh) or relying on plugins with questionable maintenance status. I would not recommend it for new projects, and even AWS SAM makes applying some best practices, like the least-privilege principle, easier.


CDK is pretty good; the one main pain point is that some resources are not supported, so again you end up reverting to writing CloudFormation. I really wish one of AWS’ criteria for “done” when a new resource is added were that it must be added to the CDK.


If it was only AWS.

Cloud OSes are nothing more than a return back to timesharing in new clothes.

The pendulum has swung back into thin clients, browsers and mobile OSes are the new terminals.


We're building an open, serverless-centered, cloud-like p2p network to solve exactly this issue: to eliminate vendor lock-in and make it open, so anyone can both provide and consume compute resources and develop backends on top of it.

https://fluence.network


Ruby syntax with a focus on fine grained transforming functions. Isn’t that essentially exactly what Elixir/Phoenix tries to provide, but without the Amazon lock-in or price tag?


Because Ruby is OOP and Elixir is functional. Some people are just used to OOP to structure programs.


Fair enough. But I did twenty years as a Smalltalk developer, including 6 years being tools/gui lead at one of the commercial vendors.

I’ve played with Ruby, but not enough to get invested. It wasn’t OOP at the level I was used to.

I do Elixir, and it’s the first time I’ve had those “this is so elementarily cool” feelings since my 10 year hiatus from Smalltalk. The thing I like least about Elixir is its use of Ruby-esque do/end and its quasi-inclusion of newlines as a sometimes-relevant syntactic element.


It's not just ruby syntax, it's ruby. Maybe some people want the ruby ecosystem, or just don't want to leave what they are used to.

I haven't used Elixir, I couldn't say how similar it is to ruby, but my understanding is that it is a pretty different language despite some syntax similarities. Perl and C have some syntax similarities, but...


"Like Rails but serverless", I wonder what's their definition of servers?

As far as I understand it relies on .. AWS servers!


It's the same definition that the entire software industry uses and has for like a decade now. At this point crying that "serverless ackshually has servers!" is the same as pointing out that "free education isn't actually free!" as if the people using the term are unaware of this.

"Serverless" means that the developer doesn't need to think about the servers, and can interact with higher level abstractions on top of the physical servers to deploy their services.


"Function Oriented Architecture" is how I think about it.

Whether this is more confusing or less presumably depends on how similarly you think to me.


Locking yourself into servers of one company doesn't sound like progress to me


We should have buried Ruby along with Flash a while ago. It is not part of the future anymore (thankfully).


Care to elaborate on anything you wrote?


I'm guessing Ruby or Rails hurt him.


And instead we should all use ???

Hello?



