GoCardless (YC S11) is hiring senior software engineers and web operations engineers in London.
We're a fast-growing online payments startup that makes it really simple to collect money with Direct Debit. We've been around for two and a half years, and are now a team of 25. We're backed by a bunch of top-tier investors (inc. Y Combinator, Accel, Balderton), pay very competitive salaries, and will shortly be moving into a shiny new office.
We're looking to hire senior software developers to work on our core product, and web operations engineers to scale & manage our infrastructure. We've got lots of interesting challenges to solve this year: building the next generation of our product to handle the growth we're seeing, finding more intelligent ways to fraud-assess our merchants and customers, expanding what we do to work internationally (we're already beta-testing a European expansion).
There's plenty more information at https://gocardless.com/jobs. If you're interested in finding out more, email me at harry@gocardless.com.
This is actually something I'd quite like to do with CodeCube - it wouldn't be hard at all. Replace SSE with websockets, attach to the container's stdin as well as stdout, add something like term.js, and it'd work a treat.
Very interesting. How exactly does the retry-later logic work? Does it just push the message onto a 'deferred' queue that you manually process at your convenience?
Totally agree with your last paragraph, introducing a message queue has solved a lot of problems for us.
RabbitMQ queues support two features that can be combined to implement a deferred retry queue, and RabbitMQ will do all the work for you.
The first is a "message-ttl". This tells RabbitMQ to discard messages after a specified number of milliseconds. The second is a "dead letter queue". Messages that are discarded from a queue can be routed to a dead letter queue automatically.
When we have a job that we wish to "retry later", the framework re-queues the message in a secondary queue with a name derived from the original name. For example, if the original queue was "prod-emailer", the derived queue name might be "prod-emailer-1m" indicating that the contents of this queue are messages originally bound for prod-emailer but were delayed by 1 minute.
This delayed queue is configured with a x-dead-letter-exchange of the original exchange, x-dead-letter-routing-key of the original routing key, and x-message-ttl of 60,000. With this configuration, RabbitMQ handles the timeout automatically. When the message expires from the -1m queue, RabbitMQ sends it back to the exchange and it gets routed to the intended queue by the pre-existing bindings.
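The declaration described above boils down to a queue name plus three arguments. A minimal sketch in Python (the argument keys are real RabbitMQ queue arguments; the helper functions and naming scheme are illustrative, not from any particular framework):

```python
# Sketch of the delayed-queue setup described above. The keys
# x-dead-letter-exchange, x-dead-letter-routing-key and x-message-ttl
# are real RabbitMQ queue arguments; the helpers are illustrative.

def delayed_queue_args(exchange, routing_key, delay_ms):
    """Arguments for a queue whose expired messages are routed back
    to the original exchange/routing key after delay_ms milliseconds."""
    return {
        "x-dead-letter-exchange": exchange,
        "x-dead-letter-routing-key": routing_key,
        "x-message-ttl": delay_ms,
    }

def delayed_queue_name(queue, delay_ms):
    """Derive a name like 'prod-emailer-1m' from the original queue."""
    return "{}-{}m".format(queue, delay_ms // 60000)

# Against a live broker you would declare it with your client library,
# e.g. with pika:
#   channel.queue_declare(
#       queue=delayed_queue_name("prod-emailer", 60000),
#       arguments=delayed_queue_args("prod", "emailer", 60000))
```

Because the dead-letter routing points back at the original exchange and routing key, the pre-existing bindings do the rest: no consumer ever reads from the delay queue.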
The framework expects all messages to be in an "envelope" of JSON, which lets us annotate the jobs. When we mark a job for retry, we also increment an "attempt-count" attribute in the JSON. The workers can then implement their own "retry N times" policies.
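The envelope bookkeeping might look something like this (the "attempt-count" key matches the description above; the payload shape and retry limit are illustrative):

```python
import json

MAX_ATTEMPTS = 3  # illustrative "retry N times" policy

def mark_for_retry(envelope_json):
    """Increment the attempt count in a JSON envelope. Returns the
    re-serialised envelope ready for the delay queue, or None once the
    retry budget is spent (at which point you might route the message
    to an error queue instead)."""
    envelope = json.loads(envelope_json)
    attempts = envelope.get("attempt-count", 0) + 1
    if attempts > MAX_ATTEMPTS:
        return None
    envelope["attempt-count"] = attempts
    return json.dumps(envelope)
```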
I haven't thought about how this would work if we were using topic exchanges. We are only using direct at the moment.
I wrote a similar system for C# - we had three queues per application, work, delay, and error. In our system, the deferred queue used per-message TTLs that would push messages back onto the work queue. This allowed us to inspect the deferred and error queues while the application was running.
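In the per-message variant, instead of a fixed x-message-ttl on the delay queue, each message carries its own `expiration` property (which AMQP requires to be a string of milliseconds), so a single delay queue can serve many retry intervals. A sketch, with an illustrative exponential backoff:

```python
# Per-message TTL variant: the `expiration` property is standard AMQP;
# the backoff schedule is illustrative.

def retry_properties(attempt, base_ms=1000):
    """Message properties for attempt N with exponential backoff:
    1s, 2s, 4s, ... AMQP requires `expiration` to be a string."""
    return {"expiration": str(base_ms * 2 ** (attempt - 1))}
```

One caveat worth knowing: RabbitMQ only expires messages at the head of a queue, so a message with a short per-message TTL sitting behind one with a long TTL won't be dead-lettered until the one in front of it goes.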
No problem. Frustratingly, people rarely seem to talk about how they use RabbitMQ in practice. Also, there are a lot of things that are still a mystery to me (for example, best practices for dealing with things like server shutdowns, or what to do when a NACK "fails", etc).
I need to poke around Hutch and steal some ideas for an RPC implementation. We're just using fire-and-forget type work for now, but I'd love to be able to use RabbitMQ w/anonymous reply queues as a workaround for PHP not being able to do asynchronous RPCs.
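The reply-queue pattern itself is small: the client publishes with `reply_to` and `correlation_id` properties, and filters responses by the latter. A broker-free sketch of just that bookkeeping (the property names are standard AMQP; everything else is illustrative):

```python
import uuid

def rpc_request(payload, reply_queue):
    """Build AMQP-style properties for an RPC request: the server
    publishes its response to `reply_to`, echoing `correlation_id`."""
    return {
        "body": payload,
        "properties": {
            "reply_to": reply_queue,
            "correlation_id": str(uuid.uuid4()),
        },
    }

def matches(request, response_properties):
    """A client accepts a reply only if the correlation_id matches."""
    return (response_properties.get("correlation_id")
            == request["properties"]["correlation_id"])
```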
Another open-source alternative is Sentry (https://getsentry.com/), which supports multiple languages and frameworks. A hosted version is also available.
The Ruby support in Sentry currently isn't quite as good as the Python support, but it's not bad at all, and constantly improving.
Perhaps it wasn't clear enough, but the post pointed out that typing 'bundle exec' and explicitly using --path isn't necessary. Once you've added one item to your PATH, and two lines to your bundler config file, you don't need to think about it ever again, it just works. If you want all your gems in one place, simply remove the BUNDLE_PATH line and change the BUNDLE_BIN directory to whatever you want (presumably something within your home directory).
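For concreteness, the setup amounts to something like this (the config keys are real bundler settings; the exact paths are illustrative, pick whatever suits you):

```
# ~/.bundle/config — the "two lines" referred to above
BUNDLE_PATH: .bundle
BUNDLE_BIN: .bundle/bin
```

plus the one PATH addition in your shell profile, e.g. `export PATH=".bundle/bin:$PATH"`, so each project's binstubs are found first.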
I don't want to get into the rbenv vs RVM debate - they're both good tools and it's an issue that has been done to death. I linked to five articles in the post, and you can find many, many more with a simple Google. My personal motivation for switching was that RVM's overridden 'cd' function includes some commands that fail, which is fine under most circumstances but breaks as soon as you use 'set -e' in bash. We spoke to the author and he said that RVM wouldn't be 'set -e' compatible in the foreseeable future.
To be clear, my goal is not to sway your decision, but to provide an informed discussion for those making the evaluation. Whether or not we want it to be, it is an rbenv vs RVM debate, because those are the tools we're evaluating.
My observation is that rbenv users often tack on additional ad hoc solutions to arrive at the same convenience provided by RVM. The primary objection I hear to RVM is the `cd` override, which is optional.
The issue with not using --path is that gems will leak between different projects, so you'll probably run into trouble when projects depend on different versions of the same gem.
I've also used shell plugins to remedy the bundle exec issue, but they've often caused more problems than they solve. I'd much rather just stick something in my PATH than use shell plugins.
I find it much easier to simply explicitly require a specific version in my Gemfile if I'm having trouble with a gem, rather than having the extra overhead of managing multiple sets of gems.
I think it's a more correct solution as well. You won't have any guarantees that Bundler will resolve the dependencies correctly in production when it doesn't in your development environment.
I'm using oh-my-zsh with the bundler plugin, which aliases all binstubs to a function that prepends bundle exec if there's a Gemfile. It works very well in practice. rbenv handles the correct ruby version, Bundler loads the right gem, and it all falls back to the newest version on the system default ruby.
It's a few more moving parts behind the scenes than I'd prefer, but once configured it's completely transparent.
You might have good luck with the background removal techniques because your background is stationary. In our case the camera was stationary but the board constantly moved, so that tactic wasn't very useful for us.
Edit: also something to consider is using a Kalman filter to predict positions and smooth out noisy detections. We did that and it helped considerably.