Python for the code, Postgres for the backend, Markdown for the frontend. Because I am too lazy to reinvent the wheel, and most of that is good enough.
I would have much preferred to use Perl, R, dotnet, or just about anything else, but there was an already-made library in Python that was good enough for 90% of my needs, and 100% with some elbow grease, so my laziness made me learn Python instead!
But wait, it gets dirtier than this! The code is done in Vi, with no care in the world about version control (rsync does most of what I need: backups), and deployed to servers by manual SCP, with no automatic tests and no care about CD or automation. Yet: as I need more and more servers, I realize I have to spend time learning, say, Ansible to provision them faster than copy-pasting bash (I have low standards, but I won't do curl|bash, just... no), but I am delaying that as much as possible to concentrate on implementation.
The servers are on Debian VMs on Azure, just because they gave me free credits and do not perform as badly as AWS. When the free credits run out, I will move to DO. Then I will see what else I can use, and that will be the right time to care about Ansible or whatever.
It is as ugly as I can get away with to get my MVP out, using tech from 1996 lol.
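(For the curious, that backup-and-deploy flow presumably boils down to something like the sketch below; hosts, paths, and the restart step are all made up.)

# backup-as-version-control: mirror the working dir somewhere safe
$ rsync -a --delete ~/project/ /mnt/backup/project/
# manual deploy: copy the code to each server by hand
$ scp -r ~/project/ deploy@server1:/srv/app/
# (the restart step is a guess; the original only mentions SCP)
$ ssh deploy@server1 'systemctl restart app'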
People saying they wouldn't use git don't know git. I admit it does take investment to learn, but once you are over the hump it is insanely easy and useful (yes, even easier than rsync, and 10x more useful). Especially when you consider that using rsync precludes you from collaborating with even one other person, which is crazy easy if you use GitHub (for example). People not using git: get over the hump, it is worth it, even on little side projects.
>> ...with no care in the world about version control (rsync does most of what I need: backups), and deployed to servers by manual SCP...
> People saying they wouldn't use git don't know git.
A few things:
* to the grandparent, rsync is awesome, but comparing it to version control is comparing apples to orangutans.
* to the parent, version control != git, and there are easy arguments to be made against git
* distributed version control is one of the greatest things to emerge in the last 20 years. If git isn’t your thing, check out (haha) fossil or mercurial
Git's UX is horrible for people who don't know git, i.e. git beginners. All the documentation and error messages are written as if the user is already intricately familiar with the inner workings of git. As a result, it's very hard to learn because you don't know what you're about to do without hours of googling cryptic error messages.
Perfect example that happened to me today:
$ git push
fatal: The current branch insert_branchname_here has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin insert_branchname_here
What does it mean to "set the remote as upstream"? That phrase makes no sense to me. Is that going to overwrite master on origin? I have no idea, but I'll be damned if I am going to hit enter and risk overwriting my upstream branch, so I Ctrl+C'd out of there and had to read a bunch of awful reference docs posing as a tutorial.
Git is just plain terrible in almost every way. It just happens to be less terrible for certain workflows than CVS/SVN/mercurial/whatever.
Yes, git’s UX is absolutely terrible. It is an abomination. It is the worst command line tool I’ve extensively used in that regard. For a long time I stubbornly used mercurial, which is vastly superior.
But, it has become a de facto standard. This is a tragedy. But it is a fact.
So it is best just to spend time learning it, find a subset of commands and make a cheat sheet you can refer to, and accept that the world is imperfect. Hopefully at some point in the future git will die and we will transition to the next big thing, and they will get it right that time.
There are many such things in the world of tech, and in most cases it is best to say “yes, it’s awful, but no, I can’t change it, and rather than expending energy on hating it (which is totally justifiable) I will expend the energy on minimising its damaging impact to my productivity and focus on accelerating my work elsewhere.”
I say this because I have spent so many hours ranting about git and every time it gets in my way (git submodules anyone?) I used to get so stressed by its design that it would hurt my productivity.
Be the Zen Dev: acknowledge it, isolate it, keep it at arms length.
Tips: Make a cheat sheet, use a minimal feature set, don't try to do anything clever with it, and always have a way out (e.g. backups! Take copies of your directory before dealing with lesser-known commands! Try things out on dummy repos before breaking your own, etc.).
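(A minimal-feature-set cheat sheet in that spirit can be as small as this; adjust to taste.)

# what changed since the last commit?
$ git status
$ git diff
# save a checkpoint
$ git add -A
$ git commit -m "describe the change"
# sync with the remote
$ git pull
$ git push
# skim history
$ git log --oneline
# the escape hatch: copy the directory before trying anything clever
$ cp -r myrepo myrepo.bak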
I don't think it's a tragedy. UX can be fixed (even if difficult because of backwards compatibility and social concerns). Changing the underlying technology is much harder.
git has become a de facto standard because it gets everything else right.
The command failed, with a clear error message, and a direct instruction on exactly how to resolve it. That is good UX.
Your example boils down to not knowing the word "upstream". I would argue that is table-stakes domain knowledge for decentralized version control, and it is trivially easy to search for. That does not make for bad UX.
Good UX does not eliminate the need for domain knowledge, it eliminates incidental complexity from a user's task, and one could argue makes a tool pleasant to use.
I don't disagree that there are rough edges to git (the debate over merge vs rebase is a glaring example), but "just plain terrible in every way" is gross hyperbole.
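(For what it's worth, the feared command is safe: it pushes only the current branch and records a tracking link for it; master is untouched. A sketch, with a made-up branch name:)

$ git push --set-upstream origin my-feature
# pushes my-feature to origin/my-feature, then records the link:
$ git branch -vv
* my-feature abc1234 [origin/my-feature] latest commit message
# under the hood it just sets two config entries for that one branch:
$ git config --get-regexp '^branch\.my-feature\.'
branch.my-feature.remote origin
branch.my-feature.merge refs/heads/my-feature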
Sorry, but if the error gives you the right command to run the first time, then it should just run it the first time by default. Sane defaults for new users are good UX, not an error telling you exactly what command to run instead. It already knows what you intended, for Christ's sake, because it's telling you what you should copy and paste instead!
But it doesn't know what you intended. It's guessing. The consequences of guessing incorrectly could be very, very bad, as you are pushing your code to a remote host.
Again, the whole point of git is that it is decentralized; there is no requirement that you have one single central server that you are pushing to; there could easily be multiple places that you might want to push something to.
Using something as default that makes sense for a centralized VCS when git is not centralized is, imho, worse than bad UX, as it teaches incorrect assumptions.
Again, this is table-stakes knowledge for using decentralized version control.
Well, considering how many people now know and understand git, it doesn't necessarily make sense to do that. Rather, they should make it a configurable option, which they do now: see push.default (https://git-scm.com/docs/git-config)
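(Concretely, one setting makes that error go away for good; the `current` value pushes the current branch to a same-named branch on the remote, no --set-upstream needed:)

$ git config --global push.default current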
I don't know. I've gotten away with using pretty much the commands in this guide for almost three years now (professionally): https://rogerdudler.github.io/git-guide/ - I even structure my orgs internal git documentation around these commands for newbies (with sources of course).
There are times when you need to troubleshoot a few things, but most of the time you can just delete your local repo and reclone it - easy peasy. One of the other great things about git is that even if you do somehow fuck something up upstream, it is really easy to get it back. Heck, I think git is pretty awesome.
Have you tried the official GitHub client? I've worked at a few places where non developers started to use it because the ability to collaborate through pull requests has so much benefit outside of development. I'm a huge fan of IntelliJ and use the inbuilt git client most of the time, it abstracts most of the complexity. Sourcetree is another good product.
Even if you only use Git for save points, it's game changing. I honestly can't imagine working without it. I'd be too afraid to change anything, in case I break it.
Recently I've seen a friend collaborate with a teammate by sending each other zips. Using Skype! And as far as I can tell they're pretty productive. Dedication and hard work generally tend to produce results regardless.
When you use git you still need to use it right. How many times have you cloned a repo just to find out it won't build/run/compile without help? Personally, with small projects - more often than not. And I have a habit of asking if they do know for a fact it's going to build without assistance. It doesn't help.
I like git as a glorified change-log, slack commit messages especially. I try to keep everything tidy and buildable as much as anybody. But I'll still give people an archive (with .git included). And ironically this keeps git tidier, as you don't obsess over whether or not to commit certain things.
When I first started learning how to program, I didn't know git existed. When I eventually learned a year later, I was already well acquainted with sending myself zips by email, and using email as "history". Git was quite scary to me at the time, (merges and reset/revert as I recall) so for a good year or two more I just kept using emails. It certainly wasn't pretty, but it required almost 0 mental cycles, worked without question, and actually gave some really nice features comparatively. (searching for a "commit" using complex queries)
We were eventually forced to use git/hg/svn for some courses over that time, so I became comfortable out of necessity and now use version control for anything serious; I unfortunately even had to use TFS for a while. (Guilty secret: for many toy projects, I absolutely just have a folder on my fileserver and rely on its backup/replication, zipping the folder if I want to "snapshot".)
Please don't try to sell the world on working this way. It's just terrible and lazy. You can learn enough git to use it productively in 5 commands. It's not even difficult.
I think you've entirely misread my post. I was giving a "cute" story concurring with the author that you can manage to be productive with duct tape and spit, and that it can be a learning period when understanding proper tooling, especially if one is coming in without any real oversight or mentorship at first; if you're reading this thread on HN you're probably far more exposed to CS than I was at that point. There's no "selling anyone" on anything.
I would however frankly say that I think the inability to recognize how intimidating git can be to the uninitiated was the _reason_ it took so long for me and many of my classmates (this was over a decade ago, and I've been using version control professionally the entire interim); it was certainly far more damaging than anything you accuse me of propagandizing. It was extremely frustrating and dismissive to constantly be told by more experienced programmers that I should feel far more confident than I was about a tool that, in hindsight, I don't look askew at myself for finding some aspects of unintuitive, and I'm a constant user now.
And, while I'm on this tirade, if you're addressing "working this way" as a negative to put unimportant toy projects on a heavily replicated and backed up server, I think you and I have different priorities in how we use our time.
That's pretty easy. All you have to do is extract the first zip, go into the directory, run `git init; git add -A; git commit -m first` then unzip the second one and run... Hey wait a second.
You can do some Googling for how to use them, and contrary to popular opinion there's really not much to it. Mercurial users are moaning about it, but Git has a better track record of maintaining backwards compatibility.
The advantage over SVN is in working with local or personal repositories. And forks of other projects in your local / personal repos, which happen all the time; heck, you will want to work with forks of your own repos for experiments; post-Git of course, b/c doing that with SVN is a pain in the ass.
I agree with you 95%. If you're running something basic with fewer than 5 people, this is reasonable, with one minor caveat:
I'd still use git.
Hell even on my own I frequently use git for single file shell script. Commit a working copy. Tweak it to add a feature. Decide it's unmaintainable. Revert to before I fucked it up.
Also, there's a decent half-step between ssh and Ansible: mssh. You can run ssh across a set of hosts quite trivially. It works on the order of a few dozen machines just fine.
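(The plain-bash version of that half-step, for reference; host names are made up:)

# run the same command across a set of hosts, sequentially
$ for host in web1 web2 web3; do ssh "deploy@$host" 'uptime'; done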
True, that works too. I'm good with bash, but I find many aren't. I actually work with bigger setups (think a thousand machines) that have range set up, which is a big value-add with mssh, but not in the kinds of small setups we're discussing.
I am not convinced that git is easy... I believe it's very powerful if you put the time into learning it, but anything with 50,000 separate guides on the internet must be somewhat unintuitive, right?
There are definitely way more intricacies to the CLI than I'll ever learn, but 99.7% (estimated) of my git usage is running the same 4-5 commands all day, which I have aliased to convenient acronyms. Don't even have to think about it.
I’ve still been unable to find something that has the right balance of intuitiveness as well as surfacing advanced features and workflows besides SourceTree. The only downside is that SourceTree is slow and buggy at times.
It’s exactly like merging but you have to resolve any conflicts commit-by-commit instead of all at once in a merge commit. Useful for if you’ve been working on a feature and want to pull in changes your colleagues have made on the develop or master branch without muddying up your history with merge commits. Having clean history makes diffing easier for code reviews. I’ll admit even after years of rebasing, I still screw it up from time to time.
There are 50,000+ ways to learn anything, and most won't work for most people. Sometimes I have to read 20 different takes on something before it finally clicks.
The fact that there are so many people trying to explain it in their own way is a good sign.
I agree that managing complex workflows, or really any situation involving a lot of contributors/branches/activity may send you running to your nearest search engine with regularity.
That being said, for GP's use, they can work solely from master, requiring only a few commands with little to no chance of difficulty.
You can write off something as being hard if lots of people want to help you learn it and share their tips and advice, but then you're just ruling out learning anything worthwhile, really.
I meant that all this is super old school and super dirty, except Markdown, which is somehow recent (2004).
Yet Markdown is not essentially different from running sed to render something into HTML, which I would totally have done if I hadn't also needed some other features, like basic math, that would have been too tedious in sed.
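(In that spirit, the core of the idea really is a couple of substitutions; a toy sketch, nowhere near real Markdown:)

# headers and bold, markdown-ish, straight to HTML
$ sed -e 's|^# \(.*\)|<h1>\1</h1>|' \
      -e 's|\*\*\([^*]*\)\*\*|<b>\1</b>|g' \
      notes.md > notes.html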
Pretty much the same except that I use `git push` to deploy instead of `scp`. If I ever need to move beyond a single server that'll be an issue but for now it's fine.
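(For anyone curious, the usual recipe for making `git push` double as a deploy is a bare repo on the server plus a post-receive hook; paths and names below are hypothetical:)

# on the server, once: a bare repo with a checkout hook
$ git init --bare ~/repo.git
$ cat > ~/repo.git/hooks/post-receive <<'EOF'
#!/bin/sh
# check out the pushed code into the live directory
git --work-tree=/srv/app --git-dir=$HOME/repo.git checkout -f
EOF
$ chmod +x ~/repo.git/hooks/post-receive
# on the laptop, once:
$ git remote add production user@server:repo.git
# after which deploying is just:
$ git push production master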
For my hobby stuff I am similar. Perl and either SQLite or text files. Sometimes Postgres, but I rarely find it is needed, even for fairly popular anonymous boards.
I use vi. I do use git locally, but I don't trust it, so I also use rsnapshot, probably similar to your rsync. You might want to try rsnapshot some time if you have not. It uses less disk space by hard-linking dupes, using a Perl script.
I'm writing it in Elixir and Phoenix (Elixir's goto web framework) with PostgreSQL to back my data. For the front end, I'm using good old fashioned server side rendered templates and Turbolinks with sprinkles of Javascript where needed.
Despite having worked with Flask and Rails for the last 6-7 years, I'm going for Elixir this time around because there are aspects of the app that lend themselves well to what Phoenix has to offer, and I'm using this as an opportunity to grow as a developer. Every time I learn a new language / framework I find myself leveling up in other technologies I know.
So far I'm super excited with the way things are turning out. I'm still pretty early on in the project but I've gotten a decent amount of functionality implemented already.
I'm really surprised Elixir isn't more popular. The language author and the supporting community are awesome, and I've been the happiest I've ever been working with this tech stack. Of course I'm still drinking the "omg this is new and awesome" kool-aid, but even without really knowing everything too well, I'm able to accomplish real-world tasks, and that's all I care about in the end.
On the front-end I might still experiment, but I think I'll settle for at least a decent chunk of time on this:
1. React + Styled Components (so html/css/js is finally no longer decoupled in a way that just makes no sense)
2. Page.js or something similar for routing, because I've been bitten too often by react-router and the like, and the idea that routing is its own thing makes sense to me.
3. Baobab.js or something similar for state management: basically, A. a single state object with B. a kind of centralized action-flow to change this state, and C. listeners of sorts within various components that trigger a re-render (cursors in the case of Baobab). I could go full-on Redux, but it doesn't seem necessary, and I kind of relish running into the situation where, as my apps grow, it turns out to be immediately obvious why Redux does what it does.
I'd be very curious to get some feedback on my choices, and/or alternative suggestions. In particular regarding 3.
EDIT: to be clear, this is for full-on SPAs with no kind of crawling/SEO needs.
In Elixir+Phoenix you get all the CRUD conveniences as in other frameworks, but you also get trivial Reactive/real-time capabilities that would get very hairy in something like Rails.
Go and, honestly, sqlite with backups written to S3. It's the absolute cheapest. I can run multiple apps on a t2.nano (t2.micro if I am feeling fancy). My apps cost something like $1.50 a month to run, and they can easily handle medium-sized traffic, plus Go is just so dead simple to deploy: scp a built binary, and boom, I am done. Once the app grows and more collaboration/developers/requirements call for it, I will add more infrastructure.
Sure, I don't have anything formal written up, but I think this might make a good topic for my first blog post. It really isn't complicated, though, in all honesty. I primarily stick to the stdlib, use a Go sql driver for sqlite, and use just a couple of bash scripts to "deploy" (I mean, really, it's go build, scp, and a webpack build + S3 upload for the frontend when that's needed). I used to just run s3 sync on a cron, but I am experimenting with doing it in code now with the Go AWS lib. Go and sqlite really do most of the work here in terms of keeping things light and small.
My most complicated build required somewhat of an SOA approach, but rather than go heavy-duty microservices and use grpc, where you need a bunch of load balancing and service discovery involved, I went with something simple called NATS, which ended up costing me 5 dollars a month, unfortunately.
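(A sketch of what that deploy/backup loop tends to look like; the binary name, host, bucket, and the restart step are all assumptions:)

# cross-compile and ship the binary
$ GOOS=linux GOARCH=amd64 go build -o app .
$ scp app deploy@myhost:/srv/app/app.new
# restart mechanism is a guess; the original just says scp
$ ssh deploy@myhost 'mv /srv/app/app.new /srv/app/app && systemctl restart app'
# the cron-driven sqlite backup to S3 mentioned above
$ aws s3 sync /srv/app/data/ s3://my-backup-bucket/data/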
Thanks for sharing your setup here. Go and SQLite are great! I have a few questions :-)
Is the Go service directly accessible from Internet, or is it hosted behind a reverse proxy?
When you deploy a new binary, is there a small downtime between stopping the old binary and starting the new one?
How do you supervise the Go process? You use something like systemd?
When SQLite backup is ongoing, does it block writes to the database?
When you backup to S3, if an attacker gets control of your EC2 instance, is he able to erase your S3 backups, or is it configured in some kind of append-only mode?
Where do you store your logs and do you "read" them?
I will answer my own questions, in the context of the apps I develop and maintain:
- The Go service is hosted behind a reverse proxy (nginx or haproxy) to enable zero downtime deployments, by 1) starting the new process, 2) directing new requests to the new process, and 3) gracefully stopping the old process.
- Since we've started to use Docker, we let the Docker daemon supervise and restart our services. Before Docker, we used systemd. Before systemd was available on our system, we used supervisord.
- We thought about using SQLite for some apps. But SQLite can only have a single writer at a time, which goes against the zero-downtime deployment described above (two processes can be processing requests at the same time). Thus we use PostgreSQL (and MySQL for legacy reasons), which provides online backups. It must be noted that online backups are possible with SQLite, provided the application implements them using the SQLite Online Backup API [1]. Another solution, which doesn't require application cooperation, is to snapshot your disk, if your system supports this.
- We backup to rsync.net, which provides an append-only mode through their snapshot feature [2]. An attacker cannot overwrite or erase the snapshots of your previous backups. I think it's possible to do something similar with S3, albeit in a somewhat more cumbersome way, using S3 versioning and MFA deletion.
- About logs, we're still not satisfied by what we use currently.
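(Re the SQLite point: the sqlite3 CLI exposes the Online Backup API as the .backup dot-command, so a consistent hot backup of a live database is a one-liner; paths and bucket are made up:)

# consistent snapshot of a live db (Online Backup API under the hood)
$ sqlite3 /srv/app/app.db ".backup /tmp/app-backup.db"
$ aws s3 cp /tmp/app-backup.db "s3://my-backups/app-$(date +%F).db"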
net/http and a REST API all the way down; I for the most part try to stick to the stdlib as much as possible. The few exceptions are logging (I like zerolog), the sql driver, and auth. I do keep the front/backend separation because doing site hosting in S3 with a CloudFront cache is pennies on the dollar ($0.63 a month), and I tend to lean towards using Vue.js more these days. I have been known to just serve the HTML statically directly from Go as well, and have used Go templates in the past, with minimal jQuery where it's needed (I still like jQuery :-/, not everything needs to be full-blown SPA-mode).
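(The static-frontend half of that is roughly two commands; the bucket and distribution id are hypothetical:)

# upload the build and bust the CDN cache
$ aws s3 sync dist/ s3://my-frontend-bucket/ --delete
$ aws cloudfront create-invalidation --distribution-id E123EXAMPLE --paths "/*"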
I've recently decided to drop javascript frontends for a hobby project of mine and just go with bare go templates. Someone hopefully will laugh at it, but it has been a great, productive decision. It's been easy and quick again to slap ugly lists or tables of stuff together.
The entire headache of choosing a web framework, generating a seed of magnificent complexity, wrestling with the JavaScript build ecosystem, figuring out how to balance state between JS UI components, JS state management, backend state management, and databases? Not necessary. Do an SQL query, put that into a neat data structure, and run HTML templating, followed by <go build>. Maybe add Bootstrap for some neat CSS.
Not that I'm counting, but there are 20 mentions of React, 13 mentions of Vue, 10 mentions of Python, 9 mentions of Go, 8 mentions of rails. The OP had a couple mentions of Wordpress.
AND Zero mentions of Meteor, lol!
I think I'm the only indie developer using Meteor and Blaze, but I have to say they make my life WAY better. Granted, it took some time to figure out exactly how to optimize the stack to make it scale correctly, but NOW Meteor is the only solution I've heard of that does this:
Meteor now automatically builds two sets of client-side assets, one tailored to the capabilities of modern browsers, and the other designed to work equally well in all supported browsers, so that legacy browsers can continue working exactly as they did before. This “legacy” bundle is equivalent to what Meteor 1.5 and 1.6 delivered to every browser, so the crucial difference in Meteor 1.7 is simply that modern browsers will begin receiving code that is much closer to what you originally wrote.
The apps I build now run like native apps in a modern browser. Laser fast. It's beautiful, like the first time using broadband.
After everyone and their mother proclaimed Meteor to be dead, they rose from the grave and locked in like Godzilla. I don't begrudge anyone their choices, but if you haven't taken a look at Meteor lately, having used Wordpress, Rails, Ember, React, and vanilla JS with node in the past, I'm very grateful Meteor's development team is STILL knocking it out of the park.
I can assure you that you are not alone. I am developing with Meteor and React. I took Clever Beagle's Pup as the base. My app is working smoothly, and Meteor does a lot of things which I don't completely understand, but it just works. :)
Clever Beagle is a great starting point as it's optimized to how Meteor works. Ryan Glover, the brains behind CB and Meteor Chef, is both generous and smart.
Scaling requires attention, but not more or less than any other stack. Zeit’s now enables on demand horizontal scaling and they’re relatively inexpensive. Meteor, for whatever reason, got a bad reputation for scaling, but Josh Owens wrote up a guide in 2014: http://joshowens.me/how-to-scale-a-meteor-js-app/
I’ve never experienced any website that responds faster than one built with meteor post 1.7 in an updated browser. Lightning fast
Apollo can also be used with GraphQL, but I'm happy with the results using Mongo and DDP out of the box with one simple command: meteor create. Deploying to Now is also a one-liner, thanks to meteor-now.
It's not cool anymore, but I use PHP and MySQL with a $5/mo inmotionhosting account. This is after I decided not to take on the overhead of native, with multiple code bases and App Store deployments. Good ol' webapp it is.
However every single day I do wish there was a solution for my webapp to get access to contacts on the phone and of course notifications. It would have been a game changer given the nature of the product. Currently it is a world of compromises for hobby developers (shameless plug: http://ping3.com)
I use the LAMP stack too, hosting on Scaleway nowadays; AWS is great, but for my own non-critical hobbyist projects Scaleway is hard to beat. Having said that, cPanel is great on shared hosting, with the ability to specify your PHP version and modules.
As a language PHP can be a mess but the ecosystem is fantastic; the release of PSRs, composer and 7.x have been great.
For getting web apps up fast, React, Firebase, Git, and Heroku. It's a well trodden path so the tooling is simple and the components are reliable.
For my VR stuff... Unity is really the only choice that's fast to get off the ground, so Unity it is.
For hacking together backend heavy stuff, Python. Flask if I need to expose an API endpoint. Postgres if I need a real database, sqlite if I don't. Deployed to my evergreen playpen virtual instance via Docker containers because it's, once again, a well trodden, simple, and undramatic path.
Do you use firebase auth? It looks really good. I just had a heck of a time implementing user management and a couple oauth2 flows and it looks like it could have saved me a bit of time. My only concern is their plans are a little weird, it just jumps from free tier to jumbo size, wish there was something a little in between for less money.
Yup, I do use it. From a security and convenience standpoint it's just so much easier than whatever I could have come up with myself. But yes, the pricing sucks. I don't have a good answer there unfortunately.
I'm curious, I've started doing development in unity but I find myself unaware of good patterns to use, any pointers to advice on unity design patterns?
Unity is pretty squarely in the space of event based programming, so whatever design patterns apply there are generally useful. Because of stuff like animations making timelines unpredictable, you should embrace the event based approach as much as possible.
Other than that, try your best to decouple game state from the actual objects that are flying around. You will never do it 100%, but the more egregious Unity code I've seen all shares the trait that they just jammed variables into scripts for objects without really thinking about how they'd tie together a distributed network of objects that have no easy way of addressing each other. The answer is that you don't: you still need a central game state.
I built FormAPI [1] with Ruby on Rails. I used plain Rails views for most of the CRUD things, and used React/Redux to build the template editor [2], which is pretty complex. I've been extremely productive with Rails, and the performance has not been an issue at all. My plan is to rewrite the API and workers in Elixir when it will save me $2,000 per month on hosting costs. And even then, I would probably be making so much money that it wouldn't matter.
I started on Heroku, but I had a lot of free AWS credits from Stripe Atlas that I wanted to use. So I moved to Convox [3] on AWS, and it has been absolutely awesome. I have a rock-solid cluster across 3 different availability zones, and I'm also running Postgres with high availability (RDS).
I haven't had a single second of downtime since I migrated [4] (I put up the status dashboard a few months ago, but moved to AWS earlier than that). Convox supports auto-scaling and rolling deploys. It waits for the web process to be healthy, and if something crashes it rolls back the deploy without any downtime. I can also terminate any instance at will, and another one will pop up to replace it with zero downtime. After using it for the last ~6 months, I feel confident enough to start offering a 99.999% SLA.
Flask (hosted by pythonanywhere) and Neo4j from Graphene.
Chose both because I wanted consistency between the abstraction of my business logic, and the spec of my implementation.
I get some chippy guff about using graphs, but by taking the time to define my business logic as a grammar, expressing that grammar as a graph, then implementing it directly in the model, I get a lot of "aha" moments from potential customers.
Graphs can be analogous to functional languages in that, if you are using them, there is a higher likelihood you've reasoned something through before implementing it.
I actually wound up building most of my current project in Rust, on top of actix-web.
Partly because I'm apparently a masochist, but also because... I mean c'mon, modern architecture is CRUD and job queues. Rust can do that fine if you don't get distracted by the bells and whistles.
My stack: Gitlab CI/CD, deploying using Ansible to Docker Swarm, running Keycloak for auth, RabbitMQ for messaging, Postgres, Elixir/Phoenix for the API server (GraphQL), Apollo + React Native for frontend and mobile apps.
Why? For me it's best of breed vs simplicity. OpenID Connect is the most mature auth, rabbitmq very good messaging, elixir a lovely language, graphql the most programmer-friendly connection between frontend and backend, and react native allows 90% code sharing between web SPA and mobile apps.
And if you combine all of these (I mean, if you finally set it all up), it's an environment that takes a remarkably low number of lines of code.
Speaking for myself: mainly Groovy and Grails, with a little Java mixed in. Postgresql when I need a relational DB. Bootstrap for basic CSS, and jQuery to add AJAXy bits. I have not yet adopted a JS framework like React or Vue, although I've been looking at giving Vue a shot. I use Ansible for automation, Git for version control, Eclipse as an IDE. Deploy to CentOS linux mostly. Just starting to go down the path of adopting Docker and (probably) Kubernetes. Cloud infrastructure is either Linode or AWS.
All of that said, I'm a big believer in "use the right tool for the job" so if something comes up that requires C++, I'll use C++. If something needs Python, then Python it will be. If Prolog is right for something, I'll use Prolog. Or COBOL, or Erlang, or Ruby, or CL, or Perl, or SNOBOL, etc., etc, yadda, yadda, ad infinitum...
My side project is using Rails, React, Postgres, and deployed to heroku. There’s nothing flashy but I can prototype quickly and it’s easy for 2 people to manage.
My software dev path:
Didn't know anything serious about web dev/software systems 12 months ago.
So if my stack seems shiny, I just picked what many people on Medium/Hacker News were talking/raving about.
Front-end: React SPA (via Create React App), Redux, React-Bootstrap
Why: I had no prior framework experience, single-command setup, and plenty of questions on Stack Overflow and Medium articles to learn with.
Going the plain JavaScript route with no framework seemed like I would be on a slippery slope to spaghetti code down the line.
React's one-way data flow and component architecture made things incredibly easy to mentally digest.
I found React's documentation VERY well organized and explained, and Vue's documentation was intimidating when compared to React.
Version Control: BitBucket Git
Why: Free private tier for solo use and enjoyed previous experience with Atlassian products
CI/CD: BitBucket Pipelines
Why: Already using Bitbucket, this just worked seamlessly with it. Only $10 for 1000 more build minutes.
Firebase: Hosting/Cloud Functions/Firestore/Auth
Why: Everything is automagical
HTTPS static website hosting for the create-react-app (This is what first got me started)
Then came Storage, Firestore, Cloud Functions, and Auth.
Wish it had an automagical SQL product.
Low Entry Cost
Cloud Provider: Google Cloud
Why: Had no experience with any cloud provider.
I found the pricing/product offerings really easy to digest and interpret when compared to AWS.
Email Sender: Sendgrid
Why: Good starting free tier. Decent Docs. Easy to setup.
Backend Compute: Google Compute Engine + Docker + Python-based API app
Why: I can do local dev easily with Docker, push the image to Google Container Registry, then pull the image to the VM (sketched after this list).
It's easier to do than learning to set up a CI/CD-piped GKE cluster.
Eventually I want to pay the "tax" and go to a GKE CI/CD Kelsey Hightower-style setup, so I can git push/PR/merge a change and it's in production.
I don't do enough changes to the backend right now to justify a CI/CD-piped GKE setup.
Other:
Email Client: Gsuite
Why: Custom Domain Name
I'm already familiar with Gmail
Project Management: Trello
Why: Integrates with Bitbucket
Multi-Device Support (Phone, Tablet, Desktop)
Easy to mind map and organize features
What I value:
Good Docs
Free tier for solo devs/low use, and progressive options for paid plans (Gives me breathing room on cash while I figure things out)
Lots of questions/answers on Stack Overflow or in-depth Medium articles.
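(The Docker-to-GCE flow from the Backend Compute item above, sketched with made-up project/image/VM names; assumes docker is already authenticated to GCR:)

# local dev box: build and push to Google Container Registry
$ docker build -t gcr.io/my-project/api:v1 .
$ docker push gcr.io/my-project/api:v1
# then on the VM: pull the new image and swap the container
$ docker pull gcr.io/my-project/api:v1
$ docker stop api && docker rm api
$ docker run -d --name api -p 80:8080 gcr.io/my-project/api:v1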
Currently been working on a web app for about 8 months.
Python for the backend, under the argument of "build using what you know". ReactJS for frontend development, because I find Python for frontend work very antiquated.
Architecture-wise I'm hosted in AWS. Data is stored in MySQL and Redis. All self-hosted, because I'm too cheap to pay per-request pricing, and the overhead of rolling my own is very minor when done right.
My current indie web app is still in development so final numbers for prod are TBD. However I'm using it for
* Storing time counters on user actions (timestamp of last time a user posted/edited X, meant to be a throttling mechanism against abuse)
* Site content caching, around 20-50kb per page, each page being user generated content
Sizes will obviously vary on traction so not sure about the final numbers.
----
My last 9-to-5 employer was a very well-known company, and my largest caching tier in Redis there was 512GB in a cluster configuration. I'm using the same server configuration and sharding logic for the indie thing, just on a smaller scale.
I use multiplatform Kotlin to compile shared business logic to JVM and Javascript. The frontend then uses Typescript+React/Redux while the backend is a Kotlin Spring app with Postgres.
Not the OP, but I started with cookie-based auth and Spring Security and then moved to OAuth 2 - it's extremely painless to set up (100 lines of code at most), but it did take a while to understand it all.
The Baeldung Spring Security course helped a lot.
If in a hurry though you can always use an external service like Auth0. I had that setup in an afternoon whereas understanding OAuth 2 and Spring Security took a week.
For the backend I start with Node, then get tired of the runtime errors and switch to Go. Go is cool, but soon I get tired of writing "if err != nil" so I switch to Haskell. Haskell is nice, but soon I get tired of not finding decent libraries for some not very popular things that I use, so I say fuck it and go to Rails. Ruby is nice and Rails feels well thought out and solid, but then I can't help being annoyed that it's not fast, so fuck it, I switch to Rust. I get tired of lifetimes and wrapping everything in RefCells, so I switch to Erlang. Erlang is cool and all, but no fucking metaprogramming. Sucks, let's try Lisp...
I am not making fast progress, I think the problem is my keyboard, I should try buying that Ducky one I have been eyeing for quite a while.
You are right! I should stick with Scala. It will also give me the opportunity to learn a new language!
Play looks good, but maybe I should go with Scalatra instead? Should I be using Slick? It's quite different from any ORM I've used so far; maybe I should bake my own Scala ORM. OK then, here we go. One more decision: REST or GraphQL? GraphQL seems to be the future. Let's go with that. Man, types are nice and Scala's type system is powerful, but sometimes I wish I had the flexibility of a dynamic language. It would make prototyping much faster.
May I recommend creating an AI to automatically rewrite your codebase into the flavor of the month, thus saving yourself significant amounts of labor which you could use to create an AI to improve the AI which is rewriting your codebase?
> types are nice and Scala's type system is powerful, but sometimes I wish I had the flexibility of a dynamic language. It would make prototyping much faster.
You can use both scala and clojure seamlessly.
I'm writing my first GUI app in Clojure, prototyping with Swing, but if I choose to rewrite parts of it in Scala, the rest of the Clojure code would continue to work.
On a single project, I agree. But each time I start a new pet/side project I like to use a different stack. It's a good motivation to learn new technologies, and you learn more about them by using them "in anger" than you do in a purely theoretical environment.
Looking at nuxt.js, express and mongodb and then deploying customer pages as static pages on s3 rather than running these dynamically from the app. Picked up 100 customers in a single day so now I have to move quick.
Erlang with Ruby-like syntax plus macros... something like that. It compiles to the same VM as Erlang's and has almost all of the same semantics as Erlang.
Honest question. Is metaprogramming really that beneficial for any application development? I can certainly see the appeal, but how much will it improve productivity or quality compared to shipping features without it?
Isn't the actual wrong turn focusing on a metaproblem rather than the problem you set out to solve?
Metaprogramming is just macros: code that generates code. One use is to get rid of repetitive code patterns. Another is to move certain computations to compile time. Your statement that metaprogramming is used to solve a metaproblem is just wrong. Those two words are only related in that they share a prefix.
I get what and why of metaprogramming. I was specifically trying to quantify working without it. My reference to metaproblem wasn't what metaprogramming you want to do but rather solving the problem of a lack of it by switching languages mid-development.
How often has a commercial project been rewritten to gain metaprogramming mid-dev? I added commercial because personal projects have different objectives.
Third is to introduce new concepts into your language as if they were part of that language from the get-go. Which can entail both eliminating repetitive code patterns and doing computation at compile time.
Before you write a line of code, make a clear decision as to whether it's a learning experience or a business venture. Plan your project based on the goal you've chosen. If it's a learning experience, consciously avoid doing anything that doesn't push you to learn something new. If it's a business venture, consciously avoid doing anything unless you believe it has the highest RoI.
Some purely recreational projects do turn into viable businesses, but far more projects have failed due to indecision and yak shaving. If you're building a business, you don't gain anything by experimenting with a new stack. If you're learning React, you don't gain anything by writing a bunch of CSS and marketing copy. Don't be afraid to step away from the keyboard and ask yourself "Is this a good use of my time?".
So much this. If you want to build a business then there will be a lot of non-technical things to learn along the way. Keep the tech familiar so you can focus on those.
I had to make this decision recently, and my conclusion was: if your goal is to build something that makes money, do it with what you know. Why? Because you can spend 100% of your time building instead of 50/50 learning/building (or worse). Projects with the goal of being a business should iterate and fail fast. If you win the app lottery and actually build something people will pay for, they won't want to wait for you to learn while you build it out. Someone else will do it with some "old" tech like Ruby on Rails and take your customers and revenue. Spend your time working on the business, not the tech. Developers think code is important, but the reality is that it almost doesn't matter. You can build something successful with the worst tech decisions imaginable. I know it's hard to accept, but it's true. I've personally witnessed an absolute frankenstein's monster of an app, made of a mish-mash of what are now considered the worst technologies, get sold for $30+ million, and the founders retired to the Caribbean, where they built a mansion and an upscale restaurant just for fun. I pity the devs who have to inherit that, but the value wasn't in the tech, and it almost never is.
If you're spending 100% of your time building you're doing it wrong anyway.
I feel like this is just missing so much context.
If you are working full time and can hack in your after hours to learn a new tech stack first and then start your project there might be an aggregate benefit over just starting the project right away using what you know.
Context matters to a huge degree in these kinds of discussions. I feel like people just like to give blanket statement advice.
I've also heard for side projects when you want to learn a new technology to experiment with one piece of it, so you're not totally overwhelmed, and so you can see how it works with stuff you know. For instance, use the rails backend deployed to AWS that you know, but swap out angular for react. I think this works for your question pretty well - if the side project becomes a money-making project, at least there's only one part of it that will be a technical risk, and often if you need to do something quickly, you can work around and do it in the parts that you know.
Prioritize. If your goal is to get something out quickly and make money, use what you know. If your goal is to learn a new stack, learn the new stack. If you learn enough to comfortably release to customers, do it; otherwise, either get comfortable or rebuild.
The intention at the outset. There's a low chance of success whether you are trying to make money or not. If you make money accidentally in a new tech stack... who cares because you can afford to support it.
Also build using what you know isn't always the best advice. Figure out what you want to optimize for and optimize for it. That may include doing things in a different stack.
C++/Qt for GUI, Go or Python for backend, Python for misc scripts/prototypes, C for microcontrollers. They are all well-trodden paths, with plenty of good libraries and good tooling available. I have spent a bit of time playing with Zig most recently; it could finally replace C for me!
React/Redux frontend and .Net Core for backend API. The site takes advantage of Server-Side Rendering and Lazy Loading based on Routes/components. Authentication is built into the App with JWT and Authorize attributes on controllers.
I use Go, Vue.js, Postgres, NATS, Redis, Keel, and deploy everything to Kubernetes.
So far I am super happy with the stack, runs cheaply and requires pretty much no maintenance. Just tag a release on GitHub, Google Cloud Builder builds the image, Keel updates deployments and in a few seconds I have updated production with zero downtime.
While some people are happy with their copy/paste of binaries/code to remote servers, I think a good pipeline to release and run your workloads makes working on side projects a lot more pleasant, since you can focus on code and features. It also provides you with valuable experience that can make you more money than the side project itself :)
I’ve a small side project I’m working on. I am writing the backend in Clojure and then deploy with Jenkins using the Groovy DSL — it’s useful that everything runs on the JVM. The Jenkins code calls Terraform to set up the servers, drawing on optimized AMIs I set up with Packer. I avoid Docker for all the reasons I’ve written about elsewhere (you can Google if you’re interested). Packer does everything that people would otherwise use Docker for. For now the frontend is plain HTML and jQuery. I haven’t done the mobile app yet so no need for Angular/Cordova or React.
On the backend my API is using node+koa and postgres+postgis. Front end is vue. Koa is amazing, and there's a postgis package for knex (a SQL query builder) that lets me do geospatial stuff right in node, which is a game changer. Backend is deployed in containers, Google container registry builds the images for free on every git push and custom bash scripts deploy on the digitalocean instance. Front end is deployed with Netlify, also on the free tier, and also amazing. It's incredible how cheap (and mostly free) it is to deploy side projects these days!
How do you deploy your backend containers without downtime?
Do you host your Node app and PostgreSQL on the same machine?
I understand your frontend is hosted on Netlify and your backend on DigitalOcean: are they accessible under the same hostname (using Netlify proxy for example), or do you use CORS requests?
I do have a slight downtime (1 second or so) when upgrading containers, since the bash script just pulls the new image, stops/rms the old container, and starts up the new one (roughly the sketch below)... Although my project is for an enterprise customer who works 7am-5:30pm, so I just do it outside of those hours. Each endpoint is its own instance too, which minimizes the likelihood of anyone knowing.
It was all designed to run on kubernetes which would allow rolling updates with zero downtime, but I can't justify the cost of a cluster at the moment.
The node app and postgres are on different instances. They could easily be on the same, though. Postgres is open to the world since GIS analysts need to be able to connect to it with QGIS. The API is also open to the world using CORS for the same reason (a custom GIS collection tool accesses the REST API).
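(That upgrade presumably amounts to something like the following, which is where the ~1 second gap comes from; the image and container names are made up:)

$ docker pull registry.example.com/api:latest
# the app is down between stop and run
$ docker stop api && docker rm api
$ docker run -d --name api -p 3000:3000 registry.example.com/api:latest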
I think a 1-second downtime when deploying a new version of the backend is tolerable if the app is a single-page application, because the app can implement retry logic client-side. This is possible only if the backend can be stopped and restarted really quickly. Otherwise, a rolling update or blue-green deployment is necessary.
Another question: do you expose your Node service directly to the Internet, or is it behind a reverse proxy?
Is there a guide on the internet that would help me figure out how to use Vue with Django? I tried a couple last month but they were poorly structured or didn't explain enough about what they were doing, and since I'm not great at JavaScript I gave up..
I've completely separated it out. Except for some authentication pages, it's all JSON APIs. The Django project does not have any of the Vue/Vuex frontend assets.
That's my stack as well. But surely you mean SNS, not SQS?
Though I have been thinking about whether I should replace Elastic Beanstalk with Kubernetes, since it seems to be becoming the non-proprietary standard for deploying containers.
A $20 admin theme from ThemeForest, based on Bootstrap 3. Nice enough that it does not warrant massive re-styling right now. If the app becomes a bit of a success, we'll outsource this part to more competent people (my partner and I are both backend-ish people).
Wow, lots of Vue mentioned! I was expecting mostly react, but surprised to see there was actually some pushback against it.
> I've heard React described as "10 times as much work for a 20% better user experience", and I think that's about right (maybe not 10x, but probably 2x)
For my recent projects, I've used Go + DynamoDB for a GraphQL backend, and Next.JS with SSR on the frontend. Deploying both to lambda with Apex Up. Stripe for billing, SendGrid for transactional email.
Any tips on the go/dynamodb side of things? I’ve looked at it and experimented a little and it seems quite easy to get data into dynamodb and quite convoluted to get it out.
I'm using https://github.com/guregu/dynamo for the data reading/writing. In my experience, the trick with Dynamo is the schema design. I haven't had problems with the libraries/queries as long as the schema made sense for how I'd be accessing the data.
Elixir, Phoenix and Postgres. I like a lot about Elixir and find it's a great solution for many of the soft-realtime apps that I've been building lately.
Standing up a CRUD app in Phoenix is dead simple as well.
I haven't done an official count (comments plus the site) but a very strong showing for full stack Rails or Django, and when people do stick in a front end it is usually React or Vue.
Firebase suits my needs almost perfectly for projects where I'd rather focus on the front-end and design than maintaining a back-end, since I'm only using it for very small apps and sites.
Past that, I'm fond of Node, but that may be an unpopular thing to say on HN. :)
Anyone knows of similar kits/packages to Laravel (PHP) and RailsKits (RoR)?
Something that will let you focus on creating the part that's unique to your project rather than having to build everything (or a lot) from scratch? (Rails/Django/etc are not enough).
Use Laravel Passport for OAuth. Use the Vue Cli to create your SPA. Then just write a class in the Vue app to store the bearer token and send it through on each request.
I'm on mobile so going into detail is difficult. If it would be useful I'll write a guide for you.
Surprised no one's talking GraphQL, with as much coverage as it gets on this site.
I’m running my newest solo project on Prisma with GraphQL Yoga as the backend and React + Apollo as the front end. It’s really quite something and allows for extremely quick development.
I'm sticking to standardized languages/tech with multiple implementations to keep control and deployment options: JavaScript, C, SQL, HTML and SGML, and Prolog, plus standardized APIs
Python and Cython, flask, gunicorn, various ML frameworks, Postgres for web services.
Python with Cython because this allows extremely nice flexibility between concise, expressive code and low-level targeted performance optimization.
Flask with gunicorn has scaled extremely well for us, but there are many good alternatives. Postgres because flexibility with customizations and data types in the database has been the most important thing for us.
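(For reference, the Flask-behind-gunicorn piece is a one-liner to stand up; the module and app names are assumptions:)

# serve the Flask `app` object from app.py with 4 worker processes
$ gunicorn --workers 4 --bind 0.0.0.0:8000 app:app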