The free download requires account registration, which will discourage most people from trying it. Third-party login might have made it easier, but I didn't find that option.
The article is mostly about exemplifying the various levels of optimisation you can get by moving “hot code paths” to native code (irrespective of whether you write that code in Rust/C++/C).
Worth noting that if you’re optimising for memory usage, Rust (or some other native code) might not help you very much unless you throw away your whole codebase, which might not always be feasible.
It should be about the same, though the main differences are likely to be caused by the speed of the QR code generator, and the PNG compressor.
But assuming that the hypothetical C and C++ versions would be using generators and compressors of similar quality, their performance characteristics should be similar.
The big plus(es) to using Rust over C/C++ are a) the C and C++ versions would not be memory-safe, and b) it looks like Rust's WASM tooling (if that's the approach you were to use) is excellent.
(As someone who has written C code for more than 20 years, and used to write older-standard C++ code, I would never ever write an internet-facing server in either of those languages. But I would feel just as confident about the security properties of my Rust code as I would for my Java code.)
The general rule of thumb I’d give is that a performance-aware but not micro-optimized Haskell program will typically run in about 2x to 5x the time of a comparable C program, and will take somewhere between 2x and 10x as much memory. For a naive Haskell program the range is much bigger: maybe 2x to 10x as much time and 10x to 1000x as much memory (it’s easy to do a lot of allocations in Haskell).
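To illustrate how easy it is for a naive Haskell program to allocate heavily, here is a classic sketch (my own example, not from the comment): `foldl` lazily builds one unevaluated thunk per list element before collapsing the whole chain, while the strict `foldl'` keeps the accumulator evaluated and runs in constant space.

```haskell
import Data.List (foldl')

-- Illustrative example: the lazy foldl accumulates a chain of thunks
-- (one allocation per element), while the strict foldl' forces the
-- accumulator at each step and runs in constant space.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0  -- builds ~n thunks; can overflow the stack on big inputs
strictSum = foldl' (+) 0  -- constant-space accumulator

main :: IO ()
main = print (strictSum [1 .. 1000000])  -- 500000500000
```

The two functions compute the same result; only their allocation behaviour differs, which is exactly the kind of gap that separates "naive" from "performance-aware" Haskell.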
For extremely optimized Haskell you can get close to the speed of C, but there’s still a garbage collector.
There are also certain classes of problem where a naive Haskell implementation can beat other languages, including C, by a mile, if you use the same implementation in both languages. Laziness can be really great sometimes. This doesn’t happen much in practice, though, because code that’s really efficient with lazy evaluation is very obviously inefficient in a strict language, so people don’t usually write code that way.
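A small sketch of the kind of win laziness can give (my own hypothetical example): the expression below looks like two full passes (`map`, then `filter`), but laziness means work happens only until the first hit, so it terminates even on an infinite list. A strict language evaluating the same expression literally would never return.

```haskell
-- Hypothetical illustration: laziness turns "map everything, then
-- filter everything" into "compute until the first match".
firstSquareOver :: Int -> [Int] -> Maybe Int
firstSquareOver limit xs =
  case filter (> limit) (map (^ 2) xs) of
    []      -> Nothing
    (y : _) -> Just y

main :: IO ()
main = print (firstSquareOver 100 [1 ..])  -- Just 121, despite the infinite list
```

In a strict language you would have to rewrite this as an explicit loop with an early exit to get the same behaviour, which is the "obviously inefficient if transliterated" point above.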
In the end I’d say Haskell is a good choice for performance-sensitive but not performance-critical programs. In a larger Haskell application, if you have a performance-critical bit you can usually write Haskell code that will be fast enough if you know what you’re doing. For something standalone that needs to be as fast as possible, or for the most performance-critical parts of a bigger application, I’d consider using C or C++.
To rephrase using my experience: "performance aware" Haskell is about as "fast" as Go, but needs more memory, and both are slower than the same Java code - but both are way more fun to write ;). Optimising Go is easier for most people though, in Haskell you _really_ need to know Haskell internals (how to read core and write unboxed code) and understand laziness.
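To make the "know Haskell internals" point concrete, here is a sketch (an assumed example, not from the comment) of the manual strictness work it alludes to: a bang pattern forces the accumulator at each recursive step, which lets GHC unbox it to a raw machine `Int` in the generated Core. Without the bang, `acc` would accumulate a thunk per element.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Strict counting loop: the bang on acc forces evaluation each step,
-- so GHC can unbox the accumulator instead of building thunks.
count :: (a -> Bool) -> [a] -> Int
count p = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (if p x then acc + 1 else acc) xs

main :: IO ()
main = print (count even [1 .. 1000000 :: Int])  -- 500000
```

Verifying that the unboxing actually happened means dumping Core (`-ddump-simpl`), which is the part most people never learn; in Go the equivalent loop is strict by default, which is why it is easier to optimise for most people.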
The goal of this implementation is not to be fast, but to be clear.
I am doing some inefficient things (like two pass encoding) on purpose to keep things simple and clear. So using this particular piece of code to judge a language's performance potential is not really the way to go here.
The point they’re making is that there is no performance without tradeoffs and “fast” is meaningless unless you define what you’re measuring. Asking the question implies a misunderstanding of the intent of the implementation, OP was trying to subtly let them know.
Thanks for your advice. I've reverted the license back to 1.0, adjusted the royalty rate, and made the royalty disclaimer more apparent in case someone says it contains a HIDDEN royalty :)
As a small team, we don't expect to make much money from the royalty alone. We chose to open source this project for many good reasons. The only reason for adding a royalty is that we want to protect our work from being taken by some random third party.
Yes these licenses are common, but the dual licensing with essentially the same effect as a single license seems fishy and suggests there might be something hidden. Just my two cents.
My concern is: are GPL-family licenses good enough to let contributors get involved with this project? I've heard that GPL licenses are like viruses and that some people hate them.
Or maybe dual licensing is good enough if one of the licenses is the Apache License (for personal use only)? This is what the VGG License 2.0 does.
I can see your concern with GPL-family being like a virus. Indeed, it is very restrictive and contributors may not want to get involved due to the logistical overhead of trying to add features to/improve it.
Addressing your concern about not being paid for your work, this is an age-old problem with open source. It's hard to make money off a product that is freely available to download. A good rule of thumb is to make users feel like they _want_ to give you money, instead of making them feel like they _have_ to give you money.
Some options you have if you want to keep it open source but still make money:
1. Do the dual licensing approach, and use one open source license of your choice, and one commercial license to allow businesses to use your engine in their proprietary software.
2. Donations link in the open source software
3. Paid technical support/Freemium model
4. Paid training programs to train users to become an expert with your engine
5. Early access subscription model: create a system where users can pay a monthly subscription fee in order to get new features earlier than official release.
https://www.youtube.com/watch?v=p0lFyPuH8Zs
I needed a transcript, as the FOSDEM website didn't provide one (404 not found), but YouTube does.