Yeah, that's an issue, esp with WebGL.. but you can get pretty far by reducing calls with a cache, things like "don't set the uniform / attribute if you don't need to".. but I hear WebGPU has a better API for this, and eventually this should get native performance.. though, I also wonder, is this really a bottleneck for real-world projects? I love geeking out about this.. but.. I suspect the real-world blocker is more like "user doesn't want to wait 5 mins to download AAA textures"
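That call-caching idea can be sketched concretely. Here's a minimal Rust sketch of redundant-state elimination; `UniformCache` is a made-up name, and the real GL call is stubbed out as a counter:

```rust
use std::collections::HashMap;

// Redundant-state elimination: only issue the (expensive) driver call
// when the value actually changed since the last time we set it.
struct UniformCache {
    last: HashMap<String, f32>,
    calls: u32, // how many real GL calls we would have issued
}

impl UniformCache {
    fn new() -> Self {
        Self { last: HashMap::new(), calls: 0 }
    }

    fn set_uniform(&mut self, name: &str, value: f32) {
        if self.last.get(name) == Some(&value) {
            return; // same value as the last call for this name: skip it
        }
        self.last.insert(name.to_string(), value);
        self.calls += 1; // here you'd actually call e.g. gl.uniform1f(...)
    }
}

fn main() {
    let mut cache = UniformCache::new();
    cache.set_uniform("u_alpha", 1.0);
    cache.set_uniform("u_alpha", 1.0); // skipped: unchanged
    cache.set_uniform("u_alpha", 0.5); // issued: value changed
    assert_eq!(cache.calls, 2);
    println!("real GL calls issued: {}", cache.calls);
}
```

Same trick applies to bound textures, programs, blend state, etc. - anything where "set to what's already set" costs a driver round-trip.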
> Nerfed timers
Yeah, also an issue. Fwiw Mainloop.js gives a nice API for having a fixed timestep and getting an interpolation value in your draw handler to smooth things out. Not perfect, but easy and state-of-the-art afaict. Here's a simple demo (notice how `lerp` is called in the draw handler): https://github.com/dakom/mainloop-test
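The fixed-timestep pattern Mainloop.js wraps can be sketched in a few lines. This Rust version is a hand-rolled illustration (not Mainloop's actual API): the simulation advances in constant steps, and the leftover time in the accumulator becomes the interpolation alpha passed to the draw handler:

```rust
// Fixed timestep with render interpolation: the simulation advances in
// constant DT steps, and whatever time is left in the accumulator
// becomes an alpha in [0, 1) used to blend between the last two states.
const DT: f64 = 1.0 / 60.0; // fixed simulation step, in seconds

fn lerp(a: f64, b: f64, alpha: f64) -> f64 {
    a + (b - a) * alpha
}

struct Game {
    prev_x: f64, // position at the previous simulation step
    curr_x: f64, // position at the latest simulation step
}

impl Game {
    fn update(&mut self) {
        self.prev_x = self.curr_x;
        self.curr_x += 100.0 * DT; // move at 100 units/sec
    }

    // Draw at a position blended between the two sim states, so
    // rendering stays smooth even when frame times don't divide DT.
    fn draw(&self, alpha: f64) -> f64 {
        lerp(self.prev_x, self.curr_x, alpha)
    }
}

fn main() {
    let mut game = Game { prev_x: 0.0, curr_x: 0.0 };
    let mut accumulator = 0.0;
    // Stand-in for a real frame clock: three frames of 25ms each.
    for frame_time in [0.025_f64, 0.025, 0.025] {
        accumulator += frame_time;
        while accumulator >= DT {
            game.update();
            accumulator -= DT;
        }
        let alpha = accumulator / DT;
        println!("draw at x = {:.3}", game.draw(alpha));
    }
}
```

The point is that physics stays deterministic (always DT) even when the browser's timers are coarse, and the draw handler hides the mismatch.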
Re: multithreading, I don't think that's a showstopper... more like, the techniques you'd use for native aren't going to work out of the box on the web; it needs more custom planning. I see this as more of a problem for speeding up work _within_ a system, i.e. faster physics by parallelizing grids or whatever. But having a physics WASM module run in a worker thread that shares data with the main thread is totally doable, it just needs elbow grease to make it work (it will be nice when multithreading _just works_ with a SharedArrayBuffer)
Multithreading, yes, that works the way you mention, but I meant multiple rendering contexts.
In standard OpenGL the de-facto way to do parallel GPU resource uploads while rendering is to have multiple rendering contexts in a "share group" which allows them to share some resources such as textures. So then you can run rendering in one thread that uses one context and do resource uploads in another thread that uses a different context.
There was a sibling comment that mentioned OffscreenCanvas, which hints that it might be something that lets a web app achieve the same thing.
I've been using Figment [0]. It needs a little bit of hand-holding when you have structs with different field names, but otherwise it does everything I need for that particular part of the problem.
Small, related anecdote: back in the year 2001 or so, this is in the same family of how I built websites...
The difference is I wrote it in C, as Apache modules. So, like, most people were using other people's C modules (like PHP or CGI), but once you dug deeper and wrote your logic/site _as_ a C module, it was so much more fun (and powerful too).
I didn't have much of a templating language, so the answer to "can we change the text on this page?" was usually, "sure, just give me a few minutes to recompile the whole Apache server" :D
A few others have brought up Apache modules, and they are incredibly similar to my idea. :D I did not know about them while I was developing mine. The main difference, as far as I could see, was that you had to recompile / restart the server, which I try to avoid, so little changes require almost no recompiling.
In Apache 1.3 it was far from unusual to do a complete bundled compile and restart the entire thing every time, because gremlins showed up when you dynamically loaded the more complex modules often enough that it was operationally less aggravating overall to take the brute-force approach (I did quite a bit of that a couple decades back, for my sins).
Apache 2+ is a very different (and rather more robust) beast, and also has the 'graceful restart' system - see https://httpd.apache.org/docs/2.4/stopping.html - which makes the parent tell its worker processes to drain their request queues, -then- exit, after which each one is replaced in turn until you've fully upgraded to the new configuration+code.
This approach has its disadvantages, of course, but it's not that morally different from how Erlang processes hot-reload into new code, and once you knew what you were doing the end result was simple, predictable, and nicely transparent to end users.
You might also want to look into ISAPI extensions[1,2] in Microsoft’s IIS, those are also just DLLs that the web server loads into itself, and were once advertised as the most performant way to serve dynamic stuff from it. It doesn’t look like there’s a way to request that extensions be reloaded, though: the server either unloads them at its discretion (once no in-flight requests are using them?) or not at all (if “extension caching” is enabled). But there’s an advert[3] from somebody who shimmed that capability onto it back in 2006.
(You wouldn’t have had a good day debugging these things, mind you. But it’s something that people experimented with back in the day, alongside Web servers programmable in Java[4] or Tcl[5].)
Where can one find resources about writing Apache Modules?
When I was experimenting with writing my own HTTP server, I eventually figured out that I'm not really interested in writing my own production-quality server from the ground up—instead, I might be interested in just writing an application layer, in the form of a module for Apache, or nginx, or something. But the resources to create such modules seem to be scarce and/or hard-to-find.
Putting aside the politics and all, focusing on the tech: one thing I came across when trying to do my own collision detection was the idea of using the GPU and occlusion queries for pixel-perfect results.
I didn't come up with the technique, but it's super cool, and since you're not taxing the GPU with tons of 3d triangles, it's perhaps a bit more free to do stuff like that.
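For illustration, here's a CPU model of that occlusion-query trick in Rust. The buffers and function names are stand-ins for what the GPU actually does with a stencil/depth buffer and a `GL_ANY_SAMPLES_PASSED`-style query; the sprites are tiny boolean masks so the idea fits in a few lines:

```rust
// CPU model of GPU occlusion-query collision: "render" sprite A's opaque
// pixels into a mask, then "render" sprite B under an occlusion query and
// count how many of B's opaque pixels land on A's. On the GPU, that count
// (or an any/none boolean) is what the query returns.
const W: usize = 8;
const H: usize = 8;

type Mask = [[bool; W]; H]; // true = opaque pixel of the sprite

// Stand-in for rendering sprite A into the stencil buffer.
fn render_to_stencil(sprite: &Mask) -> Mask {
    *sprite
}

// Stand-in for rendering sprite B with an occlusion query active:
// the result is the number of B's fragments that overlap A's pixels.
fn occlusion_query(stencil: &Mask, sprite: &Mask) -> u32 {
    let mut samples_passed = 0;
    for y in 0..H {
        for x in 0..W {
            if stencil[y][x] && sprite[y][x] {
                samples_passed += 1;
            }
        }
    }
    samples_passed
}

fn main() {
    let mut a: Mask = [[false; W]; H];
    let mut b: Mask = [[false; W]; H];
    a[2][2] = true; // one opaque pixel each, at the same spot
    b[2][2] = true;
    let stencil = render_to_stencil(&a);
    // Any samples passed => pixel-perfect collision, regardless of how
    // irregular the sprite silhouettes are.
    println!("collision: {}", occlusion_query(&stencil, &b) > 0);
}
```

The win on the GPU is that you get this per-pixel test "for free" from hardware rasterization, with no bounding-box or polygon approximations on the CPU side.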
Ignorant question, but I assume that means there's no browser-facing access to the GPU for a web-based application in this way (WebAssembly or otherwise)?
This article makes several different points that would ideally each be tackled on their own.
You don't need a router when you have pattern matching (just split the url and match on static and dynamic vars however you need)
Auth is typically DIY in any language, or SaaS like Firebase/Auth0. It's not a language or framework problem, necessarily
CSS/JS tooling makes no sense for many frontend Rust frameworks. Dominator, for example, is pure Rust (not JS) and has its own approach to styling that works for it (e.g. attaching styles to signals, so they change on the fly)
I get what the author is saying - in fact I've been around the block a couple times solving all the different points in the article and it is painful. For example, see https://github.com/dakom/dominator-workers-fluent-auth which does a lot of the stuff here on the Cloudflare workers framework (also adds another wishlist item - localization with Fluent)
A "batteries included" full framework that does _everything_ is nice to have, but many real-world projects will want to replace those opinionated decisions and go the DIY route anyway. Rust is more than mature enough, even on the frontend web, to tackle all of that - if you know where to look (wasm-bindgen, etc.)
> You don't need a router when you have pattern matching (just split the url and match on static and dynamic vars however you need)
Web frameworks allow for much more: URL redirections, specific management of append-slash and case-sensitive URLs, complex regex matching, etc.
> Auth is typically DIY in any language, or SaaS like Firebase/Auth0. It's not a language or framework problem, necessarily
False. Django, Laravel, Rails, and other batteries-included frameworks have really good auth management. I personally consider it a gigantic mistake in 90% of orgs to outsource auth to external parties.
The ability for experienced web devs to just hit the ground running and have 10 basic CRUDs running in a single day because they don't have to deal with this needless complexity is simply amazing for small businesses.
Erm, just return a new URL after the match? Get fancy with state-machine-like enums? Rust has everything you need here, not getting why you think this requires a framework.
> Specific management of [...]
again, match for that, map your URL parts, whatever - it doesn't need a 10,000-pound gorilla when it can be done in a line or two of code
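As a sketch of what that looks like (the routes and handler bodies here are hypothetical, no framework involved):

```rust
// Router-less dispatch: split the path into segments and pattern-match
// on them. Static parts are string literals, dynamic parts bind to
// variables; everything else falls through to a 404.
fn route(path: &str) -> String {
    let segments: Vec<&str> = path.trim_matches('/').split('/').collect();
    match segments.as_slice() {
        [""] => "home".to_string(),
        ["about"] => "about".to_string(),
        ["users", id] => format!("user page for {id}"),
        ["users", id, "posts"] => format!("posts for {id}"),
        _ => "404".to_string(),
    }
}

fn main() {
    assert_eq!(route("/users/42"), "user page for 42");
    assert_eq!(route("/users/42/posts"), "posts for 42");
    assert_eq!(route("/about"), "about");
    assert_eq!(route("/no/such/page"), "404");
    println!("ok");
}
```

Redirects are just another arm returning a new URL, and anything regex-shaped can call into the regex crate from inside an arm.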
> complex regex matching
erm, regex crate?
> Django, Laravel, Rails
Yup, those are valid choices. I said it's not a framework problem necessarily.
Not a one-size-fits-all sorta problem.
WordPress is also a fine choice if your business is knocking out new sites for a new client every month.
But, if you're building a single product over the course of a year or two, it's not the end of the world to spend a couple weeks rolling your own auth and hooking it up to transactional emails and everything else. It's just one small problem to deal with, not major in the grand scheme of things. YMMV
> Rust has everything you need here, not getting why you think this requires a framework.
Because you're reinventing a wheel that doesn't need reinvention, and the most likely thing is that you will neither reinvent it nor pick the best library that an opinionated framework with hundreds of eyeballs has.
> again, match for that, map your URL parts, whatever - it doesn't need a 10,000-pound gorilla when it can be done in a line or two of code
Sufficiently large and complex websites will have that need.
> But, if you're building a single product over the course of a year or two, it's not the end of the world to spend a couple weeks rolling your own auth and hooking it up to transactional emails and everything else. It's just one small problem to deal with, not major in the grand scheme of things. YMMV
No individual problem is large, but it is objectively a dozen little problems, all with a nontrivial chance to blow up into larger problems.
Code reuse and frameworks exist because, unless you've been doing web development for a long time, you _will_ run into issues that have already been solved.
Do you see that it has matching on static and dynamic parts, multiple variable capture, etc.?
I'm not talking from a theoretical perspective. I'm showing you an actual example of a fully baked auth system that does everything on the wishlist (and more) and has no need at all for a router because Rust is itself powerful enough to do all that out of the box
> and the most likely thing is that you will neither reinvent it nor pick the best library that an opinionated framework with hundreds of eyeballs has.
Having used Spring has convinced me that framework popularity has no correlation with quality.
Of course, goes without saying - the parts I'm most happy to discuss here are about the tech... though... this was a slow-burn of a project I started nearly a year ago and then abandoned. I think the pixel-perfect occlusion query stuff is neat (for someone like me who never worked on AAA games at least), but, I wouldn't really be able to explain how it works without re-learning, hehe ;)
I really like the combo of Shipyard ECS for the game stuff and Dominator for UI. I've used Dominator on large projects and it continues to be a great framework.
Shipyard is just super clear and is easy to reason about. I know Bevy is the hot thing these days, and maybe I'd check that out for a larger / collaborative project, but Shipyard really hit the sweet spot for letting me get into the nitty gritty (which I could manage here since it's just simple 2d state machine sorta stuff)
I do consider it a feature, in hindsight. Learning to program by asking "dumb" questions was great: chats were ephemeral, nobody cared if the same question was asked for the ten-millionth time, and there was no risk of embarrassment at being, like, 12 years old and asking greybeards for help.
Nobody also felt bad saying "RTFM" because, whatever, it blows over in a minute, there's no permanent record of having a harsh moment, more free to just move on.
The same old questions being asked (since there was no search) also provided more opportunities to answer them, so newbies could start to learn by teaching.
So, yeah, I think something beneficial was lost, even if I wouldn't go back to that approach- it's more of a tradeoff than a definitive improvement
> I do consider it a feature, in hindsight. Learning to program by asking "dumb" questions was great: chats were ephemeral, nobody cared if the same question was asked for the ten-millionth time, and there was no risk of embarrassment at being, like, 12 years old and asking greybeards for help.
I pity the new generations for not having this kind of opportunity: the opportunity to make mistakes, say dumb stuff and goof off with all these things vanishing in a matter of minutes, hours at most.
I miss the old internet: at any point you could pick a new nickname and get a fresh and clean new email address from many of the webmail providers and just start a new online life.
And it was considered normal. It was actually a "best practice" to never use your real name.
This approach simply doesn't work when users are allowed to vote or have any sort of scoring mechanism, since bad actors will also create multiple "online lives" and manipulate those systems with a few clicks.
Remember when phrases like "Never use your real name online" used to be near universal? Yeah, this is something I also miss about the old Internet.
Like, even back then you could absolutely tie your IRL identity up with your online identity, but the difference, of course, was that it wasn't a requirement of existing online, like it is now. Yeah, you can stay anonymous, but a) it's super difficult, since the modern-day assumption is that you're not doing that, and b) people assume you're up to no good, because why would you hide who you are unless you were doing something shady? And now even "normal" people lament just where we went wrong and what happened to online privacy. To the aware, privacy dying like this was clear as day, but I suppose most just didn't hear, or chose to ignore, the alarm bells.
And now everything is logged, analysed, and associated with the people who produced the messages and other sundry content. There is no ephemerality; we need laws just to be forgotten by services (as an EU citizen, I'm glad the law exists here, but it shouldn't need to be a law; ideally it would be assumed), and we're constantly being watched by both states and surveillance capitalists alike. Not actively in most cases, mind you, but passively, with our movements, our interactions online, and just what we do getting aggregated into humongous Big Data sets to train statistical models on. Mostly to surveil us even harder, or to manipulate us in the form of advertisement, which can be even more insidious in some ways.
I'm sure that stuff like the Cambridge Analytica fiasco could have occurred even without this destruction of privacy, anonymity, and ephemeral content, but I posit that it would have been way more difficult had people not been encouraged to put everything about themselves into services that would log them and build evermore complex models about them and their thoughts. And now this kind of stuff can be used to destroy democracies, and as alluded to earlier, manipulate for example our spending habits. And now we all wonder just where this all went wrong.
It's often desirable for UI to be described in a declarative fashion, i.e. refactoring from "do something" (set the button color to red) to "is something" (the button is red if the state is X)
I might not be describing that well, because once you go down that road it really becomes a whole overall approach that infects the whole program (like functional reactive programming), and so it's really about how the whole flow fits together from top to bottom, and that can be very elegant.
I don't think that's the right fit for everything, i.e. in gamedev it might make more sense to just update some object's position imperatively, but for UI it tends to work pretty well.
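A minimal Rust sketch of the "is something" style (the state enum and color mapping are made up for illustration, not any particular framework's API): the button's color is a pure function of state, so there's no imperative "set color" call to keep in sync.

```rust
// Declarative style: the button's color IS a function of state, rather
// than something an event handler imperatively sets after the fact.
enum AppState {
    Error,
    Busy,
    Idle,
}

// "The button is red if the state is Error" -- a pure mapping from
// state to presentation, re-evaluated on every render.
fn button_color(state: &AppState) -> &'static str {
    match state {
        AppState::Error => "red",
        AppState::Busy => "yellow",
        AppState::Idle => "green",
    }
}

fn main() {
    // A re-render just re-evaluates the mapping; there is no
    // "set_color" call to forget somewhere state can change.
    let state = AppState::Error;
    assert_eq!(button_color(&state), "red");
    println!("button is {}", button_color(&state));
}
```

Signal-based frameworks like Dominator essentially automate the "re-evaluate on state change" part, so the mapping only reruns when its inputs actually change.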
Asimov and Silverberg's Nightfall novel seems related: a great sci-fi story about a civilization on an alien planet that isn't prepared for the long cycle of astronomical movements (they go crazy during an eclipse every 2,000 years or so). Fun read :)
Asimov and Silverberg confused me for a moment - Nightfall was an Asimov-only short story from 1941. Still think it's one of his best. TIL there was a collaborative novel as well, which carried on after the events of the short story.
* * *
If you like novels that explore civilisations that have long astronomical cycles, the other classic is the Helliconia trilogy by Brian Aldiss. It's set in a double star system where the main planetary orbit takes 2500 Earth years, and the seasons last for Earth centuries. Civilisation tends to collapse when winter comes, but gradually approaches semi-industrial levels of technology by autumn.