Website width isn’t calculated correctly for mobile, and you need to scroll left and right in order to read the text. I was doing that right after reading
Because it's not text organised into sections, paragraphs, and whatnot with HTML; it's just a plain text file rendered into one big paragraph with whitespace set to `pre` ¯\_(ツ)_/¯.
It's arguable whether this is better or worse, because now the text wraps where needed _but also_ where it would have wrapped on a wider viewport, even though it has already wrapped once. So on mobile you get a full line, then half a line, then a full line, then half a line, and so on, wrapping alternately at the natural edge of the viewport and at the original line end terminated by a hard line break.
It probably varies from person to person, but while I find wrapping like that harder to read than seeing the text laid out normally, for code it is still much better than scrolling back and forth on every line.
Using Firefox Mobile's reader mode turns the page into one continuous brick of text. There's no way of discerning sections, lists, or anything else, since none of the newlines seem to survive. I might have missed something, but that ain't a fix. I'd prefer scrolling sideways.
There's a big difference between "mobile first" and "mobile hostile" (even if the hostility is unintentional). Here, for example, the website is just 10-15 characters too wide on mobile, regardless of orientation, which forces you to scroll left and right constantly just to read the lines. And it feels hostile in either orientation: if I flip my phone to landscape, the font size increases, so I still can't read the whole line.
I agree that designing for mobile first is annoying for the web in general. But... This is text! Most of the website is text, and it doesn't wrap or resize properly for a small screen.
Here, it's likely just a bug with the window width calculation, not a "mobile first" argument.
If a website is 99% text and it can't be read on every screen size from a phone to an ultrawide monitor... then it's a bug, not a design choice.
Counter-argument: content is mostly consumed on mobile devices nowadays, so why should I require users to scroll horizontally to keep reading, losing track of the current row in the process? “Mobile first” was more of a methodology, i.e. start designing from the small device and work your way up. And this website clearly failed at keeping me reading on my mobile device. While I don’t necessarily think “mobile first” was a mistake, I believe an adaptive layout is a more appropriate methodology today, given the existence of a ton of different devices, screens, ratios, and pixel densities. Most websites still suck on my 32’’ external monitor, so it’s not a mobile-only problem.
Counter-counter argument: A person creating a personal homepage has no obligation to cater to any particular audience. See, for example, jwz.org [NSFW when linked from here, very much on purpose]
You are right that nobody is beholden to creating pages that can be viewed on any device. However, it is not unreasonable to point out mismatched interests when a person says they care a lot about quality, interoperability, and values in general, but then doesn't care about making the document explaining this viewable on most devices.
Yes, ultimately it's not important. But if you only publish your website over Gopher, at some point you have to accept that you're interested in making cool things rather than in the other things mentioned.
> Counter-counter argument: A person creating a personal homepage has no obligation to cater to any particular audience.
That’s irrelevant. Most Web users are on mobile, so if you decide to set up a website, the very basic thing is to ensure it’s readable on mobile. And since plain HTML is already readable on any device by default, it would be quite strange to voluntarily make it unusable on those devices.
The mobile crowd may not be your desired or intended audience though. It's fine to publish a website in Icelandic even if more people could read it in English.
All that's needed for a blog site to be mobile-friendly is to just not actively break it. Plain html articles with no css read great on mobile, e.g. http://motherfuckingwebsite.com/
"Mobile first" means that the design is accessible on all devices and resolutions. Traditionally websites would be built primarily for large displays, and a separate mobile version would be tacked on, if the authors cared about those users.
In 2023, there's no technical reason websites should be inaccessible on any device. Doing so intentionally is needlessly user-hostile.
This is a very first-world opinion. Low-end (second-hand) smartphones are among the only computing devices available to a large part of the world.
What does "keep it simple" even mean? What even is "understandable"? Every article about this sort of thing skips past the interesting arguments and assumes that its approach is the simple one, that its aim is simplicity.
If you are presented with the option of an abstraction – perhaps a library or framework, perhaps a service or API – is it "simpler" to use it or to not?
If you don't use it, and you build the alternative, you know more about how the system works as a whole, and therefore arguably it's simpler. But complexity is still introduced, and in some cases the abstraction layers still need to be built (but maybe not all of them).
If you do use it, you can consider the system simpler because the complexity is hidden, and on the happy path of usage that may be true. But the complexity still exists, and you don't understand it because it's hidden. Arguably it's simpler, yet it's easy to point to ways in which it's more complex.
I've met engineers who strongly prioritise one or the other of these approaches, and to them that way is obviously simpler. The problem is that both approaches are obviously simpler when you look at them in a certain way; you have to get past the obvious to realise that there are trade-offs.
Imagine that you want to make an apple pie. Normally you would buy all the components and utensils, then bake it in a pre-existing oven.
But if you want to make an apple pie from scratch, you first need to create a Universe, and you likely go for the simplest Universe that supports apple pies. The overall stack would be simpler (especially if you pursue simplicity and ease of understanding as a goal), but the last step, making the actual pie, may be more involved, and the taste of the resulting pie may be not top-notch. In exchange, you have a pie which you completely understand from first principles.
> In exchange, you have a pie which you completely understand from first principles.
If you ever get to it, with trying to create a Universe and all. It’s like the drum loops joke:
> I thought that using loops was cheating, so I programmed my own using samples. I then thought that using samples was cheating, so I recorded real drums. I then thought that programming them was cheating, so I learned to play the drums for real. I then thought that using purchased drums was cheating, so I learned to make my own. I then thought that using pre-made skins was cheating, so I killed a goat and skinned it. I then thought that was cheating too, so I grew my own goat from a baby goat. I also think that this is cheating, but I’m not sure where to go from here. I haven’t made any music lately, what with the goat farming and all.
You can add stuff to make something easier to build or deploy (faster, more automated), while the addition makes the whole more complex (less understandable, harder to troubleshoot).
I don't get your comment in regard to this article.
The author finishes by very explicitly saying: do your own thing, your way. Moreover, they suggest they're not happy using systems whose workings they find obscure or a burden to understand.
One of the article's themes is simplicity and its relationship to how we design software. So no, I think the comment you're replying to is totally apt and welcome discussion of the idea.
I did read the article. My comment is not a criticism of the article, it's a general point that I think about a lot and which I think was worth discussing in the context of this article.
I think this article is a fairly clear example of one side of the debate, and while it's somewhat open to the idea that there are other ways of doing things, it is also somewhat dismissive of using other peoples' abstractions.
Maintainability is one of my core values too, down to doing my own bike maintenance, like you! But it is far from being #1 to such a degree that this would make sense for me.
The Framework laptop is a good divergence point for us. Whereas you continue to eke out 9s on the DIY side, I realize resilience by having a spare decade old ThinkPad lying around at all times on which I can run Linux and Neovim in a pinch, and trusting there will be a lot of sub-$100 ThinkPads in the future should that one break on me too. I carry around the bits of CS and mathematics needed to trust myself to write a slow, informal, bug ridden parser with Haskell's combinators from Markdown to HTML if I ever have to. I don't see that day coming any time soon.
I'd hire you if I could. You'd be the perfect counterweight to a great many folks who tilt in the opposite direction.
The author is honest with themself and earnest with the reader. They're unapologetically themself, and figuring out how to achieve their goals in ways that align with their values and bring them joy. If those values do not align with yours, that is okay. That is beautiful! Now you have a clue as to what path you might explore.
The article is kind and humble and authentic, and I think that the author and the article make the world a better place. Thank you for writing it, and thank you for sharing it.
> The author is honest with themself and earnest with the reader.
Earnest certainly, but being honest with themselves? Very much the opposite. And humble? Very much the opposite. The whole piece is self-congratulatory with the occasional /r/confidentlyincorrect nugget here and there.
To me it's easier to think about this as an exercise, an aid for the author to achieve their internal goals, likely of learning, being in control, etc. It's only incidentally intended for the reader, so the reader experience, Spartan as it is, is not the point, the writer's experience is.
Since you read it, do you mind sharing its contents with us? It looks like many of us, who came here from a mobile platform, gave up reading way too early.
The author wants to understand and control the whole toolchain used to build their website (excluding, of course, the most basic things, from the CPU architecture up to the programming language), because they want to be able to fix it themselves and want it to still work in 10 years, without relying on others. They tried Hugo but didn't like it for that reason, and writing plain HTML was too much work, so they built a single-binary dynamic site generator with a web server themselves.
I gave it a quick look and I am almost certain that this person is trolling us. The website literally claims to use “plain HTML” when in reality it is plain text. There’s zero formatting HTML here: no headers, no paragraphs, no lists, not even a <center>.
The little HTML and CSS it uses actively renders the website worse. The poster is a troll.
Maybe, but Git has hooks. The author could deploy on push to do it only when needed, and immediately when relevant, rather than having to decide on a trade-off between immediacy and doing a lot of useless operations.
Obviously, this approach might not work for everyone, but I like to self-host my repos and use a git hook (post-receive) like this:
    #!/bin/sh
    # post-receive hook: git feeds one "<old-sha> <new-sha> <refname>" line
    # per pushed ref on stdin. For any ref whose name contains "build",
    # record the new commit sha as a pending build.
    BUILDDIR=/home/buildhook/.ib-build
    BUILD=$(grep build | cut -d" " -f2)
    if [ -n "$BUILD" ]
    then
        touch "$BUILDDIR/$BUILD"
    fi
And then the buildhook user has a job that runs every minute by cron:
    #!/bin/sh
    # Cron job (every minute): take the most recently queued build marker,
    # remove it, kick off the build on the build host, and mail the output.
    BUILDDIR=/home/buildhook/.ib-build
    BUILD=$(ls -t "$BUILDDIR" | head -1)
    if [ -n "$BUILD" ]
    then
        rm -f "$BUILDDIR/$BUILD"
        (
            echo "Building $BUILD on builder..."
            ssh builder time ./build "$BUILD"
        ) 2>&1 | mail -s "Building $BUILD on builder" build@example.com
    fi
For my use case, on this repo, pushing to a branch with the word "build" in its name will trigger a build of that commit, which builds and packages it into a dpkg, and the build server hosts a private apt repository so I can just `apt-get update; apt-get install blah` on all the servers.
An alternative strategy is pushing to a different repo on a build server, etc.
I guess that would have been fine if we didn't have alternatives (i.e. callbacks). He is already running a web server, so he could listen through that for updates.
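A minimal sketch of that in Go (hypothetical endpoint path and token, not the author's actual setup):

    package main

    import (
        "net/http"
        "os"
        "os/exec"
    )

    func main() {
        http.HandleFunc("/hooks/deploy", func(w http.ResponseWriter, r *http.Request) {
            // Shared-secret check so random crawlers can't trigger deploys.
            if r.Header.Get("X-Deploy-Token") != os.Getenv("DEPLOY_TOKEN") {
                http.Error(w, "forbidden", http.StatusForbidden)
                return
            }
            // Pull in the background and respond immediately.
            go exec.Command("git", "-C", "/srv/site", "pull", "--ff-only").Run()
            w.WriteHeader(http.StatusAccepted)
        })
        http.ListenAndServe(":8080", nil)
    }

The forge's push webhook hits the endpoint, so the site updates only when there is actually something new.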
you're making a lot of assumptions about how my deployment works here!
my git repositories are all on the same server that hosts my website - git is not doing a remote fetch - it's a local fetch, which makes it basically instantaneous.
even if it were a remote fetch, i would be fine with it. 525,600 http requests per year is less wasteful than NetworkManager's heartbeat (which defaults to 1 heartbeat request per _30 seconds_ on Arch)
for some context, a stock nginx server with 8 cpus can handle ~500k+ http requests _per second_.
Perhaps they’re right in their critiques, but as an industry we’re rapidly heading off into mass stupidity in order to follow a bunch of collective “ought to”s and “should”s, many of which have the current validity of an urban myth.
What passes for “engineering” these days would be laughable if we weren’t actively building our future on it. My father was an aerospace engineer and I feel I can hear him rolling over in his grave at what passes for engineering in modern software development.
Yes, this is a comment section so everyone is free to comment. But personally it’s taken me 25 years in the industry to finally ignore all those chattering voices and actually do what I feel makes sense for those few opportunities where I can.
And on a personal site no less? That old “here’s to the crazy ones” line in the Apple commercial sure rings far from true. Given what the internet has become and that once startups are now literally the largest corporate behemoths on the landscape- it should not be a surprise.
Teddy Roosevelt’s quote about “the man in the arena” is so true these days.
thx for the encouragement! i try to take criticism lightly, especially on hn, and especially when it's about a post on my personal website.
my website is my sacred & unique playground, not a perfectly optimized website. i write it all myself, and it evolves over time.
my style definitely strikes people the wrong way sometimes - i'm used to people being critical about it. i make some weird decisions for the sake of contorting my content into a form that i like, but i think that's what makes my content endearing. it's not like everything else out there. some rough edges, sure, but i'll get better over time as i explore my "personal form" :)
to me, a website is like a long-term art project. it should represent the author - and in my case, my site represents me, unabashedly.
My website, game engine, and webserver are one binary. The source is 1000 lines of C code.
The binary has a size of 164k (I don't know why it is so damned big).
I included my own webserver. The existing servers add way too much (hidden) complexity.
You can check out the result, an interactive fiction game in German, here: http://vmd34232.contaboserver.net/
(no tracking, no ads, not commercial)
Ah, that's nice. Long ago I used parchment.js to load an Inform 7-created z5 file on my website. You could try compressing your executable with upx: https://upx.github.io/
I agree wholeheartedly with all of the points made in the article, but this website kinda sucks as an example.
- The website is "one binary", but the deploy strategy involves compiling it instead of running a binary artifact
- Won't rely on github pages, but depends on external resource at openlibrary.org
- Writing your own stuff lets you take advantage of open standards, but the .html files in thoughts/ are structured as plaintext with a false file extension
- When people go this route, they usually try to cram everything into the initial page load. This site instead serves several static files dynamically.
- The Golang code does not cache any responses, and does not store templates in memory.
- Several dependencies in the go.mod file, all of which seem unused?
Most importantly:
No discussion about the technical benefits of the "single-binary" ethos compared to modern infrastructure.
It's my humble opinion that it's a great idea, and this is a poor example.
The code uses `//go:embed` which actually embeds a fake filesystem into the binary. It’s really easy to miss that part, but it truly is one single binary, despite seeming to reference files by paths.
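For anyone who missed it, a minimal sketch of the pattern (directory and variable names hypothetical, not lifted from the author's code):

    package main

    import (
        "embed"
        "html/template"
        "net/http"
    )

    // Everything under static/ and templates/ is compiled into the binary;
    // the paths below resolve against the embedded filesystem, not the disk.
    //
    //go:embed static templates
    var assets embed.FS

    // Parsing templates once at init also keeps them in memory across requests.
    var tmpl = template.Must(template.ParseFS(assets, "templates/*.html"))

    func main() {
        http.Handle("/static/", http.FileServer(http.FS(assets)))
        http.ListenAndServe(":8080", nil)
    }

A request for /static/style.css is then answered from the embedded copy; no files on disk are involved at all.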
Maybe it's just not the kind of thing I do, but I'm not sure I see the huge need for dynamism.
For my personal blog, I wrote a simple static site generator in Python. It converts markdown files into articles, an index page, and an RSS feed. The HTML templates and CSS are written by hand. It's required minimal maintenance over the years, and for the most part Just Works. After an edit, I rebuild the site on my local machine, and deploy with `rsync`.
I do have a couple of "dynamic" things hosted behind the same domain. For those, I configure nginx to reverse-proxy to a local service - which could be a python script, or anything else.
The only real advantage I can see to my setup, beyond personal preference, is that redeploys come with precisely 0 downtime. Presumably, if you're updating a binary, there's a brief moment where the old one shuts down and the new one starts up (unless you're doing some kind of clever reverse-proxy switchover).
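For what it's worth, that window can at least be kept small by draining in-flight requests on shutdown; a rough Go sketch (assuming a supervisor like systemd restarts the binary):

    package main

    import (
        "context"
        "net/http"
        "os"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        srv := &http.Server{Addr: ":8080"}
        go srv.ListenAndServe()

        // On SIGTERM, stop accepting new connections and finish in-flight
        // requests, so downtime is only the new binary's startup time.
        stop := make(chan os.Signal, 1)
        signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
        <-stop

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        srv.Shutdown(ctx)
    }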
I share a number of the same values as the author. I also have a hard time with not understanding things and relying on others. But I guess I understand Hugo and GitHub pages (I use gitlab pages) well enough that I'm happy to use them.
I wonder if the author owns a car, and if he does, which?
Locking my Hugo build with something like Nix makes that comfortable. I can upgrade as I discover bugs but can always go back if it ain't broke. I do wish sometimes they would stop adding features.
What it's driving me to do in this same vein is build my own Hugo theme.
Also, tend to agree about dynamic sites vs JavaScript behemoths.
Disagree with the author's styling, and am a big fan of classless lightweight CSS, of which there are a number to choose from these days (I keep a list).
So the tl;dr is: they have all of the complexity and bugs of a static site generator, plus all of the complexity, bugs, security risks, and runtime cost of a dynamic website too.
I think they missed the point of a static site generator: you "compile" your site once, and then you have static assets that never need any maintenance ever again (unless you decide to change something) and which can be served by anything, anywhere.
This approach seems to take the worst aspects of static site generators and dynamic sites and bundle them together: you now have a binary to maintain (that has to generate the HTML et al anyway) and you have the potential for security/other bugs and you now have extra CPU/RAM/IO load on your server and you need specific hosting that allows you to run your binary.
Don't get me wrong, this is fun and all, and it's nice that it works for the author, but I don't think it is a sensible way to make things simpler, more reliable, or easier to maintain (the opposite, in my mind).
I will agree that Hugo is terrible IME - so so so much complexity for very little benefit when compared to Jekyll et al.
Thanks, I didn’t see those other dates. I was taking mine from the “closing arguments”:
> the web needs more weirdness. and more excitement. and more personality. SO GET OUT THERE AND MAKE A FUCKING DYNAMIC WEBSITE. THERE'S NOTHING STOPPING YOU. YOU WILL BE GLAD YOU DID. 10/10 WOULD RECOMMEND. WITH MUCH LOVE. JES ~2022-04-01
A lot of people here are criticizing this article as if they too hadn’t been through a phase like the author’s. I very much doubt it’s the case that the commenters have all been forever wise — that they have never had a partly formed, more naive, sophomoric era of their lives. A lot of people just aren’t brave enough to document them online. Kudos to JES for that, honestly.
i'm in my 30s & have been in tech for a decade, how long is this "phase" expected to last? ;)
> I very much doubt it’s the case that the commenters have all been forever wise
representing and sharing my internal, authentic self with the world _is wise_ imo. i think the world deserves more expressions of authenticity.
i don't stuff my personality into a shoebox before presenting myself to the world on _my own website_. if it seems a little manic or expressive, well, that's because i am a little manic and expressive.
To be honest the next phase could well be described as old and boring. It’s certainly a phase in which the positive side is being more confident in oneself, but on the other hand being much less malleable to the dynamics of the world around you.
The next change will come from within so one could argue it’s almost deterministic from here onwards. Set your course well!
Thanks for pointing this out. I was unnecessarily harsh in my mind when reading this, and your comment reminded me of my own first websites. I still have them in my personal archives, and I'm very happy that the Internet Archive doesn't know anything about them.
I quite like this! I see a lot of people quoting the line "I have very high and unusual standards" and only paying attention to the first adjective. It's obvious that many of the things people are complaining about are simply not important to them, which is fine! Part of the magic of the personal website the author talks about comes from how different people prioritize different parts of the website creation and usage experience differently.
Where I do think the article misses the ball is on static vs. dynamic websites. If you need a dynamic website (for example, to display the user's IP), build a dynamic website. If you don't, build a static website. There really isn't another argument presented here beyond "some things you can't do with a static website", which is really just the fundamental tradeoff of a static site.
Anyways, a neat glimpse into a particular development philosophy that I will very likely never do but I will share with some similarly-minded friends :)
With just minimal styling effort, a webpage can look so much cleaner. This[1] is a really good example of it. This website is simply unreadable and shows a lack of care for its readers, which makes me spend even less time on it than I otherwise would.
Idea is good, write-up not so good. You need to cater for people using differently sized reading devices and rewrite the article to be more readable. If it alienates people then you've lost the battle.
Lot of (IMO) useless hot air comments saying this or that in here...
You can do your fancy build process, single binary, whatever. The bottom line is if you want to call yourself a (good/strong/solid) web developer, your site needs to be accessible across all form factors.
I'm on mobile so I can't check, but I would bet this thing doesn't get even close to a good score on a tool like Lighthouse. Then there's the fact that some of the people applauding OP while turning around and making fun of "modern" software development (whatever that means) don't even realize they're perpetuating that problem (applauding sites which are missing key pieces of accessibility and functionality).
This isn't really opinion or up for discussion either; this is quite literally the benchmark of living in our reality. We have accessibility guidelines and standards built over the years by hundreds of dedicated folks who worked hard to make those standards and guidelines clear - SO USE THEM! (meaning HTML5/CSS3/WCAG) Software is for humans, to be used by humans. Sure, I get that there's a bit of artistic freedom, which OP has taken, and by no means does he have to follow any guidelines at all. But this is definitively NOT a high standard, or normal in any sense of what decent modern web development is.
I’ve been working on something like that, but (I’m not sure if you’ve used WASM much) WASM has none of the web APIs, so you need to lean on JS to do UI or local storage, etc.
I’ve been fiddling with my own 2D canvas UI, and would like to get WebGL thrown into the mix, but that’s a learning curve of its own.
The effort to build my own input widgets and intercept click events, etc. was actually less than I would have imagined.
I find WebAssembly a fairly odd duck though and highly disappointing. As the quote goes, there’s little actually “web” or “assembly” about it. Performance seems on average no faster than JS- sometimes worse. The driving factor being how and how much you need to serialize/deserialize into its TypedArray memory space. And while you can port legacy codebases over, they then will be using various kludges to use the existing JS API’s instead of the POSIX-style interface they’re likely expecting.
If anything, I have a newfound respect for the performance of current JS engines. And am about to go all in on HTML Custom Elements for the UI. Once I ignored the various features most tutorials talk you through and looked at them as just a class overriding HTMLElement, they made a lot more sense to me.
> but WASM has none of the web APIs so you need to lean on JS to do UI or local storage, etc.
Yeah, haha I am eagerly waiting on these additions. I'm following the spec quite closely and would love it if one day an index.wasm could replace index.html as a web application entry point.
> I find WebAssembly a fairly odd duck though and highly disappointing.
100% agree. It was introduced about 8 years ago and we still can't use it to make a div appear (without heavy JS thunking). The consortium is discussing how to use web assembly in serverless functions and to replace docker - meanwhile in the web world, it's essentially a faster Web Worker.
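For illustration, even making that div appear from Go-compiled WASM goes through the JS bridge call by call (a sketch using syscall/js, built with GOOS=js GOARCH=wasm):

    //go:build js && wasm

    package main

    import "syscall/js"

    func main() {
        // No direct DOM access from WASM: every property read and method
        // call below crosses the JS boundary via syscall/js.
        doc := js.Global().Get("document")
        div := doc.Call("createElement", "div")
        div.Set("textContent", "hello from wasm")
        doc.Get("body").Call("appendChild", div)
        select {} // keep the Go runtime alive for the page's lifetime
    }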
I get it, to some extent. I serve one of my sites with my own web server. I detest using programs that drag in zillions of dependencies over which you have no control, and every modern web server does exactly that.
That said, compiling your web content into your server? That's a step too far. Data and the application that processes that data are two very different things, and (imho) should remain separate.
That's an implementation detail. For all we know the author has a very firm split between their application folder and their blog posts folder, and they only get combined at the compilation stage. You don't need the content to be stored separately at runtime in order to maintain separation of concerns in your codebase
Compiling content into the server wouldn't be a viable strategy in many situations: if users are allowed to upload stuff, or when you have so much content that it doesn't fit in memory, etc.
But for a blog that's a collection of a couple dozen text blobs of a few kilobytes each -- meh, whatever. You'll get tired of your blog before it becomes a technical problem.
Deploying via go static binaries is nice, and putting a little html into the binary at compile time is a built in feature. I use that to package the swagger ui without getting complicated. But for real config, I have that in a separate file, so different flavors get different config.
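A minimal sketch of that split (hypothetical names): assets in the binary, config outside.

    package main

    import (
        "encoding/json"
        "flag"
        "log"
        "os"
    )

    // Config is read at startup from an external file, so the same binary
    // can run with different flavors of configuration.
    type Config struct {
        ListenAddr string `json:"listen_addr"`
    }

    func main() {
        cfgPath := flag.String("config", "config.json", "path to runtime config")
        flag.Parse()

        raw, err := os.ReadFile(*cfgPath)
        if err != nil {
            log.Fatal(err)
        }
        var cfg Config
        if err := json.Unmarshal(raw, &cfg); err != nil {
            log.Fatal(err)
        }
        log.Println("listening on", cfg.ListenAddr)
    }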
Similarly, I was recently itching to generate Github Pages-like static sites from my self-hosted Gitea instance to include a blog. I had set up a global webhook filtered for 'webdeploy' branch pushes that would send a request to a specific Caddy path whitelisted for Gitea. An exec function for Caddy would run a shell script (yeah, I know) that would clone/pull the repo into a known directory, create a proxied subdomain for it in Cloudflare, and push necessary changes to Caddy's config.
While I don't have the same hesitations about depending on a small chain of open-source projects, I really didn't like the idea of caddy-exec despite my basic precautions, so I abandoned this approach until I can ponder it a bit more.
This is how the configuration UI in a lot of cheap router firmware was (is still?) designed. A few hundred KB of RAM and a MB or two of flash doesn't leave much room for inefficiency.
OP I like you. You are a refreshing voice in the current tech jungle. I think your post clicked for me and made me think about my projects/ideas and how to approach them.
So, thank you I guess. I would like to see more of your stuff and your approaches.
It's a kind of demo of SingleFile and the self-extracting ZIP format [1]. The self-extracting ZIP format is an improvement when you want to archive web pages and read them without relying on an extension. The fact that resources are compressed also considerably reduces archive size.
I agree it would be "great" to have a complete website in the ZIP. I think this is technically possible; someone just has to code it.
For me, a single-binary website is like a single-line program.
You can do it, but it won't be the most scalable thing.
I like to keep things separate and independent: server and content, so I can switch either of the two at a whim.
As said before, BTW, the formatting on mobile is horrible!
I work as an external examiner for CS students from time to time, and I always find it sort of humorous when they draw the topic of decoupled architecture. Not because the theory is wrong in any sense, but because I’ve never seen it used the way it is taught. I bring this up because your “switch either of the two at a whim” sort of caught my interest. When would you ever want to switch your content?
Don’t get me wrong, I would absolutely separate my content and my “webserver”, but I suspect from the cron job that the author has done so. A single binary doesn’t mean a single file.
Reminds me of how I, too, once wrote my websites in C using only POSIX-compliant system libraries, with assets embedded in the single binary as base64 / hex arrays. The worst part is I bragged about it.
TiddlyWiki is a single self-contained JavaScript executable; that would make more sense to me. For now, I'm very happy with my simple Obsidian and Hugo setup.
What exactly is the problem with Hugo, or SSGs in general? I think they are fantastic. You want a webserver to host only static files, some javascript/wasm for dynamic stuff if needed, and, if really needed, sqlite for persistent shared storage.
It reminds me of the guy who coded and compiled his impressive crypto exchange in C by hand to keep the attack surface low and the performance extremely high. Does anyone remember the name of the site? I think he mentioned (years ago) that he was on the brink of giving up.
I like the thinking. Dependencies are a weight that makes personal projects, at least, more of a maintenance burden long term, even if they help in the short term.
This usually doesn't fly with clients/customers, but what you control needs to be highly maintainable and simple, and whatever works for you to achieve that is good. For personal projects or internal products, a framework with massive dependencies just isn't easier to maintain long term than simple web standards and market-standard formats like HTML/CSS/JSON/Markdown/etc.
My only complaint is the lack of capitalization in the content; so many tech people/devs do this, just don't. Even Sam Altman...
How dare you not capitalize in a capitalist system. /s
It’s always refreshing to see someone’s personal and unique website that isn’t full of paywalls or newsletter pop up spam while you scroll that most articles linked on HN contain[0]. I particularly like the fish bouncing around at the top like the DVD logo on idling DVD players. The site could use a few fixes to the formatting to fit the content into the device’s width, but I applaud the author for making the website in their own manner. Truly the spirit of a hacker!
[0] I would be in favor of getting rid of the modern atrocities of website design that frequently make the front page of HN. Simpler, more effective designs without distractions should be boosted.
To all those complaining about "mobile hostile": the site has horizontal scroll on desktop too. Probably just a miscalculation somewhere on the author's side.
Also, however you spin it, mobile is a hostile environment compared to desktop. It's inconvenient to have to deal with a tiny screen and a defective keyboard. So any usability defect that also exists on desktop risks overflowing the cup of patience.
To be clear, on desktop the horizontal scroll exists but the site is perfectly readable regardless. On mobile you have to utilize the horizontal scroll to see the content.
Well, I don't browse Web on my phone because it's always a frustrating experience.
But, you sort of confirm what I wrote: mobile display, interactions, navigation are a lot worse in general, so any small detail that goes wrong has larger impact in the already painful environment.
Paths in a URL aren't actually accessing files on a hard drive. They're sending a request to a program to serve the client the content defined by the path. Many times, said "server" program then goes and looks at a path on its host drive and serves a file there, but that is by no means a requirement - it's just a string that serves as an identifier.
In this case, the content served in response to the identifier "/static/style.css" can very easily (and according to the author, is) baked directly into the (single) binary.
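A sketch of the idea (hypothetical handler, not the author's actual code):

    package main

    import "net/http"

    // styleCSS is baked into the binary at compile time; the URL path is
    // just an identifier the handler answers to, not a location on disk.
    var styleCSS = []byte("body { max-width: 40em; margin: auto; }")

    func main() {
        http.HandleFunc("/static/style.css", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "text/css")
            w.Write(styleCSS)
        })
        http.ListenAndServe(":8080", nil)
    }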
Maybe. Seems odd that they'd use a vestigial 'static' directory in the request path, though. I didn't read it because the layout makes it useless on mobile browsers, but I have a feeling they mean that the whole site is coded into one binary like a self-contained ssg rather than the site only requiring one file to work.
Because that means every page request downloads all the static content. It’s generally nice for people if they only have to download the shared assets once.
The stylesheet is just under 3KB even with no minification or compression. At that size, the cost is negligible, and inlining will consistently be faster to load even than a warm cache of an external stylesheet. In most conditions, you’ve got to get towards 100KB before it even becomes uncertain. Loading from cache tends to be surprisingly slow, vastly slower than most people expect. (Primary source: non-rigorous personal experimentation and observation. But some has been written about how surprisingly slow caches can be, leading to things like some platforms racing a new network request and reading from disk, which is bonkers.)
Seriously, people should try doing a lot more inlining than they do.
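It's especially easy when pages are rendered from templates anyway; a Go sketch (template.CSS marks the stylesheet as trusted, so html/template emits it verbatim inside the <style> element):

    package main

    import (
        "html/template"
        "os"
    )

    // The stylesheet rides along inside every page rather than being a
    // separate request (and a separate cache lookup).
    var page = template.Must(template.New("page").Parse(
        `<!doctype html><html><head><style>{{.Style}}</style></head><body>{{.Body}}</body></html>`))

    func main() {
        page.Execute(os.Stdout, struct {
            Style template.CSS
            Body  string
        }{
            Style: "body { max-width: 40em; margin: auto; }",
            Body:  "hello",
        })
    }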
I think it depends on how you set your cache? If it’s not configured to re-check with the server it may be much faster.
Then again, for 3KB the overhead of doing a cache check after parsing the HTML for the first time and then rendering it again may already be too much :)
Exactly. Inline is just surprisingly much faster than a lookup by URL, which has to check “do I have this cached? What type (immutable, revalidate, &c. &c.)? Now (if suitable) fetch it from the cache.” before it can get on with actually acting on the resource’s contents.
> i have very high and unusual standards,
Which was kind of funny TBH :)