This reads like The Onion. I can't imagine anyone thinks LinkedIn's website has improved. According to the article the goal was merely to maintain performance. It's a pure content site that benefits little if at all from being a single-page app.
Loads quickly for me. I keep hearing that LinkedIn loads slowly but people fail to mention specifics about their devices and internet speeds. Are you on home wifi, LTE or something else?
Something I have noted from working with Jira and Magento 2 is that if your servers are on another continent, the network round-trips for the dozens of little snippets of JS and Ajax API requests add up to considerable delay.
Jira is especially cumbersome: when you update one thing it refreshes half a dozen other items, and you have to wait for the whole lot before it becomes interactive again. In those cases, I would prefer it either to update only the component I touched, or to not pretend to be reactive and just be a flat-out form POST.
As it stands it is the worst of both worlds and the network delay compounds to make it a slow, page jittering mess the whole time I am using it.
Yes, I'm on a fast network and using a newish Android device. It's mostly the rendering performance. All I want is to get to the mailbox quickly. I've never wanted to use the timeline and notifications features and whatever else they're loading.
Even on a desktop I get a loading screen for maybe 2 seconds and the site loads in ~4 seconds.
A modern website that works well on all devices quickly is SoundCloud. Seamless browsing/listening and an excellent user experience.
That's kind of the problem, right? Take the distribution of your potential user base's connection speeds. Modern web app / SPA engineering seems to be to draw a line somewhere on that distribution and say FU to everyone below it.
I think the line is drawn at how quickly you can transfer from localhost.
It seems like devs tend to ignore network latency entirely because it's something outside of our control. It seems to be getting worse too: things like the Windows start menu can freeze when you're on an intermittent connection, and SPAs can break entirely. I've spent a good chunk of my career fixing n+1 problems over a network; even on a fast connection to a local database, the latency becomes noticeable at fairly small values of n.
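The n+1 effect is easy to put numbers on: even a couple of milliseconds of round-trip latency turns into hundreds of milliseconds once a query runs per row. A back-of-the-envelope sketch (the latency figure is an assumption for illustration, not a measurement):

```python
RTT_MS = 2.0  # assumed round-trip latency to a "fast" local database

def n_plus_one_cost(n: int, rtt_ms: float = RTT_MS) -> float:
    """One query for the parent rows, then one query per row: 1 + n round trips."""
    return (1 + n) * rtt_ms

def batched_cost(n: int, rtt_ms: float = RTT_MS) -> float:
    """One query for the parents, one batched query (e.g. WHERE id IN (...)): 2 round trips."""
    return 2 * rtt_ms

for n in (10, 100, 1000):
    print(f"n={n:>4}: n+1 -> {n_plus_one_cost(n):6.0f} ms, batched -> {batched_cost(n):.0f} ms")
```

At n=100 the n+1 version already costs 202 ms of pure latency versus 4 ms batched, which is why it's noticeable even on a local network, and why it falls off a cliff when the server moves to another continent and the round trip goes from 2 ms to 200 ms.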
Reason #1: They did a horrible job, this post reads like a list of things not to do.
Reason #2: They still did a much better job than my company. We had a similar rewrite from a reasonably well performing system to a dog slow monstrosity. However instead of acknowledging it was slow, we put on our doublethink hats and deployed anyways. "We" picked a couple metrics that were easier to game (time to first byte) and then we threw up our mission accomplished flag and moved on. We also wrote posts like this bragging about the performance improvements we'd made.
I hate the sort of "techcrunch driven development" that's so big in wanna-be facebook companies. Instead of solving business and customer needs, directors and vps are mostly focused on splashy articles and speaking engagements that make their resume look good.
Yeah, cut down 2mb of CSS files and you're gonna get a speed up :|
I know, I know. LinkedIn is big and has a big team and this level of rigid formalism is what's required to manage a large team of developers and keep them from committing non-performant code. The joys of big teams. I really applaud them for being able to pull it off. The site is far, far more performant than it used to be.
For the most part though, for most of the rest of the world of web applications: just stop including so many fucking libraries in your js and don't do write-once ultra-scoped CSS.
you could reduce the complexity of the selectors by converting everything to id selectors. That would also end up bloating the CSS, but it trades CPU usage for memory usage.
edit: I wouldn't even call it frustum culling, since I don't think frustum culling makes much sense in a retained mode API (the frustum culling happens when rendering the retained mode representation through the underlying immediate mode API). 'Level of detail' maybe.
For one of our clients, Adya[1], our team at Alaris Prime[2] implemented the very same idea for the web. It's a file manager that displays a huge list of folders/files from multiple cloud storage providers. We needed fast loading and a smooth UX, and the way to do it was to render as little as possible: only what falls in the visible viewport of the user's browser.
We've been meaning to write a case study on it but never got around to it.
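The "render only the visible viewport" idea described above is usually implemented as list virtualization: from the scroll offset and viewport height you compute the small slice of rows that can possibly be visible, and render only those. A minimal sketch of the windowing math (fixed row height assumed; the function name and overscan default are made up for illustration):

```python
def visible_range(scroll_top: float, viewport_height: float,
                  row_height: float, total_rows: int,
                  overscan: int = 3) -> range:
    """Indices of rows to render: the rows intersecting the viewport,
    plus a few overscan rows on each side to hide pop-in while scrolling."""
    first = int(scroll_top // row_height)                     # first row touching the top edge
    last = int((scroll_top + viewport_height) // row_height)  # last row touching the bottom edge
    start = max(0, first - overscan)
    stop = min(total_rows, last + 1 + overscan)
    return range(start, stop)

# A 10,000-row file list with 40px rows in a 600px viewport, scrolled to 4000px:
rows = visible_range(4000, 600, 40, 10_000)
print(len(rows))  # 22 rows rendered instead of 10,000
```

The DOM (or render tree) then only ever holds a couple dozen nodes regardless of list size; absolute positioning or a spacer element keeps the scrollbar honest.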
This blog post really shows how overly complex their system is. I'm constantly bumping into state management rendering issues on the 'new' LinkedIn. For example, when you add a new connection, you can't immediately send them a message until you reload the page. Pretty much the number one workflow you'd expect someone using the site to do.
"Before releasing it to the public, we knew it had to be at least as fast as our existing site."
Which wasn't saying much because the original site didn't load well at all. At least it didn't have all the state management issues. It is like they take all these newfangled technologies and then struggle to implement them correctly. So sad.
Any rewrite or big change is a gamble. By setting this metric they avoid a performance regression, and (admittedly, I'm just assuming this) the changes are probably a foundation for easier optimization later. Setting two big targets at once, make it a lot faster and make it a lot better, seems to me like a mistake.