> The new mobile web version of Twitter is much faster and better looking than the desktop one.
It's certainly much faster. I'm using it on my netbook, which used to run the desktop version far more slowly.
Why can't big development teams think about accessibility as it applies to older hardware? It's clearly becoming a big problem for Twitter, let alone other companies (see previous discussions about bloat).
The only thing I can think of is that the developers use shiny new hardware and it runs okay for them. Or, if the devs want to change things, the management and board run fast hardware and it's "working for them".
You're missing the point. The goal is to maximize a small set of metrics. Engagement, New User Experience, etc.
They make these changes and roll them out. They look at the numbers. They see that 5% of their users (those on older hardware) spend 20% less time on the site. They see that 20% of their users are spending 50% more time on the site. They file a ticket about the drop in engagement for older devices. It goes into the backlog. The next sprint/quarter rolls around, and they see a couple of options. One is to speed up the site on old devices; another is to add a new feature that they estimate will increase engagement by another 20%. The second option seems to increase their bottom line more, so it gets funded and the old-device support stays on the backlog. Repeat cycle.
I can guarantee that at any sufficiently high-traffic site they don't use developer hardware as a benchmark. They see the numbers, they know it's slow for you, and they make a conscious decision that you aren't worth the opportunity cost of new features.
This, but with one minor correction: the developers usually want to fix the experience. It's the management / project owners / etc. that use the aforementioned analytics to make their judgement.
Perversely, I've also often observed that those who spend the most time judging a site's performance by its analytics are usually the ones who actually use the site the least. Or at least this is what I've observed on past projects I've worked on.
> those who spend the most time judging a site's performance by its analytics are usually the ones who actually use the site the least
It's a weird part of human tribal/social dynamics. People who already generally like a thing are open-minded to new information that presents the thing in a positive light, and just generally ignore new information that presents the thing in a negative light. Likewise, people who already generally dislike a thing filter out the proselytizing of people who like the thing, but pay attention when they notice reasons to dislike the thing.
Basically, our brains' belief-evaluation machinery is really just a wrapper around a core "generate excuses to keep thinking what I'm thinking" algorithm.
We can exploit this: adversarial justice systems work much better than non-adversarial ones, because you've got two sides who have each paid attention to half the evidence, brought together in the same room to present it all. But if we aren't exploiting it, and aren't even aware of it, it can become a real problem.
One further (possible) correction: a "discussion" was had in the past about whether to fix this (dev wanted to fix it and the PM didn't, or vice versa), and whoever is most politically powerful wins, regardless of metrics impact (not all relevant facts get reported to senior management, so sanity can't prevail).
One thing is always clear -- a big Co's interest is not always your interest.
Which is why I love what Sindre is doing -- letting us customise our experience with products that otherwise ignore it. I wish there were more projects doing that. Demand is definitely there.
Runtime speed has not been a priority in most apps/websites for a long time. Even relatively new phones struggle with things like modals and swipe effects on mobile websites. So much effort is spent on making things feel "native". Native, in my opinion, should mean fast before anything else.
I used to work at a place that had a ~5-year-old PC on a <1Mb internet connection to test their software. If it ran fine on that, then it would run fine on most of their users' hardware.
For the Android app development we do for clients, I have a $10 TracFone I got from Walmart.com. That was the phone's price without subsidy or any extra cost.
Android can be a bunch of hurdles, but we use the same mentality for our app designs.
I hear a lot of people in tech idolizing "good design". But is a design really good if it doesn't work everywhere?
Performance should be a key factor in judging any design. Both perceived performance for users and actual measurable software speed.
> The only thing I can think of is that the developers use the shiny new hardware and it runs okay for them.
I think that was one of the problems with Google+ in the early days. They launched a social network that assumed a huge screen resolution (because that's what they used), so the interface was too big and clunky for a lot of people.
There's no real way to measure a client's hardware power and speed. The best thing we have is browser and OS detection. If there were a way to determine hardware, that'd be pretty useful.
`navigator.hardwareConcurrency` provides the number of logical CPU cores. Additionally, GPU detection is possible with WebGL extensions. That said, this is clearly incomplete compared to what data would be ideal.
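A rough sketch of the idea, assuming a browser environment; `navigator.hardwareConcurrency` and `navigator.deviceMemory` are real (the latter Chrome-only), but the "low-end" threshold and the `probeClient` helper are my own illustrative assumptions:

```javascript
// Hypothetical capability probe built on the limited signals browsers
// expose. The 2-core / 2 GB cutoffs are arbitrary assumptions.
function probeClient(nav) {
  const cores = (nav && nav.hardwareConcurrency) || 1; // spec floor is 1
  const memoryGB = (nav && nav.deviceMemory) || null;  // Chrome-only hint
  return {
    cores,
    memoryGB,
    likelyLowEnd: cores <= 2 || (memoryGB !== null && memoryGB <= 2),
  };
}

// In a browser you'd call probeClient(navigator); a stub lets the
// sketch run anywhere.
const report = probeClient({ hardwareConcurrency: 2 });
console.log(report.likelyLowEnd); // true for a 2-core stub
```

Even something this crude could gate heavy features off for clients that are unlikely to keep up.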
I find myself consistently wondering why I pay money for a third party client five years into my use of Twitter (I was late to the party).
What's more, Twitter knows what I follow, what I read, what I like, and what I retweet. So why is it still suggesting I follow pop celebrities instead of well-known developers? It seems that despite everything they know about me, they don't know me at all.
Moments is a prime example of how out of touch Twitter is. I've clicked on it three times to see, and each time the stories are celebrities or sports. They know enough about who I follow, what I tweet about and the hashtags that come up to target enterprise technology ads to me, why can't they use the same data to customize that page? It's utterly useless in its current form.
Useless to you, but you're probably not in the target demographic for the feature.
Moments is about lowering the bar to new users of Twitter (the discovery problem) and get existing users more engaged in sharing content so Twitter is more valuable to them.
For those users, celebrities and sports are exactly what's going to draw them in. Twitter can then sell ads against those users.
I'd be surprised if it had any kind of algorithmic targeting any time soon, or if enterprise tech firms were looking to get involved in producing native Moments content in any serious numbers.
Before Moments there used to be a tab that showed you tweets that your followers were replying to. I used to use that all the time to find new people to follow.
Way back in the beginning, when you followed someone you got all their tweets - including replies to people you don't follow.
Then they made it an option to not get these, then they took away that option and made it mandatory.
This, plus their really terrible behavior towards developers (still going on, as evidenced by their continuing 100K-tokens-per-app limit), has seen me basically stop using the service entirely.
Especially when compared to Facebook's news sidebar. When it came out I remember it was similar to moments, lots of celeb and sports news. Now it knows only to show me politics and gaming news.
That. And GP. And this is exactly the reason why I don't believe a single word of all the stuff about AI and deep-whatever. If Facebook cannot suggest better news, it's because it's most probably impossible to simulate how my best friend suggested I watch Les Combattants (a French movie I loved). Neither the cast, the director (all unknown), nor the theme (a young guy going to army training to follow a girl), nor any other objective feature of the movie could have helped. It was done purely on feeling, intimate knowledge of our respective tastes, and a long friendship (30 years). In a word, alchemy. That's just not going to be modeled any time soon, or ever.
The only thing I get from suggestions are either dim variants of the things I already liked, or random stuff high on popularity charts.
Which sadly does not work anymore, as I gave up after the several API changes.
The beauty was that I could search (and memorize) any hashtag I liked and then sort it by retweets, AND if I was away on holiday I could come back, see only the newer tweets, and have those sorted again by retweets.
Also sorting your timeline by retweets is really powerful.
This way you do not waste time discovering things, and you do not need to trust an algorithm to find something interesting (although I loved getprismatic.com and like http://tweetedtimes.com/); I can actively search for stuff.
I also tried to combine Hacker News into the mix to find more stuff outside of my follower set, e.g. find people to follow, but also have the possibility to sort Hacker News by XY AND filter by my topic, which I still really miss here too :)
There was also a nice feature to find the first tweet of a certain link/tweet to learn which people really post 'hot stuff' and which just copy things.
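The retweet-sorting idea described above is simple to sketch. The tweet objects here are hypothetical (not Twitter's API shapes); the point is just the descending sort key:

```javascript
// Sketch: surface the "hottest" content by sorting a fetched batch of
// tweets by retweet count, highest first. Object shape is assumed.
const tweets = [
  { id: 1, text: 'a', retweets: 3 },
  { id: 2, text: 'b', retweets: 42 },
  { id: 3, text: 'c', retweets: 7 },
];

// Copy before sorting so the original fetch order is preserved.
const byRetweets = [...tweets].sort((a, b) => b.retweets - a.retweets);
console.log(byRetweets.map(t => t.id)); // [ 2, 3, 1 ]
```

Combined with a "newer than my last visit" filter, this reproduces the catch-up-after-holidays workflow without trusting an opaque ranking algorithm.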
This is a massive issue with the stateless aspect of HTTP. Since the history API doesn't refresh the page, and the state and components don't update (since it's a React app), there's no magic needed to keep scroll position, but the router could be doing something I'm not aware of. I've been looking for that solution for years, and the first time I've seen it done simply and clearly is the Real World example in the Redux source.
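One common approach (a sketch under my own assumptions, not the Redux example's actual code) is to stash the scroll offset into the current history entry's state before navigating, then restore it on popstate. The helper names are illustrative; `history.replaceState` and `history.state` are the real browser APIs:

```javascript
// Save the current scroll offset into the active history entry, so a
// later back/forward navigation can restore it.
function rememberScroll(history, getScrollY) {
  const state = { ...(history.state || {}), scrollY: getScrollY() };
  history.replaceState(state, '');
}

// On popstate, scroll back to the remembered offset if one was saved.
function restoreScroll(state, scrollTo) {
  if (state && typeof state.scrollY === 'number') {
    scrollTo(0, state.scrollY);
  }
}

// Stubbed usage so the sketch runs outside a browser; in a real SPA
// you'd pass the global history, () => window.scrollY, and
// window.scrollTo.
const fakeHistory = { state: null, replaceState(s) { this.state = s; } };
rememberScroll(fakeHistory, () => 1200);
let restored = null;
restoreScroll(fakeHistory.state, (x, y) => { restored = y; });
console.log(restored); // 1200
```

Modern browsers also offer `history.scrollRestoration = 'manual'` to stop the browser from fighting the app over who restores the position.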
Most people don't know this but Chrome supports GM scripts natively - just save the file as `whatever.user.js` and drag + drop it onto the extensions window.
One downside of this approach: it redirects to the mobile interface only after the HTTP request and DOMReady.
This extension redirects requests before they've even started, making it much faster than what's possible with a Greasemonkey script. Greasemonkey scripts are also hard to discover and install these days.
Userscripts can also run before any content scripts via `@run-at document-start`. With a browser extension, you're giving up portability, security (an extension can wreak a lot more havoc than a site-specific script), and the freedom to publish without being dependent on a "store".
This extension redirects before a request is even made. `document-start` runs once the page is downloaded but before the DOM is constructed, which is a lot slower. As for portability, both Firefox and Edge are committed to supporting WebExtensions, which is pretty much the Chrome extension API. This extension is not tied to any store; you can install it manually exactly as you would a userscript. Actually, userscripts in Chrome are just re-packaged as native extensions on install.
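The core of a "redirect before the request starts" extension is just a URL rewrite, registered with the browser's blocking `chrome.webRequest.onBeforeRequest` listener (Manifest V2 era). The rewrite itself is a pure function, sketched here with illustrative hostnames; whether this particular extension works this way is my assumption:

```javascript
// Rewrite desktop Twitter URLs to the mobile host. Returned from an
// onBeforeRequest listener as { redirectUrl }, this stops the desktop
// page from ever starting to load.
function toMobile(urlString) {
  const url = new URL(urlString);
  if (url.hostname === 'twitter.com' || url.hostname === 'www.twitter.com') {
    url.hostname = 'mobile.twitter.com';
    return { redirectUrl: url.toString() };
  }
  return {}; // other hosts pass through untouched
}

console.log(toMobile('https://twitter.com/jack').redirectUrl);
// https://mobile.twitter.com/jack
```

A userscript can't do this: by the time any script runs in the page, the original request has already gone out.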
To your question: a lot. How easy is it to install a Chrome extension vs. a packaged, auto-updating Greasemonkey script? And what is the real difference in performance?
Does anybody have any insight into Twitter's development structure?
Do they have a separate mobile and desktop team; a single team; or separate teams where the desktop team just keeps the thing going until the mobile team can scale theirs up to desktop? Just curious.
I met an engineer from Twitter at a conference and he said he was on a team (6+ people) that worked full-time on the Lists page for the Twitter.com desktop site. So anecdotally, they have (or at least had) very specific teams.
Dunno if this is still the case, but throughout my tenure there (2010-2014), mobile web and desktop were totally separate teams with separate codebases.
I saw that too! I've recently dipped my toe in, making a module for a small project using React instead of Angular 1.x as I usually would. One thing I'm still working out is how to study the way complex websites use React: in the Chrome DevTools React plugin there's a lot of complexity in the component hierarchy, and the minification that appears to have been applied for performance reasons makes it hard to reverse-engineer.
They don't ask "do you like this?" anymore. They ask something like "show more like this?" and I'm afraid clicking No will remove tweets from people that happened to be in the "while you were away" section.
Since we are talking about a refined Twitter experience here, I thought I'd share my issue, and hopefully one of you has a solution.
So I follow a lot of lit mags and other literature-related accounts and persons. Often the same story is (re)tweeted multiple times by the mag accounts and even by persons. Now, a low tweet volume is one of my criteria for following any Twitter account, so that I don't miss others' tweets and only have to look at any one tweet exactly once in my feed.
How do I achieve that? Is there a Twitter setting for it? I doubt it, because many of those tweets are actually different tweets with the same link (but different shortened URLs) and separate comments about the article. Add to that the many other tweets that fans and followers make, which the magazines then retweet.
That's just too many tweets for one article. I understand that the mags have to do it for the reach and everything, but personally it's very inconvenient. What do you do when you face such problems? (I could just subscribe to their feeds, but I wanted to know about cleaning up my Twitter feed, if that's at all possible.)
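Twitter has no built-in setting for this, but a client-side dedup is conceptually simple: resolve each shortened URL to its final target, then keep only the first tweet per target. A sketch under that assumption, with redirect resolution stubbed by a lookup table (everything here is hypothetical, including the tweet shape):

```javascript
// Resolve a shortened URL to its final target; stubbed with a table
// here, while a real client would follow HTTP redirects.
const resolve = (shortUrl, table) => table[shortUrl] || shortUrl;

// Keep only the first tweet that links to each distinct article.
function dedupeByLink(tweets, table) {
  const seen = new Set();
  return tweets.filter((t) => {
    const target = resolve(t.link, table);
    if (seen.has(target)) return false; // already saw this article
    seen.add(target);
    return true;
  });
}

const table = {
  'https://t.co/a1': 'https://mag.example/story',
  'https://t.co/b2': 'https://mag.example/story',
};
const feed = [
  { id: 1, link: 'https://t.co/a1' },
  { id: 2, link: 'https://t.co/b2' }, // same story, different short URL
  { id: 3, link: 'https://other.example/post' },
];
console.log(dedupeByLink(feed, table).map(t => t.id)); // [ 1, 3 ]
```

The hard part in practice is the resolution step, since every t.co link needs a network round-trip to discover its target.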
Navigate to about:debugging, and load the manifest.json as a Temporary Add-on.
It will stay installed until you close the browser. I don't think permanently loading extensions this way will work in the normal Extensions pane until Firefox 48 is in release.
I wrote an extension to bring lists out of hiding. Haven't done anything for the new mobile web app yet, waiting to see if they'll make an appearance officially soon.
Unfortunately, I don't think Twitter gives lists enough attention because they're an underused feature, but at the same time, if lists are unusable, of course no one is going to use them.
Nice, I remember seeing this on ProductHunt. I'm working on electoralhq.com which provides Twitter Lists search and management tools. Shifting into building scoutzen.com now though in order to be more independent of Twitter Lists since they get such little attention from Twitter.
This isn't really the same thing as Refined Github though. It's just redirecting to the new Mobile Twitter. I've yet to notice a missing feature (although there are things missing that I don't consider "features", like trending topics and moments). It also works far more reliably for me: for example, try scrolling several dozen tweets down the feed in both Mobile and Desktop Twitter, clicking on a tweet, then clicking back. I've yet to see Mobile lose my place, whereas Desktop does more often than not, and when it doesn't it has to spend a minute triggering its own infinite scroll.
Similar branding suggests similar functionality: not overhauling, but "refining", i.e. making useful small improvements. And yes, there are missing features, from account management down to little things like being able to mention somebody while on their profile.
Update: I spoofed my user-agent string to get the normal mobile version in Firefox. It mostly works except images don't load! I'm done digging into this for now, but it's kinda bizarre.
I use Twitter on Firefox mobile and didn't know until this article that it's, for some reason, forcing an old ugly version. What UA string are you using? I tried the Android (Phone) one and the Chrome one that the Phony extension provides, but it still shows the ugly FF version of mobile Twitter.
Because they want to deliver a good experience to every user and redirecting to the simple version is the easiest way to do that. If they served mobile.twitter.com to Firefox for Android users they would have to test the site in Firefox for Android and since only like 12 people use Firefox for Android it's not worth their time to do so.
They're going to have to deliver (and test) one version or another. Or if they're not testing, they might as well send an untested normal version instead of the untested simple version.