It doesn't appear to be visible on a phone


You have to turn your phone to landscape mode. It doesn't work in portrait.


Good point; it would be worth showing screenshots on mobile.


Curious about this too.

Like virtual mailboxes, the end result will surely be another subscription that indie developers have to pay for, providing a pointless phone number that goes straight to voicemail. Just to satisfy this requirement.


It would depend whether the web app is a "dynamic website" or a "super interactive app that happens to run in a browser".

For a website, it'd be Go + HTML/CSS (sprinkling JS where needed). Here's why: (1) code will "just build" years later; (2) deployment is stupidly easy -- just build to a single binary with all resources embedded; (3) everything about the stack is stable and easy to reason about. Though perhaps not exciting.
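
For illustration, a minimal sketch of point (2) -- the whole site compiled into one binary with the assets embedded at build time. The static/ directory name and port are placeholders, not anything prescribed:

    // main.go -- HTML/CSS/JS all compiled into a single self-contained binary.
    package main

    import (
        "embed"
        "io/fs"
        "log"
        "net/http"
    )

    //go:embed static
    var assets embed.FS

    func main() {
        // Strip the "static" prefix so /style.css maps to static/style.css.
        sub, err := fs.Sub(assets, "static")
        if err != nil {
            log.Fatal(err)
        }
        http.Handle("/", http.FileServer(http.FS(sub)))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

"go build" then produces one file you can copy to a server years later, with no asset pipeline to resurrect.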

For a "web app", I feel there are no great options. The half life of the JS ecosystem is about 5 minutes, so whatever you start with will be out of fashion shortly after. React has had some staying power, though isn't without its flaws. I generally feel like most "web apps" could actually be websites though.


Honestly, a "no bells and whistles" OS release is welcome.

The less the OS APIs change, the easier it is to build for the OS. As a small developer, constant API thrash is a nightmare. For example, the Android storage permissions API whiplash between 9, 10 and 11 was particularly painful.

Win32, for all its flaws, is brilliant in that regard.

The most disruptive change in 14 is probably the "full screen notifications" permission, which makes me a bit sad.

Now, only alarm clocks and VoIP apps are allowed to use full screen notifications. The motivation is good (prevent abuse); however, this approach of "only these approved, pre-existing use cases are allowed" is an innovation stifler. This was functionality that made Android a lot more powerful than iOS.

There are plenty of valid use cases for a rich, full-screen, interactive notification, e.g. something that is extremely time-sensitive.

The policy should be "don't abuse full screen notifs", not "only alarm clocks and voip". The latter ensures that there will be no further innovation in this area, and we'll never know if the limitations were overly strict or not.


And yet, they are precisely dropping binary compatibility for no good reason in this release.


I would like this, but I don't know how much demand there is in the broader market. E.g., there's Windows LTSC available, but people aren't exactly clamouring for it. Stuff like Debian has a limited user base.

I think tech for most people is like clothing - part of the excitement of having it is the fact that it's the latest and greatest. Stability is the priority of a minority, it seems.


LTSC on Windows is a crappy experience. For example, instead of shipping with a suitable long-term-supported Camera program (either Win32 or UWP, who cares), they just decided not to include any camera program whatsoever, for no apparent reason. Just Google around to find plenty of users complaining about it. The same goes for every other program that MS claims needs frequent updates; they're all excluded from LTSC. Most of these "programs which require frequent updates" are fully local-only programs, like the camera or media player. I'm quite sure Notepad will join that list soon, as it is now a UWP program. It's ridiculous.

Plus, you get like what, 5 years normally, 10 if you're lucky? Peanuts. We're talking about API stability in much longer terms (which Win32 itself somewhat manages).


I think that developers do care about API stability (who wants to rewrite working code for no good reason?), but the effects of this time wasting are only felt indirectly by normal people.

Since consumers have very little choice in their mobile OS, developers just have to put up with the thrash.

It is possible to build new OS capabilities whilst maintaining a good deal of stability for existing APIs.


Their approach to permissions is mind-bendingly weird. The technology is completely good; what’s weird are the policies of which permissions the user is asked about. I don’t think there’s ever been a permission popup for full-screen notifications, has there? From what I remember, that’s how a lot of adware has been able to install itself so easily.


I think it’s likely a goal of Google’s to automate Play Store app reviews to the maximum possible extent, and “don’t abuse” sorts of policies don’t play well with that since they require human judgement. It’s much easier to restrict permissions to a couple of app categories and completely rule out abuse.


> There are plenty of valid use cases for a rich, full-screen, interactive notification, e.g. something that is extremely time-sensitive.

Like ads?

Yeah... Is there anything that hasn't been abused by ad companies yet?


It's a real problem. There's a huge overlap between "things that can be abused by ads" and "things that allow boundary-pushing products".

On Desktop, historically we chose the "buyer beware" route which, for all its 10-IE6-toolbars flaws, led to an environment that felt like it was full of possibilities.

On Mobile we are going for a "you get your little sandbox" route, which delegates innovation to the OS vendors only.


These are both outgrowths of where they came from. Desktops came along in the 1970s as (relatively) powerful do-anything machines. Phones came along as simple phone (and later SMS)-only devices that have gradually grown more advanced features.


I understand the frustration, but it is a different threat model. Your phone more-or-less knows your exact location at all times (and has to, by design). In my view this calls for a different set of tradeoffs re: application permissions.


Right.

But there has to be a better model than delegating innovation to Apple and Google. It's a huge conflict of interest too, since they are both Ad companies.

We're heading down a path where there are products that only Apple and Google can build, and it's a huge problem for the ecosystem.


Putting Apple in the same basket as Google in terms of ads is just misleading.


And not just location: also who you communicate with (contacts and "phone" permissions), whether you're currently walking/driving/sitting still, who is around you (wifi and location/bluetooth permissions), and it can record audio/video (camera and mic permissions) -- and it's all based on the user knowing enough to allow or disallow those permissions when needed.


By restricting full-screen, you're punishing the top 85+% of users who are fully capable of understanding what a permission ask like "Is allowed to display full screen while locked" means. Generally speaking, the race to design everything for the bottom 10% of users is resulting in a world of forced conformity, severe artificial limitations, and basically the Simpsons episode where Homer bubble-wraps the entire playground, but in a tech sense.


The "build" step of an Electron app doesn't build Chromium, so this wouldn't be very feasible. Building Chromium requires an insane amount of computing power.

According to Google, a Chromium build on normal hardware takes 6+ hours.

And alas, even if it were feasible to custom-build based on what you need, it would have to be done via configuration -- there's no way to know at compile time which language features will be used, since your app could (and probably does) include remote scripts.


> 6+ hours

Yeah, that sounds about right - I use a Chromium-on-Android fork, and according to the lead developer it takes about 3 hours for a release to compile, and that's after optimizing the process as much as possible.


Hopefully a relatively easy fix on the OS side: ensure that these toasts have a higher z-index than app overlay windows.

Sidenote: I really hope Android doesn't lose the ability to do things like these overlay windows.

These types of "power user" features are often where innovation happens. It makes me sad when flexible general purpose APIs are replaced with locked down, specific ones. (e.g. overlay windows API -> a specific "chat heads" API).

The new APIs might be fine for current use cases, but ensure that innovation terminates at whatever the OS vendor designed.


> Hopefully a relatively easy fix on the OS side: ensure that these toasts have a higher z-index than app overlay windows.

The point of the system overlay permission is to draw on top of everything else. Maybe Android should introduce some kind of overlay overlay overlay to overlay clipboard messages, but I think that would probably overcomplicate the API.

Perhaps it's better to instead send a notification when an application rendering a system overlay accesses the clipboard as well. That way, users closing out of the app can see that the application has accessed the clipboard.

Of course sufficiently privileged applications can dismiss notifications, but that requires even more difficult to attain permissions.


I think this makes a good case for something in between that can display the annoying popup chat bubbles, but can't override toasts.


Or just make it so the system-wide Toast cannot be overdrawn, it will always stay on top.


An overlay buster buster


> It makes me sad when flexible general purpose APIs are replaced with locked down, specific ones. (e.g. overlay windows API -> a specific "chat heads" API).

Doesn't have to be a replacement. It's good when you add specific APIs to do specific things that aren't very dangerous and then have the super dangerous one left as a risk signal. Innovation can still happen, common things get easier, and dangerous things get rarer. Feels pretty good to me.


Apple's switch away from the physical home button to gestures has created a usability threshold for the iPhone that many can't cross.

Intricate gestures are difficult to grok for some, and difficult to perform for others. Try using an iPhone and closing an app with shaky hands.

Currently the iPhone SE still has a physical button, but I'm worried what device I'll start recommending to older/less tech savvy people when that goes away.

iOS itself is a bit of a disaster zone too now. I see people constantly get stuck having activated the "press to edit your lock screen" by mistake, or getting confused by a constant stream of ads for iCloud, Apple Arcade etc.

It's sad because most of this poor UX is unnecessary. It feels like its origins are in Apple no longer caring, combined with running out of real ideas and getting distracted with things like widgets.


My biggest gripe is continuously changing interfaces. You've sold me. I'm a customer, I'm using your thing. Why do you want to make it difficult for me to memorize my use of your thing? Moving menus and buttons around all the time is craziness. I don't have time, cognitive capacity, or interest in finding new ways to do the same functionality from before.

Things do need to change over time, I get it, I create things too. Sometimes new functionality evolves and has to go somewhere, sometimes you find a previous design was bad and there truly is an improved layout that will help most. Fine. Those sorts of states should converge quickly so I can memorize and dedicate it to muscle memory vs having to actively look and think all the time.


> You've sold me. I'm a customer, I'm using your thing. Why do you want to make it difficult for me to memorize my use of your thing?

Typical company these days: you're who? Ah yes, you're an existing customer. You're already paying us, sunk time into learning our product, and rearranged your work or life to be at least minimally dependent on us. We can safely ignore you - it's unlikely you'll leave near-term, so our focus is much better spent on acquiring new customers.

To be clear: I hate it, but this seems to be how most software products are being developed these days - all focus is on making them dumb and pretty enough to sell to first-time users, at the expense of already onboarded users.


> it's unlikely you'll leave near-term

Yeah. You’ve chosen Apple for a reason, and since you have only one real alternative that you emphatically do not want, you’ll just have to deal with whatever we throw at you.


The parent refers to the phenomenon described in detail here: https://nothinghuman.substack.com/p/the-tyranny-of-the-margi...


The two phenomena exist together. Yes, companies target Marls, but most, especially startups, seem more interested in acquiring new Marls than in milking the ones they already have.


This is a big part of what made me start using linux. On windows, there's a new way to do basic stuff all the time and they screw around with menus that work perfectly well just to have something new.

That means that any tutorials will quickly get outdated and you can spend half your mental capacity just keeping up with this crap. The number of times I've googled how to do something in MS Office, clicked an article from half a year ago and found that one of the options it mentions doesn't exist anymore is too damn high.

Things are nice on linux, especially the CLI world. You learn a little program once and use it for decades without thinking about it.


As always with Linux it depends on which distribution you're on, but that hasn't been my experience at all with distributions like Ubuntu and Mint. I used Ubuntu back when the close, maximise and minimise buttons were on the top right, and they moved them right out from under me. I've tried to adjust to using Flatpaks but struggled with their very serious limitations, while programs I rely on are no longer available in other package types. I have seen them see-saw between the horrible global menu and window menus, and make wholesale changes to areas of the settings screen like display settings and mouse and trackpad settings. And I don't even know how many wildly different iterations of that horrible main menu application selector UI I have had to suffer through.


I use MATE, an old desktop environment descended from GNOME 2. If I apply the principle of "it should last at least about as long as it's been around...", I can hopefully use it in peace :) (It'll probably look the same in another 20 years!)

I really don't see the benefit of almost anything else that came later (I did add an app launch shortcut that I rarely use). I also autohide most of the UI by default (bars and menus), so it's just there to do its basic function and allow me to focus. Performance is excellent.

That said, I think the main difference is mostly from community-focused development, it tends to bring out genuine usability concerns (which is why I think most *nix DEs work fine).


I've been through all the ups and downs around Ubuntu and gnome, but at least those changes generally happen at a time of my choosing too. I don't go to a commonly used app one day to do something quick only to find the entire UI was updated overnight.

Of course even the CLI world has had its changes though; systemd alone made decades of documentation obsolete.


As an older person, I find the CLI unusable. It's impossible to memorize all the switches; I need to consult ChatGPT at every step. I paid my dues to the CLI gods in my miniVAX days.


The recent overhaul of the Apple Watch was very egregious. Maybe I find it extra offensive since the device is physically attached to me at all times so the muscle memory is particularly strong. There are only 2 buttons on the device and they decided to completely change their behavior, throwing away 10 years of experience I had with the device. The new design isn’t even bad or anything, it’s just that the old design wasn’t bad either, so throwing it away was completely unjustifiable. It’s also offensive because I made the mistake of listening to their marketing and strapped one to everyone elderly in my life to help protect them from falls and heart failure, and now I have to help teach them a bunch of new stuff for no reason. Some of them just decided to stop using the thing instead, and I can hardly blame them.


Which version is this? I’m still on watch 5 and wondering whether I should get a new one.


COUGHmozilla


>... getting confused by a constant stream of ads for iCloud, Apple Arcade etc. It's sad because most of this poor UX is unnecessary.

UX went from an altruistic field centred on making tasks easier to perform to one centred on tricking people into spending money, clicking away rights to their personal data, etc.

It's the logical path of end users being the product. That started with "free" services like Facebook, but now we see it even in expensive products like iPhones and Windows 11.


Not an elderly or disabled person, I probably even fit some people's definition of a touch screen wizard since I regularly use swiping keyboards without looking, but I really can't be bothered with those weird slidey gestures between apps on iOS or Android. At least on Android I can switch to regular (configurable!) bottom buttons.


Gods I miss having real customizable gestures on Android. It took me years to unlearn my gesture set, especially "double tap with three fingers" to close the current app and go home.


I used to use Xposed Edge to configure the crap out of my gestures; it was great. Now that Xposed isn't really a thing anymore, I've followed the rest of the world: I use my phone with two hands most of the time, and with one hand I use my index finger for the top of the screen.

Not terribly hard at my age, but why do I have to?


In principle, I like for example the four-finger sliding gesture to switch apps on the iPad, but it’s implemented in a way that makes it very easy to accidentally swipe to the second-next app instead of to the next app. Similarly, there’s a gesture for Speak Screen where you swipe down from the top with two fingers, but it regularly takes me more than three attempts before it doesn’t instead swipe down the lock screen or the control center. I honestly don’t understand how they think this is fine.


My mum finds the iPhone hard to use (she has early-stage dementia). She has an Apple Watch for fall detection, and I've tried turning off everything, but there are still stacks of things you can't turn off. I'd love to make a simple interface with "Answer Phone" and a list of people she can call, but it's not possible. Once you see her use it, you realise how insanely complex the UI is. Swiping is also harder for older people because their hands are drier and less conductive. We need to keep the iPhone for the fall detection, though; I'm examining options but there's not much around.


Luckily this kind of thing was announced this year:

https://www.macrumors.com/2023/05/16/apple-previews-ios-17-a...


Wow, Point and Speak looks amazing. Despite all their faults, Apple really deserves credit for bringing great accessibility stuff like this to millions of consumers.

>Detection Mode in Magnifier and Point and Speak

>In the Magnifier app, Point and Speak helps users interact with physical objects that have several text labels. For example, while using a household appliance, Point and Speak combines input from the Camera app, the LiDAR Scanner, and on-device machine learning to announce the text on buttons as users move their finger across the keypad.

>Point and Speak is built into the Magnifier app on iPhone and iPad , works with VoiceOver, and can be used with other Magnifier features such as People Detection, Door Detection, and Image Descriptions to help users navigate their physical environment more effectively.


> to millions of consumers.

It's billions!


If we're being pedantic, a single billion; only Android has 2+ billion users.


It still looks a bit complex, I'll see how it goes


Have you looked at the RAZ Memory Cell Phone? I've been considering that for my mom, who also has early stage dementia.


Thanks, that looks great, I'll see if they're in Australia yet.


> Apple's switch away from the physical home button to gestures has created a usability threshold for the iPhone that many can't cross.

Funny you mention home buttons being more usable. I've got mild wrist pain (not RSI levels, fortunately), and I find the pressure required to press it disturbingly high, often resorting to the AssistiveTouch button instead.


It was better with the actual physical button. The current SE has a fake software button that doesn't move; it only vibrates at a certain pressure. It feels more difficult to press, even at the easiest setting, because of the way the pressure detection works.


The Samsung S8 was the best compromise for this - it didn't have a physical home button, but it had an area on screen, where the home button would normally be, that acted like the home button with a firm press of the finger.

Not sure if it had special hardware or just well written drivers, but it always worked flawlessly even with a hung app in the foreground.


IMO iOS gestures are not even that good compared to Android's. Being able to go back by swiping from either edge is so much easier than reaching for the top left of the screen...


iOS does that too though; you almost never have to use the top-left arrow.


Generally it's when the bean-counters get more leverage during economic slowdowns that you see the dark patterns emerge.


Not sure they're to blame here, though. Cars, sure. But for phones, design and software costs don't decrease with worse UX; support costs probably even rise.


QUIC's promise is fantastic, and latency-wise it's great. And probably that's what matters the most for the web.

However, I have run into issues with it for high-throughput use cases. Since QUIC is UDP-based and runs in user space, it ends up being more CPU-bound than TCP, where processing often ends up being done in the kernel, or even in hardware.

In testing in a CPU constrained environment, QUIC (and other UDP-based protocols like tsunami) capped out at ~400Mbps, CPU pegged at 100%. Whereas TCP+TLS on the same hardware could push 3+Gbps.

It'll be interesting to see how it plays out, since a goal of QUIC is to be an evolving spec that doesn't get frozen in time, yet baking in to the kernel/hardware might negate that.


Luckily, there are ways to reduce syscalls (like Generic Segmentation Offload and other tricks[1]). But I agree that not having things run in the kernel makes it more challenging for high-throughput scenarios.

[1] https://blog.cloudflare.com/accelerating-udp-packet-transmis...
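
To make the syscall angle concrete, here's a rough sketch (not from the article) of the batching idea using golang.org/x/net/ipv4 on Linux: queue a pile of datagrams and hand them to the kernel in one sendmmsg-backed call instead of one send syscall per packet. The peer address and sizes are made up, and real GSO (as in the Cloudflare post) goes further by letting the kernel split one large buffer into packets:

    // Sketch only: amortizing per-packet syscall cost for UDP on Linux.
    // Requires the golang.org/x/net module.
    package main

    import (
        "log"
        "net"

        "golang.org/x/net/ipv4"
    )

    func main() {
        dst, err := net.ResolveUDPAddr("udp4", "127.0.0.1:9999") // placeholder peer
        if err != nil {
            log.Fatal(err)
        }
        conn, err := net.ListenUDP("udp4", nil)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        pc := ipv4.NewPacketConn(conn)

        // 64 QUIC-sized datagrams submitted in a single WriteBatch call
        // (sendmmsg under the hood) rather than 64 separate send syscalls.
        payload := make([]byte, 1200)
        msgs := make([]ipv4.Message, 64)
        for i := range msgs {
            msgs[i] = ipv4.Message{Buffers: [][]byte{payload}, Addr: dst}
        }
        n, err := pc.WriteBatch(msgs, 0)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("submitted %d datagrams in one batch", n)
    }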


Is this mostly syscall overhead? Could io_uring and friends solve some of this?


Sure, their offering (in terms of features) is similar to most other shared hosting services.

But what I like is that they've consistently been no-nonsense, super fairly priced, and aren't condescending to technically competent people.

Sure, I could do this myself, but sometimes you just want to get a website online and have someone else worry about hosting it.


> Sure, I could do this myself, but sometimes you just want to get a website online and have someone else worry about hosting it.

Exactly. I run numerous systems at the VPS level -- the OS, the libraries, the databases, the applications, the replication, the automation of all that -- as part of my day job... but that's big production stuff with big production requirements. If I want to throw up a small static site (or even a simple non-static site) and forget about it, NFS is the place to go.

That the UI is exactly the same each time I login, even with years in between, is a huge feature, especially in the mess of UI/UX churn madness that is the rest of the web. NFS is the only shared host that is fair and trustworthy, and that is part aligned incentives, part being run by awesome human beings.


> You don’t want to deal with processing a raw http request from the web server. You don’t want to split headers. You don’t want to sanitize input params, deal with character encoding, content types, gzipping, cache control, etags, basic authentication, flushing headers, chunking bodies, file streaming, tcp sockets, slow client avoidance, and probably 1000 other things I can’t recall.

The Golang stdlib does all this for you, no framework needed :)

I honestly think Golang SSR with HTML templates + a sprinkling of JavaScript to enhance is an extremely pragmatic way to go.

You build your entire service into a single binary (assets included with go:embed).

You shed the complexity of the framework AND the web server AND deployment in addition to React etc.
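
If it helps to picture it, a bare-bones sketch along those lines -- stdlib net/http plus html/template parsed from an embedded directory, no framework. The templates/ directory and index.html names are just placeholders:

    // Sketch: SSR with the Go stdlib only -- templates embedded, one handler,
    // one binary. Assumes a templates/ directory containing index.html.
    package main

    import (
        "embed"
        "html/template"
        "log"
        "net/http"
    )

    //go:embed templates/*.html
    var tmplFS embed.FS

    var tmpl = template.Must(template.ParseFS(tmplFS, "templates/*.html"))

    func index(w http.ResponseWriter, r *http.Request) {
        // net/http has already handled request parsing, chunking, keep-alive, etc.
        data := struct{ Title string }{Title: "Hello from net/http"}
        if err := tmpl.ExecuteTemplate(w, "index.html", data); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
        }
    }

    func main() {
        http.HandleFunc("/", index)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

A few lines of JS can then be layered onto the rendered HTML wherever it genuinely earns its keep.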


Golang is in a special class on its own. Write some handler functions and pass around a context struct and call it a day. I wish every stdlib took care of all that stuff :)

