So the underlying issue is that the iPhone 16 Pro SKU was misdetected as having Neural Accelerator (nax) support and this caused silently wrong results. Not a problem with the actual hardware.
No, it wasn't. A hardware defect so disastrous that it affects floating-point computation on the neural engine, yet so minor that it does not affect any of the other software on the device using that hardware, is exceedingly improbable.
The conclusion that it was not the fault of the developer was correct, but assuming anything other than a problem at some point in the software stack is unreasonable.
All neural accelerator hardware models and all neural accelerator software stacks output slightly different results. That is a truth of the world.
The same is true for GPUs and 3d rendering stacks too.
We don't usually notice that, because the tasks themselves tolerate those minor errors. You can't easily tell the difference between an LLM that had 0.00001% of its least significant bits perturbed one way and one that had them perturbed the other.
But you could absolutely construct a degenerate edge case that causes those tiny perturbations to fuck with everything fiercely. And very rarely, this kind of thing might happen naturally.
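For instance, here's a minimal Python sketch (made-up numbers, nothing to do with MLX or any real model) of how a perturbation of a couple of ulps can flip a greedy choice between two near-tied candidates, which is roughly what happens at every token of greedy LLM decoding:

  # Two near-tied "logits"; nudging the runner-up by a couple of ulps flips the argmax.
  import math

  a = 1.0
  b = math.nextafter(1.0, 0.0)              # the float immediately below 1.0
  logits = [a, b]                           # candidate 0 wins by exactly one ulp

  # perturb candidate 1 upward by two ulps -- far below anything you'd ever notice
  perturbed = [a, math.nextafter(math.nextafter(b, 2.0), 2.0)]

  argmax = lambda xs: max(range(len(xs)), key=xs.__getitem__)
  print(argmax(logits), argmax(perturbed))  # 0, then 1: the "chosen token" flips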
You are correct that implementations of numerical functions in hardware differ, but I do not think you correctly understand the implications of this.
>And very rarely, this kind of thing might happen naturally.
It is not a question of rarity, it is a question of the stability of the numerical problem. Luckily most of the computation in an LLM is matrix multiplication, which is an extremely well-understood numerical problem and one that can be checked for good conditioning.
Two different numerical implementations differing significantly on a well-conditioned problem that requires a lot of computation would indicate a disastrous fault in the design or condition of the hardware, which would be noticed by most computations done on that hardware.
If you weigh the likelihood of OP running into a hardware bug causing significant numerical error in one specific computational model against the alternative explanation of a problem in the software stack, it is clear that the latter explanation is orders of magnitude more likely. Finding a single floating-point arithmetic hardware bug is exceedingly rare (although Intel had one), but stacking them up in a way in which one particular neural network does not function, while other functions on the hardware run perfectly fine, is astronomically unlikely.
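If anyone wants to see how cheap that check is, here's a NumPy sketch (sizes and the notion of "close enough" are purely illustrative):

  # Condition number of a random matrix, and how closely a float32 matmul
  # tracks a float64 reference on a healthy hardware/software stack.
  import numpy as np

  rng = np.random.default_rng(0)
  A = rng.standard_normal((512, 512)).astype(np.float32)
  x = rng.standard_normal((512, 512)).astype(np.float32)

  print("condition number:", np.linalg.cond(A.astype(np.float64)))

  ref = A.astype(np.float64) @ x.astype(np.float64)   # higher-precision reference
  got = (A @ x).astype(np.float64)                     # float32 result

  rel_err = np.abs(got - ref).max() / np.abs(ref).max()
  print("max relative error:", rel_err)                # roughly 1e-6 territory, not 10x off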
I have seen meaningful instability happen naturally on production NNs. Not to a truly catastrophic degree, but, when you deal in 1024-bit vectors and the results vary by a couple bits from one platform to another, you tend to notice it.
The hardware itself is utilized by many pieces of software on any Apple device. Face ID uses it, Siri uses it, the camera uses it, there are also other Apple on device LLM features, where you could easily test whether the basic capabilities are there.
I highly doubt that you could have a usable iPhone with a broken neural engine, at the very least it would be obvious to the user that there is something very wrong going on.
> The conclusion, that it was not the fault of the developer was correct, but assuming anything other than a problem at some point in the software stack is unreasonable.
What do you mean? The developer is perfectly justified in being upset over a basic example not functioning correctly due to a bug on the part of Apple's developers. It just wasn't reasonable to assume that the bug was due to malfunctioning hardware.
Apple's documentation is utter garbage, but this code almost seems like a separate issue (and notably the MLX library uses loads of undocumented Metal properties, which isn't cool). It looks like the change goes from allowing the NAX kernel on the iPhone 17 or upcoming 18 if you're on 26.2 or later, to only allowing it on the iPhone 17 Pro or upcoming 18. I'm fairly sure the GPU arch on the A19 is 17. Limiting that kernel to the 17 Pro or upcoming 18 is notable because the A19 Pro in the 17 Pro has a significantly changed GPU, including GPU tensor cores. The only real change here is that it restricts the "17" generation to the Pro variants.
> The neural accelerator exists in iPhones going back many years.
What has existed before is the Apple Neural Engine (ANE) which is very different from the newer Neural Accelerator support within the GPU blocks. In fact MLX does not even support ANE yet since at least in previous versions it was hardware-limited to computing FP16 and INT8 MADDs, and not even that fast.
Sure, I directly and explicitly talked about Apple's version of tensor cores in the GPU. But the ANE is by every definition a neural accelerator. Yes, I'm aware of Apple's weird branding for their tensor cores.
"In fact MLX does not even support ANE yet"
I didn't say otherwise. The ANE is a fantastic unit for small, power-efficient models, like extracting text from images, doing depth modelling, etc. It's not made for LLMs, or the other sorts of experimental stuff MLX is intended for. Though note that MLX's author's reason for not supporting the ANE is that it has a "closed-source" API (https://github.com/ml-explore/mlx/issues/18#issuecomment-184...), making it unsuitable for an open-source project, and given that MLX didn't want to just lean on CoreML. But anyways, the ANE is fantastically fast at what it does, while sipping juice.
In any case, the code change shown should have zero impact on the running of MLX on an iPhone 16 Pro. MLX tries really hard to leverage platform optimizations, so maybe another bifurcation is making the wrong choice.
The change's effects depend on what each SKU reports as its Metal architecture, both as an identifying string (the equivalent of running 'metal-arch' in the Mac CLI) and as a generation ('gen') number. Most likely you're misinterpreting the change as not affecting the iPhone 16 Pro, when in fact it does.
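If you want to see what your own device reports, MLX exposes it from Python; I'm writing mx.metal.device_info() and the "architecture" key from memory, so treat the exact call as an assumption and check the docs for your MLX version:

  # Print the architecture string this device reports to MLX (API names from memory).
  import mlx.core as mx

  info = mx.metal.device_info()
  print(info.get("architecture"))
  print(info)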
The MLX folks have various rationales for not supporting the ANE (at least as of yet), but one of them is that any real support requires implementing explicit splits in the graph of computations, where ANE-suitable portions are to be dispatched to the ANE and everything else goes back to the GPUs. That's not necessarily trivial.
Blog post dated 28 Jan 2026, the bug fix posted 29 Jan 2026, so I guess this story had a happy ending :)
Still, sad state of affairs that it seems like Apple is still fixing bugs based on what blog posts gets the most attention on the internet, but I guess once they started that approach, it's hard to stop and go back to figuring out priorities on their own.
I think you overestimate the power of a blogpost and the speed of bugfixing at Apple for something like this.
I can almost guarantee there is no way they can read this blog post, escalate it internally, get the appropriate approval for the work item, actually work on the fix, get it through QA, and get it live in production in 3 days. That would only happen for really critical issues, and this is definitely not critical enough for that.
Three days is, agreed, too short. A week is just about possible, though...
I've seen a blog-post, authored a bug in Radar, assigned it to myself, and fixed it the same day. Whether it goes out in the next release is more a decision for the bug-review-board, but since the engineering manager (that would have been me) sits on that too, it's just a matter of timing and seeing if I can argue the case.
To be fair, the closer we are to a release, the less likely a change is to be accepted unless you can really sweet-talk the rest of the BRB, and there's usually a week of baking before the actual release goes out, but that has sometimes been shrunk for developer-preview releases...
Or, one of the developers of the library saw it, decided to fix it in their spare time (does that exist at Apple?) before it became a bigger thing.
If not, talk about a coincidence: someone reported an issue, everything you mentioned had already been done before that happened, and the only thing missing was merging the code into the repository, which happened after the issue was reported. Not unheard of, but it feels less likely than "an engineer decided to fix it".
MLX is a fairly esoteric library seeing very little usage, mostly there to try to foster a broader NN space on Apple devices. This isn't something that is widely affecting people, and most people simply aren't trying to run general LLMs on their iPhone.
I don't think that fix is specific to this, but it's absolutely true that MLX is trying to lever every advantage it can find on specific hardware, so it's possible it made a bad choice on a particular device.
How do you know that it wasn’t merely that the blog post elicited multiple people to file the same duplicate bug in Apple’s radar system, which is how they ostensibly prioritize fixes?
I don't, but the effect is the same: "this might land in the news, let's fix it before it does, since multiple people are reporting the same issue based on this public post someone made".
Methodology is one thing; I can't really agree that deploying an LLM to do sums is great. Almost as hilarious as asking "What's moon plus sun?"
But phenomenon is another thing. Apple's numerical APIs are producing inconsistent results on a minority of devices. This is something worth Apple's attention.
My mind instantly answered that with "bright", which is what you get when you combine the sun and moon radicals to make 明(https://en.wiktionary.org/wiki/%E6%98%8E)
Anyway, that question is not without reasonable answers. "Full Moon" might make sense too. No obvious deterministic answer, though, naturally.
>Moon plus sun is a wildly more massive, nuclear furnace of a moon that also engulfs the earth.
I just looked up the mass of the sun vs the mass of the moon (they differ by roughly 10^30 vs 10^20), and the elemental composition of the sun: the moon would entirely disappear into the insignificant digits of trace elements, which are in the range of 0.01% of the sun. I could be off by orders of magnitude all over the place and it would still disappear.
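For anyone who wants the back-of-the-envelope version (textbook approximate masses, so don't trust the trailing digits):

  # What fraction of the combined mass would the moon contribute?
  M_sun  = 1.99e30   # kg, approximate
  M_moon = 7.35e22   # kg, approximate
  print(M_moon / (M_sun + M_moon))   # ~3.7e-8, i.e. about 0.000004%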
Not sure about that. You can't have an eclipse without both the moon and the sun. Ergo, the eclipse is the totality (sorry!) of the sun and moon, or sun+moon (+very specific boundary conditions).
This thread reminds me of Scribblenauts, the game where you conjure objects to solve puzzles by describing them. I suspect it was an inspiration for Baba Is You.
Scribblenauts was also an early precursor to modern GenAI/word embeddings. I constantly bring it up in discussions of the history of AI for this reason.
Moon implies there is a planet the moon is orbiting. So unless the planet and its moon are too close to the sun the long term result could also be: solar system.
As an aside, one of my very nice family members likes tarot card reading, and I think you'd get an extremely different answer to "What's moon plus sun?" - something like (I'd guess, since they're opposites) "Mixed signals or insecurity get resolved by openness and real communication." It's kind of fascinating, the range of answers to that question. As a couple of other people have mentioned, it could mean loads of things. I thought I'd add one in there.
The scary part isn't "LLMs doing sums." It's that the same deterministic model, same weights, same prompt, same OS, produces different floating-point tensors on different devices.
So true! And as any sane Apple user or the standard template Apple Support person would have suggested (and as they actually suggest) - did they try reinstalling the OS from scratch after having reset the data (of course before backing it up; preferably with a hefty iCloud+ plan)? Because that's the thing to do in such issues and it's very easy.
Reinstalling the OS sucks. I need to pull all my bank cards out of my safe and re-add their CVVs to the wallet, and sometimes authenticate over the phone. And re-register my face. And log back in to all my apps. It can take an hour or so, except it's spread out over weeks as I open an app and realize I need to log in, a dozen times over.
There was a magical period. I suspect it ended with the introduction of the Secure Enclave. But maybe it was a little later.
An encrypted iTunes backup of a device was a perfect image. Take the backup, pull the SIM card, restore the backup to a new phone with the sim card installed, and it was like nothing had happened.
No reauthentication. No missing notifications. No lost data. Ever.
"Well, now it's Feb. 1st and I have an iPhone 17 Pro Max to test with and... everything works as expected. So it's pretty safe to say that THAT specific instance of iPhone 16 Pro Max was hardware-defective."
That logic is somewhat [1] correct, but it doesn't say anything about whether all, some, or only this particular iPhone 16 Pro Max is hardware-defective.
[1] as the author knows (“MLX uses Metal to compile tensor operations for this accelerator. Somewhere in that stack, the computations are going very wrong”) there’s lots of soft- and firmware in-between the code being run and the hardware of the neural engine. The issue might well be somewhere in those.
>ANE is probably the biggest scam "feature" Apple has ever sold.
It is astonishing how often ANE is smeared on here, largely by people who seem to have literally zero idea what they're talking about. It's often pushed by either/or people who bizarrely need to wave a flag.
MLX doesn't use the ANE for the single and only reason that Apple hid the ANE behind CoreML, exposing zero public APIs to utilize the ANE directly, and MLX -- being basically an experimental grounds -- wanted to hand-roll its implementation around the GPU/CPU. They literally, directly state this as the reason. People inventing technical reasons for why MLX doesn't use the ANE are basically just manufacturing fan fiction. That isn't to say the ANE would be suitable for a lot of MLX tasks -- it's highly optimized, power-efficient inference hardware that doesn't work for a lot of purposes -- but its exclusion is not due to technical unsuitability.
Further, the ANE on both my Mac and my iPhone is constantly augmenting and improving my experience. Little stuff like extracting contents from images. Ever browse in Safari and notice that you can highlight text in an image almost instantly after loading a page? Every image has its contents and features detected, effortlessly. Zero fans cycling up. Power usage at a trickle. It just works. It's the same way that when I take a photo I can search "Maine Coon" and get pictures of my cats; the ANE is used for subject and feature extraction. Computational photography massively leverages the ANE.
At a trickle of power.
Scam? Yeah, I like my battery lasting for more than a couple of minutes.
Apple intended ANE to bring their own NN augmentations to the OS and thus the user experience, and even the availability in CoreML as a runtime engine is more limited than what Apple's own software can do. Apple basically limits the runtime usage to ensure that no third party apps inhibit or restrict Apple's own use of this hardware.
Yes, "floating point accumulation doesn't commute" is a mantra everyone should have in their head, and when I first read this article, I was jumping at the bit to dismiss it out of hand for that reason.
But, what got me about this is that:
* every other Apple device delivered the same results
* Apple's own LLM silently failed on this device
to me that behavior suggests an unexpected failure rather than a fundamental issue; it seems Bad (TM) that Apple would ship devices where their own LLM didn't work.
I would go even further and state that "you should never assume that floating point functions will evaluate the same on two different computers, or even in two different versions of the same application", as the results of floating-point evaluations can differ depending on platform, compiler optimizations, compilation flags, the run-time FPU environment (rounding mode, &c.), and even the memory alignment of run-time data.
There's a C++26 paper about compile time math optimizations with a good overview and discussion about some of these issues [P1383]. The paper explicitly states:
1. It is acceptable for evaluation of mathematical functions to differ between translation time and runtime.
2. It is acceptable for constant evaluation of mathematical functions to differ between platforms.
So C++ has very much accepted the fact that floating point functions should not be presumed to give identical results in all circumstances.
Now, it is of course possible to ensure that floating point-related functions give identical results on all your target machines, but it's usually not worth the hassle.
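A two-minute Python illustration of the underlying point, using only the stdlib: the same values summed in a different order (which is effectively what different compilers, flags, or parallel reductions give you) need not produce the same float.

  # Same numbers, different accumulation order, different results.
  import math

  xs = [1e16, 1.0, -1e16, 1.0] * 1000

  forward = sum(xs)                 # left-to-right
  reordered = sum(sorted(xs))       # same values, sorted first
  exact = math.fsum(xs)             # correctly rounded reference
  print(forward, reordered, exact)  # three different answers (1.0, 0.0, 2000.0 with IEEE doubles)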
I guess at the bit level, but not at the level of computation? Anything that relies on bit patterns of nans behaving in a certain way (like how they propagate) is in dangerous territory.
> Anything that relies on bit patterns of nans behaving in a certain way (like how they propagate) is in dangerous territory.
Why? This is well specified by IEEE 754. Many runtimes (e.g. for Javascript) use NaN boxing. Treating floats as a semi-arbitrary selection of rational numbers plus a handful of special values is /more/ correct than treating them as real numbers, but treating them as they're actually specified does give more flexibility and power.
> Many runtimes (e.g. for Javascript) use NaN boxing.
But I've never seen them depend on those NaNs surviving the FPU. Hell, they could use the same trick on bit patterns that overlap with valid float values if they really wanted to.
Can you show me where in the ieee spec this is guaranteed?
My understanding is the exact opposite - that it allows implementations to return any NaN value at all. It need not be any that were inputs.
It may be that JavaScript relies on it and that has become more binding than the actual spec, but I don't think the spec actually guarantees this.
Edit: actually it turns out nan-boxing does not involve arithmetic, which is why it works. I think my original point stands, if you are doing something that relies on how bit values of NaNs are propagated during arithmetic, you are on shaky ground.
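For anyone unfamiliar, here's a toy NaN-boxing sketch in Python (pure bit reinterpretation; note that no float arithmetic ever touches the boxed value, which is exactly why it works):

  # Stash a small integer payload in a quiet NaN's mantissa bits and read it back.
  import struct

  QNAN = 0x7FF8_0000_0000_0000     # exponent all ones + quiet bit set

  def box(payload: int) -> float:
      return struct.unpack("<d", struct.pack("<Q", QNAN | payload))[0]

  def unbox(x: float) -> int:
      return struct.unpack("<Q", struct.pack("<d", x))[0] & 0x0007_FFFF_FFFF_FFFF

  v = box(1234)
  print(v, unbox(v))               # nan 1234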
> An operation that propagates a NaN operand to its result and has a single NaN as an input should produce a NaN with the payload of the input NaN if representable in the destination format.
> If two or more inputs are NaN, then the payload of the resulting NaN should be identical to the payload of one of the input NaNs if representable in the destination format. This standard does not specify which of the input NaNs will provide the payload.
As the comment below notes, the word "should" means it is recommended, but not required. And there are indeed platforms that do not implement the recommendation.
I don't have the spec handy, but specifically: binary operations combining two NaN inputs must result in one of the input NaNs. For all of Intel SSE, AMD SSE, PowerPC, and ARM, the left-hand operand is returned if both are signaling or both are quiet. x87 does weird things (but when doesn't it?), and ARM does weird things when mixing signaling and quiet NaNs.
I also don't have access to the spec, but the people writing Rust do and they claim this: "IEEE makes almost no guarantees about the sign and payload bits of the NaN"
"On RISC-V, most floating-point operations only ever generate the canonical NaN, even if a NaN is given as the operand (the payload is not propagated)."
And from the same article:
"IEEE 754-2008 recommends, but does not require, propagation of the NaN payload." (Emphasis mine)
I call bullshit on the statement "specifically binary operations combining two NaN inputs must result in one of the input NaNs." It is definitely not in the spec.
> For an operation with quiet NaN inputs, other than maximum and minimum operations, if a floating-point result is to be delivered the result shall be a quiet NaN which should be one of the input NaNs.
The same document say:
> shall -- indicates mandatory requirements strictly to be followed in order to conform to the standard and from which no deviation is permitted (“shall” means “is required to”)
> should -- indicates that among several possibilities, one is recommended as particularly suitable, without mentioning or excluding others; or that a certain course of action is preferred but not necessarily required; or that (in the negative form) a certain course of action is deprecated but not prohibited (“should” means “is recommended to”)
i.e. it is required to be a quiet NaN, and recommended to be one of the input NaNs.
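If you're curious which way your own machine leans, here's a quick check (the outcome is allowed to vary by platform, which is the whole point):

  # Does x + 0.0 preserve this NaN's payload here? x86-64 typically yes, RISC-V typically no.
  import struct

  bits_in = 0x7FF8_0000_0000_04D2   # quiet NaN carrying payload 1234
  x = struct.unpack("<d", struct.pack("<Q", bits_in))[0]
  y = x + 0.0
  bits_out = struct.unpack("<Q", struct.pack("<d", y))[0]
  print(hex(bits_in), hex(bits_out), bits_in == bits_out)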
hey, I appreciate your love of language and sharing with us.
I'm wondering if we couldn't re-think "bit" to the computer science usage instead of the thing that goes in the horse's mouth, and what it would mean for an AI agent to "champ at the bit"?
Actually it was originally "champing" – to grind or gnash teeth. The "chomping" (to bite) alternative cropped up more recently as people misheard and misunderstood, but it's generally accepted as an alternative now.
Do you have a source on this, or a definition for what it means to be "primary" here? All I can find is sources confirming that "champing" is the original and more technically correct, but that "chomping" is an accepted variant.
Does it bother anyone else that the author drops "MiniMax" there in the article without bothering to explain or footnote what that is? (I could look it up, but I think article authors should call out these things).
No because it was obvious from context clues that it was an LLM model. Not every word needs to be defined. Also if you were unsure and decided to search “MiniMax M2.1”, every result would be about the LLM.
There are tons of terms that aren't explained that some people (like me) might not understand. I think it's fine that some articles have a particular audience in mind and write specifically for those, in this case, it seems it's for "Apple mobile developers who make LLM inference engines" so not so unexpected there are terms I (and others) don't understand.
Yes, maybe. But it would be nice if there were footnotes or tooltips. Putting the explanation in the text itself breaks the flow of the text, so that would indeed make it worse.
My personal favorite is iHP48 (previously I used m48+ before it died) running an HP 48GX with MetaKernel installed, just like the one I used through college. Still just so intuitive and fast to me.
I was pretty delighted to realize I could now delete the lame Calculator.app from my iPhone and replace it with something of my choice. For now I've settled on NumWorks, which is apparently an emulator of a modern upstart physical graphing calc that has made some inroads into schools. And of course, you can make a Control Center button to launch an app, so that's what I did.
Honestly, the main beef I have with Calculator.app is that on a screen this big, I ought to be able to see several previous calculations and scroll up if needed. I don't want an exact replica of a 1990s 4-function calculator like the default is (ok, it has more digits and the ability to paste, but besides that, adds almost nothing).
Calculator.app does have history now FWIW, it goes back to 2025 on my device. And you can make the default vertical be a scientific calculator now too.
Also it does some level of symbolic evaluation: sin^-1(cos^-1(tan^-1(tan(cos(sin(9))))))== 9, which is a better result than many standalone calculators.
Also it has a library of built-in unit conversions, including live-updating currency conversion. You won't see that on a TI-89!
And I just discovered it actually has a built-in 2D/3D graphing ability. Now the question is whether it allows parametric graphing like the macOS one…
All that said, obviously the TI-8X family hold a special place in my heart as TI-BASIC was my first language. I just don’t see a reason to use one any more day to day.
I run a TI 83+ emulator on my Android phone when I don't have my physical calculator at hand. Same concept, just learned a different brand of calculators.
Built-in calculator apps are surprisingly underbaked... I'm surprised neither of the big two operating systems has elected to ship something comparable to a real calculator. It would be nice if we could preview the whole expression as we type it.
That’s certainly an improvement - but why can’t I modify a previous expression? Or tap to select previous expressions?
What I want is something like a repl. I want to be able to return to an earlier expression, modify it, assign it to a variable, use that variable in another expression, modify the variable and rerun and so on.
I think on the numworks you can use the arrow keys to pull up an old expression. I think it would be really cool if someone built out an interpreted, nicely rendered calculator language/repl that could do variables and stuff. Might be an interesting idea
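As a sketch of how small that idea can start, here's a bare-bones Python REPL with variables, numbered history, and "!N" to recall and re-run line N (a toy only; eval() is not safe input handling for a real app):

  import math

  env = {k: v for k, v in vars(math).items() if not k.startswith("_")}  # sin, pi, ...
  history = []

  while True:
      try:
          line = input(f"[{len(history)}] > ").strip()
      except EOFError:
          break
      if not line:
          continue
      try:
          if line.startswith("!"):              # recall and re-run an earlier line
              line = history[int(line[1:])]
              print(line)
          history.append(line)
          if "=" in line and "==" not in line:  # naive assignment check: name = expression
              name, expr = line.split("=", 1)
              env[name.strip()] = eval(expr, {"__builtins__": {}}, env)
              print(env[name.strip()])
          else:
              print(eval(line, {"__builtins__": {}}, env))
      except Exception as e:
          print("error:", e)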
You can, but it seems to just select & use the results of previous expressions. I often want to modify & iterate on the formulas I've previously entered. Or rerun them.
I think there was a calculator like this about a decade ago released for macos, but I can't remember what it was called. Brilliant little piece of software. I assume most people didn't understand it, and it slowly disappeared.
Typing on my iPhone in the last few months (~6 months?) has been absolutely atrocious. I've tried disabling/enabling every combination of keyboard settings I can think of, but the predictive text just randomly breaks, or it just gives up and stops correcting anything at all.
It’s not just you, and it got bad on my work iPhone at the same time so I know it’s not failing hardware or some customization since I keep that quite vanilla.
Interesting post, but the last bit of logic pointing to the Neural Engine for MLX doesn’t hold up. MLX supports running on CPU, Apple GPU via Metal, and NVIDIA GPU via CUDA: https://github.com/ml-explore/mlx/tree/main/mlx/backend
Good article. Would have liked to see them create a minimal test case, to conclusively show that the results of math operations are actually incorrect.
I'd think other neural-engine-using apps would also have weird behavior. It would've been interesting to try a few App Store apps and see whether they misbehave too.
> Or, rather, MiniMax is! The good thing about offloading your work to an LLM is that you can blame it for your shortcomings. Time to get my hands dirty and do it myself, typing code on my keyboard, like the ancient Mayan and Aztec programmers probably did.
They noticed a discrepancy, then went back and wrote code to perform the same operations by hand, without the use of an LLM at all in the code production step. The results still diverged unpredictably from the baseline.
Normally, expecting floating-point MAC operations to produce deterministic results on modern hardware is a fool's errand; they usually operate asynchronously, so the non-associativity of floating-point addition rears its head and you get some divergence.
But an order of magnitude difference plus Apple's own LLM not working on this device suggests strongly to me that there is something wrong. Whether it's the silicon or the software would demand more investigation, but this is a well reasoned bug in my book.
I should think I'll probably see someone posting this on the front page of HN tomorrow, no doubt. I first read it when it was already enormously old, possibly nearly 30 years old, in the mid 1980s when I was about 11 or 12 and starting high school, and voraciously reading all the Golden Age Sci-Fi I could lay my grubby wee hands on. I still think about it, often.
I found the article hard to read. I turned on reader mode. I still found it hard to read. Each sentence is very short. My organic CPU spins trying to figure out how each sentence connects to the next. Each sentence feels more like a paragraph, or a tweet, instead of having a flow. I think that's my issue with it.
My TL;DR is that they tried to run an on-device model to classify expenses, it didn't work even for simple cases ("Kasai Kitchin" -> "unknown"), they went deeeeeep down the rabbit hole to figure out why and concluded that inference on their particular model/phone is borked at the hardware level.
Whether you should do this on device is another story entirely.
> What's to be gained... by offloading inference to someone else?
Access to models that local hardware can't run. The kind of model that an iPhone struggles to run is blown out of the water by most low-end hosted models. It's the same reason most devs opt for Claude Code, Cursor, Copilot, etc. instead of local models for coding assistance.
Claude code produces stuff orders of magnitude more complicated than classifying expenses.
If the task can be run locally on hardware you own anyway, it should.
I would really not want to upload my expense data to some random cloud server, nope. On-device is really a benefit even if it's not quite as comprehensive. And it's really in line with Apple's privacy focus, so it's very imaginable that many of their customers agree.
I severely doubt your thesis around iPhones being Veblen goods.
You are claiming that if the price of the iPhone went down, apple would sell fewer phones?
Correspondingly, you are arguing that if they increased prices they could increase sales?
You are claiming that 100s of millions of people have all made the decision that the price of an iPhone is more than it is worth to them as a device, but is made up for by being seen with one in your hand?
Not all goods that signify status are Veblen goods.
>Correspondingly, you are arguing that if they increased prices they could increase sales?
Veblen goods aren't like this. If they were, everything would be priced at infinity. Veblen goods have to take into account the amount of spending money their target customers have, and how much they're willing to spend. Apple products are priced this way. They're not targeted just at people who can afford Rolls-Royce Silver Shadows, they're targeted at regular people who are willing to spend too much money on a phone when they can get an equivalent Android phone for half the price. Those people have limited money, but they're willing to overpay, but only so much.
>You are claiming that if the price of the iPhone went down, apple would sell fewer phones?
Quite likely, yes. If they adopted razor-thin profit margins on iPhones, their phones would be seen as "cheap" and wouldn't have the cachet they have now. More people would start looking at alternatives, and start buying Samsung Galaxies and other flagship Android phones.
Increasing demand with increasing prices is the very definition of a Veblen good. I never said anything like pricing them at infinity (an exceptionally stupid way of saying that something is not for sale).
I simply pointed out that there isn’t really any reason to believe that a mass produced easily available phone that holds a massive percentage of the entire global cell phone market would see increased demand from increased prices. It is an extraordinary claim with nothing resembling evidence. The most damning evidence is that the most expensive iPhone, the Pro Max, is outsold 2:1 by the base model for the last three generations, despite being visually distinguishable. (The 17 saw initial sales of Pro Maxes higher than base, but that appears to have corrected. Easily understandable that early adopters are more willing to pay for the best version of new tech)
There is an argument to be made that the Pro Max flirts with Veblen for small parts of the market, or that certain submarkets in poorer countries treat the iPhone that way, but that all looks more like conspicuous consumption. I still don’t believe that Pro Max sales increase if the price increases. A few individuals or submarket will not have the ability to invert a demand curve for an Apple device.
Again, I think that you are confusing conspicuous consumption with a Veblen good. This sentence is the giveaway:
> Those people have limited money, but they're willing to overpay, but only so much.
What you are describing is a normal demand curve. As price rises fewer people are willing to pay. People being unable to pay for something they still want does not make something a Veblen good (that would make insulin a Veblen good). You are describing a steep demand curve, not a reversed one.
Just because you perceive that an equivalent Android can be purchased for half the price does not mean that everyone uses your criteria. I tried switching to a lower-priced Android made by Google. In no way was it equivalent for my purposes, and I still wouldn't want it. I am happy to pay the price, not because I care about being seen with an iPhone, but because it is the tool that I have determined best suits my purposes. Many people refuse to believe this, but many people like the Apple ecosystem.
I mean, I think it's cultural. In the US it seems like everyone has an iPhone; it's almost kinda quirky not to have one. But in some other places, an iPhone is more than your monthly salary - having one is definitely a symbol of status. Less so than it used to be, but it still has that.
iPhones in the US have an estimated ~55% market share, depending on source. Owning an Android wasn't unusual in the least when I lived there, and it appears to be pretty popular.
I don't think it's unusual that a country with high median income and higher average income will tend to gravitate towards more expensive phones. Given that Apple doesn't make a cheap phone, it kind of follows that wealthier countries will buy more iPhones.
Of course the opposite is true as well: in a country where an iPhone is measured in months of salary, they won't sell well, but I'd be willing to bet that Androids in that price tier sell like shit in those countries too.
Is it a status symbol? arguably. But it also correlates pretty strongly with median income.
Fair, but that’s a comment on a US-centric website, run by a US-centric company, in a US-centric industry, on a US-centric medium. So if they didn’t mean US, I think the onus is on them to clarify exactly where this applies.
Admittedly, I hate companies that live off their marketing. Nintendo, Disney, Apple. I hate that these companies can weaponize psychology against humans.
Yes, because 60% of US phone buyers buy an iPhone to stand out from the average US phone buyer, and they shouldn't because it doesn't run local LLMs well?
Some companies definitely do just exist on marketing. Some clothing brands are objectively overpriced crap and pure wealth signalling. Or something like a Juicero.
But I agree Apple doesn't even though they've gone into a direction I couldn't follow them in.
Not really. They back it up with "good enough tech" that looks pretty and sucks people in with marketing, and then locks them into a closed ecosystem. Admittedly, some of their tech is actually very good (e.g. M-series ARM-based CPUs), but much of it is nothing special, or worse, just copying something else that competitors have been doing for years, presenting it as brand-new, and claiming credit for it.
They did this with the always-on screens for phones. My LGs had this many, many years ago. It was so bad that when Apple finally brought it out and acted like they had invented it, coworkers saw my LG and asked if I had gotten the latest iPhone, and I had to point out that it was a 5-year-old LG.
And then there's other stuff that Apple has which is just plain bad, but they present as new and wonderful, such as the "island" keyboard.
I'd almost say most companies live or die off their marketing. One could argue that understanding your customer as well as or better than they understand themselves is a strength.
To wit, some people do value form over function. Some people do prefer a safe, curated walled garden.
I am not among them--I say this as someone who cannot stand using most Apple products for more than a minute. But I respect what they offer(ed) and for some people even recommended them. (Now I'm less sure because it seems like everything tech has gone to shit, but I can't tell if that's just "old man yells at cloud" or what)
Ideally there would be enough competition for us all to find what we're looking for. I think anticompetitive behavior is a worse sin
All three of these companies are supremely dedicated to the customer experience. It’s a weird thing to be annoyed at. Ninty is the only company really experimenting with gaming hardware. Disney parks are a thesis on hiding the “behind the scenes” stuff perfectly. Apple does its best to make things just kinda work well, and if you’re in their ecosystem fully, it usually does work out.
Not everyone cares for the most capable device on the planet. Sometimes people just want a pretty familiar and easy experience. I haven’t used my phone for anything more than browsing the web and texting in ages. I absolutely don’t care about whatever function you think I’m missing due to Apple, honestly.
As a side note, the fathers of Psychology were absolutely terrible scientists. The entire field almost failed because they took it so far into pseudo-science land. Of course Jung isn’t correct.
Can you prove that is still the case with the iPhone SE, by showing comparable hardware with similarly long software-update support at a lower price?
> Its a demonstration of wealth. This is called Veblen good
Just the other day I was reminded of the poor little "I am rich" iOS app (a thousand dollar ruby icon that performed diddly squat by design), which Apple deep-sixed from the app store PDQ.
The author is debugging the tensor operations of the on-device model with a simple prompt. They confirmed the discrepancy with other iPhone models.
It’s no different than someone testing a calculator with 2+2. If it gets that wrong, there’s a hardware issue. That doesn’t mean the only purpose of the calculator is to calculate 2+2. It is for debugging.
You could just as uncharitably complain that “these days no one does arithmetic anymore, they use a calculator for 2+2”.
I mean, Apple's LLM also doesn't work on this device, plus the author compared the outputs from each iterative calculation on this device vs. others, and they diverge from every other Apple device. That's a pretty big sign both that something is different about that device and that the same broken behavior carried across multiple OS versions. Is the hardware or the software "responsible"? Who knows; there's no smoking gun there, but it does seem like something is genuinely wrong.
I don't get the snark about LLMs overall in this context; this author uses LLM to help write their code, but is also clearly competent enough to dig in and determine why things don't work when the LLM fails, and performed an LLM-out-of-the-loop debugging session once they decided it wasn't trustworthy. What else could you do in this situation?
Somewhere along the line, the tensor math that runs an LLM became divergent from every other Apple device. My guess is that there's some kind of accumulation issue here (remembering that floating-point accumulation is not associative), but it seems genuinely broken in an unexpected way given that Apple's own LLM also doesn't seem to work on this device.
If you’d read the whole thing, you would go on a debugging journey that both involved bypassing the LLM and was appropriate for HN (vs not dismissing the article), so you might want to do that.
They asked MiniMax on their computer to make an iPhone app that didn't work.
It didn't work using the Apple Intelligence API. So then:
* They asked Minimax to use MLX instead. It didn't work.
* They Googled and found a thread where Apple Intelligence also didn't work for other people, but only sometimes.
* They HAND WROTE the MLX code. It didn't work. They isolated the step where the results diverged.
> Better to dig in a bit more.
The author already did 100% of the digging and then some.
Look, I am usually an AI rage-enthusiast. But in this case the author did every single bit of homework I would expect and more, and still found a bug. They rewrote the test harness code without an LLM. I don't find the results surprising insofar as I wouldn't expect MAC to converge across platforms, but the fact that Apple's own LLM doesn't work on this hardware and the device's own results are an order of magnitude off makes this a reasonable bug report, in my book.
Fascinating the claim is Apple Intelligence doesn't work altogether. Quite a scandal.
EDIT: If you wouldn't mind, could you edit out "AI rage enthusiast" you edited in? I understand it was in good humor, as you describe yourself that way as well. However, I don't want to eat downvotes on an empty comment that I immediately edited when you explained it wasn't minimax! People will assume I said something naughty :) I'm not sure it was possible to read rage into my comment.
> Fascinating the claim is Apple Intelligence doesn't work altogether. Quite a scandal.
No, the claim is their particular device has a hardware defect that causes MLX not to work (which includes Apple Intelligence).
> EDIT: If you wouldn't mind, could you edit out "AI rage enthusiast" you edited in? I understand it was in good humor, as you describe yourself that way as well. However, I don't want to eat downvotes on an empty comment that I immediately edited when you explained! People will assume I said something naughty :) I'm not sure it was possible to read rage into my comment.
Your comment originally read:
> This is blinkered.
> - MiniMax can't fit on an iPhone.
> - There's no reason to expect models to share OOMs for output.
> - It is likely this is a graceful failure mode for the model being far too large.
> No fan of Apple's NIH syndrome, or it manifested as MLX.
> I'm also no fan of "I told the robot [vibecoded] to hammer a banana into an apple. [do something impossible]. The result is inedible. Let me post to HN with the title 'My thousand dollars of fruits can't be food' [the result I have has ~nothing to do with the fruits]"
> Better to dig in a bit more.
Rather than erase it, and invite exactly the kind of misreading you don't want, you can leave it... honestly, transparently... with your admission in the replies below. And it won't be downvoted as much as when you're trying to manipulate / make requests of others to try to minimize your downvotes. Weird... voting... manipulating... stuff, like that, tends to be frowned upon on HN.
You have more HN karma than I do, even, so why care so much about downvotes...
If you really want to disown something you consider a terrible mistake, you can email the HN mods to ask for the comment to be dissociated from your account. Then future downvotes won't affect your karma. I did this once.
Huh. I thought the minimum comment score was -4 (which would make the maximum amount of karma loss 5, since each comment starts at 1 point), but I didn't know if that was a cap on karma loss or just a cap on comment score.
Neural nets / AI are very bad at math; they can only produce what's in the training data. So if you have trained one on 1+1 through 8+8, it can't do 9+9. It's not like a child's brain, which can draw logical conclusions.
> Update on Feb. 1st:
> Well, now it's Feb. 1st and I have an iPhone 17 Pro Max to test with and... everything works as expected. So it's pretty safe to say that THAT specific instance of iPhone 16 Pro Max was hardware-defective.