I'd forgotten tech reporting was just as bad in 1999 as it is today. Here's the important part missing from the article: that serial number is available from userspace, and cannot be intercepted by the kernel in any way. They provided a way to disable it, but not to report, control, or intercept how it is accessed. It is returned by the unprivileged, untrappable* CPUID instruction.
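To make that concrete, here's a minimal sketch of how any unprivileged program could read the 96-bit PSN (using GCC's <cpuid.h>; leaf 3 only exists when the feature is present and still enabled):

    /* Sketch: reading the PIII processor serial number entirely from
       unprivileged userspace. The 96-bit PSN is the processor signature
       (CPUID leaf 1, EAX) followed by the 64 bits from CPUID leaf 3. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void) {
        unsigned eax, ebx, ecx, edx;
        /* CPUID.1:EDX bit 18 = PSN feature present and enabled */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx) || !(edx & (1u << 18))) {
            puts("PSN not supported or disabled");
            return 1;
        }
        unsigned sig = eax;                         /* top 32 bits */
        if (!__get_cpuid(3, &eax, &ebx, &ecx, &edx))
            return 1;
        printf("PSN: %08x-%08x-%08x\n", sig, edx, ecx); /* low 64 in EDX:ECX */
        return 0;
    }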
Every single UEFI computer sold today has a unique serial number (GUID). There are MAC addresses. There are HDD serial numbers. There are zillions of unique identifiers accessible to the operating system. Various copy protection schemes use one or more of these. But what they all have in common is that they are under the control of the OS. A privacy-conscious OS can forbid access to these identifiers for userspace applications, or can fake them to something else. This is how e.g. sandbox environments like the App Store can force apps to use some kind of "advertising ID" for this stuff, and ensure that apps aren't sneakily fetching some true unique system ID.
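As an illustration of "under the control of the OS": on Linux the SMBIOS/UEFI GUID is exposed through a root-only sysfs file, so the kernel can deny it or fake it for unprivileged apps. A sketch:

    /* Sketch: the firmware GUID is read *through* the OS. By default
       /sys/class/dmi/id/product_uuid is readable only by root, so an
       unprivileged app can't silently grab it. */
    #include <stdio.h>

    int main(void) {
        char uuid[64];
        FILE *f = fopen("/sys/class/dmi/id/product_uuid", "r");
        if (!f) { perror("product_uuid (needs root)"); return 1; }
        if (fgets(uuid, sizeof uuid, f)) printf("SMBIOS UUID: %s", uuid);
        fclose(f);
        return 0;
    }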
But with the PIII serial number, userspace apps can fetch it without the OS knowing about it. And the disable bit is a one-time operation, so it is not possible to grant serial number access to some apps and not others. This leads to a situation where any arbitrary unprivileged userspace app can uniquely identify your machine, and where vendors relying on this feature might compel you to leave it enabled (e.g. DRM). Now random apps running under an untrusted user can fingerprint your machine, just because you want to watch Netflix.
And that is why this design was utterly broken and a privacy nightmare. Not because it's a unique ID. We have tons of those.
* VMs can trap CPUID, but of course VM support came later anyway.
rr optionally uses CPUID faulting to mask off RDRAND, but that's a completely non-critical issue: if faulting is unavailable, libraries just need an env var switch to disable RDRAND usage. It also uses CPUID faulting to make traces portable between systems (if you really need that, you can run rr inside a VM, and it will work fine).
But core rr functionality works just fine on Zen CPUs, after an MSR to disable some determinism-breaking speculation was discovered. Those problems were not related to CPUID.
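For anyone curious, a rough sketch of the CPUID-faulting mechanism mentioned above, using Linux's arch_prctl interface (assumes a CPU with the CPUID-faulting feature; the ARCH_SET_CPUID constant is copied from <asm/prctl.h>):

    /* After ARCH_SET_CPUID with arg 0, the otherwise-untrappable CPUID
       instruction raises SIGSEGV, which a supervisor like rr can catch
       and emulate. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define ARCH_SET_CPUID 0x1012   /* from <asm/prctl.h> */

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {                       /* child: disable CPUID */
            if (syscall(SYS_arch_prctl, ARCH_SET_CPUID, 0) != 0)
                _exit(1);                     /* feature unavailable */
            unsigned eax = 0;
            __asm__ volatile("cpuid" : "+a"(eax) :: "ebx", "ecx", "edx");
            _exit(0);                         /* not reached if faulting works */
        }
        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            printf("CPUID faulted: child got signal %d\n", WTERMSIG(status));
        else
            printf("child exited normally (%d)\n", WEXITSTATUS(status));
        return 0;
    }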
It's common to have a unique SN in a processor, and it lets SW vendors do copy protection too. E.g. at Sonos we used them to tie the software's signed certificate to the SN, such that you couldn't run a given Player's software on another Player without the same SN. When making products via contract manufacturers, especially in China, it was a wise procedure.
When I built my PC, the CPU came with some free game or game coupon or something. It wasn't in the box; I instead had to download and run some AMD software that verified I was in fact using the CPU. I'm under the impression that it detected my specific CPU, not just the make/model but some sort of unique ID. Presumably this stops people from claiming the code and reselling the CPU (or at least reselling it as "new"), or from spinning up a VM and trying to redeem random promos without owning the real CPU.
Thought it was interesting that they did that but didn’t think much more of it. I don’t even remember what the promo was. Might have just been extended warranty or something?
In November I activated such an AMD promo for my Ryzen 5000X. The game I got was Far Cry 6. Hilarious, because the game keeps getting delayed, now to mid-September or so of this year, almost a year after I bought the CPU. Not exactly the best promo to show off the power of the CPU :)
> But didn't you say it'd help to prevent counterfeiting of chips?
The stolen part I get, but did it used to be easier to counterfeit chips? There's a lot that goes into making something that looks like a PIII, and even then, I assume Intel had state-of-the-art fabs, so I'm surprised this was a concern.
The hardware scams I've heard of are stamping better specs on something (for hard drives, a firmware hack that makes a drive report a higher capacity) and unauthorized hardware made during off-hours on the same production line.
Those are the direct scams - but there are also lesser scams where a supplier replaces a part with a working part made by another manufacturer and pockets the difference. The parts may work correctly (in some cases it was only discovered because the parts worked FASTER than expected) - but it is fraud.
What Intel meant at the time was "someone is selling our chips relabeled as a higher SKU and pocketing our cut". This happened in 1996, with criminal gangs remarking Pentium 120s as 150s, 133s as 166s, etc., pocketing an extra ~$30-50 per CPU.
Intel was so pissed that someone else was making money on binning that it locked the multiplier on later 133/166/200 parts and on all Pentium MMX and later models, ending easy overclocking.
Except for the classic case of the Celeron 300A, which let you go from 300 MHz to 450 MHz with a simple change to the motherboard's FSB clock setting.
I was nervous when I ordered the parts - but stunned that it was so easy for me to do, and left me with a machine that was effectively faster than anything Intel was officially selling at the time.
FSB (and later BCLK) overclocking is what we were left with for a while, until Intel killed that too with Sandy Bridge, having introduced the special upscale K-series CPUs a generation earlier with Nehalem. All in an effort to sell gimped parts while charging extra for the whole deal.
>"Prior to 1984, these doodles also served a practical purpose. If a competitor produced a similar chip, and examination showed it contained the same doodles, then this was strong evidence that the design was copied (a copyright violation) and not independently derived."
It's amazing how in 1999 OP was worried about UID = Big Brother, but today we've all willingly handed over security and privacy to 3~5 companies, even without talking about Facebook, Google, or Amazon. Jeez, did everyone forget LexisNexis? They were doing this way before FAANG. But it's game over: there is no more privacy or security. We lost, they won. I give to the EFF every year, but it feels more and more futile every year.
I remember a time when people circulated lists of software that phoned home, calling them out as spyware for collecting nothing more than our IP addresses. These days everything wants to connect to the internet. The EFF is still one of those groups I have no reservations about donating to, but yeah, it feels like we're only going in the wrong direction.
This was a self-own by Intel where they claimed that the processor serial number would be used for online authentication which was then exaggerated to "the processor serial number will be a super-cookie sent in every HTTP request" which does sound pretty big-brotherish.
The ME, on the other hand, is obviously good since it "allows" you to watch 4K Netflix on your PC.
> The ME, on the other hand, is obviously good since it "allows" you to watch 4K Netflix on your PC.
The ME has nothing to do with this, it's entirely about the GPU. 7th generation Intel GPUs and 10xx or newer nVidia GPUs support the DRM that Netflix requires, the CPU just needs to be fast enough to handle its part of the equation.
Intel did release a browser plugin/ActiveX control which claimed to expose the processor serial number to websites. But it was such a scandal that it almost immediately disappeared, along with the documentation, so I never found out how it actually worked.
Every single iPhone and iPad transmits its hardware serial number to Apple when you launch the App Store app, or on first boot after restore for "activation".
> Intel has revealed that each Pentium III chip will carry a unique serial number that can be read by the computer's software. Intel claims that the serial number will facilitate e-commerce, promote "digital content protection," prevent counterfeiting of Intel processors, and help to track stolen ones. We know users have questions about this controversial feature, so we assembled this FAQ. Q: Why are privacy experts concerned? A: Privacy experts are concerned because the CPU's electronic serial number could be used for purposes that may not be in users' best interests.
In many ways, all of the predictions made at the time in replies to articles like this have come true. Everyone is spying on us constantly; legitimate end-users are absolutely peeved off and made to suffer SaaS and activation crap; and the pirates just continue exactly as before (albeit with a nice, NSA-provided open-source RE environment to play in...)
> A software product could even "seek permission" from the vendor -- via the Internet or your modem -- each time it ran, so that the vendor would know whenever you started the program.
And better yet, whenever you complain about how invasive it is, someone will appear to take up page space to call you names for wanting things to be better.
Pretty sure I remember Sun compilers, Veritas Volume Manager, and several other pieces of software having license keys that expired back in the 1990s. Several of them used something called FlexLM.
FlexLM is alive and kicking. All of the EDA software I use daily is protected by it. It's so archaic I can only assume it's been broken seven ways from Sunday, but it's still there. I'd be interested to know if anyone here knows why it's the de facto standard for on-premises software licensing.
The reason it became the de facto standard was that it had bindings for C/C++/Java (what most commercial software was written in from the 90s onwards) and worked on all the main platforms (including most Unixes). There wasn't much else that provided that in the 90s and early 2000s.
RLM is a slightly more modern alternative to FlexLM which some software has moved to over the past 10 years from FlexLM.
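For reference, a FlexLM-style license file is just a few lines of plaintext naming the license server, the vendor daemon, and the licensed features; everything below (hostname, host ID, daemon and feature names, the signature) is made up for illustration:

    SERVER licserver1 0019b9aabbcc 27000
    VENDOR acmed
    FEATURE fancy_sim acmed 2021.06 31-dec-2022 4 SIGN="00D1 F3A2 ..."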
Thanks, interesting to see that this is a niche market that's relatively untapped. What seems strange at first is that it's protecting SW that can cost as much as 100k per seat (e.g. ASIC EDA licenses). I'm sure less scrupulous operations are pirating the SW, though perhaps the EDA vendors know the companies who do pay are honest lest they be found out, so a truly secure licensing system is less critical to them.
Yes, I remember that. One of my first jobs out of school was as a sysadmin for various Unix machines (Sun, DEC, HP, and others I've forgotten.) FlexLM was everywhere. Annoying to configure. I'd install it and have no way to test whether it was working, since I didn't actually run any of that software. I'd email some people in another group to get help testing it, and would never hear back. The joys of corporate America. I moved on pretty quick.
I mean, in 1999 one of the dominant software distribution models was shareware where frequently applications would stop working after some amount of time and demand payment.
With subscriptions, even continuing to pay increasing amounts isn't a guarantee that the product will remain available once the vendor decides to cancel it. And piracy isn't an option for developing countries where the software price is over a year's average wage.
It's never guaranteed, no matter what payment model. There is plenty of discontinued closed-source software from way before a subscription model became a thing that everyone does, and some of it managed to stay alive only thanks to piracy. With subscriptions it's way worse, because it has to rely on a central server, sometimes for no good reason at all, and you can't even do that. It's just bound to happen that the company will pull the plug once it's no longer profitable.
Back in the day, license keys weren't checked at a central server; there was some kind of checksum or database inside the software instead. This was wonderful for both pirates and legitimate users when the company disappeared.
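The general idea was that the key carries its own check value, so validation needs no network at all. A toy sketch (the scheme here is invented for illustration; real products used endless variants):

    /* Toy offline key check: key format "NAME-CCCCC", where CCCCC is a
       check value derived from the name. No server involved. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static unsigned check(const char *id) {
        unsigned h = 5381;                       /* djb2-style hash */
        while (*id) h = h * 33 + (unsigned char)*id++;
        return h % 100000;
    }

    static int valid_key(const char *key) {
        const char *dash = strchr(key, '-');
        if (!dash || dash - key >= 32) return 0;
        char id[32] = {0};
        memcpy(id, key, (size_t)(dash - key));
        return check(id) == (unsigned)atoi(dash + 1);
    }

    int main(void) {
        printf("check value for ACME42: %05u\n", check("ACME42"));
        printf("bogus key accepted? %d\n", valid_key("ACME42-00000"));
        return 0;
    }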
I still remember a working Starcraft CD key (though it later turned out that something on the order of 01234566789 worked).
I remember my Windows XP QQWD7 serial key too. Even when a central server was the default, there was usually still a way to do an offline activation. Nowadays, not anymore.
Discontinued software can still be usable for a very long time. Properly written applications like Office can work on Windows versions that are decades older than they are, or in the worst case keep working in VMs until the end of time, like those legacy apps that are still running from before the 90s.
No joke. It was one of the first apps I used, and for non-European languages it's leaps and bounds ahead of most FOSS today, which can't even do RTL right, let alone letter merging etc. And .doc files work perfectly with all the modern editors too.
The world of open source lacks those enterprise subscription/licensing-per-core model flavours that corporations can account for; personally, and slightly tongue-in-cheek, I feel that model is a product of accountants coming before programmers.
Or home users, for whom the software doesn't expire; just the updates and support do. It's more a terminology distinction in that way.
Today more things are subscription based, but mostly for the content that drives that software.
I look back over the last 22 years, and we had very different attitudes towards privacy back then. It's sad to think that almost 2 generations are now growing up without knowing what privacy was, or how much actual freedom there was in being a free range kid with the only obligation being home for supper.
We overestimate the change over the course of a year, but seriously underestimate it over a couple of decades.
22 years ago the vast majority of communications were either unencrypted or, if you had a fancy mobile phone, encrypted using weak, ITAR-compliant ciphers. Encryption on the Internet was for e-commerce sites with the budgets for the extra hardware, certification, and compliance needed for SSL, and the protocol was also kneecapped by ITAR outside of the US. Hell, even early DRM systems were laughably easy to break because of export-grade encryption (see DVD CSS).
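For a sense of scale: a 40-bit export-grade key space is only 2^40 ≈ 1.1 trillion possibilities, so even at a modest 10 million keys per second a single machine could exhaust it in about 2^40 / 10^7 ≈ 110,000 seconds, roughly 30 hours. This is why export-grade SSL sessions were already being brute-forced in days as early as 1995.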
The primary privacy protection for communications in 1999 was legal, not technical:
1. Police needed a warrant to listen in on your communications (or, if they only wanted to know who you were calling, no warrant was needed)
2. Private wiretapping would land you in jail, and required covert access to someone's house, making it riskier to pull off
3. Analog telephone systems (already out of date) were entirely protected by a law that made it illegal to provide consumer-grade equipment that could be easily modified to tune to the 800 MHz band. This is still law today, despite the frequencies no longer being used for analog telephony, and is a thorn in the side of amateur radio.
This could be summed up as "we promise not to spy on you if you promise not to resist us if we change our mind".
An interesting expression of this idea happened with the whole NSA Clipper Chip debacle. Effectively, the US government wanted to move from unencrypted everything to key-escrow encryption, where private citizens would be technically prevented from wiretapping your phones, but law enforcement could still do so. It failed so hard that the US government just stopped regulating crypto export and the NSA retreated to slipping vulnerabilities in crypto standards (e.g. Dual_EC_DRBG, TLS Extended Random).
The actual legal protections I mentioned above melted away under the heat of the War on Terror. The US government adopted a classified interpretation of wiretapping law that boiled down to "if we aren't listening, we haven't spied on you". Effectively, the NSA would wiretap everything and store it securely, and then once they had legal justification to actually wiretap you, they'd open up what they had already recorded. In theory, this is just turning a wiretap order into one issued about 30 days ago. In practice, the "legal justification" part was someone filling out a form in XKeyscore and clicking a button, with no further verification in the vast majority of cases.
It was only after much of this leaked - twice, I might add - that people outside of encryption enthusiast communities actually started taking technical privacy protections seriously. Things like end-to-end encrypted messaging, Let's Encrypt, and efficient cipher implementations that actually made encrypting everything practical are things that people in 1999 could only dream of (except for the above-mentioned cypherpunks). On the other hand, all of this extra security is fundamentally reactive. We would not be encrypting the whole web were it not for certain Nation-State Actors abandoning their already-flimsy legal protections and going for full-take.
Of course, none of that matters when you were just going to self-surveil and post everything you do to Facebook anyway. But that isn't really all that new. People have always been bad at keeping secrets, advertisers have always been spying on you (before the Internet), and there's hundreds of years of legal precedent concerning when, where, and how much privacy you lose when you open your mouth or go out in public. If there is a difference between the 90s and today, it's that today's technology makes you a lot more aware of when your privacy has been violated. Target may know when you're pregnant before your father does, but Facebook will brag about it to you.
At least there was/is a legal framework under which the behaviour of telecommunications firms was regulated. Exempting "information services" from those regulations is, in my opinion, the biggest policy failure in the last 30 years.
Back in 1999, we didn't have streaming TV services to any real degree. What ads you saw in the bulk of media were not visible to the intermediaries. The internet hadn't descended into click-addiction as engagement, as we hadn't yet created neural networks tuned toward rewards that result in damage to real-world humans. Faked photos and videos required real effort and resources to create, and print media was still a viable funding model for investigative journalism.
The base of technology is neutral -- machines that do math for us are not good or bad on their own. The goals that advertisers and their enablers so heavily fund are goals that can and should be regulated when they cause actual harm to humans. Going "viral" can be a good thing, but when it destroys lives, the platforms that make that happen should bear some level of responsibility and liability for that. That's what feels to me to be the biggest loss of today's internet. It took real dollars to run a smear campaign on someone 20-30 years ago...
Further, over time, many authorities have had to be repeatedly reminded that User-Agents with UUIDs != the users themselves, and every attempt by technologists to cram more UUIDs onto more and more closely held technology, with ever more ubiquitous and trending-toward-always-on data streams, just makes this threat harder and harder to play down.
You don't need to serialize and track everything. We need to stop doing it. This is also why the systemd machine-id was a step in the wrong direction.
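To make the machine-id complaint concrete: unlike the root-only SMBIOS UUID, /etc/machine-id is world-readable by default, so any unprivileged process gets a stable fingerprint for free. A minimal sketch:

    /* Sketch: systemd's machine ID needs no privileges at all to read,
       making it a ready-made cross-application fingerprint. */
    #include <stdio.h>

    int main(void) {
        char id[64];
        FILE *f = fopen("/etc/machine-id", "r");
        if (f && fgets(id, sizeof id, f)) printf("machine-id: %s", id);
        if (f) fclose(f);
        return 0;
    }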