Should be easy to compare: sand down both the cheapo and a legit one (with a similar date code) with fine sandpaper, clean with isopropyl, and shove them under an optical microscope. You can then tell how similar or dissimilar the dies are; you don't even have to look for markings or anything super legible, or even sand them down to the same metal layer.
Clones tend to be vastly different - different technology node, architecture, die size, etc. - that's because they are generally functional clones, not mask clones.
I have done this a bunch of times, mostly to resolve sourcing disputes. Dissolving in boiling sulphuric acid/nitric acid will make quick work of most epoxy packaging.
IMO, I have mostly seen mislabeling, rebinning, and passing off obvious QC rejects.
> Dissolving in boiling sulphuric acid/nitric acid will make quick work of most epoxy packaging
That's the better method of course (results wise), but it's not nearly as accessible, hence my recent evangelism of the virtues of 2000 grit sandpaper.
I prefer Japanese sharpening stones or those DMT diamond whetstones. It’s relatively easy to 3D print a jig that converts a woodworking honing guide into an IC holder, and you quickly get a feel for how many passes to do before slowing down and checking each pass.
I like it. I've used sandpaper to solve some interesting problems in the past as well with great success. I suspect we are only seeing the beginnings here in a trend of high-tech applications of fine grit paper.
I expect it's much easier to acquire fine sandpaper, yet my inner child yearns for laser decapping. (Or I suppose more than "decapping", depending on the depth.)
High power lasers are becoming more and more affordable. Laser ablation is definitely an option but you really want to have some proper fume extraction around that.
I've heard that boiling epoxy in molten colophony for a few hours, like a long-cooked version of rosin potatoes, will also dissolve it. I haven't tried it myself. Colophony is much more expensive than sulfuric or nitric acid, but safer to handle (when not molten) and often easier to get.
These slides from the talk say it only takes 5–20 minutes. I'm not sure it will smell better when you heat epoxy to the specified 320–360°. I'm interested in hearing whether anyone else has tried this.
There was a sandpaper expert in that company associated with sour-tasting fruit which shall not be named. I believe you guys have met, though I think by that time he already pivoted to making the perfect cheesecake.
It may not even be a clone. As the author said, it could just as easily be production line items that were tested and found to be outside of spec tolerances and resold off-label.
Yes, though depending on the product line it may be binned into lower quality parts first. Remember though, the marginal cost of these chips is so small the packaging is usually more expensive than the silicon. They often get tested before the wafer is even sliced up so any waste is hardly worth mentioning until you get into modern processes with large error rates and huge chips like CPUs.
TI isn't patterning $10,000 wafers at 3 nm with massive chiplets and questionable yield; they are making mostly commodity ICs on cheaper processes, so they can afford to discard rejects.
Not only that: leakage goes way up as you shrink the transistors, so an ADC on a 3 nm process would require a complete redesign, if it could be made to work at all.
> These are cheap, relatively fast, and not particularly good. While they sport impressive-sounding 12- and 16-bit readouts, the effective number of bits (ENOB) is usually around 8 or 9.
I don't think that's quite accurate for reasonably modern MCUs. You can typically shake 10+ bits out of them, but you need to take a lot of precautions, such as providing a very stable external reference voltage and shutting down unneeded subsystems of the chip.
They're still not as good as standalone ADCs, but they're at a point where you can actually use them for 90% of things that require an ADC.
In cases where you need more bits, there's a lot more that must go into the design, which is what gives me pause about the article. There's nothing about the PSU the author is using or how he managed the MCU noise and RFI. So I don't know if the findings here are that these are knock-off devices with worse specs, or if his overhead LED lamp is causing a lot of interference.
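(For anyone not used to the term: ENOB is usually derived from a sine-wave test, roughly

    ENOB ≈ (SINAD − 1.76 dB) / 6.02 dB

so a "12-bit" converter reporting ~56 dB SINAD is really delivering about 9 effective bits.)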
Hey, you can also arrange 2 GPIO pins in a 1st-order sigma-delta topology with two external resistors, a capacitor, and hysteresis disabled.
With an RP2040 (and an LDO for the supply), using two such channels for pseudo-differential measurement (the second one just tracks threshold noise), I typically obtain 16 ENOB at 1 kHz, and more at DC.
It is critical to avoid any periodic activity on the chip, though. Putting cores to sleep and then waking them up again causes huge spurs; one has to e.g. sleep for random intervals to spread them around. Same with flash. USB can be used; its noise doesn't normally exceed -100 dB for me.
Fun stuff!
PS: I have not tested DC accuracy. One would likely use a channel with reference and hope that GPIOs are well matched. Could be used to e.g. sense CC lines on USB or analog joysticks and other non-critical, low accuracy stuff.
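If the topology is hard to picture, here's a toy software model of the loop (just the principle, not RP2040 code, and the component values are illustrative): the input pin acts as a comparator against its logic threshold, the output pin feeds back through a resistor into the same RC node, and the firmware simply counts ones over a decimation window.

    /* Toy model of a 1st-order sigma-delta loop built from two GPIOs, two
     * resistors and a capacitor. Not real firmware; all values are made up. */
    #include <stdio.h>

    int main(void) {
        const double vin   = 0.37;   /* input as a fraction of full scale      */
        const double alpha = 0.01;   /* how fast the RC node moves per tick    */
        const int    osr   = 4096;   /* oversampling ratio / decimation window */
        double vcap = 0.5;           /* voltage on the capacitor node          */
        int ones = 0;

        for (int i = 0; i < osr; i++) {
            /* "comparator": the input pin sees the cap node against ~mid-rail */
            int feedback = (vcap > 0.5) ? 1 : 0;
            ones += feedback;
            /* RC node is pulled toward the input and away from the feedback */
            vcap += alpha * (vin - feedback);
        }
        printf("vin=%.3f  estimate=%.3f\n", vin, (double)ones / osr);
        return 0;
    }

The density of ones over the window approximates the input; longer windows (or proper sinc filtering instead of plain counting) buy more resolution.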
Can you post any resources for understanding this? I have a basic understanding of circuits.
Is this essentially trading time resolution for voltage resolution? Would just doing an exponentially weighted moving average in firmware achieve the same results?
I think you might both be right: the author may be thinking of lower-cost MCUs only aiming for tolerable ADC performance, while you might be thinking of all MCUs, even higher-cost ones.
The RP2350 has 9.2 ENOB on a 12 bit ADC. Sure, you might be able to decimate multiple samples to get more bits out of them, but the spec sheet supports the author's claim (https://www.raspberrypi.com/documentation/pico-sdk/hardware....). There are even lower cost MCUs like the CH32V003 that have even worse ADC performance.
On the other hand, some MCUs can definitely do 10+ bits, such as the STM32H7 line which gets 13+ ENOB from a 16 bit ADC. This is impressive, but the H7 MCUs are literally an order of magnitude more expensive than the RP2350, so they might not be something the author tinkers with much.
https://www.st.com/resource/en/application_note/dm00628458-g...
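For reference, the usual oversample-and-decimate trick looks roughly like this (a sketch, not any vendor's documented method; adc_read_12bit() stands in for whatever your HAL provides): sum 4^n raw samples and shift right by n, which gains about n bits as long as there's enough noise/dither to exercise the low bits.

    /* Sketch of oversampling + decimation on an MCU ADC. adc_read_12bit()
     * is a hypothetical HAL call returning one raw 12-bit sample. */
    #include <stdint.h>

    extern uint16_t adc_read_12bit(void);    /* placeholder for your HAL */

    uint16_t adc_read_14bit(void) {
        uint32_t acc = 0;
        for (int i = 0; i < 16; i++)         /* 4^2 = 16 samples -> ~2 extra bits */
            acc += adc_read_12bit();
        return (uint16_t)(acc >> 2);         /* 14-bit result, 0..16383 */
    }

It buys resolution, not accuracy; it does nothing for INL/DNL or reference drift.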
I tested a few STM32F103 dev boards, using a Siglent SDM3055 multimeter and Siglent SPD330X power supply.
The chip has a 12-bit SAR ADC. Layout and board design mattered a lot, but even the worst boards had 10 effective bits, and the best one had nearly 12 bits effective.
That was without doing much on the software side, meaning the other peripherals weren't running, aside from a single serial output. On the bad boards the serial traffic affected it, but on the good board very little.
I was planning on using (genuine) ADS1115s for a project but ended up going with an STM32G071 instead. It was cheaper, easier to source at the time (surprisingly), and much more flexible. The newer STM32G/H series ADCs have fewer hardware/software footguns than the old F103 et al.
The paragraph ending with "Compare that with a microcontroller ADC with a fixed 3.3 V range: 9 ENOB steps are ~6 mV" also seems to insinuate that no MCU has an analog reference that's independent from the supply, which just isn't true at all. Hell, NXP has a few that have a built-in programmable reference.
> So what’s going on with the cheap parts? My best guess is that these are either quite good copies, or failed parts that somehow made their way into the hobbyist supply chain.
The vast majority of counterfeit chips I've seen were from ghost shifts, but IIRC TI fabs all its analog parts in-house, so I doubt these are ghost-shift parts or failed QC.
Interesting, most of the counterfeits that affect me (eg. FTDI, STM32 clones) have been just straight up clones developed from scratch, not excess inventory / ghost shifts / packaged rejects. I guess it might be a digital/mixed-signal split, with the two worlds having different issues?
(also interestingly the STM32 clones I've seen had stacked die flash because they didn't fab them in a technology that could also do flash, so you can easily tell the counterfeit from sanding down the package and looking for an extra set of bonding wires; it's also a cool place to access the internal flash bus if you wanna bypass some readout protection :) )
I remember the mess with FTDI clones back when I was still a hobbyist and buying stuff from eBay, but ever since I’ve started doing EE professionally I rarely run into anything that bad. You’re not going to make a clone Marvell processor for example, but I’ve run into several ghost shift runs from a distributor.
I don’t usually buy from electronics markets in Shenzhen either so that probably helps.
Buy in bulk from the Shenzhen markets and sellers will be pretty clear that you're getting a clone, and will give you samples of that specific clone so you can QA your product with them. (Some popular devices have multiple clone suppliers).
I now always buy clones where possible - whilst not all features are implemented and some specifications won't be met, the devices seem to match the original for reliability, and sometimes even come with their own cloned modded datasheet.
Author here. I did consider this, as others have reported getting ADS1015 marked as ADS1115. If it were an ADS1015 the readout would be truncated at 12 bits. These parts definitely delivered 16 bits of readout.
I worry about the demo boards being radically different in terms of layout etc. Even if you're using the same interface and power supply, the PCB may be affecting performance.
Getting full spec performance out of an ADC requires good layout, power supply routing, etc.
I would transplant the chips from PCB A to PCB B and vice versa. See if the performance follows the chip or the PCB.
Also check power consumption before / after board swaps. If they are fakes, that would be significantly different.
Analogy's datasheet is directly cribbed from TI's (see TI Fig. 7-7 / Analogy Fig. 22½, pg. 18).
This already passes my "run away screaming" threshold for trust, but a decapping would help me understand whether they've stolen the physical design (bad) or just cloned it (bad).
"Stealing" the physical design is legal after 10 years in the US (https://en.wikipedia.org/wiki/Integrated_circuit_layout_desi...), or 20 years if there's a patent, and that's how we avoided having a single giant company own everything and halt innovation forever in the First and Second Industrial Revolutions. So I would say that stealing the physical design is good.
Apparently you haven't seen what the datasheets of 7805 regulators look like; or to use a far more complex example, LCD controllers. I do find it amusing that AnalogySemi's datasheet uses basically the same fonts as Linear Technology's, but copy-paste is the norm in the industry, and many companies will require that parts in their design always have a second-source manufacturer, if not more.
What's wrong with cloning a chip functionality-wise? This is basically how the industry has operated since its infancy, and what gave us jelly bean logic parts and transistors, x86 and the PC revolution, ...
(just talking about the cloning part here, not counterfeit markings or datasheet copyright infringement, or copying mask work)
If it's an open clone that can be reasonably distinguished from the software side and by looking at the part, and it doesn't violate IP laws other than software patents, no biggie.
Every clone of any sufficiently complex Thing will have subtle quirks and edge cases compared to the original and as long as I can work around them for only that specific clone model, that's easy.
But clones with no way of determining that the part is a clone? It's bad for those to even exist, because unscrupulous actors will go and repackage "legitimate clone" chips as faked originals if the price difference is big enough.
There's nothing wrong with an open clone when everybody is acting in good faith. (In fact, "good faith" does not even necessarily mean "according to the letter of the law in $jurisdiction". Sometimes the law is an ass.)
However, there's nothing more toxic to an OEM than a vendor relationship founded on dishonesty. I know I shouldn't trust them, they know it too, and even if it seems advantageous at first I shouldn't be surprised when they turn on me.
Since these parts are being sold as genuine TI parts, I don't care whether the clone is physically faithful or just functionally faithful - I should treat it like it's poisonous.
I'd like to know more about the world of ADCs. I've used the ADS1115 with success but only at very slow speeds.
On the current project we started with an MCP3208 via SPI. It did the job but only has 8 channels and it's slow (100K samples per sec).
To get something faster we switched to ADS7953. It has 16 channels and runs 10 times faster. It's somewhat more complex to code, and you can only get the highest sample rate if you scan the inputs in a predictable order. But it sure flies.
To me, these chips feel like cars. The ADS7953 is something of a Ferrari, whereas the MCP3208 feels like a Toyota: simple to use, unimpressive performance.
I'd love to know the industry background about how these varieties of ADC chips came to be and carved their own space in the world, and how widely they are used (millions? billions?).
> To get something faster we switched to ADS7953. It has 16 channels and runs 10 times faster.
I recall reading about a project at CERN to design a 12-bit ADC chip that could sample at tens of GHz, maybe 50 or more.
I was perplexed at how they could achieve this.
Turned out it was the same thing we programmers do: parallel processing.
They had taken a 12-bit SAR unit that ran at MHz-ish rates and just cloned it many times. They then had a large analog multiplexer in front to route the signal to the active ADC unit in a round-robin fashion.
That takes a lot of chip real estate, and the analog mux had to be carefully designed.
For a simpler approach to speed there are flash ADCs [1], which kinda brute-force it.
For precision I know multi-slope ADCs[2] are often used.
Sadly I don't know much about the history, and would also love to learn more about it. Bound to be some fascinating stories there.
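A toy sketch of the interleaving idea, in case it helps (the names and the sub-converter call are made up; in real parts the mux timing and the gain/offset/skew matching between sub-ADCs is where all the hard work goes):

    /* Time-interleaved ("round-robin") ADC: N slow sub-converters, each
     * fired one aggregate-clock period apart, give an overall rate of
     * N * f_sub. sub_adc_convert() is a hypothetical placeholder. */
    #include <stdint.h>
    #include <stddef.h>

    #define N_SUB 64                               /* number of interleaved units */

    extern uint16_t sub_adc_convert(int unit);     /* one conversion on one unit  */

    void capture(uint16_t *buf, size_t n) {
        for (size_t k = 0; k < n; k++)
            buf[k] = sub_adc_convert((int)(k % N_SUB));  /* each unit fires every N_SUB ticks */
    }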
You can buy ADCs at over 100 GS/s (Keysight, Teledyne, and Tektronix make oscilloscopes using them), however typical ENOBs are more around 5 bits for these. For people interested in this stuff, there is a video of someone taking apart one of the high-speed Keysight scopes (I think The Signal Path is the YouTube channel?).
What's challenging in scope design is that you need to protect the ADC, and often the user's life!
So the scope has a 1 MΩ || 15 pF input impedance, and you need to buffer it. You first attenuate the signal by tapping this input impedance, e.g. at 1/10, then selectively boost it back up for the ADC.
Or you selectively tap it at different ratios.
In any case, you have to protect whatever there is after the tap(s) by diodes that inherently bring parasitic capacitance.
Some scopes avoid expensive buffer ICs and go with split DC path (with gain) using opamp and then AC path (with gain) using e.g. JFET and BJT RF amps and combine those later.
The whole path from input to ADC must have flat frequency response in both magnitude and phase on all gain settings. This is non-trivial, especially with split DC/AC paths.
Sure, the HMCAD ADC series helps immensely nowadays with its built-in gain, but you still have to give it something flat to digitize, as it outputs 8-bit streams and thus you won't be able to "fix it up digitally".
And then you also have to be able to inject bias to move the signal up/down.
And some scopes now can toggle between this and just 50 ohm impedance.
So, yeah, it's kinda non-trivial to condition signal somewhere between millivolts and mains to get to the ADC safely.
That's a weird argument. System integration is a significant amount of work. Getting any data in and out of these high-speed DACs/ADCs requires significant engineering hours (several months to years) from RF, FPGA, and systems engineers. These scopes typically have some pretty hefty FPGAs in them as well. All that comes at a significant cost (and those ADCs are not cheap either).
It's sort of like saying CPUs are where all the hard work in computers is, so they should get all the money (not the motherboards, the GPU, and especially not software). That's just not how the economics of these things go.
Keysight, at least, has a fab where they make their own ADCs. Those are something like 6 ENOB, 10-bit raw, up to 120 GHz, and are used in their oscilloscopes but can also be purchased standalone.
Oscilloscopes also have a significant amount of additional front end conditioning, probe control, channel timing, and analysis software built into them. Most of the math functions on oscilloscopes use custom ASICs that work off the raw bits coming from the 120GHz digitizer which is non-trivial even just to receive. Calling it a plastic case around a digitizer is disingenuous.
> The ADS7953 is somewhat of a ferrari, whereas the MCP3208 feels like a Toyota, simple to use, unimpressive performance.
What about the AD9226? It only has a single channel but can do up to 65 MSa/s at 12 bits. I bought one as a module for around $12 on AliExpress to experiment with software decoding of analog video. I only run it at 20 MSa/s and only use 8 bits because, funnily enough, the limiting factor is the speed at which I could get the data into my laptop. I connected it to a Raspberry Pi Zero and use the SMI peripheral as described here: https://iosoft.blog/2020/07/16/raspberry-pi-smi/
Hah, damn. We live in different worlds - in mine, 100K samples/sec is blazingly fast!
I'm currently working on a PLC program, replacing the PLC's basic cyclic input sampling (max 2K samples/sec) with a harder-to-use mechanism that lets you access the raw data off its 12-bit ADC at 10K samples/sec, which we consider unusually speedy.
Ultra-high-speed ADCs are extremely ugly to handle with microcontrollers. That's mostly FPGA territory: toggle the ADC's pins, suck the data out of it, and offer it on a friendly parallel bus for a microcontroller (or a USB-to-parallel interface from FTDI).
They need a lot of pins to be toggled. Otherwise they spit out no data.
And a lot of manual stuff means it's super DMA unfriendly. And you need DMA for high-speed stuff.
> These are cheap, relatively fast, and not particularly good. While they sport impressive-sounding 12- and 16-bit readouts, the effective number of bits (ENOB) is usually around 8 or 9.
The edge case of the 1-bit conversion scheme used in SACD format is compelling from a few perspectives. The idea is to run the sampling rate in the megahertz region. SACD achieves 120dB of dynamic range with an extended frequency response up to ~100kHz. CD audio only achieves 96dB of range up to 20kHz with its 16-bit PCM scheme. From the analog hardware complexity standpoint, a bitstream converter is much simpler than a multi-bit converter. The 16-bit ADC might be cheaper due to the insane manufacturing volumes.
Trading bit depth for sample rate is a very compelling offer in many cases. The 3d graphics version of this is SSAA where you sample more pixels than your monitor needs in order to resolve higher frequency information.
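Rough numbers for the trade, textbook idealizations rather than SACD's actual modulator:

    ideal N-bit PCM:           SNR ≈ 6.02·N + 1.76 dB  (16 bits -> ~98 dB)
    plain oversampling:        +3 dB (~0.5 bit) per doubling of the rate
    1st-order noise shaping:   +9 dB (~1.5 bits) per doubling
    2nd-order noise shaping:   +15 dB (~2.5 bits) per doubling

so a 1-bit modulator clocked in the MHz range with a high-order noise shaper can plausibly reach ~120 dB in a 20 kHz band, which is exactly the SACD trick.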
$3 ADCs are not cheap ADCs. Cheap ADCs are the ones embedded inside cheap MCUs. These days, TI sells cheap ADCs/DACs that are artificially limited to 10/12 bits but have linearity better than 16-bit. The cost of producing bad ones versus good ones is not much different; they go through the same semiconductor process. So long as the fundamental design is sound, cheap ADCs don't perform any worse.
The price difference could be explained by LCSC purchasing in bulk directly from TI or similar and selling them at lower margins because their volumes are larger. I've seen "clone" chips sold at LCSC, but they're listed under a different brand (I can't recall one off memory unfortunately).
Why would it be bad for that price? Highly accurate ADCs are pretty easy to make, provided you don't need a high sample rate, thanks to the magic of signal processing. Delta-sigma ADCs and ramp ADCs can use a single bit of digital input plus some pattern-hold circuitry to do incredibly accurate measurements; it's just that they take some time.
If you want a flash ADC that can do 16 bits (and do it at 100 MHz), however, you'll probably have to mortgage your house.
Most companies have region-specific price lists. My Spotify subscription costs about 60% of what the same plan costs in the US (2200 HUF vs. 11 USD). In the electronics industry, everyone has a separate price list for China.
Btw. the western list price is just an indicative at-most number anyway. Even a small-sized project gets discounted prices when you start talking to a sales rep.
Well, I guess a whole-three-genuine-US-dollars is actually pretty expensive for an ADC, and that the person-in-charge-of-your-BOM in one of the countries that can actually still manufacture things can get one for way less than that.
Does it work? Well, does your design power up during factory testing, and then pass whatever things your rig (hope you made a few!) has in mind? Well, then, yes, in fact it does...
These numbers add up fast when you have dozens or hundreds of components on your board. A $3 part is often one of the most expensive items on a board! If you're trying to get something shipped to consumers for $20 each, with an enclosure, packaging, shipping, retail markup, and profit...that's a huge price disparity.
Also, and perhaps more importantly, the test rig is a lot simpler and a lot cheaper if you can generally trust manufacturer data. Sure, send off a few samples (likely prototypes with parts from Digikey instead of LCSC) to run extended testing in an environmental chamber with thermal imaging, build an endurance test rig that pushes the button once a second for four weeks to simulate once-daily use for years, whatever you want to do...but after that, if TI says it's good from -40 to +125, you're going to trust them on a lot of the edge cases.
Do 100% testing of the things you can test in-circuit if you can - power it up at room temperature and make sure it works once - but that doesn't mean you get the actual rated performance across all published environmental conditions.
Ooh, this is interesting! I've seen some big price differences between Digikey and LCSC at small volumes; not sure why! You can also sometimes buy direct from TI.
Of interest from early in the article: I'm curious how these external ones compare to onboard ADCs, e.g. the STM32's. Btw, the TI one listed is actually pretty simple to use in comparison. The ST integrated ones have more config and hardware considerations, e.g. complicated calibration procedures, external VREF (mentioned), etc. So, if you do apply all the config, is the integrated one as good?
The integrated ones usually have nice ways to integrate with timers and other onboard periphs.
I thought I sucked at debouncing until I tried a rotary encoder that wasn’t like 10 for $4 on aliexpress. Lesson learned. Then I found out they sell a lot of decent related stuff at Micro center.
Digikey's markup is the issue... most people in Asia buy from LCSC or agents that can get parts 2-10 times cheaper. In this case both parts are probably genuine, just a different batch/year/production location.
LCSC is a grey market distributor whose sources of supply are of untraceable, dubious provenance. They are neither an ECIA member nor a participating distributor.
> Single cycle readings defeat the point of sigma delta ADC setups.
The chip's internal delta-sigma modulator takes a lot of samples at a much higher modulation frequency and presents them as a single output value.
You do not get the direct delta-sigma output from an ADC like this. The internal logic handles that for you. It's okay to take single samples of the output.
OP is using the chip with the data rate set to 8 samples per second.
Its maximum output data rate is 860 samples per second, and you can configure it to provide data at a lower rate with lower noise by letting it average more readings together internally.
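For concreteness, the data-rate selection is just three bits in the config register. A rough sketch of what that looks like over I2C follows; the register values are from memory, so verify them against the datasheet, and i2c_write_reg16() is a stand-in for whatever your platform provides.

    /* Set an ADS1115 to its slowest (lowest-noise) data rate, 8 SPS.
     * Register map from memory; double-check the datasheet before use. */
    #include <stdint.h>

    #define ADS1115_ADDR  0x48      /* ADDR pin tied to GND   */
    #define REG_CONFIG    0x01      /* 16-bit config register */

    extern void i2c_write_reg16(uint8_t addr, uint8_t reg, uint16_t value);  /* hypothetical */

    void ads1115_set_8sps(void) {
        uint16_t cfg = 0x8583;      /* datasheet default: 128 SPS, single-shot, +/-2.048 V */
        cfg &= ~(0x7u << 5);        /* clear DR[2:0] (bits 7:5) -> 000 = 8 SPS */
        i2c_write_reg16(ADS1115_ADDR, REG_CONFIG, cfg);
    }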
There are a bunch of reasons, but the primary one is that good ADCs are made using a different mixed-signal process than microcontrollers. MCU ADCs are typically of the capacitive charge-balancing successive-approximation type, which limits their sensitivity and precision.
Standalone ADCs also eliminate significant sources of noise like temperature fluctuations and electronic noise (the digital logic on the chip often runs at less than 1 MHz, for example).
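For anyone who hasn't seen how a SAR converter works, it's literally a binary search against a comparator, one bit per clock. A toy sketch (input_is_above() is a made-up stand-in for the real capacitor DAC plus comparator):

    /* Toy model of a 12-bit successive-approximation (SAR) conversion.
     * input_is_above() is hypothetical: "is the input above the level the
     * internal DAC produces for this trial code?" */
    #include <stdint.h>
    #include <stdbool.h>

    extern bool input_is_above(uint16_t dac_code);

    uint16_t sar_convert_12bit(void) {
        uint16_t code = 0;
        for (int bit = 11; bit >= 0; bit--) {
            code |= (uint16_t)(1u << bit);      /* tentatively set this bit */
            if (!input_is_above(code))
                code &= (uint16_t)~(1u << bit); /* overshot, clear it again */
        }
        return code;
    }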
Not sure I'd call the SiLabs chips hyper exotic. The SiWG917 is intended as a direct competitor to ESP32s. It's a bit more expensive, but not unexpectedly so.
I'm not sure about WiFi, but JieLi (JL) definitely has a huge marketshare for single-chip BLE/BT. They are the origin of the infamous "the Bluetooth device is ready to pair" stock prompt voice.
If these are genuine TI parts, I highly suspect the issue to be a measurement error by the user. Whether it's $0.60 or $4 does not matter, that's due to the order size as mentioned in the article.
For regular readers of this blog, that would be like defining USB for a general tech audience. Other articles by this author expect the reader to also know terms like I2C and SPI.
Well, if you're trying to measure DC extremely accurately, the old stuff is still golden. Multi-slope is the gold standard, and no one will sell you one. Delta-sigmas get most of the way there.
The delta-sigma is conceptually very similar and gets you most of the way there for DC performance, but can be used for sample rates in excess of 100 kSamp/s. Multi-slopes also need large passive values that make an integrated solution impossible. Keysight will still sell you their 3458A.
Also, the AD7177-2 exists. That might be evidence enough for delta-sigma's victory.
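For anyone wondering why the integrating converters are so good at DC: in the classic dual-slope scheme (which multi-slope refines), you integrate the unknown input for a fixed number of clock counts N1, then integrate the known reference the other way until the integrator crosses zero, counting N2 clocks along the way. In the ideal case

    (Vin / RC) · N1 · Tclk = (Vref / RC) · N2 · Tclk   =>   Vin = Vref · (N2 / N1)

so R, C, and the clock frequency all cancel; accuracy rides almost entirely on the reference and the integrator's linearity, at the cost of each reading taking many clock periods. (A sketch of the principle, not the 3458A's actual algorithm.)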
Err, the title isn't really correct. The genuine part is still only $4. This is a 60-cent knockoff. Not having done design in a while, but still having a rough feel for costs, I was a bit confused.
Honestly: none of the bigger players in the industry pay the prices you see on Digikey/Mouser/Farnell (with the exception maybe of prototyping stuff). Often you have direct delivery contracts with the vendors.
That the ADS1115 costs <$1 on LCSC means they buy millions from them every year. They are one of the biggest trustworthy players in Asia.
I have access to our internal STM32 pricing. You'd be shocked.
We buy low end STM32s in 10-100k quantities and pay shockingly low prices as you noted and that's through official channels. One of our other suppliers offered us some "compatible" parts that he "might be able to find" for about a quarter of the cost, but we declined.
Yep people are mostly paying for the added inventory risk and labor when buying from distributors. Anyone with a big enough order to have their own wafer and the time to wait can get it a hell of a lot cheaper, especially for rarer parts that aren’t as popular.
One supplier I developed a relationship with showed us their internal numbers and it was $1,000-3,000 per wafer for 130nm-180nm nodes with a minimum order of 25 wafers. Once the part is designed and the mask is made, the cost is mostly just the setup plus whatever they want for the IP. The silicon itself is often cheaper than the packaging around it.
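Back-of-the-envelope, with a made-up but plausible die size, so treat it as illustration only: a 200 mm wafer has roughly 31,000 mm² of area, so a ~2 mm² analog die gives on the order of 15,000 gross dies per wafer. At $2,000 per wafer that's about $0.13 of silicon per part before yield loss, test, and packaging, which is why the package and test time usually dominate the cost.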
I recalled that a few years ago there were shockingly inexpensive MCUs at LCSC and similar Far East vendors, but they were from names I'd never heard before, so I just checked again out of curiosity, and here's a $0.05 one by Cypress.
They are a huge step ahead: the upcoming CH32H417 has pretty much all PHYs integrated (!) for 10/100M Ethernet, USB-C 5 Gbps, and USB HS 480 Mbps. That dramatically reduces the components needed to get that stuff running.
I also built a small robot with the ultra-cheap CH32V003. That's a full-fledged 48 MHz microcontroller with 16 KB flash and 2 KB SRAM. Fun little thing.
If you are used to the ST HAL you will be able to work with them within 10 minutes. Their API style is similar.
That's out of stock, though. Here are my other shockingly inexpensive MCUs at LCSC that were in stock at the time:
links from 02025-03-16:
https://www.lcsc.com/product-detail/Microcontrollers-MCU-MPU... #LCSC #pricing for 16-pin #CH32V003 #microcontrollers #electronics #hardware: 14.21¢ in quantity 500. Not the cheapest, but it’s a RISC-V with IIRC an on-board op-amp, though the linked datasheet only covers the CPU really.
https://www.lcsc.com/product-detail/Microcontrollers-MCU-MPU... new cheapest #microcontrollers: Cypress CY8C4045FNI-DS400T, 1.5¢ in quantity 1000. It purports to be a 48MHz Cortex-M0 ARM with 32KiB of Flash and 4KiB of SRAM in a 1.6mm x 2mm package, 0.31-0.37 mm thick, with nine GPIO pins. I think that’s 4000 Dhrystone MIPS per dollar. #electronics #hardware #pricing
As of today (02025-10-15), the CH32V003A4M6, the PY32F002BD15S6TU, the PFS122, and the PMC251-S14 are in stock. The CY8C4045FNI-DS400T, the CY8C4013SXI-400, and the NY8A051H (not an 8051 despite the name) are out of stock. Today what I see are:
https://www.lcsc.com/product-detail/C49173938.html Currently cheapest #microcontrollers #pricing on LCSC: PMS150G-U06 (a SOT-23-6) for 2.02¢ in quantity 3000. #Padauk has broken the 3¢ barrier! One-time programmable. #electronics #hardware
https://www.lcsc.com/product-detail/C2857167.html Almost cheapest #microcontrollers #pricing on LCSC: NY8A054ES8, a SOP-8, for 3.91¢ in quantity 1000. This is from JSMSEMI (Shenzhen JSMicro Semiconductor Co., Ltd.) 1T 8-bit 8MHz chip, 128 bytes RAM, 2k × 14-bit EPROM program memory (but no quartz window, so one-time programmable). 8-level hardware stack. The datasheet is full of descriptions of peripheral registers and pin circuit schematics, and then the 55-instruction instruction set is documented on pp. 76–91.
(also, as a general shoutout to the low tech sandpaper technique for exploratory work, here's a sanded down RP2350 thrown under a clapped out SEM: https://object.ceph-eu.hswaw.net/q3k-personal/484e7b33dbdbd9... https://object.ceph-eu.hswaw.net/q3k-personal/3290eef9b6b9ad... )