Thanks for another interesting reverse-engineering case.
This made me remember the first project I worked on after graduating. It was at a metalworking plant, at a huge rolling mill for aluminum sheet.
The controller for that system could be implemented today in a small microcontroller or an FPGA, but in that old installation it occupied three cabinets. Two cabinets contained an analog computer built from operational amplifiers, which controlled the speed and torque of all the very large DC motors used in the rolling mill.
The third cabinet contained a one-bit programmable logic controller (PLC), which was very similar to the Motorola MC14500B.
It was even older than the 14500B, so it was built with TTL gates. In the next generation of that PLC, however, the large PCB full of TTL integrated circuits was replaced by a simpler board with a 14500B and other CMOS ICs.
So I had to write a program for that 14500B-like programmable logic controller. It read data from a large number of sensors and then switched on or off nearly a hundred hydraulic valves, which moved various parts of the machinery, as well as the electrical power supplies for devices such as pumps and fans.
Writing programs for it was far less convenient than it would be to do the same thing today with an ARM MCU, but nonetheless it did the required job without problems.
Hi Ken, I’m interested in the acknowledgements section:
> Thanks to David of Usagi Electric for driving the MC14500B analysis project and thanks to John McMaster for decapping the chips and creating the MC14500B images
How/where are projects like this initially organized and then driven to completion?
I’d like to answer the question “what went wrong with the ROR instruction in the first batch of 6502s,” but don’t have the resources to do that entire project myself.
With regard to the ROR 'bug', Bill Mensch (who did circuit design on the 6502) relates that it wasn't so much a 'bug' as that Chuck Peddle left it off the specification, so the early 6502s were missing ROR as designed. However, they found a way to add it soon after the first version was released.
https://youtu.be/sPaAnbkhgAE?t=1049
With that in mind, the early ROR behavior should probably be considered, like other illegal opcodes, as just a side effect of how the decode worked.
It would still be fun to look at the early 6502 and contrast it with the later version to see the changes that were made and how much work was involved.
It’s very suspicious to me that it was “not specified” but the opcode list has holes exactly where you would expect the ROR instructions to go. I want to see the ALU and the decode PLA to figure out the truth about what was taped out. Did a team of dozens really forget to add ROR but include LSR, ASL, and ROL? Or did they make a mistake in the mask?
"organized" is probably an overstatement for this project :-) David wanted to find out about the internals of the MC14500B, so he got a couple of them on eBay and sent them to John for decapping (https://siliconpr0n.org/archive/doku.php?id=mcmaster:start#c...). Since I've been interested in this chip for a while, I studied the die photos and figured out how it works and wrote this blog post.
The 6502 ROR bug sounds like an interesting puzzle. As far as I know, there's nothing conclusive on it. The last I heard is https://www.pagetable.com/?p=406
Maybe I overlooked it, but I missed a mention of ladder logic [1], which was often used in process control devices (even in ones for homes; I've seen some of those still in use).
There's an obvious translation from ladder logic to an MC14500B program, so it's easy to replace those devices with a simpler MC14500B device.
I even think I've seen ladder logic mentioned in some of the MC14500B documentation (but it's been some time since I looked at that).
I originally mentioned ladder logic briefly, but I figured nobody would be familiar with it, so I cut it out. The documentation discusses ladder logic a lot (as you remembered), and how to convert from ladder logic to the MC14500B:
http://www.bitsavers.org/components/motorola/14500/MC14500B_...
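As a rough illustration of that conversion (my own sketch, not an example from the manual): a ladder rung that energizes a coil when contacts A and B are both closed, or contact C is closed, becomes a short load/AND/OR/store sequence. The toy Python interpreter below mimics a handful of MC14500B-style instructions, with the single 1-bit result register as the only state; the signal names and the rung are made up for the example.

    # Toy interpreter for a few MC14500B-style instructions (illustrative only).
    def run(program, inputs):
        outputs = {}
        rr = 0  # the chip's single 1-bit result register
        for op, operand in program:
            if op == "LD":       # load an input into RR
                rr = inputs[operand]
            elif op == "AND":    # RR = RR AND input
                rr &= inputs[operand]
            elif op == "OR":     # RR = RR OR input
                rr |= inputs[operand]
            elif op == "STO":    # store RR to an output
                outputs[operand] = rr
        return outputs

    # Ladder rung:  --[A]--[B]--+--( COIL )--
    #                           |
    #               --[C]-------+
    rung = [("LD", "A"), ("AND", "B"), ("OR", "C"), ("STO", "COIL")]
    print(run(rung, {"A": 1, "B": 0, "C": 1}))  # -> {'COIL': 1}

Each rung becomes a straight-line sequence of 1-bit operations, which is what makes the translation so mechanical.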
My father and his colleagues were still using ladder logic at the paper mill they worked at as of the nineties. A 2016 article on PLCs claimed that ladder logic still dominated...
Great choice of subject matter, as usual, Ken! May I direct your attention to a one-bit machine [1] that's already something of a favorite on HN, posted for discussion in 2014 [2], 2016 [3] and 2019 [4].
"This is not a Motorola MC14500 computer, but it was the MC14500 that introduced me to the idea of one-bit computing. Exploring ways to reduce the chip count, a series of pencil & paper designs revealed the MC14500 itself could be omitted!"
This is another great post! I had a question about the following:
>"Another key circuit in the processor is the transmission gate. This acts as a switch, either passing a signal through or blocking it."
This is the first time I think I've come across a "transmission gate" circuit in one of your posts. Is this uncommon then? What is the actual input to the transmission gate?
The transmission gate is pretty common in microprocessors. In NMOS circuits, it's a single transistor called a "pass transistor", but in CMOS it uses two transistors and is usually called a transmission gate.
It takes a logic signal as input, as well as a control signal. If the control signal is 1, the logic signal goes through to the output. If the control signal is 0, the transmission gate is disconnected. You can think of it like a relay-controlled switch, or a tri-state buffer.
The nice thing about MOS circuits is the gate resistance is almost infinite, so if you open a pass transistor, a gate on the output side will keep the old value (for a few milliseconds at least). So you can create latches almost for free. This is used very often in microprocessors. The disadvantage is the chip has a minimum clock speed, or else the data will leak away.
Pass transistors / transmission gates can also be used to implement multiplexers, selecting one of the inputs.
A disadvantage compared to regular logic gates is that a logic gate amplifies the input signal, while a pass transistor weakens the input signal. So you usually can't connect two pass transistor circuits together directly.
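To make the multiplexer idea concrete, here's a tiny behavioral model (my own sketch, nothing at the transistor level): each transmission gate passes its input only while its control is 1, and two gates with complementary controls driving one shared wire form a 2:1 multiplexer. If neither gate is enabled, the node simply floats, modeled here as None.

    # Behavioral sketch of a 2:1 mux built from two transmission gates.
    def transmission_gate(signal, control):
        # Pass the signal through when control is 1; otherwise don't drive the wire.
        return signal if control else None

    def mux2(a, b, select):
        # Two gates with complementary controls share one output node.
        drives = [transmission_gate(a, not select), transmission_gate(b, select)]
        driven = [d for d in drives if d is not None]
        return driven[0] if driven else None  # None = floating (undriven) node

    print(mux2(0, 1, select=1))  # -> 1, the 'b' path is switched through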
Interesting, so it has different names in different contexts (NMOS vs CMOS). I'm familiar with the "pass transistor" nomenclature and have read about that in at least one of your previous posts. Good to know. Cheers.
When I was in college we had lots of TTL breadboard labs to make various types of digital circuits. I wonder if something like this might be a good part of a curriculum that builds up to making a full processor?
Possibly, but I suspect this chip would probably be more of a tangent than something on the path to a full processor. Also, a 1-bit processor isn't very exciting as far as applications, compared to even a 4-bit processor.
The Connection Machine had a very unusual architecture that was sort of 1-bit, but sort of 32-bit. It was a massively parallel computer of the 1980s with 16,384 processing elements. Each processing element handled one bit at a time. But usually each processing element performed arithmetic on a 32-bit value, in a bit-serial fashion.
You can call this a 1-bit processing element, but I think calling it a 32-bit serial processor is more descriptive. Processing data serially using a 1-bit ALU was not uncommon, from the early EDSAC computer to the PDP-8/S minicomputer to the Datapoint 2200 desktop computer, but these are not considered 1-bit computers. The Connection Machine was more flexible with word size than these, so calling it a 32-bit computer isn't quite accurate either.
In any case, the MC14500B didn't have any support for bit-serial operations. (For instance, you want the processor to add the carry from one bit to the next bit to do addition.) Arithmetic was possible on the MC14500B (Turing machine and so forth), but it was very slow, taking 12 instructions per bit to manipulate the sum and carry. The documentation recommended using an external chip if you needed to do arithmetic.
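To show what bit-serial arithmetic looks like, here's a sketch of the idea in Python (not actual MC14500B code): addition is done one bit position at a time, with the carry from each position fed into the next.

    # Bit-serial addition using only 1-bit Boolean operations per step.
    # (Illustrative sketch; bit lists are little-endian, least significant bit first.)
    def serial_add(a_bits, b_bits):
        carry = 0
        out = []
        for a, b in zip(a_bits, b_bits):
            out.append(a ^ b ^ carry)                    # sum bit for this position
            carry = (a & b) | (a & carry) | (b & carry)  # carry into the next position
        out.append(carry)
        return out

    # 6 + 3 = 9
    print(serial_add([0, 1, 1, 0], [1, 1, 0, 0]))  # -> [1, 0, 0, 1, 0]

On the MC14500B, each of those XORs and AND/ORs costs separate instructions and scratch bits, which is roughly where the dozen-instructions-per-bit figure comes from.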
You mention this briefly, but it's worth pointing out that the CM-2 (and, I believe, the CM-1 as well) was indeed a single-bit-at-a-time processor, but the word length was arbitrary, not 32 bits. You could, in *Lisp (and maybe the low-level calls that C* used, I didn't work with that enough to know) define pretty much any bit length you wanted, up to the number of bits on the processor.
They later implemented a floating-point accelerator that worked with 32 of the 1-bit processors in "slice-wise" mode to do 32 (and possibly 64) bit arithmetic, where the word was spread across all 32 processors.
The CM-5 used true 32-bit SPARC processors.
Also, the CM-2 had up to 128k bits per processor, and you could have 64k of them in one CM-2.
I'm no FPGA expert but I think two problems would be the 4K per-processor memory and the inter-processor communication. You might have routing problems with the Connection Machine's hypercube routing.
Random Connection Machine fact I found on Wikipedia: Maya Lin, who designed the famous Vietnam War memorial in Washington, also designed the exterior of the Connection Machine CM-5.
See my post from above. In industrial control there are a lot of processes with a large number of on/off sensors and actuators.
One-bit programmable controllers like the 14500B were good enough to handle those.
In such old installations, the sensors and actuators that could not be handled by one-bit controllers were handled by analog controllers. In modern installations, by contrast, an MCU handles not only the Boolean variables but also the analog values digitally, through ADCs and DACs, so arithmetic computations are required in addition to the logic operations that something like the 14500B could already perform.
I was early in my career at the time, and remember looking at the datasheet and asking myself the same question. Reflecting back, I think the sibling post that mentions an upgrade of an older discrete TTL PLC system could be the bulk of the target market. So while it was unlikely someone would begin a clean new design with a 1-bit CPU, there were older systems that already had the 1-bit architecture that suited this device for a mid-life re-do.
Besides that, semiconductor vendors have on occasion been known to manufacture a device that has no market success due to poor product/market fit, or because technology bypassed it by the time it was finished. Most devices were originally made as custom designs for some specific customer or application, so perhaps there's a boatload of these in some 1970s car.
> In 1977, the MC14500B cost $7.58 in quantities of 100 ($32 in current dollars), which seems expensive. However, at the time, an 8080A CPU cost $20 and a Z80 cost $50 ($85 and $215 in current dollars) so there was a significant cost saving to the MC14500B.5 However, the steady fall of processor prices soon made the MC14500B less attractive.
Perhaps in 1977 you couldn’t source the 4004 in the quantities you might need to make a product, leaving you only with more expensive contemporary processors?
Yes, cost was an issue. Also complexity; the documentation says:
"Computers and microcomputers may also be used [for control tasks], but they tend to overcomplicate the task and often require highly trained personnel to develop and maintain the system. A simpler device, designed to operate on inputs and outputs one-at-a-time and configured to resemble a relay system, was introduced. These devices became known to the controls industry as Programmable Logic Controllers (PLC). The Motorola MCI4500B Industrial Control Unit (ICU) is the monolithic embodiment of the PLC's central architecture."
I couldn't find data on how popular the MC14500B was, but I think microcontrollers such as the Texas Instruments TMS1000 were much more popular.
Yes, in hindsight with our computer-centric perspective today, it seems like programming the MC14500 is something you'd get CS and programming people to do.
But the user manual has sections on how to translate ladder logic and similar control logic into a program counter circuit and appropriate code, step by step. If someone was familiar with industrial control and basic digital electronics, I think the manual is about all they would need. The same can't be said for most other microcontrollers which are indisputably full computers, with all of their complexity.
The complete schematics of the Xerox Alto are on Bitsavers. I don't think photos would help a lot with generating a PCB, since the chips cover much of the wiring.
A bigger problem with a Xerox Alto replica is the wire-wrapped backplane. It would be a pain to redo that by hand.
For most purposes, you'd be better off using the ContrAlto emulator.
The data path is 1 bit, but instructions are 4 bits, so it has 16 different instructions. Many processors back then had completely different widths for instructions vs data.
Even today, Harvard architectures[a] like the PIC have different instruction and data widths. For example, the PIC12 has 12-bit code words with 8-bit data words IIRC.
[a]: Are there von Neumann architectures with different width code and data?
The Intel iAPX 432 used variable bit length instructions, where instruction lengths could be pretty much anything, completely unaligned with bytes. As you might expect, this was kind of a mess.
The iAPX 432 was designed to be Intel's revolutionary follow-on processor to the 8080 with all sorts of bizarre features such as objects implemented in hardware. However, it fell behind schedule, so Intel created the 8086 as a stopgap processor until the iAPX 432 was ready. Things didn't turn out as expected; the 8086 took over the world and the iAPX 432 was a failure.
Essentially all von Neumann architectures have different sizes for code and data. Von Neumann's own IAS, for example, packed two instructions in each data word.
Even many machines that started out with one instruction per data word (ARM, MIPS, Nova...) eventually evolved to larger data words while keeping the same instruction size. An exception was the PDP-8, but there might have been others.
The IBM 360 had 32-bit data words but instructions were 16, 32, and 48 bits long.
A modern example is RISC-V. An RV64GC chip, for example, has 64-bit data words but the instructions are 32 and 16 bits wide.