The magic of DC-DC voltage conversion (lcamtuf.substack.com)
229 points by zdw on April 22, 2023 | hide | past | favorite | 134 comments



> The deal with linear regulators is that despite what some internet sources might recommend, you probably shouldn’t be using them in your designs.

Yeah, no.

1. Linear regulators do not introduce voltage ripple and by the nature of being variable resistors, they form a nice low-pass filter with the capacitors around it. For some sensitive designs, you use a switching converter to step down voltage to e.g. 5V, filter it and then use LDO to get it down to 3.3V with even more filtering.

2. Linear regulators are dirt cheap. And for e.g. USB devices drawing less than a couple hundred mA @ 5V, using a switching regulator to step it down to 3.3V would be overkill.

Granted, the recent Raspberry Pi Pico (for example) does use a switching regulator (unlike other boards in that form factor), but that has also led to complications with power supply ripple showing up in ADC readings.

But sure, if your application needs higher efficiency, steps down by more than ~2V, or pushes around a lot of current, a switching regulator is the better choice.


And if your device spends a lot of time asleep, linear regulators may beat switching regulators by a LOT.

Leakage current can dominate your design, and switching regulators often have lousy leakage current specs (to be fair, so do a lot of old school linear regulators). Furthermore, switching regulators often have to "spin up" while a linear regulator is just sitting there ready to go as soon as your MCU switches on.

In addition, switching regulators tend to be designed for higher currents and tend to have terrible efficiency at small currents (< 1mA). If your circuit uses a very small amount of current even when active, a linear regulator may be superior even for constant-on systems.


Nowadays you can pretty easily find switchers with sub-100nA quiescent current + leakage, e.g. TPS62840 (of course, that's assuming you're at room temperature... FETs at high temperature are all crap for leakage). I note you get 80% efficiency at 1uA out on this device with a 3.6V to 1.8V stepdown config; beats the LDO by a lot, particularly if you spend a lot of your time at uA load currents.

But spin-up time... Yeah that still sucks :)
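The light-load arithmetic behind that claim can be sketched. An ideal LDO's efficiency is just Vout/Vin, while a buck converter's light-load efficiency is dominated by its quiescent draw; the 60 nA figure below is an assumption for illustration, not a datasheet value:

```python
# Back-of-envelope light-load efficiency comparison (illustrative numbers).

def ldo_efficiency(vin, vout):
    """Best-case LDO efficiency: same current in and out, so Vout/Vin."""
    return vout / vin

def buck_efficiency(vin, vout, i_load, i_q):
    """Ideal buck at light load, where quiescent current i_q dominates losses."""
    p_out = vout * i_load
    p_in = p_out + vin * i_q
    return p_out / p_in

# 3.6 V -> 1.8 V at 1 uA load, assuming ~60 nA total quiescent for the switcher
print(ldo_efficiency(3.6, 1.8))                          # 0.5
print(round(buck_efficiency(3.6, 1.8, 1e-6, 60e-9), 2))  # ~0.89
```

Even with the quiescent penalty, the switcher comfortably beats the LDO's hard 50% ceiling at this voltage ratio.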


The issue with a lot of these switching regulators is the off-chip inductors, which take up space. In small form factors, SoCs, etc., you need to use charge pumps and LDOs for internal analog/mixed-signal voltages.


Are there chips doing that but buck-boost? I know there are for the "normal" current range, but I haven't found anything super low power.

Use case: I want to just have a solar panel + supercap into microcontroller for IoT stuff so I want to suck that cap dry and avoid batteries for temperature range reasons.


Maybe TPS61094 or something similar?

https://www.ti.com/lit/ds/symlink/tps61094.pdf


Oh, wow, it's even in stock, thanks. I was looking for something like that some time ago, but back then only the BQ25570 really fit (and was overkill and a bit more pricey for that).


I built all the low voltage MCU based electronics for my team's solar race cars in college. Power efficiency is a huge deal for a solar powered race car. One of the first pieces I designed was a standalone switching regulator PCB with a 3 pin interface, then made many dozens of them. I then used them in all of my other boards.

Years later after I graduated and left the team, I heard that the folk that took over went through and replaced all the switching regulators with linear ones. I asked them why, and they said someone told them the quiescent draw was lower for linear regulators. I asked them if it was really worth the significantly reduced efficiency dropping from 12V to 3.3V, and they didn't seem to understand what I was talking about. Oops!
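The efficiency penalty they missed is simple ideal-regulator arithmetic: a linear regulator passes the same current in and out, so its best-case efficiency is Vout/Vin:

```python
def linear_efficiency(vin, vout):
    # Same current flows in and out of a linear regulator, so the best
    # possible efficiency is Vout/Vin; the rest is dissipated as heat.
    return vout / vin

# Dropping 12 V to 3.3 V linearly:
print(round(linear_efficiency(12, 3.3), 3))  # 0.275 -> ~72% of input power wasted
```

For a solar race car, throwing away nearly three quarters of the low-voltage-bus power as heat is a steep price for a slightly lower quiescent draw.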


> One of the first pieces I designed was a standalone switching regulator PCB with a 3 pin interface,

This sounds like a useful thing to learn from. Do you have the schematics?


The eagle file is probably rattling around somewhere on my hard drive. It was back in 2005 or so, though. I had very little idea what I was doing back then. I just followed the reference schematic from the switching IC's datasheet. It had the IC, an inductor, and some size 1206 resistors and capacitors.


One can purchase $0.15 TI parts that have <0.4mA quiescent current and output 3A. These things draw so little under no/low-load conditions that my cheaper 4-digit-readout benchtop supplies cannot detect any draw.

https://www.ti.com/lit/ds/symlink/tps563201.pdf


0.4 mA quiescent current is huge for very low-power designs though. If your system must spend the majority of its time consuming very little power, that level of constant current draw is a non-starter.
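To put that in perspective, here's a rough sleep-dominated battery-life estimate. The 225 mAh CR2032 coin cell is a hypothetical example, not from the thread:

```python
# Battery life when the regulator's quiescent current dominates sleep draw
# (hypothetical 225 mAh CR2032 coin cell, capacity derating ignored).

def sleep_lifetime_days(capacity_mah, i_sleep_ma):
    return capacity_mah / i_sleep_ma / 24

print(round(sleep_lifetime_days(225, 0.4)))    # ~23 days at 400 uA quiescent
print(round(sleep_lifetime_days(225, 0.001)))  # ~9375 days at 1 uA quiescent
```

Three and a half weeks versus decades: that's why 400 uA is a non-starter for coin-cell designs.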


Well, typical $0.15 linear regulators will have similar or higher quiescent current


TPS7A20 is 6.5uA, $0.10/ku. TPS7A05 is 1uA, $0.19/ku. It's not that hard to do better than 400uA!

And if you don't mind coughing up when it counts... TPS7A02 is 25nA, $0.45/ku. Triple the price, but more than four orders of magnitude better leakage.


If money is no object, there are some interesting chips out there. The LTC3335 chip[0] is a buck-boost that claims to have a 680nA Iq. Not as low as the TPS7A02, but it looks like it could make up for that in efficiency if your design does anything other than sleep.

[0]: https://www.analog.com/media/en/technical-documentation/data...


I was looking for one for one of my projects, but this is above the "I could just have 10 years' worth of batteries instead" level of pricing.


Do very low-power designs also involve "awake" currents of 3A? (Asking unironically.)


Maybe 300mA order of magnitude is more common than 3A, but it's not inconceivable. Lots of designs wake up, burst some RF data at a few hundred mA transceiver load for a few tens of microseconds, then go back to sleep for another few seconds. There are some long-interval scientific instruments that need high-speed ADCs or DACs on battery power in remote locations, which might get into higher current than a few hundred mA. Some deep space crafts can sleep at very low power and run their transmitters at comparable currents. It's very situational.
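The duty-cycling math for that burst-then-sleep pattern can be sketched with illustrative numbers (mine, not from the thread):

```python
# Time-weighted average current of a duty-cycled radio node:
# a short high-current burst, then a long sleep at quiescent current.

def avg_current(i_active, t_active, i_sleep, t_sleep):
    return (i_active * t_active + i_sleep * t_sleep) / (t_active + t_sleep)

# 300 mA burst for 50 us, then 5 s of sleep at 1 uA
i_avg = avg_current(0.3, 50e-6, 1e-6, 5.0)
print(round(i_avg * 1e6, 2))  # average draw in uA
```

The average lands around 4 uA, and the sleep current contributes a quarter of it, so both the burst capability and the quiescent spec matter.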


Older cell phone standards used to burst to about 2.3A upon initial connection to a tower before the tower told the handset to dial it back. I don't know about modern cell phone standards.


400uA isn't "low"--that's roughly speaking a fully-awake MCU running at 10MHz.


Linear regulators also have a start-up time, and it's usually somewhere in the datasheet. All feedback loops have some kind of settling response time depending on the bandwidth of the loop, and you'll see that behavior both at start-up and when there's any kind of disturbance, like the load changing.


Yeah. I use a lot of linear regulators. And they have even more advantages. For example, they are simple. They do exactly what you think they do. They don't leak. They don't create noise. They convert voltage right away and you don't need to wait for them to stabilise. And they have simple failure modes. You just throw in one part and it magically converts voltages. What simpler thing could you want?

Most realistic designs include multiple power supply rails, because you need a lot of parts that don't draw much power but have different voltage requirements or need a voltage offset. In those cases linear regulators are the perfect solution.

More than that, those power supply rails usually have standard voltages and there exist standard linear regulators that output those voltages to make everything even easier.

My strategy is to use linear regulators by default and only use anything more complex on those voltage rails where I need to step the voltage up, or where the inefficiency would affect my design's performance significantly enough for me to care.

Oh, and use voltage dividers if you want to convert signal levels (unless it is a fast signal and you care about signal integrity).
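For the divider trick, the arithmetic is just the classic resistor ratio (example values are mine):

```python
# Resistor-divider level shift for a slow signal,
# e.g. 5 V logic driving a 3.3 V input (illustrative values).

def divider_out(vin, r_top, r_bottom):
    return vin * r_bottom / (r_top + r_bottom)

# 1k over 2k: 5 V * 2/3 comes out just inside a 3.3 V input's range
print(round(divider_out(5.0, 1000, 2000), 2))  # 3.33
```

The caveat in the comment above is real: the divider forms an RC low-pass with the input's capacitance, so it only works for slow signals.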


The other place where I see a lot more linear regulators is inside ICs! Working in Mixed Signal ICs and IP I see a lot of linear regulators.

Inductors are huge and (generally) off chip. If, say, your SoC has AMS components and pins are a scarce commodity, then you can't use anything but a charge pump and LDOs.


Also, PCB layout for a linear regulator is simple.


EMC is much easier too :)


Yeah that’s what I was alluding to, gotta be careful with those switchers


> but it also has led to complications with power supply ripple showing in ADC readings.

High-end MCUs used to have separate voltage inputs for the ADC's power and voltage reference.

Older ATX power supplies actually had very high quality 3.3V and 5V outputs because they were coming from linear regs.

I remember some USB gadgets worked in some mobos but not others, depending on whether the 5V was wired from the ATX supply or not.


> High-end MCUs used to have separate voltage inputs to ADC, for power, and voltage reference.

Used to? Almost every MCU I’ve seen recently has had distinct voltage references for the ADCs


I designed a DC-DC converter once.[1] This is an exotic application - providing 60mA Teletype signals at up to 120VDC, with power from a 5V USB port.

There are two main trouble spots in DC-DC converter design - protection and noise.

A switching power supply is a dead short across its input once the inductor has saturated. The switch, usually a power MOSFET, needs to turn off on every cycle before that happens. Otherwise, something will fail and probably burn out. Also, the failure mode of power MOSFETS is usually "on". So protection circuitry is needed. Fuses, current limiters, etc. This is why UL approval for switchers connected to the power line is important.

Switchers work by generating big inductive spikes. Those spikes are supposed to be directed into capacitors and smoothed out into DC. Without suitable filtering, spikes will be pushed into the power source, the load, and the RF spectrum. A few ferrite beads, Zener diodes, and small capacitors in the right spots will fix this. LTSpice simulation is useful in picking the component values. You're not done until both the current and voltage curves are flat.

[1] https://github.com/John-Nagle/ttyloopdriver


I am using "Switching Power Supply Design" by Pressman, Billings and Morey, and "Switchmode Power Supply Handbook" by Billings and Morey.

Both are excellent resources and look at design from a bit different point of view.

I think the biggest problems with switching designs are not what you have listed, although both noise and failure modes are a huge problem and the main cause of concern (and cost) when certifying your designs.

The biggest problem is that they are just so damn complex and they have so damn complex characteristics over time and operating parameters. You might think you understand how a switching PSU works but that's just an illusion. There are people who spent their entire life specialising in switching PSU design and are still learning. At best we can understand how they behave within certain parameters and then try to make sure to shut it down safely when we leave those parameters.


Those books have good advice if you already know the basics of how to analyze a converter.


> Between the resulting thermal management issues and reduced battery life, linear regulation is seldom worth the pain.

Rather the opposite, actually! Most simple electronics or DIY stuff has rather trivial needs, like USB-sourced 5V->3.3V conversion at a few dozen mA. A simple LDO will cost you about $0.01 in bulk, so your total BOM is $0.03 once you include capacitors. The linked CUI VXO7803-500 module is closer to $2.00. An LTC3240 IC will cost you $1.00, and an AP63203 IC is at least $0.50 too.

Unless your application requires high efficiency, has significant voltage differences, or is handling large currents, there is no reason not to just throw in a dirt-cheap LDO.


A few points:

1) LM317 and LM7805 are not LDOs. LM317 is a series regulator, which is way more flexible and has some infrequent but difficult-to-emulate use cases. LM7805 is a linear regulator but has a substantial voltage drop relative to a modern LDO. The enthusiast/hobbyist space would do well to consider alternatives to the LM317/LM7805 when designing PCBs with surface-mount components and very simple power rail needs, but for quick and dirty through-hole designs there are few well-known alternatives that are truly LDOs - maybe MCP1700?

2) Enthusiast/hobbyist market isn't too sensitive to bulk pricing. Case in point: LM317 is about a dollar in hobbyist/enthusiast quantities, and LM7805 is comparable. TI lists 1ku TO-220 pricing at $0.65. You're definitely right for real products with mass manufacturing, but hobbyists don't generally haggle over pennies.


They are better than they used to be, but LDOs are finickier than traditional NPN-pass-element regulators. Unless you need a low-dropout regulator, there isn't much advantage to them.

Tip for power supply stuff: derate the heck out of everything. Derate voltage, current, and power by 2X and you usually won't have issues.


LDOs are linear regulators. Both the LM317 and LM7805 are considered LDOs.


TI pretty clearly distinguishes between LDOs and other categories of linear regulators on the basis of using a FET pass element (which is true of neither the LM317 nor the LM7800 series; both are Darlington-based).

See section 2: https://www.ti.com/lit/ml/slup239a/slup239a.pdf


It's not really clear from this paper, it is more handwavy based on today's circuits.

They claim that the Darlington voltage drop is ~2V, which is suitable for 5V-to-2.5V regulation, and then introduce 100mV with an NFET as low-dropout for cases where LiPo cells are at 3.6V, or 300mV above the industry-standard 3.3V (or the new embedded expectation of 1.8V or 1.2V off 2x 700mV cells).

However, what if I need to regulate 3.35V to 3.3V? Do I need an even-lower LDO (ELLDO)? That means we would have HDO (2V), LDO (0.1V), and ELLDO (0.05V), which becomes an absurdly semantic situation.

I think the confusion is that engineers picked the words based on the technology of a particular point in time. In their mind, the breakpoint is an NFET dropout of 100mV, which is purely subjective. Although I could also argue that my example is silly because it is within the range of tolerance of most datasheets' specs for a Vin of 3.3V +/- 1%.


I'll agree that it's largely a semantic difference and that I overstated the significance of the MOSFET involvement as the pass element - you can design single-transistor PNP elements with only a few hundred millivolts dropout, and at one point TI referred to these as LDOs.[0] Indeed, even the Sziklai pair gets called "quasi-LDO". Individual engineers likely have different personal thresholds for what constitutes an LDO. But it's worth pointing out that in practice there are only a handful of plausible linear regulator pass elements, and the physics of the BJT-based pass elements sort them into a distinctly higher minimum dropout voltage than what is achievable with MOSFETs.

I still stand by the characterization of LM317 and LM7800 family as "not LDOs". Both devices are Darlingtons with at least two Vbe drops across the series pass element. On the continuum of LDO------Not_LDO, both LM317 and LM7800 are firmly on the Not_LDO side.

[0] https://www.ti.com/lit/an/snva020b/snva020b.pdf
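The dropout gap between the two pass-element families can be put in rough numbers (ballpark figures for illustration, not specs for any particular part):

```python
# Rough minimum dropout comparison: a Darlington NPN pass element needs
# roughly two base-emitter drops plus driver saturation, while a (P)MOS
# pass element drops only I * Rds(on). Ballpark values, not device specs.

def darlington_dropout(vbe=0.7, v_sat=0.2):
    return 2 * vbe + v_sat

def mosfet_dropout(i_load, rds_on):
    return i_load * rds_on

print(round(darlington_dropout(), 2))       # ~1.6 V, set by device physics
print(round(mosfet_dropout(0.3, 0.3), 3))   # 0.09 V at 300 mA through 300 mOhm
```

That physics-driven gap is why the BJT-based parts cluster firmly on the Not_LDO side regardless of where you draw the semantic line.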


That’s a great apnote (app report?) and I agree with you I just think LDO is a confusing term.


Damn, I stand corrected. Apologies.


They are not. They are inherently high dropout devices which is why LDOs were created in the first place. LDOs are a refinement on those early generation of linear regulators.

The rise of 3.3V devices created a demand to derive regulated power from a 5V rail. Traditional linear regulators are unsuitable for this because the dropout voltage is in the range of 2V. The LM317, the most likely old-school variable-output candidate, has a 3V drop and can't do the job either.


1.6V is definitely not 'low drop'.


Agreed. Low-N solutions should optimize for simplicity rather than cost, but simplicity tends to favor LDOs too.

PSA: Watch out for residual flux on your voltage dividers! I've seen parallel resistances as low as 50k. If your rails come up at the wrong voltage, they can fry your expensive chips! Consider investing in packaged LDOs at the exact voltage you require.


High divider resistances are asking for trouble, but very popular in datasheets because they make the efficiency specs look a little better.

Residual flux in combination with high resistances is especially scary because the regulator will appear to work "just fine" until some moisture is encountered. Such as when you're spraying R134a around looking for a problem elsewhere on the PCB. Water condenses on the flux near the divider resistors, and now you have (at least) two problems.

Another trap for young and old alike is putting your finger on the regulator IC to see how hot it's getting. Touch the 1M resistor next to it by accident, and now your 3.3V bus is more like six or eight volts...
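The failure mode described above falls straight out of the feedback math: the regulator servos its FB pin to Vref, so any leakage path in parallel with the bottom divider resistor makes the output climb. Here's a sketch with assumed values (0.8 V reference, 1M/320k divider; not from any specific part):

```python
# How leakage across a high-impedance feedback divider raises the rail.
# The regulator forces V(FB) = vref, so Vout = vref * (1 + Rtop/Rbottom_eff).

def reg_output(vref, r_top, r_bottom, r_leak=None):
    if r_leak is not None:
        # leakage appears in parallel with the bottom resistor
        r_bottom = r_bottom * r_leak / (r_bottom + r_leak)
    return vref * (1 + r_top / r_bottom)

vref, r_top, r_bottom = 0.8, 1e6, 320e3        # nominal 3.3 V output
print(round(reg_output(vref, r_top, r_bottom), 2))         # 3.3
print(round(reg_output(vref, r_top, r_bottom, 200e3), 2))  # 7.3 with a 200k leak
```

A 200k flux/moisture path turns a 3.3 V rail into 7.3 V, which matches the "six or eight volts" war story above.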


Current handling in space-constrained situations is what drives me away from LDOs. I.e., if space-limited, linear regs top out at around 500mA in a SOT-23-5. So then you need to go switching, along with the slew of accompanying passives.


The proper way to select linear vs. switched is really a flow chart: some applications require super low analog noise or low quiescent current and cannot tolerate a switcher. The next step is a complete system analysis, including power/battery budget AND thermals, for both solutions; then pick the overall system-level winner. No point in spending dollars on switcher components to save fractions of a penny of battery energy.

Another interesting point is that it's "generally" easier to buy/build constant-current linear sources than constant-current switching sources. There are plenty of sensor applications where you mostly just want to limit to 4-20 mA or similar.

Final point, "generally" speaking, with massive hand-waving and isolated exceptions: linear sources are harder to destroy via inductive loads, oscillating loads, and ESD/EMI impacts.


> The proper way to select linear vs switched is really a flow chart of some applications require super low analog noise or low quiescent current and cannot tolerate a switcher solution.

Switching pre-regulators followed by a high PSRR LDO with some filtering can work here


As someone who works with low-noise systems but on the firmware/software side: is it possible to design a low-noise power supply with a switcher, without some kind of linear stage (like a final LDO regulator)? The hardware engineers are good, but I'd like to understand this a bit myself.


In principle yes, in practice it's rarely worth the cost and complexity.

If you're careful about the converter design (keep the high-current loops extremely short, use counter-rotating loops to tightly confine magnetic field within those short loops, use a soft-switching topology to reduce EMI and sharp edges at the switch node, switch quickly or use multiple parallel converters at different phase offsets to reduce magnitude of current ripple), you can get decently low noise. There's a good Jim Williams app note about this.[0]

But it's almost never worth it to do this, since there's LDOs with two or three orders of magnitude better noise voltage. There's a time and a place for a really low noise converter; usually EMI constrained galvanically isolated converters like medical supplies or scientific instruments need them and aren't too sensitive to the cost or development effort. But even then, you'll often find LDOs cascaded on the outputs just afterward, since a good LDO can add another two orders of magnitude of ripple rejection in the switching frequency band.

[0] https://www.analog.com/media/en/technical-documentation/appl...


I spent a long time afraid of making switching regulators because I heard so much about how complicated they are, failure modes, EMI problems, etc. But when I got over that I never had any problems just reading and following the datasheet recommendations. The layout rules aren't even particularly complex and the datasheets will always give you an example layout to copy anyway.

Of course once I figured that out I found self-contained switching regulator modules like the RECOM R78-K and RPM series which are foolproof and cheap. Well the RPM modules were cheap at the time, but apparently they've doubled in price. Maybe that was an introductory thing or the supply chain got to them.


During a college internship, I actually debugged an EEPROM corruption issue on a PLC card all the way back to the power supply the original designer had copy-pasted out of the datasheet. The compensation network they had used was definitely not stable, even though our application had exactly the same input and output voltages and circuit elements as their diagram.

I've written my fair share of datasheets now, and while most of us are trying to do a good job and be clear and helpful, sometimes the stuff below the spec tables in the datasheet is, uh... less good than we'd like, for any number of reasons (inexperience, no time, someone left halfway through writing the datasheet, someone forgot to clean up copy-paste from the other datasheet with the slightly different thing, etc). I guess my point is: trust, but verify.


There are lies, damn lies, and data sheets.


I have recently found that converting a higher DC voltage (48V or more) down to 5V is rather difficult. Or at least there are not many isolated options on the market. I have a DIY home battery system that runs on 48V, and it has been very difficult to get anything to work and not fail within a week or two. I started with various sizes of buck converters, but it seems that they are too noisy and get blown out by voltage spikes or dips when the inverter kicks in or cuts off. I finally found some isolated converters that convert DC-AC-DC through induction. But the largest I could find does 5V at 1 amp. So I had to get two of them just to handle a Raspberry Pi Zero and an ESP8266.


As you get close to 5% duty cycle you start having tough problems keeping the control loop stable. You might have better luck stepping down 48V to 12V and 12V to 5V cascaded. Less efficient, but cheaper than blowing up your supplies regularly. And now you get 12V for other stuff.
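The duty-cycle arithmetic behind the cascade suggestion, for ideal lossless bucks where D = Vout/Vin:

```python
# Ideal buck-converter duty cycle D = Vout/Vin. Very small D means very
# short switch on-times, which is where control loops get hard to stabilize.

def buck_duty(vin, vout):
    return vout / vin

print(round(buck_duty(48, 5) * 100, 1))   # ~10.4% for a direct 48V -> 5V step
print(round(buck_duty(48, 12) * 100, 1))  # 25.0% for the first cascaded stage
print(round(buck_duty(12, 5) * 100, 1))   # ~41.7% for the second stage
```

Splitting the conversion keeps both stages in a comfortable duty-cycle range instead of hovering near the troublesome ~10% region.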


> I started with various sizes of buck converters, but it seems that they are too noisy and get blown out by voltage spikes or dips when the inverter kicks in or cuts off.

Dips shouldn't kill it (unless it's inherently unstable) but spikes definitely can.

Spikes can be addressed with appropriate input protection. A big, low-value resistor with a TVS behind it should take care of spikes.

But this shouldn't be a difficult problem to solve. Converting 48V to ~5V is extremely common in PoE equipment. There are numerous off the shelf designs designed for it. It's likely you were looking in the wrong place (e.g. searching for "buck" instead of "flyback").


I’ve been running into this problem too; my solution has been to use parts and reference designs originally created for PoE applications. Inductor selection is also more finicky. When assembling a buck circuit to take 12V down to 5V, I can be pretty cavalier about the inductor, so long as it supports my max current x1.5 (my minimum derating value), but once you get to converting 48V down to 12, 5, or 3.3, ripple becomes a huge concern, and the inductor also behaves like a gigantic flywheel, plowing current downstream regardless of upstream switches and in general doing its own thing. I tend to over-protect my circuits with clamping diodes and Zeners to dump excess voltage to ground - something you don’t usually do when you’re controlling the voltage source - but I lack the knowledge of how to make a minimal design possible here.


Pro tip:

Most wall AC adapters will operate at 48V. This has been used by the ebike community for a while to power lights and accessories. I use a 120VAC Molex HDD AC adapter to get 12V and 5VDC out of 48V batteries.


You can put 48VDC into a 120VAC adapter?


Did you consider adding a voltage-protection circuit, e.g. based on zener diode?

If not for being mechanical, I could imagine using a 48V motor and a 5V generator; both can be 95%-98% efficient.


On at least one of my cousin's farm fields, there's a single phase induction motor connected to a 3-phase alternator out on the power line pole across the dirt road from the field, to provide 3-phase power where there's only single-phase service. It's not crazy.

(I think perhaps his irrigation pivot needs 3-phase power. I'm pretty sure all of his well pumps are run by de-tuned automotive engines converted to run on natural gas.)


At some point it makes sense not to use a simple buck converter, but e.g. a flyback converter with a small transformer. Then you can step down much higher voltages; of course, it will also increase your complexity.


Hi-Link on Aliexpress has decent modules, also isolated ones, that don’t cost too much and push more than a few milliamps.


I'd honestly just get 48V to 5V non-isolated, then 5V to 5V isolated, if it isn't anything high current.


Interesting. Although it's the non-isolated converters that keep getting blown out from spikes. Luckily, they seem to die without blowing up my MCUs.


For a good practical guide on switching regulator component selection and layout check out Phil's Lab on YouTube:

https://www.youtube.com/watch?v=FqT_Ofd54fo

https://www.youtube.com/watch?v=AmfLhT5SntE

He also has good guides on digital audio processing and sensor fusion.


His videos definitely helped demystify my prior assumptions about how to incorporate ESP32s or STM32s in my own custom PCB designs (and not deal with external programmers, castellated edge ICs, etc., or resort to devkits, which add quite some vertical space requirements and come with issues of their own).

Highly recommended channel.


I really enjoy articles like this. My job responsibilities require a knowledge split of about 67% software and 33% hardware. While I've been building circuits since the early 1980s, I never got the hang of analog in college (yes, I know, "everything is really analog"). I know enough to follow examples and spot issues, contentions, or oversights, but I don't know enough to do __efficient__ ground-up design - just naive circuits that burn a lot of power or are noisy at higher frequencies (I'm pretty sure I'll never get RF).

Articles like this, which start from high school circuits and move to professional discussion, are super useful for refreshing my memory neurons. DigiKey has a huge number of articles like this, which walk you from the naive circuit and then point out errors (e.g.: https://www.digikey.com/en/articles/how-to-power-and-control...)

More please!


Just the other day I idly wondered how LED "light bulbs" regulate down their 230V (here in central Europe, at least) to the ~2V needed per LED.

Although it's from a different starting point (AC, not DC), after reading the article it seems they could use an AC->DC converter and then a charge pump.

Is that what they actually use? Or is there something easier/cleverer when starting from AC?


LED bulbs usually use the most basic of power supplies, the capacitive dropper[0]; the downside is that they tend to die easily.

Also, the LED diodes themselves will often be multiple in series (a string), or series-parallel (several strings in parallel) depending on the bulb, that end up needing more than just 2V, anywhere from 12V to 60V or so per string of diodes.

Sometimes the high power diodes being used are themselves a series chain of diodes on a singular piece of silicon encased in a blob of phosphor, so that the diode package ends up needing 12V or so. These are often referred to as 'COB' diodes.

(apologies for the RAS syndrome, but saying 'LE diodes' or just 'LED' to refer to the individual light elements when talking about 'LED bulbs' is too confusing otherwise)

[0] https://en.wikipedia.org/wiki/Capacitive_power_supply


RAS syndrome is not a bad thing, for exactly the reason you've just mentioned! Redundancy is common in language; it actually serves a useful purpose, and this particular redundancy confuses no one.


Capacitive droppers are the most reliable for LED bulbs in my experience, because they are very simple and there's not much to go wrong. It's usually the IC-based ones that die first.


Just to add a data point, I have the opposite experience based on just a few bulbs of different brands I disassembled over the years.


What has failed in them?


In my experience, it's usually the capacitor that pops, but my experience is mostly with GU10 form-factor bulbs, which have the downside of being very compact and installed in fittings that don't have good heat-dissipation properties, resulting in everything running hot, even in an LED bulb that should be cool. Even the cheapest LEDs last longer than standard halogen GU10 bulbs though, so there's that.


Capacitor usually blows - they don't like operating really hot and heat removal inside those small form factors is a serious challenge. Also, many times, the whole power converter is encased in a semi-insulative rubber, so that just compounds the problem.


How about: "Also, the LED elements will often be in series (a string)"?


They probably have a switching power supply, but if you want really simple and low parts count, you'd use a transformer down to something nicer, a full bridge rectifier, a capacitor, a current-limiting resistor, and then put all the LEDs in the bulb in series; that way you don't need a big ratio for the transformer... although maybe that's not a big deal, and you may prefer not to need a higher-voltage capacitor, or more failure-tolerant parallel-wired LEDs. You could have a linear regulator in there too, but careful component choices may allow that to be omitted.

But switch-mode power supplies are probably more efficient and highly miniaturized, better adapted to different line voltages, etc. All around a better choice. Some sort of smarts is needed to work well with dimmers as well.


A transformer is probably a lot more expensive than what they use.


Nowadays most basic LED bulbs just put the LEDs in series to get the combined voltage drop close to the rectified and filtered mains voltage, and a linear regulator takes care of the rest. This is cheaper and simpler than other methods mentioned by other users (switching or capacitive dropper). Here's a nice video with a schematic and even a way to hack them to lower the power: https://youtu.be/5HTa2jVi_rc (feel free to skip around, no need to watch the whole video to understand it)
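The series-string sizing can be sketched like this (illustrative numbers; real bulbs vary in LED count, forward voltage, and headroom):

```python
# Sizing a series LED string against rectified 230 VAC mains
# (illustrative: assumes ~3 V per white LED and ~10% headroom
# left for the linear current-regulation stage).

import math

v_peak = 230 * math.sqrt(2)          # ~325 V after the bridge rectifier
v_led = 3.0                          # typical white-LED forward voltage
n_leds = int(v_peak * 0.9 // v_led)  # string length, leaving headroom
print(n_leds)                        # on the order of 100 LEDs in series
```

That's why these bulbs use long strings (or multi-junction COB packages) rather than a single ~2-3 V diode: the string soaks up almost all of the rectified mains voltage, and the linear stage only has to burn off the small remainder.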


For household LED bulbs almost always Flyback.


Linear regulators are craptastic, inefficient, and ancient, like 7805 circuits. Sure, you can throw one together with a few parts, but why? Apart from extreme PSUs used only briefly, they will rapidly eat more electricity than the added cost of switching components.

Switching DC/DC is generally what you want. Buck -> V down, boost -> V up, buck-boost -> V up or down (sometimes 2 PSUs or 1 with shared components).

I have a 90VDC to 12VDC 10A buck converter to power a train air horn on my electric scooter. It's in a solid-state, ruggedized, industrial form-factor that's potted into a heatsink.


If you replace the diode in a boost or buck converter with a second switch that is turned on and off complementary to the other one, they are the same circuit run in opposite directions. Just an observation.


You can use this observation to build a "buck-boost" topology that can make the output voltage larger or smaller than the input voltage. This is common for 1sNp Li-ion batteries that need a high-current 3.3V rail - buck from 4.2V to 3.3V at full charge, boost from 3V to 3.3V at low charge.

There's actually a bunch of ways to make buck-boost style converters that can do both functions, like the cuk and sepic topologies. There's also bidirectional bridge converters that can change input and output direction - you see this a lot on hybrid vehicles (12V to 48V or vice versa) and electric vehicles (48V to 400V and vice versa).


I'd be really interested in a teardown of what goes into a modern tri-phase solar inverter.


There's not much to it. There's usually some stage-1 converters at a subnetwork of panels that step the panel voltage up to a common DC bus voltage with all the MPPT and relevant OTP/OVP/UVP/OCP/etc. Then the big DC bus gets combined across all panels and fed into a three phase inverter with IGBTs or sometimes SiC FETs (getting more common), which just looks like a hex bridge across a three phase transformer. There's gate drivers (maybe isolated), amps for current and voltage sensing (maybe isolated), some protection circuits, and a half-assed flyback to run the fans and the control DSP. Some models might host an MCU for data logging and comms out to a control plane.

There's a handful of projects where the size of the solar field is large enough to make it economic to step up from 400V or 800V bus. I've seen many 1000V buses, a few 1200V and 1500V buses. Honestly it's exactly the same circuits, just with higher voltage ratings; all your switching elements are still giant hockey pucks, you're still doing a three phase hex bridge, etc. The half-assed flyback is sometimes replaced with something a little less braindead.


When I was 17 or so I built a simple inverter out of a big 12V transformer and some 3055s (RCA's, not Motorola's), which was enough to run most of my gear when it was 'lights out' (or not...). So I understand the basics. But those transformerless sine wave inverters are interesting; they seem to get away with squeezing a 20 kW inverter into a relatively modest package at a ridiculously high efficiency.

On top of that they have to comply with a whole host of safety regulations, so even if the theoretical block diagram is as simple as you've outlined it the actual implementation is likely going to be a lot more complex and interesting.

Any pointers to where I can dig around without opening one up myself would be greatly appreciated, most of the youtube stuff is for very cheap or small gear.


Sure. TPT-PV systems are pretty close to the same concept, with a bigger LCL filter on the output. IMO the tricky part is all the compensation and software design required to minimize ground leakage currents, since that needs real-time analysis of the grid state. I've seen some neat tricks with common-mode current injection off the DC bus using another converter.

Most of the inverter designs offload all the complexity to the software controller in an attempt to keep the power component choice and placement simple. The cool control stuff is mostly available in published IEEE papers, particularly from 2015-2020. It's not open-access, but it's definitely easier to get ahold of IEEE papers than a PV inverter. Once you know what you need to implement, the rest is just software engineering.


Thank you! I will definitely try to read up on this. A similar thing happened to servo drivers somewhere around the early 00's: up to that point they were pretty involved hardware-wise, and then from one day to the next it was super simple hardware and a very beefy controller. It's interesting how when you solve math in software (and some of the math involved is quite complex) the marginal cost is like with every other piece of software: close to $0. But doing the same thing in hardware is expensive and it will decrease your MTBF considerably (and it probably has other negative effects as well). Servo motors are an interesting case (as are steppers, but for very different reasons), especially when driving large loads and decelerating them again at speed. Those are definitely non-trivial control problems and I can see some parallels with these inverters.


I think the big driver for solar inverters was panel cost reductions. With motor drives it was straightforward even 25 years ago to see the benefits and applications of slapping a DSP and some digitizing sensors onto the system to do crazy control stuff. But PV cost was just way too high for this stuff to have tons of active research until recently. Once panels got commoditized, the industry pretty quickly noticed how much virtually free compute is available, and moved to fill the gap. It's certainly been interesting watching commercial PV inverter design start with an extra decade of compute improvements and cost reductions as the industry effectively speedruns the design challenges - even with only a few active research projects before 2015, we've had designs in hand for a decade and no scalable way to deploy them.

The commoditization of computing power, and the continuous decades of improvement in digital hardware performance per watt, has reduced numerous classes of analog problems to an exercise in fast enough bit-twiddling. I think in motor drives the big jump to simple hardware and beefy controller was directly downstream of the creation of usable 32-bit motor controller DSPs, along with software toolchains that made it possible to compile optimized C and C++ libraries for these architectures. Up to early 00's there just wasn't enough real-time computing power available for most of the market to take advantage of it, and what little did exist wasn't directly targeted at motor drives. But it is worth pointing out that the S-curve of digital adoption probably got started as far back as the mid-90s; the only people who could really take advantage of it back then were at the cutting edge with very expensive low-volume projects. I'm sure it felt like an overnight event, but it took a decade for motor drive DSPs and software toolchains to get good enough that most people felt compelled to switch.

ETA: oh gosh I forgot FPGAs happened then too, that probably had a lot more to do with it... Ah well, fun trip down memory lane :)

It's been very interesting watching as the compute gets cheap enough that we can start embedding it into the analog chips. The telecom chips all have DSPs in the ADCs and DACs and digital PLLs in the line cards, the battery management ICs all have microcontrollers for charge management and safety, you can buy radar ASICs for automotive proximity detection, there's gate drivers for SiC FETs in automotive traction inverters that incorporate redundant microcontrollers to do monitoring and fault detection/recovery for ASIL D compliance. So I'd add: the same way that software drives the marginal cost of complex math to near-zero, advances in digital circuitry and ease of incorporation into other analog designs drives the marginal cost of complex application requirements down. It's not quite as stark as software, but it's amazing how much quicker a single complex chip design becomes when you can digitize a subclass of the problems and solve them in real-time at virtually no cost on analog ASICs with built-in CPUs and DSPs. Analog hardware advances are extremely challenging by comparison, and can take years of R&D across a wide array of reliability and performance assessments before they become realized in designs.


Interesting conversation. I recall '5 phase' suddenly being all the fashion because it allowed for better control in hardware at a very high expense, and then two seasons later 5 phase had simply disappeared because the increased degree of control meant that the intractable problems of two-phase resonance could be solved in software (effectively those drivers made it possible to draw energy out of the motor while speeding it up past those resonance points). The cost for a motor, wiring and controller dropped to a fraction of what it was before. Berger Lahr must have been seriously pissed: finally they had those resonance problems licked and then the software revolution simply overtook them.


Nothing wrong with linear regulators in the appropriate applications. In fact, the switching converter/controller IC will generally include a small LDO to generate an internal Vcc supply for the control logic.

Just don't make the basic mistake of reading the 25V maximum input voltage / 1.5A maximum load current specs for an L7805 and thinking you can pull 1.5A at 5V from a 24V supply - it will quickly go up in smoke. You must understand the operating principles and what "thermally constrained" means.
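To put numbers on that failure mode (the ~50 C/W thermal resistance is a typical no-heatsink TO-220 assumption; check the actual datasheet):

```python
# Why a 24 V -> 5 V, 1.5 A linear regulator goes up in smoke:
# everything dropped across the pass element becomes heat.

v_in, v_out, i_load = 24.0, 5.0, 1.5
p_dissipated = (v_in - v_out) * i_load  # 28.5 W burned in the regulator

theta_ja = 50.0   # C/W, junction-to-ambient, no heatsink (assumption)
t_ambient = 25.0  # C
t_junction = t_ambient + p_dissipated * theta_ja
# A physically impossible junction temperature: the part hits thermal
# shutdown (or dies) long before reaching steady state.

print(f"dissipation: {p_dissipated} W, est. junction temp: {t_junction:.0f} C")
```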


I'm not sure if it's a good analogy, but I think of boost converters like hydraulic ram pumps, which convert high-flow, low-pressure water into low-flow, high-pressure water.


That's a great analogy, actually! The components of a hydraulic ram line up perfectly with a boost converter -- there's an inductor (the inertia of moving water), a switch (the waste valve), a diode (the check valve), and a capacitor (the pressure vessel).


There is one case that's missing: variable input, say a USB-C PD powered device that can use anything from 5-20V despite operating internally at 12V (which, iirc, most laptops do), or automotive devices that can run at anything from 6V (motorbike) to 24V (truck, bus) while being tolerant of >100V spikes during load changes.

How do these work?


For output voltages that must be higher or lower than the input, there are other topologies like buck-boost, Ćuk, or SEPIC. In general, DC-DC converters can change their pulse duty cycle to modify the amount of energy transferred per switching cycle, and use a closed-loop control architecture to handle changes in input or output conditions. The control loop has a very fast bandwidth (usually hundreds of kHz) and can quickly respond to input voltage spikes or load transients.
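As a sketch of how the duty cycle absorbs the input variation, using the ideal buck-boost relation (magnitudes only, losses ignored; the USB-PD voltage list is illustrative):

```python
# Ideal buck-boost conversion ratio (magnitude):
#   Vout/Vin = D / (1 - D)   =>   D = Vout / (Vin + Vout)
# The control loop just finds the duty cycle that holds Vout at the
# target as Vin wanders around.

def duty_for(v_in, v_out):
    return v_out / (v_in + v_out)

for v_in in (5.0, 9.0, 12.0, 15.0, 20.0):  # USB-PD style input range
    d = duty_for(v_in, 12.0)
    print(f"Vin={v_in:4.1f} V -> D={d:.2f}")
# D > 0.5 means boosting (Vin < 12 V), D < 0.5 means bucking (Vin > 12 V).
```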


I was helping a friend repair some old (WW2 vintage) transmitting gear. The voltage dividers used to bias the tubes used almost 100 watts by themselves! Quite the contrast to modern electronics.


I believe the reason is that accurate high-resistance components were difficult to make for decades, so you'd be working with 10 ohm resistors for your dividers to have sufficient control of the reference voltage.

Although suddenly I wonder why you wouldn’t just use four of them for 40 ohms, or eight for 80 ohms…
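To put numbers on why such dividers run hot (illustrative values, not from the actual gear):

```python
# A stiff bias divider across a few hundred volts of plate supply,
# built from low-value resistors, dissipates serious power regardless
# of what the load draws.

v_supply = 300.0           # volts across the divider (assumed)
r_total = 1000.0           # total divider resistance in ohms (assumed)
p = v_supply**2 / r_total  # P = V^2 / R = 90 W, just to hold a bias point

print(f"{p:.0f} W burned in the divider")
```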


Are DC-DC convertors applicable for high-power industrial applications?

Can a mega pack battery output be converted to 10kVDC without an AC step?


Yes, dc/dc conversion is definitely applicable in high-power applications. As one example, high-voltage DC power transmission [1] is in widespread use globally, and always requires a dc/dc conversion step for connection to local grids. In addition to dc/dc conversion during transmission, converting from DC to AC (known as inversion) uses essentially the same techniques.

To answer your question more directly: stepping a battery's output to 10 kV is a good example of an application that would almost always be done with a dc/dc converter in an industrial application.

(Aside: "without an AC step" is slightly tricky. If by AC you mean 50 or 60 Hz, it definitely can and should be avoided. But AC is generally used to refer to any non-constant voltage or current, and if that's what you mean then the answer is no, since a dc/dc converter works by switching, which by definition means there's some alternating voltage somewhere in the circuit.)

(Source: I used to design integrated circuits for industrial control.)

[1] https://en.wikipedia.org/wiki/High-voltage_direct_current


> almost always be done with a dc/dc converter in an industrial application.

thanks. lots of good details in this thread.

> if by AC you mean 50 or 60 Hz, definitely can and should be avoided.

Yes, i meant avoiding the maintenance and losses of an actual transformer


This is absolutely something you could do, in the sense that no one in electrical engineering will raise an eyebrow if you call your DC-DC conversion with an "AC" step in the middle a DC-DC converter. There's fundamentally going to be alternating currents in any DC-DC design, but typically at the inputs and outputs it looks like an average DC current with a small amount of AC ripple.

The kind of DC-DC converters that work well for megabattery to 10kVDC conversion will look very different from the kind of DC-DC converters that step up your 3.3V rail to 5V for some low-power peripheral, and may actually have individual components that completely reverse current direction for more efficient current transfer. You pretty much need a transformer to handle the high power transfer and voltage ratio mismatch. Depending on the pack voltage you might use multiple stages cascaded, but typical 400V-800V packs can step up to 10kV in a reasonable number of turns (12-25). The battery pack side probably has IGBTs or SiC FETs driving some kind of large bridge switcher (or several parallel bridges); the 10kV side probably has some big chonker diodes in a rectifier bridge, though they conceptually could be replaced with synchronous switches if you could find thyristors with fast enough switching speed and better efficiency (usually it's not worth it). Technically this topology runs the transformer primary current in both directions (hence the rectifier at the output) so I guess this is the "AC" stage in the middle... But it's worth pointing out that the AC portion is incidental to operation, unlike something like a Tesla power wall using an AC inverter to feed power into the AC grid, and a grid-connected charger converting that back to DC.


Yes, though at that voltage range you’ll need to stray into somewhat esoteric parts, like silicon carbide mosfets / igbt’s https://www.power-mag.com/pdf/feature_pdf/1461163294_Woifspe...


Haha SiC and IGBTs are hardly esoteric. If anything, GaN is esoteric. You just wouldn't know it because industrial doesn't get the same press as computer power supplies.


Thanks! Just a hobbyist, glad to hear industry perspective on this!


Cool. Be safe, don't play with SiC or IGBTs, the high voltage will kill you. GaN will be more mainstream in a few years, Infineon just bought GaN Systems.


x10 for this link!


"can" yes.

Industrially, no. You're asking for an automotive ignition without a coil, pretty much not done.

Various logic-chopping options exist to be technically correct. If you define a pulse as not being "AC" because it's not a constant waveform or it's not wall-outlet 50 Hz or 60 Hz, then sorta kinda that's an engine ignition coil. If you define a Tesla coil as not being AC because it's a resonant circuit with a quarter-wave antenna colocated, then sorta kinda sure, no AC.


Check out the hardware involved in HVDC transmission lines to get an idea of what 'high power DC' looks like.


As far as I have heard, modern power electronics is sophisticated enough to do that.


I always liked the switched capacitor concept.


It has its uses. For example, powering 5v LCD off your 3.3V power supply and micro with very few parts (just some extra caps/diodes hooked to microcontroller):

https://www.youtube.com/watch?v=I4ED_8cuVTU
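For a rough sense of what a diode/cap charge pump can deliver (back-of-envelope voltage doubler, not necessarily the exact circuit in the video):

```python
# A GPIO-driven diode/cap voltage doubler gives, unloaded:
#   Vout ~= 2*Vin - 2*Vd   (one diode drop per pumping stage)
# Values are illustrative; the output sags further under load.

v_in = 3.3
v_d_schottky = 0.3  # assumed Schottky forward drop
v_d_silicon = 0.7   # assumed silicon diode forward drop

v_out_schottky = 2 * v_in - 2 * v_d_schottky  # ~6.0 V
v_out_silicon = 2 * v_in - 2 * v_d_silicon    # ~5.2 V

print(f"Schottky: {v_out_schottky:.1f} V, silicon: {v_out_silicon:.1f} V")
```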


Wow! Great video. That is a really practical approach with a micro-controller.


Indeed. The first time I ran into the MAX233 I really did a double take. How on earth does it do away with all of the supply voltages?


The "magic" of course, is AC...


Not really. AC literally stands for alternating current, meaning the current periodically reverses direction in the conductors. That's not what happens in most DC-DC converters. Instead the current is switched on or off (or steered this way and that way, if you prefer).

As a visualization, the hydraulic ram pump[1] is the water equivalent of a DC boost converter[2]. At no point in the cycle does the water flow in reverse. Same with the DC boost converter.

[1]: https://en.wikipedia.org/wiki/Hydraulic_ram

[2]: https://en.wikipedia.org/wiki/Boost_converter


Technically all DC-DC converters at a minimum have an AC current in their input and output capacitances. Since we see a small ripple voltage across the capacitors, it must be true that the direction of the current flowing in the capacitors is alternating.

It is true that in most cases the inductor current isn't changing direction though.


My point was the core of a DC-DC converter like the simple boost converter I mention does not rely on AC to work. It relies on building up a magnetic field in the inductor and then redirecting that stored energy somewhere.

Any AC inside the circuit is due to imperfections, like the reverse recovery time of the diodes or similar.


Sure, the direction of current in the magnetics does capture the intended distinction better. Ripple currents are usually non-ideal behavior, and the cases where they aren't (discontinuous conduction mode in the boost converter) are arguably exceptions, but needlessly pedantic ones.


Nice try, Nikola, but AC is too dangerous.


How much energy would we save if we replaced all linear regulators by switched configurations?


Largely done except when it can’t be due to noise / expense / reliability.


Already done in most electronics...


How does the boost converter mentioned here work? I’m not understanding the explanation.


Step 1: You briefly short the inductor. Inductors cannot instantaneously change current, so they will linearly ramp up the current over time. This builds up a magnetic field in the inductor.

Step 2: Stop shorting the inductor. Inductors cannot instantaneously change current, so the now-built-up magnetic field continues pushing current into the switching node. The magnetic field and the inductor current linearly ramp down over time.

The forced current will push charge onto the parasitic capacitance of the switch (from switch to ground), the inductor itself (from inductor output to inductor input), and the reverse diode capacitance (anode to cathode).

Since capacitor voltage is charge over capacitance, once enough charge is forced onto the capacitance at the switch node, eventually the voltage from the switch node to the output capacitance is high enough to turn the diode on in forward conduction. The rest of the inductor current is forced into the output capacitance until the remaining magnetic field in the inductor is depleted.

Step 3: Repeat very fast to reduce inductor size and ripple current required (100s of kHz or MHz speed). Vary the duration for which the inductor is shorted in step 1 according to how much charge you need to put on the output capacitor. You could figure this out open-loop by noting that output current at the high voltage side is in charge per second, output voltage is equal to charge over output capacitance, calculating the time taken for the ramp to grow and decay, etc. Or you could design a closed loop control scheme that looks at the output voltage and converts it to shorted duration for you (this is what most integrated circuit boost converters do).

In summary, you dump current into a inductor to build up a magnetic field, then you use the inductor's magnetic field to yeet current up over a large voltage difference.
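The three steps above can be sketched as a toy simulation (idealized: no diode drop or resistive losses, and all component values are illustrative):

```python
# Toy discrete-time simulation of an ideal boost converter: just the
# volt-second balance from steps 1-3, no parasitics. With a fixed duty
# cycle D and a resistive load in continuous conduction, Vout settles
# toward Vin / (1 - D).

def simulate_boost(v_in=5.0, duty=0.5, L=100e-6, C=100e-6,
                   r_load=50.0, f_sw=200e3, cycles=10000, steps=20):
    dt = 1.0 / f_sw / steps
    i_L, v_out = 0.0, v_in
    for n in range(cycles * steps):
        on = (n % steps) < duty * steps      # switch shorting the inductor?
        if on:
            di = v_in / L * dt               # current ramps up (step 1)
            i_cap = -v_out / r_load          # load discharges the cap
        else:
            di = (v_in - v_out) / L * dt     # field pushes into output (step 2)
            i_cap = i_L - v_out / r_load
        i_L = max(i_L + di, 0.0)             # diode blocks reverse current
        v_out += i_cap / C * dt
    return v_out

print(simulate_boost())  # converges toward Vin/(1-D) = 10 V for these values
```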


I know how boost converters work but reading your great explanation made me realize how similar they are to an impact wrench. Ever wonder how an impact wrench creates such massive torque?

Spin up a flywheel, then let it hit the dogs, greatly amplifying the torque through stored kinetic energy, similar to an inductor being dumped.

https://www.youtube.com/watch?v=xQzqNnWG21s

I always like how electricity can be compared to mechanical and hydraulic systems, it's not always perfect but there is obviously a lot of overlap between voltage, current, pressure, flow, torque and rpm. Power is the common thread.


Well it actually is physically equivalent considering: speed * torque = power = voltage * current.

Switching voltage regulators are just gearboxes for electricity.

Maybe you could even say LDOs are kind of like brakes on a car, since they throw away energy as heat to regulate speed.


I don't agree, transformers are more like gears, while switching boost converters are more like impact wrenches.

LDOs as brakes, yes, I see that.

However the analogy is never perfect even if useful.


Thanks! It’s starting to make sense.

How does this square with conservation of energy though?


Shorting the inductor temporarily converts some of the energy in the stored electric field of the input capacitor to "stored" magnetic field in the inductor (not quite, since it's only present when current is flowing, but close enough). Roughly the same amount of energy is eventually converted back into stored electric field in the output capacitor, minus the losses from parasitic capacitances/resistances and radiated emissions.

To keep things simple, imagine a lossless boost converter. In terms of power-in and power-out, a lossless converter has the same input and output power. If the converter output is doing real work (resistive load), the output power is equal to the output voltage times the load current. Therefore, at a lower input voltage, the converter sees a much higher input current than the load current - it has to, because power-in equals power-out. If it intuitively feels like you're pulling more energy out of the input capacitor than you're pushing into the output capacitor because of the temporary shorting of the inductor, remember - it's not lost to real work, just cleverly exploited by the transformation to magnetic field to losslessly overcome the difference in potential energy between the input charge at low voltage and the output charge at high voltage.

  Energy in cap: E = 1/2 * C * V^2
  C = Q / V
  E = Q * V / 2
  Energy in inductor: E = 1/2 * L * I^2
  I = dQ/dt
I don't know how to write out the integral notation on HN, so you can fill in the blanks (sorry) - integrate the inductor current ramp up and ramp down portions, set equal to input charge pulled and output charge pushed, observe that energy is conserved with less charge at higher voltage on the output.
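Filling in those blanks numerically, for a single idealized pulse starting from zero inductor current (all values illustrative):

```python
# One lossless boost pulse: ramp the inductor up for t_on, then let it
# ramp back to zero into the higher output voltage. Verify that the
# energy drawn from the input equals the energy delivered to the output.

v_in, v_out, L, t_on = 3.0, 9.0, 10e-6, 2e-6

i_pk = v_in * t_on / L             # current at the end of step 1
t_off = L * i_pk / (v_out - v_in)  # time for the current to ramp to zero
q_in = 0.5 * i_pk * (t_on + t_off)  # total charge drawn from the input
q_out = 0.5 * i_pk * t_off          # charge pushed onto the output

e_in = v_in * q_in
e_out = v_out * q_out
assert abs(e_in - e_out) < 1e-12  # lossless: energy balances exactly
# Less charge lands on the output (q_out < q_in), but at higher voltage.
```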


Thanks. So basically the energy to increase voltage comes at the expense of reducing current or am I not summarizing that correctly?


You really need to design it yourself to grok it. It's weird. Try using Falstad's online simulator.


What’s the best way to boost a DC voltage really high, like into the megavolts?


For megavolts just rub some cat's fur over an ebony rod.

Megavolts.

Aka static electricity.


That would be an _ebonite_ rod, not ebony:

https://en.m.wikipedia.org/wiki/Ebonite

Fun fact: ebonite is used to make fountain pens.



