
Who needs 16GB of RAM on a Pi! Like, what are the actual use cases?



I covered a few potential use cases in my blog post [1], but I'll summarize them here:

1. LLMs / AI: you can run llama2:13b on the Pi 5 natively, though at a pokey 1.4 t/s or so (see the sketch at the end of this comment). Training small models for use with camera projects is easier too.

2. Web apps / consolidating containers: You could run a few 'beefy' websites off one Pi, as they're often memory-constrained more than CPU-bound nowadays (my Drupal site requires 256 MB per PHP thread). (Though an N100 mini PC could be a better option if you care less about energy efficiency.)

3. Experimental gamers (probably like 1/10,000th the size of the other markets) who want to run modern AAA games with eGPUs on arm64... I'm one of like 10 people I've heard of who have attempted this lol

4. Clustering enthusiasts: usually we have more dollars than sense, and having arm64 nodes that cost $120 new with 16 GB of RAM per node means we can have more raw container or MPI capacity than with 8 GB nodes...

[1] https://www.jeffgeerling.com/blog/2025/who-would-buy-raspber...
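
If anyone wants to try the LLM bit themselves, one option is the llama-cpp-python bindings; here's a minimal sketch (just one way to do it, not necessarily how I benchmarked), with the model filename and settings as placeholders for whatever quantized GGUF you've downloaded:

```python
# Minimal sketch: run a quantized Llama 2 13B GGUF on the Pi 5 with
# llama-cpp-python (pip install llama-cpp-python). The model filename is
# a placeholder - point it at whatever quantization you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-13b-chat.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=2048,
    n_threads=4,  # the Pi 5 has 4 Cortex-A76 cores
)

out = llm("Q: What would you do with 16 GB of RAM on a Pi? A:", max_tokens=128)
print(out["choices"][0]["text"])
```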


However, you can get an N100-based box for about $150 (including shipping) with 16GB RAM and 500GB NVMe storage[1].

The N100 has a more powerful CPU[2] and can use OpenVINO, which llama.cpp supports, so it gets better token performance than the Pi. The N100 also has far better storage performance thanks to its x4 M.2 slot, and if you need even more RAM you can upgrade[3] it to 32GB.

The RPi 5 was a very niche board to begin with, the 16GB option at $120 even more so IMO.

[1]: https://www.aliexpress.com/item/1005007511663921.html (semi-random example)

[2]: https://bret.dk/raspberry-pi-5-review/#Raspberry-Pi-5-Benchm...

[3]: https://www.reddit.com/r/MiniPCs/comments/179c9m1/comment/k5... (needs to be single module, not dual)


I'm not sure why you're being down-voted; if you're not using a Pi for the GPIO/HATs and tinkering of that kind, but for hosting software/services, a mini-PC destroys the Pi in every regard.

I have a couple of dozen Pis (I typically buy 3x of each generation), but recently I retired everything below a Pi4 and now use a Minisforum mini-PC I got for ~£260 with an 8c/16T Ryzen 7 mobile chip, 32GB RAM and a 1TB SSD. It can do what all of the Pis were doing before and still has a tonne of CPU headroom, and I can double the RAM to 64GB if I need more.

Factoring in the cost of the Pis, coolers, PSUs, storage, etc., it was literally cheaper than all of the Pis combined, and its performance and features are in a different league.

Its power consumption is also lower, by some way, than that of the number of Pis required to run the equivalent workload.


Great!

All of these are quantitative metrics I (and many others) don't care about.

- All but one of the machines in my home are now ARM. I like the consistency, e.g. I can share Nix derivations or Docker images.

- The N100 has no GPIO, which I like to tinker with from time to time (see the sketch at the end of this comment).

- The N100 does not support my favourite HATs, whether ones I already own or ones I'm considering buying.

- N100 boxes are one-size-fits-all. There's a whole array of Pi cases and thermal-management options that can be picked up for reasons ranging from the purely technical and practical (fan vs passive, sealed vs open, human/environment protection...) to the simply fun and engaging (e.g. the NESPi case with SSD cartridges).

- N100 boxes come in subtle variations that you have to care about. Pis are "fixed targets" physically, hardware-wise, and culturally, which makes them easy to consistently target, support, recommend, educate about, or find books for (e.g. as gifts for kids).

- N100 boxes are this century's nondescript dull beige boxes, while Pis are engaging through and through.

Pis and N100s are qualitatively different. A Pi5 is simply an upgrade over a Pi4. All that matters is that they're fast enough.
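
To make the GPIO point concrete, this is the kind of thing a Pi does out of the box that a stock N100 box can't; a minimal gpiozero sketch, assuming an LED wired to BCM pin 17 (the pin is just an example):

```python
# Minimal gpiozero sketch: blink an LED wired to BCM pin 17.
# The pin number is just an example - use whatever your wiring/HAT exposes.
from time import sleep
from gpiozero import LED

led = LED(17)

while True:
    led.on()
    sleep(0.5)
    led.off()
    sleep(0.5)
```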


If you don't care about performance, which you just said you didn't, then the RPi4 ticks all the boxes you mentioned.

Given the list of things you care about, the RPi5 is not really an upgrade over the RPi4, which is why I think it's a very niche board.

Had they instead made the RPi5 a cheaper RPi4, I think it would have been much more interesting. I bought some 2GB RPi4 boards when they were $30 each. That was a great price, and it enabled a lot of fun and interesting use cases.


> I bought some 2GB RPi4 boards when they were $30 each.

$30? That was a great price indeed.

In 2019 I bought a 2GB Pi4 from an approved seller for £44, which is about £56 in today's money. The 4GB Pi5 now sells for £57 from that same seller (the 2GB Pi4 sells for £42, or £32 in 2019's money)


Focusing on cheap would be the real “niche” area here. There are plenty of alternative boards if price is your driving concern.


The cheap ones I've seen are almost all Chinese boards with limited support. The appeal of the RPi has always been that you can have some confidence it will work.


Sure, but it's just unrealistic to expect to get everything, and at the price point you want.

Pi’s niche is the ecosystem. It hasn’t been about cheap in a long while.


It also uses a lot more power, all of the time, for something that will be on 24/7. There is a trade-off between these devices: the Pi is more expensive to kit out to the equivalent of the N100 machines (which are a bit quicker), but it uses less power all the time, especially under load.


This probably depends more on what hardware is being used rather than the SoC itself.

A bare RPi5 seems to idle around 3 W, which a bare N100 can absolutely (out)do (these are designed with S0ix/s2idle after all).

If you add gigabit ethernet, you'll add 0.5 - 2 W (depending on the controller).

If you add NVMe, you'll add 0-4 W - NVMe SSDs vary wildly in both their own power consumption and how they interact with CPU C-states and ASPM. Some SSDs prevent low C-states and thereby increase CPU power consumption a lot, for example (even in S0ix). This is generally true for every PCIe peripheral (including network controllers), but NVMe SSDs are popular troublemakers in this area.

I honestly don't believe any of the "N100 idles at 15 W" numbers. First of all, that substantially exceeds the power limits of the N100, by like a factor of three. So clearly the vast majority of that power isn't being dissipated by the N100. Seeing how N100 boards generally have only one heatsinked component, the power is dissipated elsewhere. Second, people rarely post their exact hardware and how they measured this. "Idles at 15W [because there's a spinning hard drive attached]" is not very interesting. Third, many of the N100 boards have ATX power connectors, and if you use any old ATX power supply, that alone can cause such a number. Fourth, if you're using a cheap power meter, many of them are still wildly inaccurate at the low end. And, fifth, as a reality check, even much older 1L PCs using actual desktop platforms, even with separate chipsets and all that jazz, don't idle at 15 W. Unless you're using Windows; then, maybe. But Windows can't and shouldn't be the yardstick for power efficiency.
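
If you want to cross-check what the SoC itself is dissipating (as opposed to the PSU, drives, and so on), the RAPL counters under /sys/class/powercap are a handy sanity check; a rough sketch, assuming the intel-rapl zone is exposed and readable on your box:

```python
# Rough sketch: sample the CPU package energy counter via RAPL and report
# average package power. Assumes /sys/class/powercap/intel-rapl:0 exists
# (typical on recent Intel SoCs) and is readable (may need root).
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"
INTERVAL_S = 10

def read_uj():
    with open(RAPL) as f:
        return int(f.read())  # cumulative energy in microjoules

start = read_uj()
time.sleep(INTERVAL_S)
delta_uj = read_uj() - start  # ignores counter wrap-around for simplicity
print(f"package power: {delta_uj / INTERVAL_S / 1e6:.2f} W")
```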


The 5 N100 systems I've run all idle between 5 and 8 W, PSU included (measured at the wall). The SoC might idle slightly lower, but I compare total system power draw when I test SBCs, since that's the only fair comparison.


Now you made me want to hook up my N100 box to my lab PSU so I can get some accurate numbers of what it does and how it compares to the wall.


fwiw I have a bunch of M720q's in the lab right now and these also idle at 6-8 W depending on what exactly they're doing (driving display(s) / headless / network). Except these are full-on Skylake-era desktop platforms, complete with a socketed desktop CPU and RAM and a separate chipset.


My mini-PC (with nothing added) idles at 6 W and draws 22 W under load.


Of course if you're powering this off solar in some remote location then we're into the niches where an RPi 5 might make sense, but otherwise it's just down to electricity cost.

How that works out in practice depends on workload though: while the N100 consumes more at idle, it can also finish workloads much faster, so it can potentially spend more time at idle. While the RPi 5 idles at around 4W (including an NVMe) and the N100 at around 15W, the RPi 5 uses 12W at full tilt[1], i.e. close to N100 idle power draw.

Alternatively the N100 with 32GB of RAM can replace two or more RPi 5's in terms of performance, so in that regard might even come out ahead.

[1]: https://github.com/geerlingguy/sbc-reviews/issues/21#issueco...


The cost difference in power probably works out to much less than $50/year. Modern PCs are very power efficient when idle.


Based on my current unit prices, the difference in idle power draw alone would come out to about £30 a year, so you would get the difference in price back within the life of the device. Europe has much higher per-kWh electricity prices, especially over the past few years, and is further along in the transition to green power technologies, so we tend to care more about the cost of power than those in the USA.
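
As a rough sanity check on that figure (the 15 W / 4 W idle numbers come from upthread; the ~£0.30/kWh unit price is an assumption, not my actual tariff):

```python
# Back-of-envelope check of the ~£30/year figure. The 15 W / 4 W idle
# numbers are from upthread; the £0.30/kWh unit price is an assumption.
idle_delta_w = 15 - 4                            # N100 idle minus RPi 5 idle
kwh_per_year = idle_delta_w * 24 * 365 / 1000    # ~96 kWh
print(f"~£{kwh_per_year * 0.30:.0f}/year")       # ~£29
```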


$50/yr means you make up the entire cost of the RPi in a little more than 2 years if you’re running it as a home server of some sort, for example.


Jeff,

Price is precisely linear, not polynomial! $5/GiB (price = $40 + $5 × GiB).

The graph isn't spaced correctly on the x axis, which causes confusion.


If you want (sort of) cheaper K8s nodes with similar or more compute power/RAM, you can reuse old devices (smartphones, tablets) and run postmarketOS on them.

Shameless plug: https://blog.denv.it/posts/pmos-k3s-cluster/


I mean, it's funny that people even have to ask: pretty much every piece of software treats RAM like it's free and unlimited nowadays. Even the most memory-conscious C++ programs are so bloated at compile time that you need swap just to build them on <8GB boards.

16 GB is really a minimum for anything that's not embedded.


More RAM for ZFS perhaps, Jeff?


Heh, indeed, that's my plan for the 16 GB Pi 5: swap it into the 32 TB all-SSD Pi NAS setup I've been running for the past year! https://github.com/geerlingguy/arm-nas


Bigger caches. Making entire ramfs or tmpfs partitions (filesystems, but in RAM) for applications or for tinkering with things: https://wiki.debian.org/ramfs . Virtualization / VMs, databases. Loading large files into RAM instead of having to read row by row / column by column from a HD (see the sketch at the end of this comment).

I'd rather have more RAM available unused than not have RAM available and need it. Been the general rule of thumb for me for the last 30ish years.
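
As a tiny illustration of the "load it into RAM" point, here's a sketch that reads a whole CSV into memory up front so later lookups never touch the disk again (the file path and column name are made up):

```python
# Sketch: slurp a large CSV into RAM once, then answer queries from the
# in-memory list instead of re-reading the disk. Path and column name are
# made up for illustration.
import csv

def load_into_ram(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))  # the whole file now lives in RAM

rows = load_into_ram("/data/sensor-log.csv")          # hypothetical file
hot = [r for r in rows if float(r["temp_c"]) > 70.0]  # no further disk I/O
print(f"{len(hot)} rows over 70 °C")
```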


RAMFS is a genius idea. That solves most of the SD card health and speed issues without needing to get a whole hard drive. I know Puppy[0] and MX Linux[1] were made to run like that too.

[0] https://puppylinux-woof-ce.github.io/ [1] https://mxlinux.org/


I used to run a Pi as a WireGuard entry point to my home network. I made the filesystem read-only and created a RAM disk, moving logs and any other writes to it to protect the longevity of the uSD; it had the added security benefit of a read-only FS. I'd remount it r/w occasionally and run updates. It ran flawlessly for years (at a time when I was killing a SanDisk uSD in Pis roughly once per week) until I decommissioned it.


I've been running a public-facing weather website on my RPi2 since 2014. I am still on the same SD card, since all HTML assets and logs are on tmpfs. The only things written to the SD card are entries to the DB once every 5 minutes.


That's pretty sweet.

My suspicion about the many uSD cards I've killed is that it was power issues, power loss, etc. In terms of wear, I don't think a typical Pi would be doing enough to wear them out unless it was being hammered.


Your kernel will already cache all IO in memory.

You can decrease the to-disk syncing to, e.g., once per day:

```sysctl.conf
# flush dirty pages to disk once per day (value is in centiseconds; 24 h = 8,640,000)
vm.dirty_writeback_centisecs = 8640000
```


Thanks for this information. That's actually really helpful for me to know in my server-administration dealings. I _hate_ disk IO and disk thrash. I was aware of the kernel caching, but it completely eluded me that I could modify this behaviour. I set it to 7200.


Good point, but I was more talking about distros designed for a smooth day-to-day experience. A user would probably want something like SquashFS (to save space on the SD card) and ZRAM (to conserve RAM) since all their files would be living there.


Now you can finally run basic Electron applications, like Slack. If you're lucky, you might even be able to run 2 at the same time!


Outside of a real use-case, RPi products are well-polished and fun to play with. There are few other products with an overall presentation - from design to marketing - that are as clean and well done. Personally, I enjoy supporting that.


> Who needs 16GB of RAM on a Pi! Like, what are the actual use cases?

If you use it as you would a PC, then it's actually not enough RAM. I have 16GB on both my laptop and desktop computers, and as I always keep browsers running, they're always out of memory, even with 16GB of swap added.

Just came in to my office.. and my office computer with the same spec had killed all the desktop applications due to memory overuse, just as it always does when I leave it alone for a few days.

Granted, I do have a lot of windows and tabs open; that's because I need to move away from stuff and do other things for a while, but when I go back I need it to be there just as I left it. But browsers are eating memory. All of them: Chromium, Firefox, Vivaldi... you name it.

For something working as a desktop PC I'm looking for way more RAM than a meagre 16GB. For a Pi, which I use for just a single purpose, I'm fine with the ones I have: 2GB, 4GB, 8GB (which I use for different things). I'll never run a browser on any of them though. No way.


K8s control-plane node, or even worker node. Four get you a single control plane and 3 workers, which is what most Helm charts require. There's a Pi hosting provider which is reasonably cheap, around €7/month (paid yearly), but with only 100 Mbit connectivity. Still good enough for a learning cluster. Where can you get that for less than €28/month?


It's theoretically viable to run a 16TB ZFS NAS, which would be perfectly respectable for SoHo/homelab workloads.

I've been looking to upgrade my aging PowerEdge T20 (I also hate the fan noise), so this is looking very interesting. I wouldn't be surprised if Jeff Geerling makes a video about this exact use case.


The latest Android phones are adopting 16GB now, so SBCs aiming for equivalent performance are going to do so too. One use for them, among many, many others, is running Android for development purposes. They also make great self-hosting servers that draw really little power.


It's just a bigger hammer, the only tool you will ever need in your life.


The use cases get born once you have that amount of RAM...


Would it still make sense to use a Raspberry Pi over an x86 mini PC in those cases, economics- and performance-wise?


Only your use case can tell you if it makes sense :) If you're looking for a cheap in-house CI cluster for building/testing on ARM, it makes a ton of sense, just as an example.


With more memory available, the cross-compiling constraints become less relevant.

Add one of these to your development environment, use it for building and packaging, and deliver to the lower-memory devices being shipped.

This can be a massive productivity boost.


Seriously, these days running a few dozen containers requires so much RAM. I'm running an Odroid with 32GB of RAM and finally feel safe from memory exhaustion.


I can't think of a single use case that wouldn't be served by a much faster mini-PC that, after counting the power supply, storage, case, etc., would fall in a similar price ballpark. The RPi and similar boards are a godsend when one needs easily accessible GPIOs without external interfaces adding points of failure and cost, but usually the amount of memory required in those contexts is much lower.


The bigger the system, the smaller the role that CPU/GPU power consumption plays in the overall sticker price. I would expect a 10TB-memory server with a couple hundred petabytes of flash storage to cost more or less the same regardless of which CPU is inside it.


I think it's mainly for embedding into products that require a standalone AI dataset of some sort. Robotics, perhaps, or production line defect detection.


Running Chrome?



