This post nicely sums up why I just cannot get excited about anyone's announcement of some new and (on paper) awesome SBC with specs much better than those of a contemporary Raspberry Pi.
Software ecosystem support is THE killer feature for this class of device. And the RPI has, despite its fundamental shortcomings (needing the proprietary GPU to manage bootup, etc.), the best story in that particular department by so much that it's not even a contest.
> And the RPI has, despite its fundamental shortcomings (needing the proprietary GPU to manage bootup, etc.), the best story in that particular department by so much that it's not even a contest.
Exactly!
On the RPI if I want to watch HBO Max without stuttering, all I do is apt install Chromium in Raspbian Buster, fuck around with a dozen or so startup flags, give up, try Kodi, notice that there's no official plugin, install a conceivably legal plugin from "http://k.slyguy.xyz", unzip it, twiddle some quality settings, and put up with intermittent stuttering that's slightly better than when I started the whole process.
I'm just guessing at the Kodi part[1] since I've only made it to the "give up on Chromium" part so far. Anyway, it's probably less work than what the author describes.
I have a Pi 4 and a RockPro64, both with LibreELEC (basically Kodi) installed.
The video playback performance and stability of the Pi 4 beat the RockPro64 handily.
Using Kodi via LibreELEC on a Pi 3 (a B, IIRC, not a B+) with x265 was a mixed bag. Unlike on the Pi 4, there is no hardware acceleration, so it runs the general-purpose CPU cores hard. 720p was fine, but without extra cooling, 1080p in the summer months would eventually (60 to 90 minutes) cause it to thermal throttle and playback would fall apart. Given how warm it felt soon after starting playback, I was surprised it managed that long before throttling. You will likely get much better results with a cooling solution of some kind, particularly an active one.
> If it needs a fan, I wouldn't use it as a media player.
Same here.
You might get away with higher-resolution x265 with passive cooling (a really good heatsink), especially if the device is in a reliably cool environment. But around the time I started using media encoded that way, the Pi 4 was readily available, which decodes x265 in hardware, so it has no similar heat problem (the Pi 3 is now used for other experiments). Unfortunately, current supply issues might preclude that upgrade option!
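If you want to catch this kind of throttling before playback falls apart, the generic Linux thermal interface is enough to watch; this is a minimal sketch, assuming the usual `thermal_zone0` path (zone numbering varies by board, and on a Pi the firmware tool `vcgencmd measure_temp` works too; the Pi firmware starts throttling somewhere around 80-85 C):

```shell
# Read the SoC temperature in millidegrees C; the sysfs path is an
# assumption that holds on most boards but not all, so fall back to 0.
t=$(cat /sys/class/thermal/thermal_zone0/temp 2>/dev/null || echo 0)
echo "SoC temp: $((t / 1000)) C"
```

Wrapping that in a `watch` or a loop during playback will show whether you're creeping toward the throttle point.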
Streaming is a huge weakness of the Pi platform. It seems like the big sites change their DRM every few months, and everything stops working.
A couple of years ago, I could happily use a Pi400 as a streaming box for my TV. Today, it's only good for playing videos off of an external drive with stock Raspbian. This was one big reason why I cancelled my Netflix subscription.
In fairness though, I seriously doubt that other SBCs have better streaming support. Is there a project like OpenWRT for HDMI sticks like Roku, Fire TV, etc?
The amount of performance that these SBCs are capable of delivering makes them attractive for power users that want to use them as little Linux desktop machines.
Only to be incredibly disappointed when the boards don't deliver what their specs alone led them to expect.
But if you get almost all the way through watching an entire episode of Barry on your gateway and then it stutters, trust me-- you're going to loudly ask why the hell this gateway won't do the goddamned thing it's supposed to.
The process you link doesn't seem to be related to an RPi; rather, it's about getting Kodi to play HBO. Am I misunderstanding something? Would this process be any easier on a Rock64?
The day the RPi's browsers can play HW-accelerated video without stutter, it's truly ready for the desktop. Then again, even with Intel GPUs' excellent Linux support, getting HW-accelerated (4K) video in browsers isn't possible without some funny business (at least on Wayland).
Interestingly x86 ChromeOS machines with low memory (4GB) didn't have issues with HW accelerated videos, I wonder whether the ARM ChromeOS machines of that era had issues?
It so often comes to some sloppiness on the driver side. I can't help myself but think of "cha bu duo". In my experience Chinese companies (i.e. where most other SBC SoCs are designed) are notorious for having all the ingredients, yet making silly decisions that aren't really even for cost-cutting.
Exactly. Just reading this gave me PTSD and makes me glad I don't work with embedded anymore.
My memories are of Indiana Jones-esque gymnastics after stepping on functionality that should have been solid... but turned out to have either been not implemented or implemented in a known-broken way.
It makes me appreciate how good enterprise, desktop, and server software actually is.
And understand why Apple and Google both put in the approval speed bumps they did for apps.
The minimum quality floor is far lower than you would expect...
I think the only sane use of most niche embedded stuff is if you're building a product for a single use case and have a team. Aiming for general purpose & everything works? Hell no!
These embedded devices have needed something like UEFI for well over a decade now. Back in the old old days, when they shipped with 4MB of storage for kernel, OS, and application, it made a bit of sense to have everything stripped to the bone like this, but storage is so cheap now that even bottom-of-the-barrel hardware ships with 64MB. There is plenty of room for a standardized boot and hardware discovery process, yet nobody seems to have any interest in actually implementing it.
Resuscitating OpenBoot would be great for these devices. It used to be the BIOS of PPC Macs and Sun SPARCs. OpenBoot "programs" (PCI video card BIOS, PCI network card BIOS) were actually Forth bytecode compiled on the fly to the target architecture upon boot, so you could have architecture-independent extension cards.
I won't be buying any more ARM SoCs unless they start shipping with UEFI, ACPI or SBSA like ARM servers do. That includes Raspberry Pis, and laptops and tablets using ARM SoCs without SBSA.
Check out Tow-Boot. It's not 100% there yet, but I'm using it to EFI-boot NixOS on a PinePhone; granted, it's the Allwinner one, not the Rockchip one, but still.
Does it still require DeviceTree or custom images? Because that's what I'm trying to avoid, I want to stick with devices I can load any generic ISO on. So far the only thing in the ARM space I've seen are servers.
You shouldn't need any "ecosystem", just a bootloader, TF-A and a kernel. The rest should be abstracted away for you by the kernel, unless you're doing something heavily reliant on details of the HW, like implementing smartphone functionality.
And that's how I use all of my SBCs (I just counted: I have 7 different SoCs across my various boards and mobile devices). I put a regular distro on all of them (Arch Linux ARM in my case, but it can be anything; that's the point, the distro doesn't need to "support" my board) and I only figure out my own kernel and bootloader builds. The bootloader can be figured out once and forgotten about. The kernel just needs a quick update now and then, if I care. Nothing much harder than a make && make install with some env vars set and a cross-compiler installed on my workstation.
I don't need any special board specific software ecosystem provided by some foundation for normal desktop or server uses. It's already there (any GNU/Linux on ARM distribution must work), and it's generic. If the SoC is so bizarre that it can't just run U-Boot/TF-A/Linux/any distro in rootfs in that order, and requires something custom, including non-standard tools in userspace to control basic things like display output, etc., that may not be worth bothering with, but that's only really a domain of Raspberry Pi, and not common anywhere else.
Some distros provide generic kernel builds and a proper bootloader/device tree already, for many boards. So if someone is not interested in building their own, they don't need to. That may limit the choice of the distro though. But it's just an artificial limit.
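For anyone curious what "make && make install with some env vars" amounts to, here is a minimal sketch of the cross-build environment; the toolchain prefix and install path are assumptions (aarch64-linux-gnu- is the common Debian/Arch packaging), and the real invocations depend on your board's defconfig:

```shell
# Sketch of a kernel cross-build setup as described above. Adjust the
# prefix to whichever aarch64 cross-compiler is actually installed.
export ARCH=arm64
export CROSS_COMPILE=aarch64-linux-gnu-
# Typical invocations inside a kernel source tree (shown, not run here):
#   make defconfig
#   make -j"$(nproc)" Image modules dtbs
#   make INSTALL_MOD_PATH=/mnt/rootfs modules_install
echo "building for $ARCH with ${CROSS_COMPILE}gcc"
```

With those two variables exported, the kernel's top-level Makefile handles the rest, which is why the process is the same across very different boards.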
I've started looking more at thin clients to take roles of small servers that people go to Raspberry Pis for. Small cheap Intel Atom based thin clients usually don't even outperform the Raspberry Pi 4 in either peak performance or power efficiency, but they're close enough in both areas that for "I just need a small server" it's not a meaningful difference. Plus they come in a case, should have reliable storage and decent IO included, and they just boot and run whatever bog standard Debian ISO I can download from debian.org with zero issues now and for years and years to come.
If you can find them for a good price on eBay or the likes, it's well worth the convenience over any ARM board if all you want is to run a server - I find it preferable to the Raspberry Pi, and especially other ARM SBC hardware.
Expectation: dirt-cheap, power-efficient and reasonably fast SBC computers so you can dedicate one per service (or form a cluster!).
Reality: a RPI 4 costing as much as a whole Intel NUC, but slower, without upgradable RAM, without native ports to mass storage (SATA/M.2), and not as reliable.
Didn't even have to think that far to see that SBC computers are a joke^W^W not there yet. Considered buying a NanoPi or a Raspberry CM + some board to build a router, with at least 2 physical Gigabit ports. Tallying up all components (sbc, case, power supply, heat sink, etc.) made the NUC with 4 gigabit ports look a bargain.
Excellent write-up. The world of cheap SBCs is a hot mess right now. I recently researched and bought one for my own project, and I feel like I've commented about it on HN several times already, but here we go again.
A Raspberry Pi is by far the easiest way to go, but I've found they have become way too expensive. If anyone is looking at cheap SBCs, I suggest you pick one with Linux kernel mainline support. Here is a good place to start: https://linux-sunxi.org/Linux_mainlining_effort
A lot of the newer SBCs aren't supported yet. This means that you will be stuck using the vendor's Linux distribution until support lands in the mainline kernel (which could be never). If you get an older SBC with mainline support, you can run 3rd-party distros like Armbian. https://www.armbian.com/
I’m really surprised that the only way to get a consumer ARM chip is to buy an SBC or else a fully assembled computer (e.g., Mac Mini). I really want to build my own low-power server, but it seems like the only way to go is to buy a Raspberry Pi compute module and a carrier board (I’m not even sure where to find one of decent quality), and even then I’m either stuck with the SBC memory/gpu or maybe I can supplement with some PCIe stuff (although it seems like a complete gamble as to whether or not the Pi will support a given device).
Why isn’t there a market for standalone ARM chips and motherboards like there is for x86?
The answer to this is essentially that CPUs are only cheap if you make them in huge volume, and "I want something that's like a PC but not actually PC compatible" is a pretty small market, so an SoC designed and manufactured for that niche would just be too expensive for anybody to buy. So if you try to target that niche you are going to be doing it with an SoC designed for some other market, and accepting the tradeoffs that come with it. The traditional high-volume Arm CPU market is mobile, which is what almost all these SBCs are borrowing parts from, and you don't have PCIe on a mobile phone, so no PCIe on an SBC using a mobile part. Possibly as more Arm server chips appear something usable in a desktop form factor will appear, but I suspect that you'll end up just seeing a different set of tradeoffs instead (e.g. much higher price per part).
Unless I'm missing something, an ARM CPU is a PC in every interesting respect. I doubt a SATA, PCIe, USB, etc device cares whether the chip is ARM or x86. So compatibility with the CPU ecosystem doesn't seem like a very compelling reason for low demand. And I would think the M1 proves that there is demand for ARM CPUs (or at least fast, low-power chips).
Even if there is some constraint that prohibits scaling up ARM CPUs (e.g., fab capacity shortage) and thus we need to stick with mobile components, what's stopping someone from selling mobos for mobile chips? If the answer is "mobile chips have to be industrially attached to a board" then why aren't there SBCs that are a drop-in replacement for a PC mobo+x86-chip (or if these exist, why are they so niche)?
It's not a PC in the very important "just boots Windows and runs my existing software" respect. That is what the overwhelming majority of purchasers in the "desktop PC box" market have as an absolute non-negotiable requirement. If you are not an x86 PC compatible, then you're in the really small niche.
You could in theory, yes, do a PC-mobo form factor SBC. But no PCI, no SATA, no external DRAM: it doesn't look very much like a PC even if you've stuck it in a PC case...
Windows runs fine on at least a few ARM SoCs and I'm sure most have no SATA. Maybe some have PCIe or at least NVMe. BTW, they were available commercially prior to M1 Macbooks.
The vast, vast majority of software written for Windows doesn't care about those buses. And emulation allows for the x86 programs to run without being rebuilt.
Windows' biggest drawback in this regard is that hobbyist/enthusiasts generally can't add support for any-old-board. You've got to be a vendor who is willing to get in touch with Microsoft to make it happen.
Eh, that's a transient concern. Seems like M1 is going to force the entire ecosystem to support ARM sooner or later (laptop manufacturers are going to start supporting ARM to keep up with Apple, and the software will have to follow suit). Windows and Office already support ARM.
> But no PCI, no SATA, no external DRAM
Why not? Do mobile ARM chips not support those things? I know the RPI compute modules have PCIe.
Whether or not it's a transient concern, manufacturers operate in the market that currently exists, not the one that may exist in the future, and the market that currently exists doesn't include a lot of demand for non-Apple ARM PCs.
The best way to get a standalone ARM chip is to pretend to be a company making a sprinkler controller [historical reference there] when you e-mail the sales rep for your region and ask them to send you a half-dozen samples, for free. Your story is that you have, like everybody in your position, laid out a circuit board and are ready to solder some on and try it out, for a low-to-mid volume (100,000 to 200,000 units) product, with more products in the planning pipeline.
Right now I'd recommend getting in on the Turing Pi 2 Kickstarter as far as wanting a good carrier board. There are some others out there, and I'd recommend Jeff Geerling's reviews of them for how well they work, but I don't know a lot about any specific ones right now.
Direct consumer sales of CPUs is pretty niche even on x86, and on ARM side this whole SBC business afaik is pretty niche compared to the commercial (embedded) use. I can easily understand why no-one bothers to invest in that small sliver of a market
I don’t get this. x86 CPU sales isn’t niche. Last I checked, the thousands of PCs at my university alone were all x86.
Demand exists, supply does not. Why? Idk. Maybe it’s a consequence of bad business decisions at ARM.
Apple has recently shown that ARM chips can be competitive with x86. However, I don’t see why we couldn’t have had that a decade ago. Why did we have to wait for a licensee to take this matter into their own hands? It’s like the business execs at ARM are asleep at the wheel. If I were a shareholder, I’d be wanting some different leadership.
You might be interested in this recent Register opinion article, which (in passing) makes the point that Arm and x86 have thus far succeeded largely by concentrating on things the other is not doing rather than by directly trying to take the same path:
https://www.theregister.com/2022/05/16/riscv_world_dominatio...
If you were a shareholder are you sure you'd rather have Arm focused on trying to sell desktop chips rather than those 30 billion largely embedded and mobile cores they apparently shifted last year?
The key was “direct” sales of CPU - most x86 systems you see are fully integrated and ship with a CPU already loaded from Dell, etc. Few people “build their own”.
Just making desktop-class chips is useless without software. Apple can force both sides of that equation, other vendors can't. You can buy laptops with Windows and ARM, but nobody does for good reason.
The consumer PC building market is a remnant and kept alive by gamer culture, it's an anomaly tied closely to x86 by software and other network effects of compatibility.
This is what you get with 99% of the ARM SBC trash. They will generally boot the one hacked-up kernel the vendor ships, which is overwhelmingly about 5 years out of date because it's the kernel they hacked up for the original high-margin customer, and by the time the SoC is publicly available it's basically abandonware.
Outside of RPis, which have boatloads of community support, about the only boards that aren't this way are the SystemReady ES/SR certified ones. There is an IR band too, but it's still using DT, meaning that the Linux kernel devs will break the machine at random times, requiring firmware/DT updates when newer kernels are loaded. The ES/SR bands use UEFI+ACPI and provide a standardized platform that is the same as what one expects of a random PC.
Agreed. In this case, the vendor doesn't ship a working kernel at all. Their view is that the community will handle that. Massive props to ayufan, without whom most of the Pine SBCs would likely have no bootable image.
I experimented with a RockPro64 a few years ago before learning this lesson. The hardware sounded wonderful. Unfortunately, there was some kind of conflict between the PCIe slot and the GPU. It was possible to get hardware-accelerated video, or use the PCIe slot for additional SATA slots, but not both at the same time. I spent a fair amount of time trying to dig into it, but never did establish whether the problem was in hardware, the device tree, or the drivers.
I have a Rock64 and I don't remember it being that difficult to setup. Maybe I was just using it as a headless machine, but to get a working Linux install shouldn't have been this hard.
I typically use Arch Linux because it's very barebones, and Arch Linux ARM has a working setup [0] for the Rock64. You do need to write various parts like U-Boot bootloader onto the SD card manually, but everything has been pre-compiled. If you prefer an image that you can directly flash onto an SD card like Raspbian, then maybe you can try Armbian [1].
These should get you a working Linux install rather quickly, bringing you to up to the "sixth circle" as written by the author.
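As a sketch of what "write various parts manually" means for Rockchip-family boards like the Rock64, the loader stages get dd'd to fixed sector offsets (64 for the SPL/loader and 16384 for U-Boot proper, by Rockchip convention; check your board's docs). Demonstrated here against a scratch image file instead of a real SD card, with empty placeholder files standing in for the real artifacts:

```shell
# Stand-ins for the real SD card and the real bootloader artifacts.
truncate -s 64M sd.img            # pretend SD card
truncate -s 4K idbloader.img      # pretend first-stage loader
truncate -s 4K u-boot.itb         # pretend U-Boot FIT image
# On real hardware, of= would be the SD card device, e.g. /dev/sdX.
dd if=idbloader.img of=sd.img bs=512 seek=64 conv=notrunc status=none
dd if=u-boot.itb of=sd.img bs=512 seek=16384 conv=notrunc status=none
echo "wrote loader stages into $(stat -c %s sd.img)-byte image"
```

The conv=notrunc is the important part: it writes into the existing image without truncating whatever partitions follow.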
Yes indeed, OP specifically chose "hard mode" here by not using a distribution that supports it with a prebaked image.
That's fine for the OP! They probably wanted to experiment and tinker, no problems there. But it seems to have given a lot of people in this thread the wrong idea about how well supported these devices are for normal users.
If the author likes NetBSD then they are probably familiar with pkgsrc, there is a package for u-boot for the Rock64 that will take care of combining it with the ARM Trusted Firmware.
I'm currently debugging the lima DRM driver for NetBSD.
NetBSD works fantastically on my army of RockPro64 systems.
Thanks for the contributions.
It is much simpler due to folks like you putting in the hard work.
Can anyone explain why the ARM ecosystem is like this? So many comments in this discussion about how all SBCs other than the RPi are terrible to get booted. Why is it like that? Why hasn't the market settled on one bootchain?
* Vendor integrates a bunch of IP (Cortex, codecs, GPU), tapes out SoC design.
* Vendor has challenges hiring or retaining excellent software engineering team, because VLSI and hardware are the core competencies that make money, not software.
* Vendor's struggling software engineering team hacks copy-pasted code together until they can produce a board support package against a single Linux kernel version, with no care to upstream. Once the device boots in a single configuration, they ship it.
* Vendor's also struggling documentation team attempts to use sketchy descriptions from engineers to produce documentation. Often, internal interfaces are ignored or left undocumented since "it works"; only the external interfaces needed to make the chip run are documented.
Now, an OEM buys the SoC and BSP. Sometimes they do some extra work to get the BSP to work with a newer kernel, and sometimes they don't. And if they do, they hack on top of the hacks, so the divergence increases and now there are multiple devicetree versions floating around (as happened with this Rock64 board).
I wonder at what point, if ever, the economics of ARM chip makers [1] starts to work in favor of ARM chips supporting full UEFI and ACPI for hardware initialization like x86 boxes do. I may be mistaken, but I thought some more "enterprise grade" ARM hardware already does UEFI+ACPI, as well as perhaps the Surface Pro X Windows 10 ARM device with a Qualcomm ARM SoC?
You'd imagine at some point the maintenance costs for the chipmakers themselves as well as the increased interest / adoption rates from customers would tip the economics of all of this firmly towards standardization of this mess. Then again, I may not fully understand the economics here, or how much cheaper the current hacky approach is.
[1]: Looking at you, Rockchip, Amlogic, Mediatek, Unisoc, Allwinner, Broadcom and the list goes on. I hope the Raspberry Pi Foundation + Broadcom can take the opportunity with the Raspberry Pi 5 or something to lead the way, but I'm not optimistic at all.
Whether it's UEFI or U-Boot loading the DeviceTree blob doesn't really matter, no? A correct DeviceTree matching the drivers which ship with a given kernel version is still necessary.
The big problems are when:
* The DeviceTree is incorrect but works anyway because the peripheral drivers are ignoring it.
* The DeviceTree is correct but the peripheral drivers had to be patched to work.
UEFI doesn't seem to do much here, no?
And even moving to ACPI doesn't really change anything - DeviceTree is just a simplified ACPI really, if the ACPI descriptors and what the peripheral drivers do don't match up, it still doesn't work.
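To make the coupling concrete, here is a hypothetical DeviceTree node, loosely modeled on a Rockchip UART (the address, clock names, and interrupt numbers are illustrative, not copied from a real dtsi). Every property name here is an interface contract with one specific kernel driver's bindings; if the driver's expectations change, the blob silently stops matching:

```dts
uart0: serial@ff130000 {
	compatible = "rockchip,rk3328-uart", "snps,dw-apb-uart";
	reg = <0x0 0xff130000 0x0 0x100>;
	interrupts = <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>;
	clocks = <&cru SCLK_UART0>, <&cru PCLK_UART0>;
	clock-names = "baudclk", "apb_pclk";
	status = "okay";
};
```

This is exactly the mismatch scenario above: rename "baudclk" on either side and the node still parses fine, but the driver no longer finds its clock.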
Yes and no. DTs as used by these ARM boards are basically reflections of how the Linux kernel works on a given platform. A device driver developer needs to make a decision, say about how fast to program a clock divisor, so they then need a clock driver and a device-specific set of attributes that match 1:1 with the code paths in the Linux kernel which are making platform-dependent decisions. When another OS comes along, the attributes change because the driver model is different. This is visible within Linux as its own model evolves. The SystemReady/ACPI specs tend to require more self-describing buses (PCI/USB/etc.) where the devices are more fully encapsulated behind the bus, and things like power/clock management are either standardized by the bus interface or handled via the standardized ACPI methods. AKA, putting an ACPI device into a low-power sleep state is the same across every single device/platform, because ACPI can notify some other firmware/management entity to perform the work rather than trying to fine-grain manage the process.
So you're right, DT is a subset of ACPI, but it's only really capable of HW description. So all these platforms (like the RPi) end up with proprietary ways of getting their management engine (the GPU, in the case of the RPi) to perform platform actions. And that's needed because all these ARM SoCs now have processors etc. that aren't visible to Linux, because the model is no longer just about a central CPU doing all the work.
It's more like, the device tree describes the parameters needed to load and run Linux drivers for the hardware, given the exact git revision of Linux including the dts file. For one particular hardware variant/revision, of course.
I suppose my comment is more about a culture shift rather than a technical suggestion, one supported by UEFI+ACPI :) Mainline Linux boots on pretty much all x86 hardware without needing explicit device trees, because the hardware is autodetected and initialized, and the hardware plays nice with the Linux driver (preferably mainline, but even that's not necessary). We could have that on ARM SBCs, if vendors were better.
> And even moving to ACPI doesn't really change anything - DeviceTree is just a simplified ACPI really, if the ACPI descriptors and what the peripheral drivers do don't match up, it still doesn't work.
I'm getting to the edges of my understanding of these things here, but wouldn't ACPI make it easier to support newer / mainline kernel versions without the vendor needing to supply a device tree specific to a kernel version they maintain?
It would be nice if there was enough of a standard one could download a Debian ARM64 image from Debian's main website, and it would just boot and work on an ARM SBC. Even brand new x86 hardware can usually at least _boot_ and provide basic and useful I/O on Debian stable, and work pretty well on Debian sid.
> It would be nice if there was enough of a standard one could download a Debian ARM64 image from Debian's main website, and it would just boot and work on an ARM SBC.
NetBSD has one image that will boot on all supported ARM64 systems; it just uses the device tree or ACPI table supplied by the firmware to configure the hardware. There is nothing stopping Linux from doing the same.
This is how Linux works too, the issue, as I pointed out, is just device tree <-> peripheral driver mismatch.
In the sense of "supported" where the peripheral drivers are upstreamed _and_ the Device Tree matches the code that was upstreamed, this is exactly how Linux works. It's just that as the article documents with Rock64, this is sometimes not an easy combination to find. I'd imagine that NetBSD actually has the same issue with respect to needing a "supported" Device Tree for a given "supported" hardware version - there are probably interim Device Trees that will not work at all.
The issue is that the Device Tree and kernel are both moving targets which often don't match, because fundamentally the Device Tree is just a configuration file for the peripheral drivers and the details of the interface contract between the Device Tree and peripheral driver can change at any time.
The same issue would exist for ACPI too - as several posters have pointed out, the issue is mostly cultural.
x86 has a "plug n play" culture - hardware is sold mix and match and has to work with generic CPU/motherboard/firmware combinations, so the platform needs to be able to enumerate devices and allocate resources on the fly. Drivers are written from the ground up to work with what bus enumeration gives them and not to make naive assumptions about the configuration of system memory. Plus, PCI devices all have a standards-constrained communication mechanism with the host, as opposed to ARM peripherals, which can be any given combination of register-mapped, memory-mapped, interrupt-mapped, or multiplexed through other hardware.
ARM systems culturally have never had this constraint, so many drivers rely on hardcoded assumptions that break when they are introduced to another SoC. And for ARM, this is only getting worse as instead of making peripheral drivers more generic, vendors are instead building blob-HALs. I'm not sure what the solution here is unless the buy side (OEM server integrators, phone vendors, set top box manufacturers, SBC vendors, etc.) start demanding improvement. In the case of the Pi, the Pi folks instead have been slowly chipping away at making this migration themselves (i.e. slowly replacing DispmanX FKMS with native KMS), which is An Approach but maybe not A Great Approach.
This is certainly the dream of DeviceTree, but I don't see how this can work, given that "Linux" generally requires a different device tree depending on both the kernel version and which specific drivers are employed. Perhaps for specific ARM boards with a very high degree of maturity this is possible, but for example even for Raspberry Pi, a completely different set of DT overlays are required depending on the specific graphics driver configuration which is desired (KMS/FKMS/proprietary VideoCore etc.).
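For instance, on the Pi the graphics stack choice is toggled via firmware overlay lines in /boot/config.txt. The overlay names below are the ones the Pi firmware has shipped; whether each is available or still supported depends on the firmware and kernel version:

```ini
# /boot/config.txt (illustrative)
dtoverlay=vc4-kms-v3d      # native KMS driver
# dtoverlay=vc4-fkms-v3d   # older "fake" KMS driver
# (omit both to fall back to the legacy proprietary VideoCore stack)
```

Picking the wrong one for your kernel/userspace combination is a classic way to end up with a black screen, which illustrates how much the "one image boots everywhere" dream depends on these overlays staying in sync.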
Sometimes the above involves signing a NDA with companies like Broadcom so they can't release useful documentation without having a lawyer go over it first, and nobody wants to bother with that. Even worse, they can't release the code and have to release a giant poorly documented and buggy binary blob to support things like 3D acceleration and video decoding. And yes, fundamental features may or may not work, like resolutions other than 1920x1080 as described in the article.
It's the default state that happens with Linux ports. Unless there's an existing de facto standard OS whose boot interface can be piggybacked on.
But the booting is just a small part of the mess. For the rest I guess the answer is similar - the default state of things is "not working" and a lot of things have to go right to make things run well.
(How long did it take for RPi to get working hw video decoding OOTB with a distribution other than the board specific one from the SBC provider? Or does it even now?)
I appreciate making a post that ends in failure. Worthwhile to document your progress for others, as well as acknowledge that we don't always figure it out.
An old fish swims past a couple of young fish, and says, "morning lads, how's the water?" the young fish look at each other briefly, and one asks, "what's water?"
The water in this case is Linux.
All I want is a handful of RISC CPU cores, a few GB of RAM, and a framebuffer device with a good blitter. Something well-documented so I can play with the metal.
I went through hell trying to get a NanoPC-T4 (RK3399) board to work with NixOS. Getting everything working took up roughly a month of my free time and required engagement with multiple communities (including Armbian) and a bunch of digging to cobble together bits of information into a coherent build process (reading shell scripts from distros, etc).
I did finally get everything working, and I wrote some documentation to capture it.
What I learned in the process is that Nix is a great resource to see how to get lots of Arm chipsets to bootstrap with Linux. Specifically, the uboot config file.
By reading that, you can get a good view into what proprietary blobs and tools are necessary to get various boards working. As an added bonus, since Nix is declarative and versioned, weird boards should continue to work for the long haul.
Ironically, once I got the board up and running, I completely lost interest. Turns out the challenge of getting it to work was scratching a particular itch, not actually using it in practice.
Had the same battle with a Rock64 and a Pinebook Pro. RPi has UEFI support (https://github.com/pftf/RPi4) that makes the entire bootloader process so, so much simpler. I wish Pine would provide something similar for every chipset they produce. U-Boot is too hard to understand/script; give me a standard UEFI interface that I can use with native Linux tools like efibootmgr and systemd-boot.
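For contrast, this is the kind of thing a standard UEFI interface buys you: boot configuration becomes a plain systemd-boot entry file, the same on any board. A sketch, with the file living at something like /boot/loader/entries/linux.conf, and the kernel path and root device purely illustrative:

```ini
title   Linux on ARM64
linux   /Image
initrd  /initramfs-linux.img
options root=/dev/mmcblk0p2 rw console=ttyS2,1500000
```

No board-specific scripting, and standard tools like bootctl and efibootmgr can list and reorder entries like on any x86 box.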
Look into the Tow-Boot project. They're trying to replace the paradigm of distributions shipping their own hardware specific u-boot implementations with a u-boot platform firmware that gets saved to dedicated storage and presents a UEFI interface for distributions to use. A bunch of Pine64 devices are supported!
I went deep on the Pine64 ecosystem, the RockPro64s are pretty nifty with all their inputs, but all (5) of my Rock devices are collecting dust because, like OP, I discovered the ecosystem is still embryonic. I think progress will continue to be made, just at a snails pace.
Actually, the RockPro64's SoC, the RK3399, has SW support that is fully open and mainlined... from DRAM init code, to low-level suspend-to-RAM code, to the bootloader and platform firmware, with development originally mostly supported by Google. So if you think that SoC's support is in an embryonic state, I don't think your standards are reasonable. :) You can pretty much just do a common build of unmodified TF-A, U-Boot, and Linux and get a working system.
I can count platforms with lacking SW support on all my fingers, easily, but RK3399 is not it.
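A rough sketch of what such a common mainline build looks like for RK3399 (toolchain prefix, paths, and the target device node are assumptions; check each project's own docs before running anything):

```shell
# 1. TF-A provides the EL3 secure monitor firmware (BL31).
cd trusted-firmware-a
make CROSS_COMPILE=aarch64-linux-gnu- PLAT=rk3399 bl31

# 2. U-Boot consumes BL31 via the BL31 environment variable.
cd ../u-boot
export BL31=../trusted-firmware-a/build/rk3399/release/bl31/bl31.elf
make rockpro64-rk3399_defconfig
make CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)"

# 3. Mainline U-Boot emits a combined Rockchip image; the boot ROM
#    expects it at sector 64 of the SD card or eMMC.
sudo dd if=u-boot-rockchip.bin of=/dev/sdX seek=64 conv=fsync
```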
Not sure you read my comment thoroughly. I called the RockPro64 nifty, followed by mentioning all 5 of my Pine64 devices are collecting dust. It's embryonic across all 5 devices and that's pretty clear once you get to the camera, or phones.
I don't think you wrote your comment thoroughly. ;) You mentioned the RockPro64 and then said you have 5 Rock devices, but Pine64 doesn't have many Rock boards, just the RockPro64 and the Rock64. The Pro one is probably supported better.
It's too bad the ARM world seems vehemently opposed to UEFI/ACPI. On x86 platforms, you can rely on a standard interface for everything: loading the kernel from the EFI system partition, enumerating PCIe and USB devices, console input/output, even drawing simple 2D graphics to the framebuffer.
DeviceTree is just a file format, it's not really an alternative to the guarantees that UEFI/ACPI provide.
Yes, U-Boot seems to be designed to load kernel images from flash ROM. I doubt it can handle, for instance, loading a kernel image from a hard drive connected via USB.
Whether or not a particular strain of u-boot supports booting from a certain medium or interacting with a certain thing in general is often more a function of whether said strain of u-boot is new enough (measured through mainline) and whether the particular PHY and controller has a driver in u-boot. For example, the vendor u-boot on my RK3588 dev board can even PXE boot, because u-boot has that functionality and the vendor u-boot has GMAC and Ethernet PHY drivers.
Rockchip SoCs' built-in "maskrom" can read U-Boot (or whatever other bootloader you use; the Quartz64 with its RK3566 actually has a Tianocore EDK II port for full UEFI) from SPI flash, eMMC, or SD. The boot media after that are entirely up to said bootloader.
This is not somehow special to ARM, your x86_64 PC motherboard just comes with an SPI flash chip that has firmware like this flashed to it from the factory. Some SBC vendors are doing this as well now, e.g. ODROID is shipping u-boot+Petitboot (which seems like a bad idea, kexec is fragile from what I know) right on a flash chip included on the board.
The difference with RK3328 as found on the ROCK64 and RK3399 as found on the ROCKPro64 or Pinebook Pro is that you are not at the mercy of your vendor to provide you with firmware updates. Everything (TF-A, u-boot, and kernel) is included in the main repository for those particular projects and will be maintained going forward, and you can build it from source and hack on it at any time. It's not a u-boot fork from 2017 languishing in some dumping ground of a GitHub repository.
Nice write-up, very informative; thanks for taking the time to share. There are a lot of these SBCs around nowadays, and I've always been hesitant to explore them, for exactly the reasons described here (huge learning curve).
This story reminds me of the adventures I had getting a Wandboard with a Freescale i.MX6Q onto a current release rather than the official image, which shipped Xubuntu 11.10.
There were no binary blobs available for the GPU (actually a triple-GPU setup from Vivante), and newer versions of Xorg couldn't detect it. If you ran an apt-get upgrade on the official image, the very next boot could leave the UI broken.
I still have the board, but I use it headless for Pi-hole and WireGuard. The whole ARM experience becomes awful when it comes to graphics acceleration if the drivers aren't open source and are no longer maintained.
That sounds unfortunate; I was under the impression that i.MX-series SoCs were relatively FOSS-friendly, and in particular that the Vivante GPU would be well supported by the free Etnaviv driver. Quick googling suggests that Debian Buster (i.e. oldstable at this point) should work relatively smoothly:
> On 2017-12-14 Debian Buster (currently 'testing') was tested and running GNOME on Wandboard almost worked right out of the box. (In Buster the GNOME session uses Wayland by default instead of Xorg.) The only change needed was reserving contiguous memory, see the 'cma' section above.
I've tried using a RockPro64 with FreeBSD 13.0-CURRENT, and although it runs just fine without hacks, I've noticed the performance is not that good. FreeBSD is not big.LITTLE-aware, so it doesn't distinguish between the A53 and A72 cores. Cryptographic hardware acceleration is not very good either. On the other hand, most Linux distros work quite well.
Then there is the piece of flaming crap that is the ASMedia PCIe SATA card they sell along with the board in their store. That thing cannot handle two SSDs, and it's not a problem with power delivery. I ended up replacing it with a Marvell-based card.
I'm completely baffled by this comment. Itanium is dead and never made it to the desktop, so it has zero chance, while we have an actual mass-market Arm desktop available, and apparently selling very well.
I think they were arguing that it showed more promise at the time. I disagree though, I think it was simply too over-priced for the mainstream. M1 is very competitive both performance-wise and price-wise. I don’t think it will unseat the reigning champ x86 but do see them living side-by-side for a long time.
Developers should just stop putting up with this nonsense and start migrating to non-proprietary RISC-V boards. We already have low-cost options for developers who want to get their feet wet; the moment we have a decent community around a RISC-V microcontroller, a RISC-V router, and a RISC-V dev board, the ecosystem will be very desirable.
Help me understand how RISC-V solves these problems. It doesn't offer a standard way to enumerate hardware in the SoC, so you still need device trees; it's not a GPU, so you still need to deal with proprietary GPU drivers; it doesn't standardize clocking or DRAM controllers, so you still need U-Boot or some other first-stage bootloader; AFAIK, it doesn't standardize power management, so you still need secure firmware or equivalent. And the need to PXE boot comes from the board supporting only one boot device, which isn't solved by RISC-V either.
PCs (partially) solve these problems because the BIOS plays the role of secure firmware and first-stage bootloader. Hardware enumeration is solved by having most things be PCI-attached and by the BIOS providing ACPI to enumerate the rest.
I was talking about non-proprietary boards as a whole, not just the CPU. Also, if everything is documented, it's not a big deal to customize your bootloader to do all the needed setup. Of course standards would be very welcome, but at least you wouldn't need arcane knowledge to get things running.
I don't think access to SoC documentation would have solved very many of the problems described in the article. The author still would have had to fiddle with device trees, secure firmware, u-boot, pxeboot, and getting a kernel with the right drivers built in.
RISC-V also doesn't solve SoC documentation access. Most of Arm's CPU documentation is freely available, its all the other stuff SoC vendors bolt on that's hard to get access to. I don't see RISC-V changing that.
Arm really messed up by not defining a sane firmware standard for their architecture, similar to (minus the Wintel brain damage) what Intel did with the PC, providing a simple OS interface to query and configure hardware. Instead we have this device tree madness.
There’s a lot more to a computer than just a CPU or instruction set, and it’s borderline negligent for these companies to avoid developing standards for the surrounding ecosystem. RISC-V was initially guilty of this too, but I think they’re finally coming around and publishing some guidelines for what the rest of a RISC-V computer should look like.
I miss the days of standardized I/O ports, BIOS interrupts and memory-mapped devices at published addresses. Now, with ACPI, you practically need to run an entire VM in kernel to figure out what’s connected, and the complexity of drivers makes it very difficult for the hobbyist to interact directly with hardware.
All of the Pine64 stuff is like this and it's detestable.
I had a RockPro64 for a while that I had tons of trouble getting to work for almost these exact reasons (largely thanks to uboot). Got rid of my Pinebook Pro for similar reasons.
Pine64 makes tons of interesting devices then relies almost 100% on unpaid post-consumer-hobbyist interest for any usability outside of a hacked together barebones ecosystem. Even their unique PineTime, which has been out for some time, is at a barely functioning level and I have zero hope for the PineNote at this point.
I really wish they cared about more than spinning out flashy hardware and patting themselves on the back.
Isn't that the point of the org though? They spin out cheap hacker-friendly ARM hardware at cost for interested hobbyists and that's about it. They're very clear[0] that they don't necessarily provide software outside of the bare basics for it.
Well written! That sounds about right: the normal process for bringing up Linux on a new ARM single-board computer. Actually, it sounds like everything went fairly cleanly.
Doing it yourself is a lot of "fun" and a good learning experience.
What an outstanding read. It even had an element of suspense. At the very least, it makes us appreciate those cheap Raspberry Pi alternatives that work out of the box.
It's just the reference implementation of the EL3 "secure world" monitor, and it's where some of the power management stuff is implemented according to Arm standards (e.g. PSCI).
Technically I'm pretty sure you could do without it, but then you'd need to amend the devicetree and drivers to handle power state transitions.
On the Rockchip SoC in question, boot security is disabled by default, so you can compile it yourself from source and run your own modified version if you want to.
Trusted Firmware is an open governance project[1] and you build it from source. It's not 'required' by Arm. It's perfectly possible to boot without it, as demonstrated by most if not all Qualcomm CPUs.
Hi, John Doe from IRC here. I made this account just to reply to you.
The patch to the (already mainlined ages ago) device tree is not "on the manufacturer's website" because it is already on the Linux kernel's website. To which I linked. And it will be in Linux kernel 5.19.
Unlike Broadcom's fruit-themed tax evasion scheme, we strive to mainline support for the devices, so that you do not need manufacturer specific patches and mainline Linux just works out of the box.
The berries wrinkle their nose at this approach, as they'd rather use a proprietary boot chain with their own kernel fork and pre-made SD card images containing forks of Debian for maximum handholding.
Meanwhile, I'll be content booting fully open and auditable firmware with mainline bootloaders, kernels and userlands. We all have our own priorities.
The device tree is in mainline. The patch is for enabling the HW video decoder, which is currently being reviewed for 5.19. The person on IRC just pointed to the patch.
Did anyone ever figure out the hardware video acceleration on the Rock64Pro / Pinebook Pro? Bought one two years ago and it still doesn't seem to work with the official Linux image (Manjaro ARM?).
It's been supported since late 2018 in the kernel, based on the generic V4L2 request interface, so anything that uses V4L2 requests should have video acceleration working. That should be all GStreamer-based apps at the very least. Various codecs were gradually stabilized and moved to the public API over the last few years.
There's no support in FFmpeg for the V4L2 request API, and nobody has forward-ported the old Bootlin VAAPI wrapper driver to the stabilized public version of the API yet. So apps relying on those programs/interfaces don't get the acceleration at the moment.
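A quick sketch of checking for and exercising the stateless V4L2 decoders via GStreamer (element names vary by GStreamer version, the file name is a placeholder, and this of course needs the actual hardware):

```shell
# See which V4L2 decoder elements this GStreamer build provides
# (the stateless codecs plugin ships v4l2slh264dec and friends).
gst-inspect-1.0 | grep -i v4l2

# Decode a clip through the stateless H.264 decoder, if present.
gst-launch-1.0 filesrc location=clip.mkv ! parsebin ! \
    v4l2slh264dec ! videoconvert ! autovideosink
```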