Describing the driver as crap seems a bit harsh when the problem appears to be that the driver doesn't open up as much GPU functionality as the author hoped. Perhaps a better title for the article would be "Raspberry Pi GPU Driver Not As Open As I Would Like".
It seems that Phoronix is going to new depths here in sensationalizing titles.
They don't mention any reason why it would be crap as a driver. It would be 'crap' if it were much slower than the binary one, buggy, or lacking features. But that's not the case, is it?
Even if it is not as open as people would like, I suppose it does give some information (like the entry points) that can be useful in "full" reverse engineering...
I thought the article was pretty clear about this. The open source driver is just pipelining commands and data from the OS to the firmware blob. The open source driver isn't doing any work, it's just proxying to the firmware which is doing all the work. The situation is isomorphic to a closed source driver.
Note that the "binary blob" is not executing within the linux kernel or the arm core at all, it's running on the videocore, a mostly undocumented piece of a black box.
At some level this is the case for all hardware. Take an ordinary Ethernet NIC. Back in the day it was a piece of silicon on a PCB and some glue to get the data onto the CPU and into main memory. Such a chip is treated as just hardware: a black box at which we poke some registers, from which we get some interrupts, and through which we shuffle data back and forth.
Now, inside that black box of a NIC is a whole world in itself. The act of framing and deframing Ethernet with all the details around it is actually quite complicated; the spec even for 100 Mbit Ethernet is orders of magnitude bigger than, e.g., the spec of TCP/IP. There's a lot of stuff happening inside that NIC, and there are parts that break (e.g. as shown in some parts of this rather hilarious piece: https://www.youtube.com/watch?v=8Q8EFwKVKdA).
Now, as NICs got even more complicated and supported even more stuff (checksum offloading, more buffering, SSL offloading and many other things), many NICs today have a lot of their functionality implemented in FPGAs, which we sometimes don't see, and which in other cases we have to upload a binary blob to before the device can be activated.
The difference now is that the pre-made chip on our NICs can change, with software, after it's been shipped from the factory. This is increasingly common for many types of hardware, in this case for an OpenGL core, though perhaps even a level up from FPGAs. But we can still treat it as a black box through which we just send commands, poke at registers and so on.
Now, I'm not saying such stuff shouldn't be open source; it would be awesome if it were, since it'd allow us to fix and extend it. I'm just saying that what's done with the "shim" for the VideoCore on the Raspberry Pi is a common approach to interfacing with peripherals; the main difference is that the black box has to be programmed after it has left the factory.
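For comparison, this is roughly what the "program the black box after it leaves the factory" pattern looks like in a Linux driver. request_firmware()/release_firmware() are the standard kernel firmware-loading API; nic_upload_blob() and the firmware filename are placeholders for whatever device-specific register pokes actually push the image across:

    #include <linux/firmware.h>
    #include <linux/device.h>

    /* Placeholder for the device-specific upload path (MMIO pokes, DMA, etc.). */
    static int nic_upload_blob(struct device *dev, const void *data, size_t len);

    /* Load a vendor blob from /lib/firmware and push it into the device.
     * Until this succeeds, the hardware is just an inert black box. */
    static int nic_load_firmware(struct device *dev)
    {
        const struct firmware *fw;
        int ret;

        ret = request_firmware(&fw, "vendor/nic-blob.bin", dev);
        if (ret)
            return ret;                                  /* no blob, no device */

        ret = nic_upload_blob(dev, fw->data, fw->size);  /* poke it across     */
        release_firmware(fw);
        return ret;
    }

Plenty of in-tree drivers do exactly this; the open source part drives the upload and the registers, while the behaviour of the device is defined by the blob.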
I think it stretches the imagination to argue that OpenGL itself is implemented in FPGAs on the chip. Not that it's impossible, but is it likely, and on a $25 device?
No, the outrage here is not about this practice; it's about this being claimed as a victory, a distasteful PR move. Few are surprised or particularly offended by what Broadcom did here.
Well, I didn't claim that the VideoCore was an FPGA; it's likely some form of GPU/CPU. It's not implemented in anything running in the kernel or on the ARM core, though. FPGA or something else, the concepts are much the same as seen from the main CPU running an operating system.
Complexity-wise, comparing an ethernet card to a GPU is like comparing a bicycle to a commercial jet.
The funny thing is that the article specifically trash-talks what you're describing in defense of this practice: NICs that do TCP offload also tend to suck (at least according to Dave Airlie).