Raspberry Pi GPU Driver Turns Out To Be a Shim (phoronix.com)
115 points by ari_elle on Oct 25, 2012 | 29 comments



Wanted a flying pony. Got a pony.

An open userspace means that I will be able to use the GPU as far into the future as I like. It means that if anything is screwed up on my ARM processor I can see it and fix it (or thank the person who beat me to it). That's a good thing.

The code blob that is not open is what runs on the GPU. I'd love to have that too, so that OpenGL support could be extended beyond what the vendor chooses, or the chip could be ditched as a GPU entirely and used as a big parallel DSP. Broadcom didn't give us that.

Some reasons I can think of:

• GPU litigation – If you don't have a pre-emptive patent arsenal, open sourcing your GPU code and design is like inviting your competitors' lawyers over for tea and depositions.

• Unbuildable – The toolchain to build the GPU code is likely a cobbled-together mess of tools, many built to minimally functional in-house standards.

• Undocumented – They probably have not spent the hundreds of thousands of dollars to properly document how the GPU works. What documentation exists is possibly not in English.

• Unsafe – It is possible that the GPU has HCF[1] opcodes or sequences, or at the very least hasn't been proven not to, even informally. In particular, there could be thermal issues depending on how you drive it.

• IP Ownership – Portions of their implementation could be licensed under terms that do not permit release.

EOM

[1] http://en.wikipedia.org/wiki/Halt_and_Catch_Fire


A more apt analogy would be "Was told I was getting a pony but got a small plastic model of a mule".

The Pi folks made a huge song and dance about this yesterday; nobody can fault their media savvy, but in the end they didn't really release much.


I'm still happy. It wanders a bit from the topic, but I eventually gave away stacks of Broadcom-based WRTSL54GS wifi routers because their closed-source driver was stuck at Linux 2.4 and I needed 2.6 for some attached hardware. That went on for years. Having everything that runs on the host CPU open means things like this can't happen. Much better than a plastic mule.

Even further afield… When my first daughter was 3 or 4 we got her a magic wand for her birthday. A plastic star on the end of a plastic stick. She was thrilled, pointed it at her grandmother and said "I turn you into a bug!". She was crushed when it didn't work. Rather spoiled the whole day for her. She assures us she was going to turn her back.


The bit about your daughter keeps cracking me up. That that would be the first thing she would try... Thanks.


> She assures us she was going to turn her back.

She did. It all happened so quickly you didn't notice!


True, BUT, it doesn't look like there was really very much to it anyway, so while I wouldn't be jumping up and down cursing at them over this, I'd hardly call it an impressive achievement either.


Missed one (which I still consider valid): making it trivial for others to extend the firmware might endanger other products on sale. ATI learned this a few years ago, when it was discovered that a software upgrade was all that was required to turn a sub-$150 graphics card into a far more expensive model.

From the user's perspective, all they see is hardware that can Do So Much More with a little firmware jiggery, but from the manufacturer's perspective, they've spent billions developing that firmware. If they choose to differentiate products based on its configuration, having paid the infrastructure costs to reach that position of privilege, then as far as I'm concerned that's their prerogative.


While I can see a stereotypical manager coming up with those or any number of silly reasons, I can't agree with the code-related ones.

Litigation and IP ownership may be reasons enough; I'd rather not get into details there, but:

- Unbuildable - great, release it; someone will either pick it up and fix it, or they won't. Right now they're guaranteeing that no one will.

- Undocumented - same as above; also, I've never seen proper, up-to-date, useful documentation without errors. If you open the code, you may get some improvements, and the existing documentation may actually become useful. Otherwise we have nothing.

- Unsafe - that's pretty much the same situation as custom Android ROMs in use today, firmware which is not cryptographically signed, and many things that talk to SoCs with power management. If you corrupt the firmware in some way, or configure chips for a different voltage than expected, bad things will happen. This is not specific to GPUs.


There's a certain implied responsibility that comes with releasing source. While the community is clearly better off with any sort of source release than with none, their reputation could still take a hit if the source release has problems like these.


Those reasons are possible, but in the companies I've worked for the resistance to open-sourcing code is mostly just fear:

• the competition might use our code to make their products better than ours

• our code is obsolete or just plain terrible, and could hurt our company image

• we might have to spend a ton of time supporting users that made changes to the code

• if the community does a better job with the code then we might get fired


How about "the competition might use our code as a spec to make a cheaper compatible part in Asia and undercut our business altogether"?


The approach Broadcom have taken also makes it really easy for them to support different OSes.


Describing the driver as crap seems a bit harsh when the problem appears to be that the driver doesn't open up as much GPU functionality as the author hoped. Perhaps a better title for the article would be "Raspberry Pi GPU Driver Not As Open As I Would Like".


It seems that Phoronix is going to new depths here in sensationalizing titles.

They don't mention any reason why it would be crap as a driver. It would be 'crap' if it were much slower than the binary one, or buggy, or lacking features. But that's not the case, is it?

Even if it is not as open as people would like, I suppose it does give some information (like the entry points) that can be useful in "full" reverse engineering...


I thought the article was pretty clear about this. The open-source driver is just piping commands and data from the OS to the firmware blob. The open-source driver isn't doing any work; it's just proxying to the firmware, which is doing all the work. The situation is isomorphic to a closed-source driver.
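
To make that concrete, here's a minimal sketch of what a shim entry point amounts to. This is illustrative only; the names (vc_send, vc_receive, shim_glCreateShader), the opcode, and the message layout are all made up and are not the actual VideoCore interface.

    /* Illustrative sketch only: what an "open" entry point looks like in a
     * shim driver. All names, opcodes, and the message layout here are
     * hypothetical, not the real VideoCore interface. */
    #include <stdint.h>
    #include <string.h>

    struct vc_msg {
        uint32_t opcode;   /* which GL entry point is being requested */
        uint32_t args[4];  /* marshalled arguments */
        uint32_t result;   /* filled in by the firmware */
    };

    /* Stand-ins for the real transport (e.g. a mailbox or shared-memory
     * ring buffer); stubbed out so the sketch compiles. */
    static int vc_send(const struct vc_msg *msg) { (void)msg; return 0; }
    static int vc_receive(struct vc_msg *msg)    { (void)msg; return 0; }

    /* No GL logic at all: just serialize the call, forward it to the
     * firmware blob, wait, and return whatever it says. */
    uint32_t shim_glCreateShader(uint32_t shader_type)
    {
        struct vc_msg msg;
        memset(&msg, 0, sizeof msg);
        msg.opcode  = 0x1001;      /* hypothetical "CreateShader" opcode */
        msg.args[0] = shader_type;

        vc_send(&msg);             /* hand the request to the blob */
        vc_receive(&msg);          /* block until the blob replies */
        return msg.result;         /* all the real work happened over there */
    }

Everything interesting happens on the far side of vc_send; the open code contributes little beyond the message format.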


Note that the "binary blob" is not executing within the Linux kernel or on the ARM core at all; it's running on the VideoCore, a mostly undocumented black box.

At some level this is the case for all hardware. Take an ordinary Ethernet NIC. Back in the day it was a piece of silicon on a PCB and some glue to get the data onto the CPU and into main memory. Such a chip is treated as just hardware: a black box in which we can poke at some registers, get some interrupts, and shuffle data back and forth.

Now, inside that black box of a NIC is a whole world in itself. The act of framing and deframing Ethernet, with all the details around it, is actually quite complicated; the spec even for 100 Mbit Ethernet is orders of magnitude bigger than e.g. the spec of TCP/IP. There's a lot of stuff happening inside that NIC, and there are parts that break (e.g. as shown in some parts of this rather hilarious piece: https://www.youtube.com/watch?v=8Q8EFwKVKdA).

Now, as NICs got even more complicated and supported even more stuff (checksum offloading, more buffering, SSL offloading and many other things), parts of many NICs today have a lot of their functionality implemented in FPGAs, which we sometimes don't see, and which in other cases have to have a binary blob uploaded to them before they can be activated.

The difference now is that the pre-made chip on our NICs can change, with software, after it has shipped from the factory. This is increasingly common for many types of hardware, in this case for an OpenGL core, though perhaps even a level up from FPGAs. But we can still treat it as a black box, through which we just send commands, poke at registers and so on.

Now, I'm not saying such stuff shouldn't be open source; it would be awesome if it were, since it would allow us to fix and extend these devices. I'm just saying that what's done with the "shim" for the VideoCore on the Raspberry Pi is a common approach to interfacing with peripherals; the main difference is that the black box has to be programmed after it has left the factory.


I think it stretches the imagination to argue that OpenGL itself is implemented in FPGAs on the chip. Not that it's impossible, but is it likely, and on a $25 device?

No, the outrage here is not because of this practice; it's because this is being claimed as a victory, which is a distasteful PR move. Few are surprised or particularly offended by what Broadcom did here.


Well, I didn't claim that the VideoCore is an FPGA; it's likely some form of GPU/CPU. It's not implemented in anything running in the kernel or on the ARM core, though. FPGA or something else, the concepts are much the same when seen from the main CPU running an operating system.


Complexity-wise, comparing an Ethernet card to a GPU is like comparing a bicycle to a commercial jet.

The funny thing is that the article specifically trash-talks what you're describing in defense of this practice: NICs that do TCP offload also tend to suck (at least according to Dave Airlie).


I find this conversation, and the previous one, instructive.

The discussion illustrates the challenge that Linux has accommodating third parties. Two entire classes of software are categorically unable to embrace the 'rules' of open software: the graphics folks and the wireless folks (and to some extent the printer folks). This has meant that these areas have generally had a much poorer to non-existent user experience for Linux users than for users of systems with support for IP protection built in.

I think the 'shim' model is certainly one way to compromise here. I think the Raspberry Pi folks have done a great service to developers by helping this get done.


Every time I read an article like this it seems that the implied consensus is that companies with millions of dollars in yearly revenue are unable to hire good talent when it comes to hardware programming and all the "good" ones are open-source people who may or may not have worked for one of these companies in the past.

With all these "this thing is bad" articles it just seems like no hardware company has ever been able to write good drivers, and that they're completely unable to hire the right people to do the job. This is implied for the entire history of electronics.

But they also almost always come across as more opinion than fact. Does the Pi GPU driver suck? I don't know, because the article says so but doesn't explain exactly why. As someone else noted, the article is really pointing out that the company simply did not do certain things the way the community wanted.

I'm not a hardware programmer, so I don't know much about such things, but what do these people actually expect out of a piece of hardware that costs around $25-50 US? What exactly are the goals these people want that require the type of access they are asking for? What goals are they failing to achieve that make them decide the hardware and/or drivers suck? That's all I ask of one of these "this thing is bad" type articles.


> I don't know, because the article says so but doesn't explain exactly why.

I think it's pretty clear. What Broadcom released was a binary file and code to interface with it. It's not that the driver functionality is any different from what's already available; it's that all the important functionality is in the binary file, which was provided without any source code.

They made a big deal about the first open-source ARM GPU driver but, for the most part, it's still a black box, and it's not really useful to developers looking to change or improve GPU performance.


I get what you're saying, but that doesn't really say the hardware or its drivers suck. It just tells me that the community doesn't have the access they would like.

Complaining that the GPU isn't open is one thing, and I totally understand that; saying it sucks is completely different to me. Saying it sucks implies that there is something wrong with the hardware and/or its drivers. It would be similar to claiming that a MacBook Pro sucks because you can't swap the battery yourself.

Now, if someone can show where the binary file has a flaw in it, so that it could be said to be defective in some way, then sure, let the accusations of suckage begin. Especially if the flaw prevents usage of the device as described and it remains closed so the community cannot fix it. But if they neither fix it themselves nor allow the community access to fix it, then the platform will die anyway.

So, back to my comment, I don't understand the point of the article and others like it.


If the MacBook Pro were advertised as a laptop you can customise and play with, and then you found out you can't swap the battery yourself, it would not be surprising if some people who bought it for that reason said it "sucks".

Similarly, the Raspberry Pi was advertised as a computer you can customise and play with. It was built for educational use, and yet part of it is hidden away from study. This is annoying some people who expected more than they reasonably should have.


Again, I get that, but the target of the accusation is wrong to me. You can say the people suck at keeping their promises, or the company sucks at marketing things that are not true. But those things have little or nothing to do with the hardware that customers have in hand right now. What I read in that article is that the underlying hardware, to some even the whole concept, sucks because its graphics driver model is not as open as some would like.

There's been no indication that the hardware itself, the chips on the board, or the driver does not PERFORM as expected.

If the MacBook Pro were advertised in a way that turned out not to be true, then yes indeed, that "situation" would suck, but the computer itself would still be a good computer.


It's cool that phoronix linked and quoted Dave Airlie. It's too bad they didn't quote or respond to the last line of his blog post: "(and really phoronix, you suck even more than usual at journalism)"


For me, the not-one-but-two "tip for a tiny belly" ads gave it away before I even got to Airlie's post. Right or wrong, those types of ads prejudice me against a site before I even get to the content. In this case the content did little to change my judgement.


As far as the Linux graphics stack goes, it's hard to find someone with more experience than airlied. I think you may have misjudged the author.


The nut graph from airlied's post[1] (linked in the phoronix article):

"So really Rasberry Pi and Broadcom - get a big FAIL for even bothering to make a press release for this, if they'd just stuck the code out there and gone on with things it would have been fine, nobody would have been any happier, but some idiot thought this crappy shim layer deserved a press release, pointless. (and really phoronix, you suck even more than usual at journalism)."

Emphasis mine.

[1] http://airlied.livejournal.com/76383.html



