Kind of like Asimov's 'Three Laws', really. It's a talking point and a foil for the plot. It was never intended to be a realistic and workable solution.
On the other hand, at least in the I, Robot stories, the Three Laws actually seem to hold up as a realistic, workable solution; all the conflict revolves around robots that have been programmed not to obey them, whether through rules added before the First Law, laws left out, or laws reordered.
Prime-Directive-related conflicts, meanwhile, seem to revolve around cases where the Directive itself is flawed (or at the very least not ideal for a given situation).
The funny thing is that popular culture somehow missed the memo and often treats the Three Laws as if they were genuinely wise. You can see this in every other article about AI these days.