
Think web browsers. REST was modeled after the existing web.



Web browsers have a human intelligence driving the interactions. API clients don't. Hence the huge gap, and, as far as I can see, the rather pointless nature of HATEOAS.

In fact, I'm still totally lost as to the usefulness of REST at all, except as a generic term for RPC over HTTP that's less clunky than SOAP, which isn't what REST actually is. I've yet to see or use an API that was easier to deal with because it was REST.


The Googlebot drives REST APIs just fine, and would be impossible to write using an RPC model.

> I've yet to see or use an API that was easier to deal with because it was REST.

Of course not, because people actually want to use RPC, and so shoehorn REST into RPC-like models, which destroy its usefulness.

If you're sitting at your computer and deciding that you're now going to write a client against Service A's API, the point of REST was missed, and Service A might as well have used RPC.

The point of REST is to decouple the client from the specific service, using the Uniform Interface and standard formats to allow clients to interact with any service that "speaks" the same formats.
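
As a minimal sketch of that decoupling (the service URLs and the {"links": [...]} home-document format are assumptions invented for illustration, not any real service's API):

    import requests

    def list_actions(base_url):
        # Nothing below is specific to one vendor's API; the client only
        # understands the shared link format, not a particular service.
        home = requests.get(base_url).json()
        return [(link["rel"], link["href"]) for link in home.get("links", [])]

    # The same client code works against any service that speaks the format:
    print(list_actions("https://service-a.example.com/api"))
    print(list_actions("https://service-b.example.com/api"))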

But nobody is thinking in those terms. Everyone still thinks it's perfectly normal and OK to waste years of developer time reinventing the wheel, over and over again, for each new service that pops up. This is fueled by the services themselves, of course, whose companies want to use their API to lock you in.

So no, as long as this remains the normal mentality, you won't see any major gains from REST.


This is probably something you already know, but consider it anyway: in your HTML home page you link to some page that performs some action. Tomorrow you create a new page and add a link to it from your home page, and, magic, your generic client (the web browser) can now show the user the new functionality, with no change needed on the client. If you have a native Android app (one that does not use the browser), you probably need to update it.
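
To make that concrete, here is a hypothetical sketch (the URL and the home-document format are invented): a generic client that renders whatever links the server advertises, so a link added tomorrow shows up with no client change.

    import requests

    # Assumed home document: {"links": [{"rel": "orders", "href": "/orders"}]}
    home = requests.get("https://example.com/api").json()
    for link in home.get("links", []):
        # A new page linked from the home document tomorrow appears here
        # automatically; nothing in this client needs to be updated.
        print(link["rel"], "->", link["href"])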


... That only works because there's a human driving the browser. I've yet to hear of a concrete example of how this would apply to API clients.


> ... That only works because there's a human driving the browser.

It works for unattended web clients (like Google's spider) too -- and not just for generating basic listings, but for structured, schema-based data that updates the Knowledge Graph. That's one of the foundations of many of the new features in Google Search over the last several years.
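
For a rough idea of what such an unattended client consumes, here is a sketch that pulls schema.org JSON-LD blocks out of a page (the URL is a placeholder, and a real crawler would use a proper HTML parser rather than a regex):

    import json, re
    import requests

    html = requests.get("https://example.com/article").text
    # Structured data is published in <script type="application/ld+json"> tags.
    for block in re.findall(
            r'<script type="application/ld\+json">(.*?)</script>',
            html, re.DOTALL):
        data = json.loads(block)
        print(data.get("@type"), data.get("name"))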


It works because links are defined in the hypertext and discovered by clients (say, by the browser when a page is visited), and so are new functionalities. A (well-designed) web app is always up to date. In a native Android app, the API URLs are (99% of the time) hardcoded using knowledge of the API at a certain moment. This auto-discovery mechanism also works for a spider.
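
A bare-bones version of that discovery step, using the Python standard library plus requests (the URL is a placeholder):

    from html.parser import HTMLParser
    import requests

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            # Links live in the hypertext itself, not in the client.
            if tag == "a":
                self.links.extend(v for k, v in attrs if k == "href")

    collector = LinkCollector()
    collector.feed(requests.get("https://example.com/").text)
    print(collector.links)  # every action the page exposes today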

Auto-discovery does not mean that links are understood (@rel may help, but only so far); you may still need a human to decide.

Suppose a (REST) application that lists links to its "services" on its home page, with each "service" page describing the service according to a certain standard. You could have a bot that periodically checks the application for services you are interested in, notifies you when a new service is available, and lets you navigate to the page and possibly subscribe.
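
A hypothetical sketch of such a bot (the endpoint, the link format, and the "service" rel are all assumptions):

    import time
    import requests

    known = set()
    while True:  # runs forever; a real bot would persist state between runs
        home = requests.get("https://example.com/api").json()
        current = {l["href"] for l in home["links"] if l["rel"] == "service"}
        for href in sorted(current - known):
            print("new service available:", href)  # notify; maybe subscribe
        known = current
        time.sleep(3600)  # poll hourly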


Two points:

1. Why do you necessarily assume that REST APIs are only accessed by robots? A human developer can benefit from HATEOAS quite a lot, using the RESTful service's outputs as its own documentation. The developer can discover the features of the API by following the links provided in the API's responses.

2. An API client can check that it matches the interface specified by the API just by comparing the URIs it is accessing with the URIs provided by the HATEOAS part of the RESTful service. You can automatically detect changes, breakages, or the introduction of new features (see the sketch after this list). This doesn't spare the client developer from having to update their client code, but it gives that developer a powerful tool for getting updates about the RESTful service.
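
A sketch of point 2, assuming the service advertises its links in a hypothetical {"links": [...]} home document:

    import requests

    CLIENT_URIS = {"/orders", "/profile"}  # what our client code calls today

    home = requests.get("https://example.com/api").json()
    advertised = {link["href"] for link in home.get("links", [])}

    print("gone (possible breakage):", CLIENT_URIS - advertised)
    print("new (possible features):", advertised - CLIENT_URIS)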


So basically, putting structured comments in the API output would have the same effect? Instagram does that: they don't update their API docs, and instead put comments in the results so you might stumble on them while debugging. But specifically for hyperlinks, I don't see the point. For instance, a search API might want to return a next link. They can do that with a URL, or they can just include a next token that I pass back to the search API. The latter is somewhat easier to program against, since you often abstract away the HTTP/URL parts.
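
The two styles side by side, against a hypothetical search endpoint (the URL, field names, and parameters are invented for illustration):

    import requests

    def process(results):
        print(len(results), "results")

    # Hypermedia style: follow the "next" URL the server hands back.
    url = "https://example.com/search?q=rest"
    while url:
        page = requests.get(url).json()
        process(page["results"])
        url = page.get("next")  # absent on the last page

    # Token style: pass an opaque token back to the same endpoint.
    token = None
    while True:
        params = {"q": "rest"}
        if token:
            params["next_token"] = token
        page = requests.get("https://example.com/search", params=params).json()
        process(page["results"])
        token = page.get("next_token")
        if not token:
            break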


REST was in the original HTTP spec. Most people were just doing it wrong the entire time, until recently, when it became trendy to go back to the original REST ideals. And by "most people" I mean everyone involved in SOAP and RPC and other nonsense like that.


> REST was in the original HTTP spec.

No, it wasn't. Fielding's dissertation, in which REST was defined, argues that a certain set of principles were an underlying foundation of the structure of the WWW architecture in its original construction, proposes REST as a formalization of and update to those principles, and proposes further that updates to the WWW architecture should be reviewed for compliance to the REST architecture. [1]

So REST is a further elaboration of a set of principles inferred from the original HTTP spec, not something present as such in the original HTTP spec.

[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/web_arch_...



