
URIs matter and REST matters because people have been using them incorrectly for so long, with common mistakes like putting verbs in URIs instead of using HTTP methods. You are correct, though, that the choice of verbs doesn't matter (outside of conventions) as long as everything is done correctly.



Just off the top of my head: if you think of verbs exposed in URLs as a common REST mistake, you may not have fully absorbed the HATEOAS concept. If you have HATEOAS, you're unlikely to have verb URLs.


I agree, that is what I was trying to say. If you are doing things correctly (fully absorbed the concept), everything will be fine and no one has to argue about REST, HTTP, and URIs.


any good resources on HATEOAS?


I quite like this article from Martin Fowler on the "Richardson Maturity Model" that works up to HATEOAS:

http://martinfowler.com/articles/richardsonMaturityModel.htm...


Thank you.

I was expecting more from this HATEOAS stuff. Everything I read before sounded like full auto-discovery of APIs.

But the only thing seems to be including possible next URLs in the responses.

Don't get me wrong, this is a good thing. It gives the backend devs more freedom and the frontend devs need less documentation to find out what's possible. But everyone still has to write the interfacing code to these APIs :D


At the risk of butchering the concept for the sake of simplicity:

In a HATEOAS API, clients need to know exactly one entry point endpoint. Nothing else is hardcoded. There is no Python code that happens to know "if you want to add a widget to a product, you POST to /product/$X/widgets". The API itself tells you where to go.

An acid test: assume your API entry point is /api. If your API keeps HATEOAS kosher, you could, in a server-side update, change every other URL endpoint in the application without breaking clients, because the clients would be getting those URLs from the entry point URL dynamically anyway.

(That's not really the point of HATEOAS, but it's a side effect.)
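
A minimal sketch of what such a client might look like (the root URL, the "links" maps, and the relation names are all hypothetical; a real service would use a hypermedia format such as HAL):

  import requests

  # The only URL the client hardcodes is the entry point.
  API_ROOT = "https://example.com/api"

  def add_widget(product_name, widget):
      # Fetch the entry point and follow links by relation name,
      # never by a URL baked into the client.
      root = requests.get(API_ROOT).json()
      products = requests.get(root["links"]["products"]).json()

      # The product representation itself says where to POST widgets;
      # the server can move that endpoint tomorrow without breaking us.
      product = next(p for p in products["items"]
                     if p["name"] == product_name)
      return requests.post(product["links"]["add-widget"], json=widget)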

There is nothing wrong with simple HTTP APIs (though there is a lot wrong with explicitly RPC-oriented, verb-based APIs in general), so adhering to REST principles isn't an absolute good.

This might be the point where 'dragonwriter tells me I, too, have misunderstood HATEOAS. :)


One thing that's always bugged me (in a small way) about REST is that proponents/experts always insist that REST does not rely on any specific protocol (i.e., HTTP), yet all discussions of REST carry a very strong assumption that specific actions are mapped to specific HTTP verbs. For example, Martin Fowler's doctor-appointment scheduling example gives you "discoverable" hypermedia links for canceling and editing appointments, but they use the same URI, and there is an implicit assumption that the client knows how to distinguish between the two by choosing the appropriate HTTP verb. It just seems kind of strange to say, well, REST isn't tied to HTTP; it's tied to any request/response protocol where each request is bound to a specific URI and one of these core HTTP verbs. Wouldn't implementing REST on any other protocol look an awful lot like tunneling HTTP over that protocol?
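
To make the gripe concrete, here's a sketch of what the client side of that example reduces to (URIs and relation names paraphrase Fowler's article rather than quote it; the verb table is exactly the out-of-band knowledge being complained about):

  import requests

  # Paraphrasing Fowler's example: the appointment resource advertises
  # two links, and they point at the same URI.
  links = {
      "cancel":     "https://example.com/slots/1234/appointment",
      "changeTime": "https://example.com/slots/1234/appointment",
  }

  # Nothing in the hypermedia itself distinguishes the two actions; the
  # client has to already know which HTTP verb each relation implies.
  VERB_FOR_REL = {"cancel": "DELETE", "changeTime": "PUT"}

  def follow(rel):
      return requests.request(VERB_FOR_REL[rel], links[rel])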

Another small gripe is the notion that a REST client need not have URIs to specific resources/actions hardcoded in them. The fact that you don't hardcode the specific URIs but rather a bunch of link strings that you then use to look up URIs makes this a lot less interesting. The way it's described generally makes it sound as if there is some kind of magical mechanism by which a client actually learns of the existence of a given endpoint, which would truly be magical. Really, all that's happening is that a client knows a name for a specific endpoint that it's looking for, and the API provides a way to look up the specific URI for that endpoint. Makes things tidy, but it doesn't seem like a feature that has much practical impact if you follow a "URIs shouldn't change" philosophy anyway.


Without breaking clients permanently. Running clients would not be able to continue their current interaction.


Yeah, I was just about to go edit that. "Cool URIs don't change" and all that. It would still be bad to change URLs; you just wouldn't need to update the client API code.


The most useful resource -- and it's quite concise -- on HATEOAS is this 2008 blog post from Roy Fielding (who defined REST, so it's straight from the proverbial horse's mouth):

http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...


well, it's just Wikipedia, but it's actually a well-written page on the concept, in my opinion:

http://en.wikipedia.org/wiki/HATEOAS

key concept:

> "The principle is that a client interacts with a network application entirely through hypermedia provided dynamically by application servers. A REST client needs no prior knowledge about how to interact with any particular application or server beyond a generic understanding of hypermedia. By contrast, in a service-oriented architecture (SOA), clients and servers interact through a fixed interface shared through documentation or an interface description language (IDL)."

a truly RESTful service that follows the HATEOAS pattern doesn't require documentation to be hosted separately. it will supply all the information necessary directly through the RESTful service.


> a truly RESTful service that follows the HATEOAS pattern doesn't require documentation to be hosted separately.

It might need documentation of the special media types it relies on to be hosted separately. (One place where most "REST" APIs fail to follow HATEOAS is that they reuse generic media types but rely on out-of-band descriptions of the "real" format of the data, so that a client familiar with only the media type and the data would not understand the semantics of the resource representations being returned by the API.)


Hm okay.

I understood the key concept, but I don't get what clients and servers have to look like to just "get" each other in the way HATEOAS implies.


Think web browsers. REST was modeled after the existing web.


Web browsers have a human intelligence driving the interactions. API clients don't. Hence the huge gap and, as far as I can see, the near-pointlessness of HATEOAS.

In fact, I'm still totally lost as to the usefulness of REST at all, except as a generic term meaning RPC over HTTP, but not as clunky as SOAP. Which isn't what REST is. I've yet to see or use an API that was easier to deal with because it was REST.


The Googlebot drives REST APIs just fine, and would be impossible to write using an RPC model.

> I've yet to see or use an API that was easier to deal with because it was REST.

Of course not, because people actually want to use RPC, and so shoehorn REST into RPC-like models, which destroys its usefulness.

If you're sitting at your computer and deciding that you're now going to write a client against Service A's API, the point of REST was missed, and Service A might as well have used RPC.

The point of REST is to decouple the client from the specific service, using the Uniform Interface and standard formats to allow clients to interact with any service that "speaks" the same formats.

But nobody is thinking in those terms. Everyone still thinks it's perfectly normal and OK to waste years of developer time reinventing the wheel, over and over again, for each new service that pops up. This is fueled by the services themselves, of course, whose companies want to use their API to lock you in.

So no, while this is the normal mentality, you won't see any major gains from REST.


It's probably something you already know. Consider this: on your HTML home page you link to some page that performs some action; tomorrow you create a new page and add a link to it from your home page, and, magic done, your generic client (the web browser) can now show the user the new functionality, with no need to change anything on the client. If you have a native Android app (one that does not use the browser), you probably need to update it.


... That only works because there's a human driving the browser. I've yet to hear of a concrete example of how this would apply to API clients.


> ... That only works because there's a human driving the browser.

It works for unattended web clients (like Google's spider) too -- and not just for generating basic listings, but for structured schema-based data to update Knowledge Graph. That's one of the foundations of many of the new features of Google Search in the last several years.


it works because links are defined in the hypertext and discovered by clients (say, by the browser when a page is visited), and so are new functionalities. A (well designed) web app is always up to date. In a native Android app, the API URL(s) are (99% of the time) hardcoded using knowledge of the API at a certain moment. This auto-discovery mechanism also works for a spider.

Auto-discovery does not mean that links are understood (@rel may help, but...); you may need a human to decide.

Suppose a (REST) application lists links to its "services" on its home page, with each "service" page describing the service according to a certain standard. You could have a bot that periodically checks the application for services you are interested in and notifies you if a new service is available, with the possibility to navigate to the page and possibly subscribe.
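
A toy version of that bot, under an assumed response shape (a JSON "links" map on the home page; all names here are hypothetical):

  import requests

  seen = set()

  def poll(entry_url="https://example.com/api"):
      # Re-read the entry point; any relation we haven't seen before
      # is a newly published "service".
      home = requests.get(entry_url).json()
      for rel, url in home["links"].items():
          if rel not in seen:
              seen.add(rel)
              print(f"new service: {rel} -> {url}")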


two points:

1. Why do you necessarily assume that REST APIs are only accessed by robots? A human developer can benefit from HATEOAS quite a lot by being able to use the RESTful service's outputs as its own documentation. The developer can discover the features of the API by following links provided in the API outputs.

2. An API client can check that it matches the interface specified by the API just by comparing the URIs it is accessing with the URIs provided by the HATEOAS part of the RESTful service. You can automatically detect changes, breakages, or the introduction of new features (see the sketch below). This doesn't spare the client developer from having to update their client code, but it gives that developer a powerful tool for getting updates about the RESTful service.
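
A sketch of that check, under the same kind of hypothetical "links" map as above:

  import requests

  # Relations this client was written against.
  EXPECTED = {"products", "orders", "search"}

  def check_for_drift(entry_url="https://example.com/api"):
      advertised = set(requests.get(entry_url).json()["links"])
      print("removed (potential breakage):", EXPECTED - advertised)
      print("added (new features):", advertised - EXPECTED)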


So basically, putting structured comments in the API output would have the same effect? Instagram does that: they don't update their API docs, and instead put comments in the results so you might stumble on them while debugging. But specifically on hyperlinks, I don't see the point. For instance, a search API might want to return a next link. It can do that with a URL, or it can just include a next token that I pass back to the search API. The latter is somewhat easier to program against, since you often abstract away the HTTP/URL parts.
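
For comparison, the two pagination styles look roughly like this (the endpoint and field names are made up):

  import requests

  SEARCH = "https://example.com/api/search"

  # Hypermedia style: follow the server-provided "next" URL verbatim.
  def results_by_link(q):
      url = f"{SEARCH}?q={q}"
      while url:
          page = requests.get(url).json()
          yield from page["results"]
          url = page.get("next")  # absent on the last page

  # Token style: re-call the same hardcoded endpoint with a cursor.
  def results_by_token(q):
      token = None
      while True:
          params = {"q": q}
          if token:
              params["page_token"] = token
          page = requests.get(SEARCH, params=params).json()
          yield from page["results"]
          token = page.get("next_token")
          if not token:
              return

The first keeps the client decoupled from the endpoint; the second keeps the HTTP plumbing in one place, which is the ease-of-programming point above.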


REST was in the original HTTP spec. Most people were just doing it wrong the entire time until recently when it became trendy to go back to the root REST ideals. And by most people I mean everyone involved in SOAP and RPC and other nonsense like that.


> REST was in the original HTTP spec.

No, it wasn't. Fielding's dissertation, in which REST was defined, argues that a certain set of principles were an underlying foundation of the structure of the WWW architecture in its original construction, proposes REST as a formalization of and update to those principles, and proposes further that updates to the WWW architecture should be reviewed for compliance to the REST architecture. [1]

So REST is a further elaboration of a set of principles inferred from the original HTTP spec, not something present as such in the original HTTP spec.

[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/web_arch_...



