How does one even write an API client against a REST API that only publishes the initial entry point? In particular, how should the client discover the resources that can be manipulated by the API, or the request/response models?
The responses to prior requests give you the URLs that form subsequent requests.
For example, if I,
GET <account URL>
that might return the details of my account, including a list of links (URLs) to all subscriptions in the account (or perhaps a single URL to the entire collection).
(Obviously you have to get the account URL in this example somewhere too; usually you just keep tugging on the objects in whatever data model you're working with, and there are a few natural, easy top-level URLs that might end up in a directory of sorts, if there's >1.)
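As a sketch of what that looks like from the client side (the URLs and response shapes here are made up, and an in-memory dict stands in for real HTTP), the client hard-codes only the entry point and follows links from there:

```python
# Hypothetical hypermedia responses keyed by URL, standing in for an HTTP
# layer. Only the entry-point URL is hard-coded in the client.
RESPONSES = {
    "https://api.example.com/": {
        "links": {"account": "https://api.example.com/account/42"},
    },
    "https://api.example.com/account/42": {
        "name": "Example account",
        "links": {"subscriptions": "https://api.example.com/account/42/subs"},
    },
    "https://api.example.com/account/42/subs": {
        "items": [{"plan": "basic"}],
        "links": {},
    },
}

def get(url):
    """Stand-in for an HTTP GET returning parsed JSON."""
    return RESPONSES[url]

# The client never constructs URLs; it only follows links it was given.
entry = get("https://api.example.com/")
account = get(entry["links"]["account"])
subs = get(account["links"]["subscriptions"])
print(subs["items"])  # [{'plan': 'basic'}]
```

Swapping in a mock or an alternate implementation is then just a matter of handing the client a different entry-point URL.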
Needing only a single URL is beautiful, IMO: it simplifies configuration, makes it easy to drop in alternate implementations, mocks, etc., and you're not guessing at URLs, which I've had to do a few times with non-RESTful HTTP APIs. (Most recently Google Cloud's…)
> How does one even write an API client against a REST API that only publishes the initial entry point? in particular, how should the client discover the resources that can be manipulated by the API or the request/response models?
HAL[0] is very useful for this requirement, IMHO. I've found it highly effective in conjunction with defining contracts via RAML[1].
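For context, HAL puts links under a reserved `_links` object (keyed by link relation, each with an `href`) and nested resources under `_embedded`. A minimal sketch of pulling a link out of a HAL document (the document itself is invented; a real client would use a HAL library and handle templated links, arrays of links, etc.):

```python
# A made-up HAL document: links live under "_links", nested resources
# under "_embedded", per the HAL draft spec.
hal_doc = {
    "_links": {
        "self": {"href": "/orders/123"},
        "customer": {"href": "/customers/7"},
    },
    "total": 30.0,
    "_embedded": {
        "items": [{"_links": {"self": {"href": "/items/1"}}, "qty": 2}],
    },
}

def link(doc, rel):
    """Return the href for a given link relation in a HAL document."""
    return doc["_links"][rel]["href"]

print(link(hal_doc, "customer"))  # /customers/7
```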
Look up HATEOAS. The initial endpoint will give you the next set of resources, maybe the user list and then the post list. Then as you navigate to, say, the post list, it will have embedded pagination links. Once you have resource URLs from this list you can POST/PUT/DELETE as usual.
Your browser is a client that works against RESTful APIs that only publish an initial entry point, such as https://news.ycombinator.com
From that point forward the client discovers resources (articles, etc.) that can be manipulated (e.g. comments posted and updated) via hypermedia in the server's responses.
Your Web browser is probably the best example. When you visit a Web site, your browser discovers resources and understands how it can interact with them.
It certainly does not. Sure it can crawl links, but the browser doesn't understand the meaning of the pages, nor can it intelligently fill out forms. It is the user who can hopefully divine how to interact with the pages you serve their browser.
Most APIs, however, are intended to be consumed by another service, not by a human manually interpreting the responses and picking the next action from a set of action links. HATEOAS is mostly pointless.