Representational State Transfer (REST) is a software architectural style intended for long-lived, network-based applications. At its core, REST offers an alternative to RPC-based interactions over a network, focusing instead on resources undergoing state transitions. To decouple clients and servers it defines a uniform interface and provides discovery of services over "hypermedia": links between endpoints. Its constraints are intended to yield:
- Heterogeneity through interoperability with clients.
- Scalability beyond a traditional application.
- Evolvability of clients and servers independently of one another.
- Visibility through standardisation of monitoring.
- Reliability in error recovery.
- Efficiency as load can be moved off onto caches.
- Performance, through increased efficiency and scalability.
- Manageability owed to increased consistency of interactions.
Richardson's Maturity Model describes a series of incremental steps toward a RESTful architecture: Level 0 (HTTP as a transport for RPC), Level 1 (resources), Level 2 (HTTP verbs and status codes), and Level 3 (hypermedia controls). Note that only Level 3 can be considered RESTful.
Unlike its peers, REST's design was constraint-driven, not requirements-driven, embracing the underlying platform rather than working against it. It addresses head-on the fallacies of distributed computing:
- The network is 100% reliable.
- There's no latency.
- Bandwidth is limitless.
- Security is guaranteed.
- Network topology is constant.
- Administration is in the hands of a single person.
- Transport costs nothing.
- Networks are homogeneous.
- Everyone has the necessary domain knowledge.
Separation of concerns allows separate evolution of both the client and the server. Clients know about servers, but not vice versa.
Communication between the client and server is stateless: no negotiation is required between the client and server, as all data necessary to fulfil a request is carried within that request.
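A minimal sketch of what statelessness implies for a handler, assuming a token-based scheme; all names here (`handle_request`, `TOKENS`) are hypothetical, not part of any real framework:

```python
# Sketch: a stateless handler. Every request carries its own credentials
# and parameters, so any server replica can serve it without a prior
# handshake or a server-side session. TOKENS is an illustrative store.
TOKENS = {"secret-token-1": "alice"}

def handle_request(request: dict) -> dict:
    """Fulfil a request using only the data it carries; no session state."""
    user = TOKENS.get(request.get("headers", {}).get("Authorization"))
    if user is None:
        return {"status": 401, "body": "missing or invalid credentials"}
    # Everything needed (resource path, credentials) is in the request.
    return {"status": 200, "body": f"issue {request['path']} for {user}"}

resp = handle_request({"path": "/issues/1",
                       "headers": {"Authorization": "secret-token-1"}})
```

Because no state survives between calls, the same request can be replayed against any instance behind a load balancer with the same result.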
A uniform interface grants intermediaries a sufficient understanding of the response to allow caching.
Hypermedia as the Engine of Application State: clients drive state transitions by following links supplied in responses, rather than relying on out-of-band knowledge of the server's URL structure.
Generality of communication creates a standard mechanism for interaction regardless of the underlying implementation. There are a few components to this constraint:
- Resources must be identifiable. In HTTP, this is achieved through URLs.
- Manipulation of resources takes place through representations. In HTTP, these are serialisation formats such as JSON or XML.
- All messages must be self-descriptive: any metadata necessary to interpret a message is included in the message itself. In HTTP, this is covered by headers, sent as key-value pairs, that describe the body.
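A sketch of self-description under the assumptions above; the function names are illustrative. The point is that a generic component can interpret the body from the message's own metadata, without out-of-band agreement:

```python
import json

# Sketch: a self-descriptive message carries the metadata needed to
# interpret its own body (here, a Content-Type header).
def make_response(resource: dict) -> dict:
    body = json.dumps(resource)
    return {
        "status": 200,
        "headers": {"Content-Type": "application/json",
                    "Content-Length": str(len(body))},
        "body": body,
    }

def interpret(message: dict):
    """Decode the body using only the metadata in the message itself."""
    if message["headers"]["Content-Type"] == "application/json":
        return json.loads(message["body"])
    raise ValueError("unsupported media type")

issue = interpret(make_response({"id": 1, "title": "Broken link"}))
```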
Components in a system can only know about the components in the layer with which they interact. This places a hard limit on the level of complexity that any one component is exposed to, and limits the amount of churn in components when another changes.
Code on demand, the only optional constraint, allows "enhancement" of a client through delivery of executable code.
- Resources are named information, or concepts; a resource maps a named concept to a set of entities over time. Resources should remain constant over time, even as the underlying entities change.
- Resource Identifiers are how Servers make Resources accessible for interaction: they're names associated with resources.
- Resource Metadata covers HTTP headers that directly describe the Resource, e.g.:
  - Location, which identifies the canonical Resource Identifier; and
  - ETag, which specifies the current version of the resource as relevant for caching.
- Representations define the state of a Resource at a specific point in time. A Resource might have multiple available Representations. Content negotiation is the process of selecting the most appropriate Representation based on metadata included in the request.
- Control Data describes messages sent between components, providing semantics for the message exchange. A good example is cache control headers.
- Hypermedia reduces coupling between a client and server and grants a server full ownership of the URL namespace. Clients can dereference links from an endpoint to a resource namespace, and from there to resources.
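Content negotiation can be sketched as a lookup over the media types the client lists in its Accept header. This is a deliberately simplified, hypothetical version (no quality values, no wildcards); the names are illustrative:

```python
# Sketch: select a Representation of the same Resource based on the
# request's Accept header. Real negotiation also weighs q-values and
# wildcards such as */*.
REPRESENTATIONS = {
    "application/json": lambda r: '{"id": %d}' % r["id"],
    "text/plain": lambda r: "issue #%d" % r["id"],
}

def negotiate(accept: str, resource: dict):
    # Take media types in the order the client listed them.
    for media_type in (t.strip().split(";")[0] for t in accept.split(",")):
        if media_type in REPRESENTATIONS:
            return media_type, REPRESENTATIONS[media_type](resource)
    return None, None  # no acceptable representation: 406 Not Acceptable

media, body = negotiate("text/plain, application/json", {"id": 7})
```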
- Components are processors of resource requests and representations:
- Origin Servers are ultimate destinations that respond to requests, sometimes with Representations.
- User Agents (e.g. browsers) initiate requests or state transitions for resources.
- Gateways provide caching and/or load balancing and are hosted on the application side, representing one or more Origin Servers to the network.
- Proxies also provide caching and/or load balancing, but are on the client side, representing one or more User Agents to the network.
- Connectors are interfaces components can implement to perform their work:
- Clients initiate resource requests or state transfers.
- Servers listen for requests and respond.
- Caches manage stores of representations for reuse, according to TTLs.
- Resolvers translate Resource Identifiers into addresses, e.g. DNS. Resolvers provide indirection between components, extending the lifetime of references between components.
- Tunnels relay communications across boundaries.
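The Cache connector can be sketched as a store of representations that expire after a TTL. This is a minimal, hypothetical model; real caches also honour validators such as ETag and Cache-Control directives:

```python
import time

# Sketch of a Cache connector: stores representations for reuse until
# their TTL expires.
class Cache:
    def __init__(self):
        self._store = {}  # resource identifier -> (representation, expiry)

    def put(self, identifier, representation, ttl_seconds):
        self._store[identifier] = (representation,
                                   time.monotonic() + ttl_seconds)

    def get(self, identifier):
        entry = self._store.get(identifier)
        if entry is None:
            return None
        representation, expiry = entry
        if time.monotonic() >= expiry:  # stale: evict and report a miss
            del self._store[identifier]
            return None
        return representation

cache = Cache()
cache.put("/issues/1", '{"id": 1}', ttl_seconds=60)
```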
Each resource is identified by a base path, e.g. /issues. The following routes are defined for each (methods and paths follow the common convention for resourceful routing):

| Method | Path | Purpose |
|--------|------|---------|
| GET | /issues/new | Show resource creation form |
| POST | /issues | Create a resource |
| GET | /issues/:id | Get a resource |
| GET | /issues/:id/edit | Get a resource edit form |
| PATCH/PUT | /issues/:id | Update a resource |
| GET | /issues/:id/delete | Get a resource delete form |
| DELETE | /issues/:id | Delete a resource |
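A router for such conventional routes can be sketched as a method-and-pattern table; the patterns and action names below are illustrative, and frameworks differ on details such as the form routes:

```python
import re

# Sketch: dispatch (method, path) to an action name plus captured ids.
ROUTES = [
    ("GET",    r"^/issues/new$",        "new_form"),
    ("POST",   r"^/issues$",            "create"),
    ("GET",    r"^/issues/(\d+)/edit$", "edit_form"),
    ("GET",    r"^/issues/(\d+)$",      "show"),
    ("PATCH",  r"^/issues/(\d+)$",      "update"),
    ("PUT",    r"^/issues/(\d+)$",      "update"),
    ("DELETE", r"^/issues/(\d+)$",      "delete"),
]

def dispatch(method: str, path: str):
    for route_method, pattern, action in ROUTES:
        match = re.match(pattern, path)
        if route_method == method and match:
            return action, match.groups()
    return None, ()  # no route: 404 Not Found

action, args = dispatch("GET", "/issues/42")
```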
Issuing a request for the index of the application might return a series of headers describing the locations of other endpoints:
Link: <https://api.example.com/issues>; rel="child"
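A client can discover endpoints by parsing such Link headers rather than hard-coding URLs. The parser below is a sketch that handles only the simple single-parameter form shown above, not the full RFC 8288 grammar (it would mis-split URLs containing commas, for instance):

```python
import re

# Sketch: extract rel -> URL pairs from a Link header value.
def parse_link_header(value: str) -> dict:
    links = {}
    for part in value.split(","):
        match = re.match(r'\s*<([^>]+)>;\s*rel="([^"]+)"', part)
        if match:
            url, rel = match.groups()
            links[rel] = url
    return links

links = parse_link_header('<https://api.example.com/issues>; rel="child"')
```

With the map in hand, the client dereferences `links["child"]` instead of constructing the issues URL itself, leaving the server in control of its namespace.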