I am quite new to developing RESTful APIs on top of HTTP, so I have some basic architectural questions. I'll leave authentication out of the equation for simplicity.
The RESTful API will be served by nginx (in a reverse proxy configuration) and Redis. Some HTTP requests/responses may carry JSON in the body.
What I am trying to achieve, from a messaging perspective, is this:
1. (Client -> nginx) A RESTful API request is made to nginx over HTTP.
2. (nginx -> Redis) nginx forwards the API request to Redis and issues a `PUBLISH newRequest`, after which nginx waits for the response from Redis (using a third-party nginx Redis module).
2.1 I am not yet sure how the above "wait for the response from Redis" would actually be implemented. I can, however, imagine subscribing to a Redis channel on which my custom Redis "application" (see below) publishes as soon as the request has been processed. Do you know of any better ways?
3. (Redis -> Redis "application") The published "newRequest" message wakes up its Redis subscriber, which is a Redis "application" (custom C++ code based on a Redis C++ client).
4. (Redis "application" -> Redis -> nginx -> Client) The Redis "application" handles the request and then publishes a response, waking up the Redis subscriber from 2.1 and thus passing the response back to nginx and finally to the original caller.
4.1 Now, my Redis "application" may fail, and I would like to communicate such errors back to the original caller, using both an HTTP error status code and some descriptive JSON. But from my Redis "application" I cannot control the HTTP status code (that is managed by nginx). So I am wondering: how/where could error handling be implemented more cleanly, so that my Redis "application" drives the error handling, without my having to update the nginx configuration for each new error I add to the Redis "application"?
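To make step 2 (and question 2.1) concrete, here is roughly what I have in mind on the nginx side. This is only a sketch and assumes OpenResty (nginx with the `lua-resty-redis` client); the queue and reply key names (`api:requests`, `api:reply:<id>`) and the "status line + JSON body" reply framing are my own invention, not anything standard. Instead of SUBSCRIBE, it uses a blocking `BLPOP` on a per-request reply key to wait for the worker:

```nginx
# Sketch only -- requires OpenResty; key names are hypothetical.
location /api/ {
    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(2000)  -- milliseconds

        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.exit(ngx.HTTP_BAD_GATEWAY)
        end

        -- Correlate request and reply with a unique id.
        local id = ngx.var.request_id
        ngx.req.read_body()
        red:lpush("api:requests", id .. "\n" .. (ngx.req.get_body_data() or ""))

        -- Block until the worker pushes its reply onto our private key.
        local res, err = red:blpop("api:reply:" .. id, 2)
        if not res or res == ngx.null then
            ngx.exit(ngx.HTTP_GATEWAY_TIMEOUT)
        end

        -- Reply envelope: first line is the HTTP status, rest is the JSON body.
        local status, body = res[2]:match("^(%d+)\n(.*)$")
        ngx.status = tonumber(status)
        ngx.header["Content-Type"] = "application/json"
        ngx.say(body)
    }
}
```

The appeal of `BLPOP` over SUBSCRIBE here is that each in-flight request waits on its own key, so replies cannot be misdelivered between concurrent requests.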
Thank you in advance for your support!