Rest Endpoint - Do's and Don'ts
The Rest Endpoint is special in a few respects: every tuple's processing is triggered by a REST API request and must end in a REST API response. That brings some restrictions:
Don’t use slow operations
A Rest Endpoint Flow should process a REST API request within 100 ms to 5 seconds. In some cases up to 10-40 seconds is acceptable, but longer is not recommended. A longer execution time indicates that the Flow should be restructured. For example, your AI operations or Docker Functions may be too slow to run inside a Rest Endpoint Flow. Temporarily store your data and let another Flow work through that queue.
Allow the triggering system to pass a callback URL. The Flow working the queue can then “inform” the initiator with the extraction data when processing is complete. This avoids polling for results and creating unnecessary load.
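The queue-plus-callback pattern above can be sketched as follows. This is a minimal illustration in plain Python, not Hero Platform_ code; the function names, the in-memory queue, and the stand-in "slow AI work" are all assumptions for the sake of the example.

```python
# Hypothetical sketch: accept the request quickly, queue the slow work,
# and notify the caller's callback URL when processing is complete.
import json
import queue
import urllib.request

work_queue = queue.Queue()

def handle_request(payload, callback_url):
    """Fast path (Rest Endpoint Flow): enqueue and respond immediately."""
    work_queue.put((payload, callback_url))
    return {"status": "accepted"}

def worker():
    """Slow path (a second Flow): drain the queue and call back."""
    while True:
        payload, callback_url = work_queue.get()
        result = {"extraction": payload.upper()}  # stand-in for slow AI work
        req = urllib.request.Request(
            callback_url,
            data=json.dumps(result).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # "inform" the initiator, no polling needed
        work_queue.task_done()
```

The caller gets an immediate "accepted" response, and the result arrives at its callback URL once the slow Flow has finished.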
Don’t use Aggregation or Unflatten
Aggregation needs all of its input tuples before it can generate an output. A Rest Endpoint Flow never has a complete set of inputs, because requests arrive continuously and each one must produce its own response. The Unflatten function shares the same issue.
Make sure a Rest Endpoint gets exactly one output for every input
Make sure your filters, branches, and flattening operations are safe and produce exactly one Rest Endpoint output per input.
Each input tuple has a unique ID which allows Hero Platform_ to associate it with the corresponding HTTP request object. Flatten operations might erase this ID and make it impossible to associate the resulting tuples with the right request. In that case, the caller ends up with a connection timeout.
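The one-output-per-input invariant can be expressed as a simple check. The sketch below is illustrative Python, not a Hero Platform_ internal; the ID values and function name are assumptions.

```python
# Hypothetical sketch: verify that every input tuple's ID appears exactly
# once among the outputs of a Rest Endpoint Flow.
from collections import Counter

def check_one_to_one(input_ids, output_ids):
    """Return IDs that break the invariant.

    A missing ID means the caller never gets a response (connection
    timeout); a duplicated ID means two outputs compete for one request.
    """
    counts = Counter(output_ids)
    missing = [i for i in input_ids if counts[i] == 0]
    duplicated = [i for i in input_ids if counts[i] > 1]
    return missing, duplicated
```

For example, if inputs `["a", "b"]` produce outputs `["a", "a"]`, then `"b"` is missing (its request times out) and `"a"` is duplicated.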
Don’t create non-exclusive branchings if both branches end in a Rest Endpoint Output
If you branch the Flow with multiple Rest Endpoint Outputs, make sure you add filters after the branch so that each branch only receives unique tuples. This means that tuples going into one branch cannot also go into a different branch ending in a Rest Endpoint Output. (See examples below.)
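In filter terms, the two branches must partition the tuples: the conditions have to be exact complements, so that every tuple passes exactly one of them. A minimal sketch, with an entirely illustrative condition on an assumed `amount` field:

```python
# Hypothetical sketch: exclusive branch filters for two Rest Endpoint Outputs.

def branch_a_filter(t):
    return t["amount"] >= 100   # illustrative condition

def branch_b_filter(t):
    return t["amount"] < 100    # exact complement, so branches are exclusive

def route(t):
    # Exactly one predicate is true for every tuple, so each tuple
    # reaches exactly one Rest Endpoint Output.
    assert branch_a_filter(t) != branch_b_filter(t)
    return "A" if branch_a_filter(t) else "B"
```

If the filters overlap (for example, `>= 100` and `<= 100`), a tuple with `amount == 100` would enter both branches and produce two competing Rest Endpoint Outputs for one request.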
A Rest Endpoint Output cannot be combined with additional Inputs; it requires a single Rest Endpoint Input.
Don’t forget to configure parallelism for Rest Endpoint Flows
A Flow can process as many requests simultaneously as it has Runners assigned to it. A Flow with one Runner can only process one request at a time. If requests arrive while the Flow's Runner is processing a tuple, they wait in the queue, which increases the response time.
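The effect of queueing on response time can be estimated with simple back-of-the-envelope arithmetic. The function below is an illustrative model (simultaneous arrivals, FIFO serving, constant processing time), not a Hero Platform_ feature.

```python
# Hypothetical sketch: estimate a caller's response time when requests
# queue behind a limited number of Runners.

def response_time(position_in_queue, processing_seconds, runners=1):
    """Seconds until the caller gets its response, assuming all requests
    arrived at once and are served FIFO, `runners` at a time."""
    waves_before = position_in_queue // runners  # full waves served first
    return (waves_before + 1) * processing_seconds
```

With one Runner and 2 seconds of processing per request, the fifth simultaneous request (queue position 4) waits behind four others and sees a 10-second response time; with five Runners it is served in the first wave and sees 2 seconds.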