Timeouts

Process modules that perform API calls, such as the API client process module and the Webservice process module, have a timeout of approximately 20 seconds. This means the process module waits up to approximately 20 seconds for a response from the server. If the server does not respond within this time, the process module classifies this as a timeout, logs it in the runtime log, and continues with the next process module. So if such timeouts appear in the runtime log of your tenant, please check with the developer of your API whether the server is reachable and responds in time.
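The timeout-and-continue behavior can be sketched as follows. This is an illustrative simulation only, not LoyJoy code; the function name run_process_module is hypothetical, and a 0.1-second deadline stands in for the ~20-second production timeout.

```python
# Hypothetical sketch of the timeout-and-continue behavior:
# wait for the API call up to a deadline, classify a late response
# as a timeout, and move on to the next process module.
import concurrent.futures
import time

TIMEOUT_SECONDS = 0.1  # stands in for the ~20-second production timeout

def run_process_module(call, timeout=TIMEOUT_SECONDS):
    """Run an API call with a deadline; on timeout, classify and continue."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(call)
        try:
            return {"status": "ok", "body": future.result(timeout=timeout)}
        except concurrent.futures.TimeoutError:
            # In LoyJoy this case would be written to the runtime log.
            return {"status": "timeout"}

fast = run_process_module(lambda: "200 OK")                     # responds in time
slow = run_process_module(lambda: time.sleep(0.5) or "200 OK")  # responds too late
```

Here `fast` yields an ok result, while `slow` is classified as a timeout even though the server eventually answers.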

Can LoyJoy extend the timeout duration of 20 seconds?

Unfortunately, we cannot extend the timeout duration of 20 seconds, because the process module is executed in the context of a chat. If we extended the timeout duration, the chat would be blocked for this time, showing a typing indicator. This would be a bad user experience and would heavily impact conversion rates.

Also, please take into consideration that one chat message could trigger multiple process modules with API calls, whose timeouts would add up. In the worst case, the LoyJoy chat UI itself has a timeout of approximately 30 seconds and would then resend the chat message multiple times, causing a loop.

But I need more than 20 seconds to process the request in my server!

If you cannot optimize your server response time, you can enable "Use request queue?" in the process module configuration. This will queue requests and send them one after another in the background with an extended timeout duration.

However, this makes it impossible to process the response in the process module, e.g. to retrieve process variables from it. Also, the queue might fill up, in which case requests could be delayed for multiple hours. So please use this option with caution.

Typical response times of cloud APIs

Typical response times of cloud APIs such as Zapier, Salesforce, or Mailchimp are approximately 100 ms. So please do not treat the 20 seconds as a benchmark for your server response time, but as a worst-case scenario. As a benchmark, a response time of 1 second could be the maximum, e.g. at the 99th percentile: 99% of requests should be processed in less than 1 second.
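A 99th-percentile check like the one above can be computed from collected per-request latencies. This is a minimal sketch using the nearest-rank method; the sample latencies are made up for illustration.

```python
# Minimal sketch: nearest-rank percentile over collected request latencies.
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a non-empty list of numbers."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# 99 fast requests and one slow outlier (illustrative data)
latencies = [0.1] * 99 + [5.0]
p99 = percentile(latencies, 99)
meets_benchmark = p99 < 1.0  # 99% of requests finish in under 1 second
```

With this data the single 5-second outlier does not violate the benchmark, because it falls outside the 99th percentile.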

If the response time of your server is above 1 second, it could make sense to put the request into a queue in your server and process it in the background.
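The queue-and-respond pattern can be sketched as follows. This is a simplified illustration, not LoyJoy code: handle_request and the worker are hypothetical names, and a real server would use its framework's request handling instead of plain functions.

```python
# Minimal sketch of the queue-and-respond pattern: acknowledge the request
# immediately, then do the slow processing in a background worker.
import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    """Background worker: drains the queue and performs the slow work."""
    while True:
        job = jobs.get()
        if job is None:  # sentinel to stop the worker
            break
        results.append(f"processed {job}")  # stands in for the slow processing
        jobs.task_done()

def handle_request(payload):
    """Respond within milliseconds; the slow work happens later."""
    jobs.put(payload)
    return {"status": 202, "body": "accepted"}  # immediate acknowledgement

threading.Thread(target=worker, daemon=True).start()
response = handle_request("chat-message-1")
jobs.join()  # a real server would not block here; shown only for the demo
```

The caller (here, the LoyJoy process module) receives the 202 acknowledgement well within the timeout, while the actual processing completes asynchronously.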