I have an issue where REST Assured appears to issue a second and a third call when the response takes more than 60 seconds. What I need is for REST Assured to wait for the call to finish and not send any additional calls.
The simple test against an endpoint that takes more than 60 seconds to respond:
return given()
        .relaxedHTTPSValidation()
        .baseUri(cloudBaseurl)
    .when()
        .get(fullServicePath)
    .then()
        .extract().response();
The issue is that in the logging of the requested service I can see a second call coming in exactly 60 seconds later, and a third one after another 60 seconds. By default, REST Assured seems to time out after 3 minutes.
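To rule out the test client itself, one option is to log the outgoing request and measure the round trip. This is only a sketch using the same placeholders (cloudBaseurl, fullServicePath) as above, and it assumes the io.restassured packages of REST Assured:

import static io.restassured.RestAssured.given;
import io.restassured.response.Response;

Response response = given()
        .relaxedHTTPSValidation()
        .baseUri(cloudBaseurl)
        .log().all()                 // logs the single request the client sends
    .when()
        .get(fullServicePath)
    .then()
        .log().status()              // logs the status once the response finally arrives
        .extract().response();

System.out.println("Round trip took " + response.time() + " ms");

If the request log shows only one outgoing call while the service log shows several incoming ones, the duplicates are being created somewhere between the client and the service.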
What I tried:
1. Setting different parameters
RestAssured.config = RestAssuredConfig.config().httpClient(httpClientConfig()
        .setParam("http.connection.timeout", 70000)
        .setParam("http.connection.request.timeout", 70000)
        .setParam("http.socket.timeout", 70000)
        .setParam("http.connection-manager.timeout", 70000)
        .setParam("http.conn-manager.timeout", 70000L)
        .setParam("http.connection.stalecheck", false)
        .setParam("http.keepAlive", 70000L));
It does time out after 70 seconds, but the second call still comes in after 60 seconds.
2. Setting headers in RestAssured
return given()
        .relaxedHTTPSValidation()
        .baseUri(cloudBaseurl)
        .header(HttpHeaders.CONNECTION, "Keep-Alive")
        .header("Keep-Alive", "timeout=100", "max=180")
    .when()
        .get(fullServicePath)
    .then()
        .extract().response();
No change. Still a second request after 60 seconds.
3. Changing the default request behavior (as suggested here: https://stackoverflow.com/questions/23054289/httpclient-executes-requests-multiple-time-if-request-timed-out)
RestAssured.config = RestAssured.config().httpClient(httpClientConfig().httpClientFactory(
        () -> {
            SystemDefaultHttpClient systemDefaultHttpClient = new SystemDefaultHttpClient();
            ClientConnectionManager connectionManager = systemDefaultHttpClient.getConnectionManager();
            connectionManager.closeIdleConnections(100, TimeUnit.SECONDS);
            // Disable HttpClient's default behaviour of retrying requests on failure
            ((AbstractHttpClient) systemDefaultHttpClient).setHttpRequestRetryHandler(new DefaultHttpRequestRetryHandler(0, false));
            return systemDefaultHttpClient;
        }));
It turned out I was looking in the wrong direction.
The timeout on the Ingress in Kubernetes needed to be increased: the nginx ingress uses a default proxy read timeout of 60 seconds, and when the backend does not respond within that window it gives up and retries the request against the upstream, which is most likely why the service saw a new call arriving exactly every 60 seconds. Adding the following annotations to the Ingress resource in the Kubernetes deployment YAML solved the issue.
metadata:
  name: your-service-name
  labels:
    app: your-service-name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
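For context, this is roughly where that metadata block sits in a complete Ingress manifest. This is only a sketch: the host, backend port, and path are placeholders I made up, and the exact apiVersion and path syntax depend on your Kubernetes version.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: your-service-name
  labels:
    app: your-service-name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
    - host: your-service.example.com      # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: your-service-name   # the Service the Ingress routes to
                port:
                  number: 8080            # placeholder port

The important part is that the proxy timeout annotations go on the Ingress object, not on the Deployment or Service, because it is the nginx ingress controller that enforces the 60-second read timeout.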