My Go code downloads a file from Azure and writes it to disk. For a file of about 512 MB, the code takes around a minute to download and write the file when executed standalone.
When the same code runs as a container inside a Kubernetes pod, the time shoots up to almost 3 minutes. I added traces and found that the io.Copy call takes most of the time. I do have CPU and memory quotas set on the pod, and performance improves when I raise them, but I want to understand whether there is a more efficient way to do this.
Solutions tried: io.CopyBuffer, but it didn't help much.
Would really appreciate guidance from anyone who has faced a similar issue.
resp, err := client.Do(req)
if err != nil {
	return err
}
defer resp.Body.Close()

if resp.StatusCode != http.StatusOK {
	body, _ := io.ReadAll(resp.Body)
	return errors.New(string(body))
}

if err := os.Mkdir(outputPath+"/"+digest[7:], 0700); err != nil {
	return err
}

// Renamed from "filepath" to avoid shadowing the path/filepath package.
layerPath := outputPath + "/" + digest[7:] + "/layer.tar.gzip"
file, err := os.OpenFile(layerPath, os.O_RDWR|os.O_CREATE, 0600)
if err != nil {
	return err
}

if _, err := io.Copy(file, resp.Body); err != nil {
	_ = file.Close()
	return err
}
// Close explicitly so a write error surfaces instead of being dropped.
return file.Close()