I have read https://redis.io/topics/mass-insert, tried the "Use the protocol, Luke" approach described there, and ran:
cat data.txt | redis-cli -a <pass> -h <events-k8s-service> --pipe --pipe-timeout 100 > /dev/null
The redirection to /dev/null is there to discard the replies; Redis's CLIENT REPLY can't serve that purpose here from the CLI, and it may turn into a blocking command.
The data.txt file has around 18 million records/commands, like:
SELECT 1
SET key1 '"field1":"val1","field2":"val2","field3":"val3","field4":"val4","field5":val5,"field6":val6'
SET key2 '"field1":"val1","field2":"val2","field3":"val3","field4":"val4","field5":val5,"field6":val6'
...
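One thing worth checking: the mass-insert doc recommends feeding redis-cli --pipe raw RESP rather than inline commands, since RESP is length-prefixed and parses faster, and it sidesteps quoting issues with values like the ones above. Below is a minimal sketch of a converter, assuming the job that produces data.txt can instead emit tab-separated key/value pairs (proto.py and pairs.tsv are hypothetical names, not from the question):

#!/usr/bin/env python3
# Convert "key<TAB>value" lines on stdin to raw RESP on stdout.
import sys

def resp(*args: str) -> bytes:
    # A RESP command is an array header *<argc>, then $<len>\r\n<arg>\r\n per argument.
    out = [b"*%d\r\n" % len(args)]
    for a in args:
        b = a.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(b), b))
    return b"".join(out)

w = sys.stdout.buffer
w.write(resp("SELECT", "1"))          # same DB selection as in data.txt
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t", 1)
    w.write(resp("SET", key, value))  # length-prefixed, so no escaping needed

It would be used in place of the plain cat, e.g. python3 proto.py < pairs.tsv | redis-cli -a <pass> -h <events-k8s-service> --pipe > /dev/null.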
The pipe command is executed from a cronJob that execs into the Redis pod and runs it from within the pod. To understand the footprint, the Redis pod was given no resource limits, and the observations were:
Keys loaded: 18147292
Time taken: ~31 minutes
Peak CPU: 2063m
Peak Memory: 4745Mi
The resources consumed are way too high and the time taken is too long.
The questions: Do we need to fine-tune Redis here? Is there a way to reduce the load time and the resource footprint?
Any help is appreciated, thanks in advance.
If you are using redis-cli inside the pod to move millions of keys into Redis, the pod won't be able to handle it; run the load from a separate client instead.
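Here is a minimal sketch of running the loader as its own CronJob instead (redis-loader, redis-secret, and data-pvc are placeholder names, and the schedule is illustrative):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: redis-loader
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: loader
            image: redis:7   # used only for the redis-cli binary
            command: ["sh", "-c"]
            args:
            - redis-cli -a "$REDIS_PASS" -h events-k8s-service --pipe < /data/data.txt
            env:
            - name: REDIS_PASS
              valueFrom:
                secretKeyRef:
                  name: redis-secret
                  key: password
            volumeMounts:
            - name: data
              mountPath: /data
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: data-pvc

This way the pipe's CPU and buffering cost lands on the loader pod, not on the Redis container.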
Also, you have not specified what resources you are giving to Redis; since it is an in-memory store, it's better to give Redis proper memory, e.g. 2-3 GB depending on usage.
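For reference, the requests/limits go on the Redis container spec; the numbers below are only illustrative, sized around the ~4.7Gi peak observed in the question:

resources:
  requests:
    memory: "4Gi"
    cpu: "1"
  limits:
    memory: "6Gi"
    cpu: "2"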
You can try out RIOT (Redis Input/Output Tools) to load the data into Redis: https://github.com/redis-developer/riot
There is also a good video about loading the Bigfoot dataset into Redis: https://www.youtube.com/watch?v=TqTg6RijfaU
Do we need to fine-tune Redis here?
Increase the memory limit for Redis if it's getting OOMKilled.
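It also helps to keep Redis's own maxmemory a bit below the container's memory limit, so Redis returns an out-of-memory error instead of the kubelet OOMKilling the pod; the values here are illustrative:

# leave headroom under the container memory limit
redis-cli -a <pass> -h <events-k8s-service> CONFIG SET maxmemory 5gb
# for a bulk load you usually want writes to fail rather than silently evict keys
redis-cli -a <pass> -h <events-k8s-service> CONFIG SET maxmemory-policy noeviction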