I have a Dockerfile, from which I have built an image, and I use EKS to launch the containers. In my application, for logging purposes, I read environment variables such as "container_instance" and "ec2_instance_id" and include them in each log line, so that in Elasticsearch I can see which container or host EC2 machine a given log entry came from.
How can I set these two values as environment variables when I start my container?
In your Kubernetes Pod spec, you can use the downward API to inject some of this information. For example, to expose the Kubernetes node name, you can set:
env:
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
The node name is typically the node's hostname (the example in the EKS documentation shows EC2 internal hostnames, for instance). There is no downward API field for the EC2 instance ID, so you can't easily get it at a per-pod level.
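A fuller sketch of a Pod spec using the downward API might look like the following. The pod name, container name, and image here are placeholders for your own; `metadata.name` gives you a per-pod identifier you could use in place of "container_instance", and `spec.nodeName` identifies the EC2 host:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: logging-demo            # hypothetical pod name
spec:
  containers:
  - name: app
    image: my-app:latest        # your application image
    env:
    - name: MY_POD_NAME         # per-pod identifier, usable as "container_instance"
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_NODE_NAME        # the node (EC2 host) the pod landed on
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: MY_HOST_IP          # the node's IP address
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```

If you genuinely need the EC2 instance ID rather than the node name, one workaround (assuming pods on your cluster can reach the instance metadata service) is to have the container query `http://169.254.169.254/latest/meta-data/instance-id` at startup and export the result itself.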
You might also configure logging globally at the cluster level. The Kubernetes documentation includes a packaged setup that routes logs to Elasticsearch and Kibana. The example shown there includes only the pod name in the log message metadata, but you should be able to reconfigure the underlying fluentd to attach additional host-level metadata.
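As one hedged sketch of that reconfiguration: fluentd's built-in `record_transformer` filter can stamp extra fields onto every record. The ConfigMap name and file name below are hypothetical, and you would need to merge this fragment into whatever fluentd configuration your log-collection DaemonSet actually loads:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-extra-config    # hypothetical name; merge into your fluentd setup
  namespace: kube-system
data:
  extra.conf: |
    # Add the collector host's name (the node) to every Kubernetes log record
    <filter kubernetes.**>
      @type record_transformer
      <record>
        node_name "#{Socket.gethostname}"
      </record>
    </filter>
```

Because the fluentd collector typically runs as a DaemonSet with one pod per node, its hostname (or a downward-API variable injected into the collector pod the same way as above) identifies the EC2 host for every log line it ships.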