The issue is that I would like to persist a single status file (generated by the service), not the whole directory, so that the status is not lost when the service restarts. How can I solve this?
You can include an emptyDir volume in your pod spec. That creates an empty directory that lives as long as the pod does: it survives container restarts within the pod, but it is deleted when the pod itself is removed from its node.
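A minimal sketch of that (pod name, image, and mount path are placeholders, not from your setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-service          # placeholder name
spec:
  containers:
  - name: my-service
    image: my-service:latest  # placeholder image
    volumeMounts:
    - name: status
      mountPath: /var/service/status
  volumes:
  - name: status
    emptyDir: {}   # survives container restarts, deleted with the pod
```

The service can then write its status file under `/var/service/status` and it will still be there after a container crash and restart.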
This won't necessarily play well if your pod belongs to a deployment that gets updated, since the core action of a deployment is to create new pods and delete old ones. In that case you need a persistent volume claim. That's more complicated: it involves having another Kubernetes object, and it means having somewhere to store the actual volume. (If you're running on AWS, for instance, persistent volumes will probably be backed by EBS volumes.) This might be a little overkill for a simple status.
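If you do go the persistent-volume route, a sketch of the claim and how the pod references it might look like this (names, size, and access mode are illustrative; the storage class depends on your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: status-pvc           # placeholder name
spec:
  accessModes:
  - ReadWriteOnce            # single-node read/write is enough for one file
  resources:
    requests:
      storage: 1Gi           # far more than a status file needs
```

and in the pod spec:

```yaml
  volumes:
  - name: status
    persistentVolumeClaim:
      claimName: status-pvc
```

New pods created by the deployment can then re-attach the same claim and see the old status.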
Another option is to set up some sort of small database (Redis is popular) and store the status there. Now you're not storing state in your container's filesystem and you don't actually care whether the pod exits or is deleted.
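A minimal in-cluster Redis for this could be sketched as below (illustrative only: no persistence, auth, or resource limits configured, and all names are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: status-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: status-redis
  template:
    metadata:
      labels:
        app: status-redis
    spec:
      containers:
      - name: redis
        image: redis:7
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: status-redis
spec:
  selector:
    app: status-redis
  ports:
  - port: 6379
```

Your service would then issue something like `SET status <value>` against `status-redis:6379` instead of writing a file, so the status outlives any particular pod.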
Also consider what can happen if you have multiple copies of your pod running at once. If nothing else, since the default behavior of a deployment is to create the new pod, wait for its health check to pass, and then delete the old pod, you can routinely wind up with two concurrent copies of the same pod. A single shared status might not have the behavior you want.
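The overlap comes from the deployment's rolling-update strategy; these are the defaults spelled out (fragment of a Deployment spec, shown for illustration):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%          # extra pods allowed during a rollout,
                             # so old and new copies run concurrently
      maxUnavailable: 25%
```

Setting `strategy.type: Recreate` avoids the overlap by deleting old pods before creating new ones, at the cost of downtime during each rollout.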
If it's just a status file, you should be able to store it in a ConfigMap. See Add ConfigMap data to a Volume. If in volumes you have
```yaml
volumes:
- name: status
  configMap:
    name: status
    defaultMode: 420
    optional: true
```
and in volumeMounts
```yaml
volumeMounts:
- name: status
  mountPath: /var/service/status
```
then the status is available to the container at that path. Note that configMap volume mounts are read-only, so to change the status the service has to update the ConfigMap through the Kubernetes API rather than writing to the mounted file; the mounted file is then refreshed automatically. See also how kube-dns does it with the kube-dns-config mount from the kube-dns ConfigMap.