This is a total noob Kubernetes question. I have searched for this but can't seem to find the exact answer, though that may just come down to not having a full understanding of Kubernetes. I have some pods deployed across three nodes, and my questions are simple.
For calculating total disk space you can use
kubectl describe nodes
From there you can grep for ephemeral-storage, which is the node's virtual disk size. This partition is also shared and consumed by pods via emptyDir volumes, container logs, image layers, and container writable layers.
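The ephemeral-storage value in that output is printed as a Kubernetes resource quantity (for example 30298176Ki). A minimal Python sketch to convert such a quantity to bytes, assuming only the binary suffixes Kubernetes typically uses for this field:

```python
# Binary suffixes Kubernetes uses in resource quantities.
SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def quantity_to_bytes(quantity: str) -> int:
    """Convert a quantity like '30298176Ki' to bytes."""
    for suffix, factor in SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # no suffix: value is already in bytes

print(quantity_to_bytes("30298176Ki") / 1024**3)  # size in GiB
```

This only covers binary (power-of-two) suffixes; the full quantity grammar also allows decimal suffixes like "m" and "G", which you would need to handle if they appear in your output.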
If you are using Prometheus, you can calculate it with this query:
sum(node_filesystem_size_bytes)
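If you want that number outside Grafana, Prometheus's HTTP API can run the same query. A sketch in Python, assuming a Prometheus server reachable at localhost:9090 (a hypothetical address; adjust for your cluster):

```python
import json
import urllib.parse
import urllib.request

PROM_URL = "http://localhost:9090"  # assumed address; adjust for your setup

def parse_instant_sum(payload: dict) -> float:
    # An instant-vector sum() returns a single series whose sample is a
    # [timestamp, "value"] pair; the value arrives as a string.
    return float(payload["data"]["result"][0]["value"][1])

def total_filesystem_bytes(prom_url: str = PROM_URL) -> float:
    query = urllib.parse.urlencode({"query": "sum(node_filesystem_size_bytes)"})
    with urllib.request.urlopen(f"{prom_url}/api/v1/query?{query}") as resp:
        return parse_instant_sum(json.load(resp))
```

Note that node_filesystem_size_bytes comes from node_exporter and counts every mounted filesystem, so you may want to filter by mountpoint or device in the query.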
I'm assuming you're using AKS as that's what the question is tagged with.
The worker nodes are just standard VMs with a whole load of scripts to bootstrap the Kubernetes cluster. Disk space is very important: every image layer you pull is cached on the node, and by default the OS disk of these servers can be quite small (30 GB, IIRC) unless tweaked at creation. The partitioning scheme is also not particularly tuned for container delivery.
You can use OMS and the container monitoring solution in Azure to get great insight into your cluster's health: https://docs.microsoft.com/en-us/azure/azure-monitor/insights/containers. Or, as mentioned above, you can use Prometheus/Grafana, or just SSH in and run df -h to see what's going on (although I wouldn't advocate SSH access to nodes).
Note that the disk space on the nodes is very different from the PVs mounted by the containers.
With regard to the max number of pods per node: I think the AKS default is 30 unless you created the cluster specifically with a higher number.
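Rather than guessing, you can check what your nodes actually allow: each node reports its pod capacity under status.allocatable.pods. A sketch that parses kubectl get nodes -o json output (the kubectl invocation is standard; the helper names are mine):

```python
import json
import subprocess

def parse_pod_capacity(nodes: dict) -> dict:
    # Map each node name to the pod count the kubelet will allow on it.
    return {
        item["metadata"]["name"]: int(item["status"]["allocatable"]["pods"])
        for item in nodes["items"]
    }

def pods_per_node() -> dict:
    out = subprocess.run(
        ["kubectl", "get", "nodes", "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_pod_capacity(json.loads(out))
```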