I'm porting a Node/React/webpack app to Kubernetes, and am trying to configure a development environment that makes use of webpack's hot-reloading feature. I'm hitting an error when running this with a shared volume on minikube:
ERROR in ./~/css-loader!./~/sass-loader/lib/loader.js?{"data":"$primary: #f9427f;$secondary: #171735;$navbar-back-rotation: 0;$navbar-link-rotation: 0;$login-background: url('/images/login-background.jpg');$secondary-background: url('/images/secondary-bg.jpg');"}!./src/sass/style.sass
Module build failed: Error: Node Sass does not yet support your current environment: Linux 64-bit with Unsupported runtime (67)
For more information on which environments are supported please see:
Running the container by itself (mostly) works: it starts up without errors and serves the page via docker run -it --rm --name=frontend --publish=3000:3000 <container hash>
# Dockerfile
FROM node:latest
RUN mkdir /code
ADD . /code/
WORKDIR /code/
RUN yarn cache clean && yarn install --non-interactive && npm rebuild node-sass
CMD npm run dev-docker
where the dev-docker script in package.json is NODE_ENV=development npm run -- webpack --progress --hot --watch
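For reference, the relevant part of package.json presumably looks something like this (the script string is as above; the surrounding fields are assumptions):

```json
{
  "scripts": {
    "dev-docker": "NODE_ENV=development npm run -- webpack --progress --hot --watch"
  }
}
```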
In the following, commenting out the volumeMounts key eliminates the error.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: dev
  name: web
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend-container
  template:
    metadata:
      labels:
        app: frontend-container
    spec:
      volumes:
        - name: frontend-repo
          hostPath:
            path: /Users/me/Projects/code/frontend
      containers:
        - name: web-container
          image: localhost:5000/react:dev
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          volumeMounts:
            - name: frontend-repo
              mountPath: /code
          env:
            ... # redacted for simplicity, assume works
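One caveat on the hostPath volume above: on minikube, hostPath resolves against the minikube VM's filesystem, not the macOS host, so depending on the driver the project directory may need to be shared into the VM explicitly (a sketch; whether this is needed depends on your driver's default mounts):

```shell
# Share the local repo into the minikube VM so the hostPath above resolves.
# The mapping is <host path>:<vm path>; keep this command running while developing.
minikube mount /Users/me/Projects/code/frontend:/Users/me/Projects/code/frontend
```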
Based on what I've found elsewhere, I believe the OS-native bindings used by node-sass are conflicting between host and container when the shared volume is introduced. That is, the image build creates bindings that work inside the container, but those are overwritten when the shared volume is mounted.
Is this understanding correct? How do I best structure things so that a developer can work on their local repo and see those changes automatically reflected in the cluster instance, without rebuilding images?
My hypothesis was borne out: the node modules were being built for the container, but overwritten by the volumeMount. The approach that worked best was to do the dependency install as the entrypoint of the container, so that it runs when the container starts up rather than only at build time.
# Dockerfile
CMD yarn cache clean && yarn install --non-interactive --force && npm run dev-docker
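A refinement worth considering (sketched below with the same names as the deployment above; I have not adopted it myself): mounting an emptyDir over /code/node_modules masks that directory from the host mount, so the startup yarn install writes into the container-local volume instead of the shared host directory, keeping the host checkout clean. Note that an emptyDir starts empty on each pod start, so the install at container startup is still required.

```yaml
# In the pod spec of deployment.yaml:
volumes:
  - name: frontend-repo
    hostPath:
      path: /Users/me/Projects/code/frontend
  - name: node-modules      # container-local; starts empty each pod start
    emptyDir: {}
containers:
  - name: web-container
    image: localhost:5000/react:dev
    volumeMounts:
      - name: frontend-repo
        mountPath: /code
      - name: node-modules  # masks /code/node_modules from the host mount
        mountPath: /code/node_modules
```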