I am trying to run a NodeJS Express server in a Kubernetes pod on minikube.
The application itself runs without any problem on bare metal (Linux Ubuntu, Windows, etc.). But in Kubernetes I have a lot of problems: the server has a lot of routes, and the deployment fails. If I reduce the number of routes by roughly 50%, the app runs fine, and it doesn't make any difference which routes I comment out.
Service file (server-cluster-ip-service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 8093
      targetPort: 8093
Deployment file (server-deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: jabro888/salesactas4002server:1.0.1
          ports:
            - containerPort: 8093
server.ts file:
import express, { Application } from 'express';
import { initApi } from './api';
import { apiRoutes } from './apiRoute';

export const app: Application = express();

app.listen(8093, () => {
  initApi(app).then(() => {
    apiRoutes(app);
  }).catch((error) => {
    console.log('what the f?ck is going wrong: ' + error);
  });
  console.log('HTTP Server running at http://192.168.99.100 on port 8093');
});
api.ts file:
import bodyParser from 'body-parser';
import cookieParser from 'cookie-parser';
import config from 'config';
import cors from 'cors';

const options: cors.CorsOptions = {
  allowedHeaders: config.get('server.cors.allowedHeaders'),
  credentials: config.get('server.cors.credentials'),
  methods: config.get('server.cors.methods'),
  origin: config.get('server.cors.origin'),
  preflightContinue: config.get('server.cors.preflightContinue')
};

export async function initApi(app) {
  console.log('voor initialiseren'); // Dutch: "before initialising"
  //await apiInitialiseer();
  console.log('na initialiseren'); // Dutch: "after initialising"
  app.use(bodyParser.json());
  app.use(cors(options));
  app.use(cookieParser());
  app.set('strict routing', true);
  app.enable('strict routing');
  console.log('stap1'); // Dutch: "step1"
}
apiRoute.ts file (when I remove or comment out the routes from step6 until step9, the application runs fine in Kubernetes/minikube):
export function apiRoutes(app) {
  //app.route('/api/test').get(apiGetRequestByMedewerkerAfterTime);
  app.route('/api/salesactas400/cookie').get(apiGetAllCookies);
  app.route('/api/salesactas400/aut/v').put(apiVerlengSession);
  app.route('/api/salesactas400/aut/s').put(apiStopSession);
  console.log('step2');
  app.route('/api/salesactas400/medewerker/login-afdeling').get(apiGetMedewerkerAfdelingByLogin);
  app.route('/api/salesactas400/medewerker/Login').get(apiGetMedewerkerByLogin);
  app.route('/api/salesactas400/medewerker/login').put(apiGetMedewerkerVestigingByLoginLogin); // uses PUT for the login because of the cookie
  console.log('step3');
  app.route('/api/salesactas400/medewerker').get(apiGetAllMedewerkersWithAfdelingLocatie);
  app.route('/api/salesactas400/medewerker/:id').get(apiGetMedewerkerByID);
  app.route('/api/salesactas400/medewerker/:id').put(apiUpdateMedewerkerByID);
  app.route('/api/salesactas400/medewerker').post(apiAddMedewerker);
  app.route('/api/salesactas400/medewerker/:id').delete(apiDeleteMedewerkerByID);
  console.log('step4');
  app.route('/api/salesactas400/locatie').get(apiGetAllLocaties);
  app.route('/api/salesactas400/locatie/:id').get(apiGetLocatieByID);
  app.route('/api/salesactas400/locatie/:id').put(apiUpdateLocatieByID);
  app.route('/api/salesactas400/locatie').post(apiAddLocatie);
  app.route('/api/salesactas400/locatie/:id').delete(apiDeleteLocatieByID);
  console.log('step5');
  app.route('/api/salesactas400/afdeling').get(apiGetAllAfdelings);
  app.route('/api/salesactas400/afdeling/:id').get(apiGetAfdelingByID);
  app.route('/api/salesactas400/afdeling/:id').put(apiUpdateAfdelingByID);
  app.route('/api/salesactas400/afdeling').post(apiAddAfdeling);
  app.route('/api/salesactas400/afdeling/:id').delete(apiDeleteAfdelingByID);
  console.log('step6');
  app.route('/api/salesactas400/activiteit').get(apiGetAllActiviteitenWithAfdeling);
  app.route('/api/salesactas400/activiteit/afdeling/:afdelingId').get(apiGetActiviteitenByAfdelingId);
  app.route('/api/salesactas400/activiteit/:id').get(apiGetActiviteitByID);
  app.route('/api/salesactas400/activiteit/:id').put(apiUpdateActiviteitByID);
  app.route('/api/salesactas400/activiteit').post(apiAddActiviteit);
  app.route('/api/salesactas400/activiteit/:id').delete(apiDeleteActiviteitByID);
  console.log('step13');
  console.log('step7');
  app.route('/api/salesactas400/registratiefilter').put(apiGetAllRegistratiesFiltered);
  app.route('/api/salesactas400/registratie').get(apiGetAllRegistraties);
  app.route('/api/salesactas400/registratie/:id').get(apiGetRegistratieByMedewerkerID);
  app.route('/api/salesactas400/registratie/:id').put(apiUpdateRegistratieByID);
  app.route('/api/salesactas400/registratie/:id').delete(apiDeleteRegistratieByID);
  app.route('/api/salesactas400/registratie').post(apiAddRegistratie);
  console.log('step8');
  app.route('/api/salesactas400/export').post(apiAddExport);
  console.log('step9');
}
After loading the files with kubectl apply -f and running kubectl logs server-deployment-8588f6cfdd-ftqvj, I get this:
> salesactas400@0.8.0 start /server
> ts-node ./server.ts
This is WRONG; it seems the application crashes, because none of the console.log messages show up.
After kubectl get pods I get this:
NAME READY STATUS RESTARTS AGE
postgres-deployment-7d9788bdfd-pf6hf 1/1 Running 0 101s
server-deployment-8588f6cfdd-ftqvj 0/1 Completed 2 67s
For some reason the container status is Completed???
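Note that kubectl logs reads from the freshly restarted container, so any output from the crashed run is gone. A sketch of how the previous instance can be inspected, reusing the pod name from above:

# logs of the previous (crashed) container instance in the pod
kubectl logs --previous server-deployment-8588f6cfdd-ftqvj
# events, restart reason and the last exit code of the container
kubectl describe pod server-deployment-8588f6cfdd-ftqvj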
When I remove the routes from step6 to step9, I see this:
> salesactas400@0.8.0 start /server
> ts-node ./server.ts
voor initialiseren
na initialiseren
stap1
HTTP Server running at http://192.168.99.100: on port: 8093
stap2
stap3
stap4
stap5
So this is OK, but WHY can't I load all the routes? Is there any limitation in Kubernetes on the number of routes an Express server can have, or is something else wrong in my code?
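As far as I can tell, Express itself has no route limit: each app.route() call just registers a handler, so even hundreds of routes cost very little memory. What is heavy at startup is ts-node, which compiles and type-checks the whole project in RAM before the server starts; in a memory-starved container that alone can be fatal. Two possible ways to shrink the startup footprint, sketched under the assumption of a standard tsconfig.json:

# skip in-memory type-checking and only transpile (ts-node flag)
ts-node --transpile-only ./server.ts

# or precompile once and run plain node inside the container
tsc --outDir dist
node dist/server.js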
I run minikube 1.6.2 and Docker 19.03.5. NodeJS is currently version 12.14 from the node:alpine image; I also tried NodeJS 10.14 and 11.6.
This is the Dockerfile I used to build the image jabro888/salesactas4002server:1.0.1:
FROM node:12.14.0-alpine
WORKDIR "/server"
COPY ./package.json ./
RUN apk add --no-cache --virtual .gyp \
        python \
        make \
        g++ \
        unixodbc \
        unixodbc-dev \
    && npm install \
    && apk del .gyp
COPY . .
#ENV NODE_ENV=production
CMD ["npm", "start"]
I hope somebody can help me; I have already been struggling with this problem for 3 days.
This might also be interesting, and I don't understand anything about it: after some time the pod restarts, and after some more time it crashes. And again, the same app runs on a Linux machine without any problem.
bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-deployment-7d9788bdfd-mm8mm 1/1 Running 0 76s
server-deployment-8588f6cfdd-qd5n6 0/1 Completed 1 34s
bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-deployment-7d9788bdfd-mm8mm 1/1 Running 0 81s
server-deployment-8588f6cfdd-qd5n6 0/1 Completed 1 39s
bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$ kubectl get pods
Unable to connect to the server: net/http: TLS handshake timeout
bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-deployment-7d9788bdfd-mm8mm 1/1 Running 0 2m17s
server-deployment-8588f6cfdd-qd5n6 0/1 Completed 2 95s
bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-deployment-7d9788bdfd-mm8mm 1/1 Running 0 2m21s
server-deployment-8588f6cfdd-qd5n6 0/1 Completed 2 99s
bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-deployment-7d9788bdfd-mm8mm 1/1 Running 0 2m27s
server-deployment-8588f6cfdd-qd5n6 0/1 CrashLoopBackOff 2 105s
bp@bp-HP-Z230-Tower-Workstation:~/Documents/nodejs/salesactas400/server$
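A hedged reading of this transcript: the container is being killed and restarted until Kubernetes gives up and backs off (CrashLoopBackOff). A way to confirm an out-of-memory kill, sketched with the pod name from above:

# "Last State" shows the previous exit; Exit Code 137 means SIGKILL,
# which on Linux is typically the kernel OOM killer. With a memory limit
# set on the container it is reported explicitly as "Reason: OOMKilled".
kubectl describe pod server-deployment-8588f6cfdd-qd5n6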
OK, SOLVED: the problem was that minikube was not giving the pod enough resources. I had the same problem when I used AWS Elastic Beanstalk: there too the server suddenly stopped, but in those logs I could see why. It ran out of RAM. So to solve this, minikube has to be started with an extra memory parameter, like this:
minikube start --memory=4096
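One caveat, as far as I know: --memory only takes effect when the minikube VM is created, so an existing cluster has to be recreated first. A sketch:

minikube delete
minikube start --memory=4096

# optional: give the deployment an explicit memory limit afterwards, so a
# future out-of-memory kill shows up as "OOMKilled" in the pod status
# (512Mi is an assumed value, not taken from the original setup)
kubectl set resources deployment server-deployment --limits=memory=512Mi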