I am using the fabric8 maven plugin to create a Docker image of an app written in Spring Boot. I also need pods with Nginx, a database, and a mail server. Can the fabric8 maven plugin help me create those pods as well? If not, what should I do?
I'm a maintainer of Fabric8 Maven Plugin. Fabric8 Maven Plugin (FMP) has a concept of resource fragments: you can add your additional resources to the FMP source directory (src/main/fabric8
by default), and FMP would process and enrich them during the resource goal. This also applies to controller resources: if you add a Deployment fragment with additional containers in its
spec, FMP would merge them into the generated Deployment. For example, let me add a pod fragment in the src/main/fabric8
directory:
~/work/repos/fmp-demo-project : $ cat src/main/fabric8/test-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: testkubee
spec:
  containers:
  - name: testkubepod
    image: nginx
~/work/repos/fmp-demo-project : $ mvn fabric8:resource
[INFO] Scanning for projects...
[INFO]
[INFO] ----------------------< meetup:random-generator >-----------------------
[INFO] Building random-generator 0.0.1
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- fabric8-maven-plugin:4.3.0:resource (default-cli) @ random-generator ---
[INFO] F8: Running generator spring-boot
[INFO] F8: spring-boot: Using Container image fabric8/java-centos-openjdk8-jdk:1.5 as base / builder
[INFO] F8: using resource templates from /home/rohaan/work/repos/fmp-demo-project/src/main/fabric8
[INFO] F8: fmp-controller: Adding a default Deployment
[INFO] F8: fmp-service: Adding a default service 'random-generator' with ports [8080]
[INFO] F8: f8-healthcheck-spring-boot: Adding readiness probe on port 8080, path='/actuator/health', scheme='HTTP', with initial delay 10 seconds
[INFO] F8: f8-healthcheck-spring-boot: Adding liveness probe on port 8080, path='/actuator/health', scheme='HTTP', with initial delay 180 seconds
[INFO] F8: fmp-revision-history: Adding revision history limit to 2
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/kubernetes/random-generator-service.yml resource
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/kubernetes/testkubee-pod.yml resource
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/kubernetes/random-generator-deployment.yml resource
[INFO] F8: using resource templates from /home/rohaan/work/repos/fmp-demo-project/src/main/fabric8
[INFO] F8: fmp-controller: Adding a default DeploymentConfig
[INFO] F8: fmp-service: Adding a default service 'random-generator' with ports [8080]
[INFO] F8: f8-healthcheck-spring-boot: Adding readiness probe on port 8080, path='/actuator/health', scheme='HTTP', with initial delay 10 seconds
[INFO] F8: f8-healthcheck-spring-boot: Adding liveness probe on port 8080, path='/actuator/health', scheme='HTTP', with initial delay 180 seconds
[INFO] F8: fmp-revision-history: Adding revision history limit to 2
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/openshift/random-generator-service.yml resource
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/openshift/testkubee-pod.yml resource
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/openshift/random-generator-route.yml resource
[INFO] F8: validating /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/openshift/random-generator-deploymentconfig.yml resource
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.210 s
[INFO] Finished at: 2020-01-26T14:32:09+05:30
[INFO] ------------------------------------------------------------------------
~/work/repos/fmp-demo-project : $ mvn fabric8:apply
[INFO] Scanning for projects...
[INFO]
[INFO] ----------------------< meetup:random-generator >-----------------------
[INFO] Building random-generator 0.0.1
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- fabric8-maven-plugin:4.3.0:apply (default-cli) @ random-generator ---
[INFO] F8: Using Kubernetes at https://192.168.39.39:8443/ in namespace default with manifest /home/rohaan/work/repos/fmp-demo-project/target/classes/META-INF/fabric8/kubernetes.yml
[INFO] F8: Using namespace: default
[INFO] F8: Using namespace: default
[INFO] F8: Creating a Service from kubernetes.yml namespace default name random-generator
[INFO] F8: Created Service: target/fabric8/applyJson/default/service-random-generator-1.json
[INFO] F8: Using namespace: default
[INFO] F8: Creating a Deployment from kubernetes.yml namespace default name random-generator
[INFO] F8: Created Deployment: target/fabric8/applyJson/default/deployment-random-generator-1.json
[INFO] F8: Using namespace: default
[INFO] F8: Creating a Pod from kubernetes.yml namespace default name testkubee
[INFO] F8: Created Pod result: Pod(apiVersion=v1, kind=Pod, metadata=ObjectMeta(annotations=null, clusterName=null, creationTimestamp=2020-01-26T09:02:18Z, deletionGracePeriodSeconds=null, deletionTimestamp=null, finalizers=[], generateName=null, generation=null, labels={app=random-generator, group=meetup, provider=fabric8, version=0.0.1}, managedFields=[], name=testkubee, namespace=default, ownerReferences=[], resourceVersion=200332, selfLink=/api/v1/namespaces/default/pods/testkubee, uid=9959a629-7dff-4a66-802a-c3c24107ce6b, additionalProperties={}), spec=PodSpec(activeDeadlineSeconds=null, affinity=null, automountServiceAccountToken=null, containers=[Container(args=[], command=[], env=[EnvVar(name=KUBERNETES_NAMESPACE, value=null, valueFrom=EnvVarSource(configMapKeyRef=null, fieldRef=ObjectFieldSelector(apiVersion=v1, fieldPath=metadata.namespace, additionalProperties={}), resourceFieldRef=null, secretKeyRef=null, additionalProperties={}), additionalProperties={})], envFrom=[], image=nginx, imagePullPolicy=IfNotPresent, lifecycle=null, livenessProbe=null, name=testkubepod, ports=[ContainerPort(containerPort=8080, hostIP=null, hostPort=null, name=http, protocol=TCP, additionalProperties={}), ContainerPort(containerPort=9779, hostIP=null, hostPort=null, name=prometheus, protocol=TCP, additionalProperties={}), ContainerPort(containerPort=8778, hostIP=null, hostPort=null, name=jolokia, protocol=TCP, additionalProperties={})], readinessProbe=null, resources=ResourceRequirements(limits=null, requests=null, additionalProperties={}), securityContext=SecurityContext(allowPrivilegeEscalation=null, capabilities=null, privileged=false, procMount=null, readOnlyRootFilesystem=null, runAsGroup=null, runAsNonRoot=null, runAsUser=null, seLinuxOptions=null, windowsOptions=null, additionalProperties={}), stdin=null, stdinOnce=null, terminationMessagePath=/dev/termination-log, terminationMessagePolicy=File, tty=null, volumeDevices=[], 
volumeMounts=[VolumeMount(mountPath=/var/run/secrets/kubernetes.io/serviceaccount, mountPropagation=null, name=default-token-qx85s, readOnly=true, subPath=null, subPathExpr=null, additionalProperties={})], workingDir=null, additionalProperties={})], dnsConfig=null, dnsPolicy=ClusterFirst, enableServiceLinks=true, hostAliases=[], hostIPC=null, hostNetwork=null, hostPID=null, hostname=null, imagePullSecrets=[], initContainers=[], nodeName=null, nodeSelector=null, preemptionPolicy=null, priority=0, priorityClassName=null, readinessGates=[], restartPolicy=Always, runtimeClassName=null, schedulerName=default-scheduler, securityContext=PodSecurityContext(fsGroup=null, runAsGroup=null, runAsNonRoot=null, runAsUser=null, seLinuxOptions=null, supplementalGroups=[], sysctls=[], windowsOptions=null, additionalProperties={}), serviceAccount=default, serviceAccountName=default, shareProcessNamespace=null, subdomain=null, terminationGracePeriodSeconds=30, tolerations=[Toleration(effect=NoExecute, key=node.kubernetes.io/not-ready, operator=Exists, tolerationSeconds=300, value=null, additionalProperties={}), Toleration(effect=NoExecute, key=node.kubernetes.io/unreachable, operator=Exists, tolerationSeconds=300, value=null, additionalProperties={})], volumes=[Volume(awsElasticBlockStore=null, azureDisk=null, azureFile=null, cephfs=null, cinder=null, configMap=null, csi=null, downwardAPI=null, emptyDir=null, fc=null, flexVolume=null, flocker=null, gcePersistentDisk=null, gitRepo=null, glusterfs=null, hostPath=null, iscsi=null, name=default-token-qx85s, nfs=null, persistentVolumeClaim=null, photonPersistentDisk=null, portworxVolume=null, projected=null, quobyte=null, rbd=null, scaleIO=null, secret=SecretVolumeSource(defaultMode=420, items=[], optional=null, secretName=default-token-qx85s, additionalProperties={}), storageos=null, vsphereVolume=null, additionalProperties={})], additionalProperties={}), status=PodStatus(conditions=[], containerStatuses=[], hostIP=null, 
initContainerStatuses=[], message=null, nominatedNodeName=null, phase=Pending, podIP=null, qosClass=BestEffort, reason=null, startTime=null, additionalProperties={}), additionalProperties={})
[INFO] F8: HINT: Use the command `kubectl get pods -w` to watch your pods start up
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7.207 s
[INFO] Finished at: 2020-01-26T14:32:22+05:30
[INFO] ------------------------------------------------------------------------
~/work/repos/fmp-demo-project : $ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
random-generator-86496844ff-2tzn2   1/1     Running   0          43s
testkubee                           1/1     Running   0          43s
~/work/repos/fmp-demo-project : $
You can either provide pod fragments like I did in the src/main/fabric8
directory, or you can provide a customized Deployment
resource fragment (if you want all your containers to be part of your Deployment,
as in this example).
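As a rough sketch of the second option, a partial Deployment fragment placed in src/main/fabric8 could declare an extra nginx container next to your application container; FMP would merge and enrich the rest of the Deployment for you. The file and container names below are illustrative, not from my project:

```yaml
# src/main/fabric8/deployment.yml -- partial fragment; FMP fills in the
# generated application container, labels, probes, etc. during fabric8:resource
spec:
  template:
    spec:
      containers:
      - name: nginx-sidecar   # hypothetical additional container name
        image: nginx
        ports:
        - containerPort: 80
```

Note that for things like a database or a mail server you would usually want them as separate Deployments with their own Services rather than as sidecars, so separate fragments (like the pod fragment above, but as Deployment/Service resources) are typically the better fit.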