Background: I have set up a ServiceAccount and a spinnaker-role-binding in the default namespace, created the spinnaker namespace in Kubernetes, and deployed the services on ports 9000 and 8084.
NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/spin-deck-np   LoadBalancer   hidden       <pending>     9000:31295/TCP   9m39s
service/spin-gate-np   LoadBalancer   hidden       <pending>     8084:32161/TCP   9m39s
I created the halyard deployment in the default namespace and configured hal inside it.
Problem: When I run the hal deploy apply command, I get the error below:
Problems in Global:
! ERROR Unexpected exception:
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET
at:
https://kubernetes.default/apis/extensions/v1beta1/namespaces/spinnaker/replicasets.
Message: the server could not find the requested resource. Received status:
Status(apiVersion=v1, code=404, details=StatusDetails(causes=[], group=null,
kind=null, name=null, retryAfterSeconds=null, uid=null,
additionalProperties={}), kind=Status, message=the server could not find the
requested resource, metadata=ListMeta(resourceVersion=null, selfLink=null,
additionalProperties={}), reason=NotFound, status=Failure,
additionalProperties={}).
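The 404 says the API server returned NotFound for the extensions/v1beta1 replicasets path itself, not for a missing object. One way to see what the ServiceAccount gets back on that exact path is to probe it from inside the halyard pod. This is only a sketch: it assumes the in-cluster API address and ServiceAccount mount paths from the question, and it skips the request when run outside a pod.

```shell
#!/bin/sh
# Hypothetical probe, meant to run inside the halyard pod: request the same
# API path clouddriver is failing on, using the pod's ServiceAccount token.
APISERVER="https://kubernetes.default"
SA_DIR="/var/run/secrets/kubernetes.io/serviceaccount"
URL="$APISERVER/apis/extensions/v1beta1/namespaces/spinnaker/replicasets"
echo "probing: $URL"
if [ -f "$SA_DIR/token" ]; then
  # Inside a pod: make the authenticated request and print the raw status body.
  curl -sS --cacert "$SA_DIR/ca.crt" \
       -H "Authorization: Bearer $(cat "$SA_DIR/token")" \
       "$URL"
else
  echo "not running in a pod: ServiceAccount token not found" >&2
fi
```

A 404 Status body from this probe (rather than 401/403) would confirm the API server simply does not serve that resource path for this cluster.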
Below is my kube config file at /home/spinnaker/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
server: https://kubernetes.default
name: default
contexts:
- context:
cluster: default
user: user
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: user
user:
token: *********************
Below is the hal config file at /home/spinnaker/.hal/config
currentDeployment: default
deploymentConfigurations:
- name: default
version: 1.8.1
providers:
appengine:
enabled: false
accounts: []
aws:
enabled: false
accounts: []
bakeryDefaults:
baseImages: []
defaultKeyPairTemplate: '{{name}}-keypair'
defaultRegions:
- name: us-west-2
defaults:
iamRole: BaseIAMRole
ecs:
enabled: false
accounts: []
azure:
enabled: false
accounts: []
bakeryDefaults:
templateFile: azure-linux.json
baseImages: []
dcos:
enabled: false
accounts: []
clusters: []
dockerRegistry:
enabled: true
accounts:
- name: my-docker-registry
requiredGroupMembership: []
providerVersion: V1
permissions: {}
address: https://index.docker.io
email: fake.email@spinnaker.io
cacheIntervalSeconds: 30
clientTimeoutMillis: 60000
cacheThreads: 1
paginateSize: 100
sortTagsByDate: false
trackDigests: false
insecureRegistry: false
repositories:
- library/nginx
primaryAccount: my-docker-registry
google:
enabled: false
accounts: []
bakeryDefaults:
templateFile: gce.json
baseImages: []
zone: us-central1-f
network: default
useInternalIp: false
kubernetes:
enabled: true
accounts:
- name: my-k8s-account
requiredGroupMembership: []
providerVersion: V1
permissions: {}
dockerRegistries:
- accountName: my-docker-registry
namespaces: []
configureImagePullSecrets: true
cacheThreads: 1
namespaces: []
omitNamespaces: []
kinds: []
omitKinds: []
customResources: []
cachingPolicies: []
kubeconfigFile: /home/spinnaker/.kube/config
oauthScopes: []
oAuthScopes: []
primaryAccount: my-k8s-account
openstack:
enabled: false
accounts: []
bakeryDefaults:
baseImages: []
oracle:
enabled: false
accounts: []
deploymentEnvironment:
size: SMALL
type: Distributed
accountName: my-k8s-account
updateVersions: true
consul:
enabled: false
vault:
enabled: false
customSizing: {}
gitConfig:
upstreamUser: spinnaker
persistentStorage:
persistentStoreType: gcs
azs: {}
gcs:
jsonPath: /home/spinnaker/.gcp/gcs-account.json
project: round-reality
bucket: spin-94cc2e22-8ece-4bc1-80fd-e9df71c1d9f4
rootFolder: front50
bucketLocation: us
redis: {}
s3:
rootFolder: front50
oracle: {}
features:
auth: false
fiat: false
chaos: false
entityTags: false
jobs: false
metricStores:
datadog:
enabled: false
prometheus:
enabled: false
add_source_metalabels: true
stackdriver:
enabled: false
period: 30
enabled: false
notifications:
slack:
enabled: false
timezone: America/Los_Angeles
ci:
jenkins:
enabled: false
masters: []
travis:
enabled: false
masters: []
security:
apiSecurity:
ssl:
enabled: false
overrideBaseUrl: http://External IP of worker:8084
uiSecurity:
ssl:
enabled: false
overrideBaseUrl: http://External IP of worker:9000
authn:
oauth2:
enabled: false
client: {}
resource: {}
userInfoMapping: {}
saml:
enabled: false
ldap:
enabled: false
x509:
enabled: false
iap:
enabled: false
enabled: false
authz:
groupMembership:
service: EXTERNAL
google:
roleProviderType: GOOGLE
github:
roleProviderType: GITHUB
file:
roleProviderType: FILE
enabled: false
artifacts:
bitbucket:
enabled: false
accounts: []
gcs:
enabled: false
accounts: []
github:
enabled: false
accounts: []
gitlab:
enabled: false
accounts: []
http:
enabled: false
accounts: []
s3:
enabled: false
accounts: []
pubsub:
google:
enabled: false
subscriptions: []
canary:
enabled: false
serviceIntegrations:
- name: google
enabled: false
accounts: []
gcsEnabled: false
stackdriverEnabled: false
- name: prometheus
enabled: false
accounts: []
- name: datadog
enabled: false
accounts: []
- name: aws
enabled: false
accounts: []
s3Enabled: false
reduxLoggerEnabled: true
defaultJudge: NetflixACAJudge-v1.0
stagesEnabled: true
templatesEnabled: true
showAllConfigsEnabled: true
I used the commands below inside halyard to configure kubectl access to Kubernetes:
kubectl config set-cluster default --server=https://kubernetes.default --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl config set-context default --cluster=default
token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-credentials user --token=$token
kubectl config set-context default --user=user
kubectl config use-context default
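The commands above should leave the kubeconfig with a current-context that points at a defined context. A quick way to sanity-check that, sketched here as a small shell function and demonstrated on an inline sample mirroring the kubeconfig from the question (the helper name and sample path are illustrative, not from the original post):

```shell
#!/bin/sh
# check_context: verify a kubeconfig sets current-context and that the named
# context actually exists in its contexts list.
check_context() {
  cfg="$1"
  # Extract the value of the current-context line, if any.
  ctx=$(sed -n 's/^current-context: //p' "$cfg")
  [ -n "$ctx" ] || { echo "no current-context set"; return 1; }
  # Look for a context entry with that name.
  grep -q "name: $ctx" "$cfg" \
    && echo "context '$ctx' is defined" \
    || { echo "context '$ctx' not defined"; return 1; }
}

# Sample kubeconfig fragment modelled on the one in the question.
cat > /tmp/sample-kubeconfig <<'EOF'
apiVersion: v1
current-context: default
contexts:
- context:
    cluster: default
    user: user
  name: default
EOF
check_context /tmp/sample-kubeconfig
```

Running it against the real file (/home/spinnaker/.kube/config in the question) instead of the sample shows whether the kubectl commands produced a usable context.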
How can I resolve this error for the Spinnaker deployment?
Thank you
Judging by your config files, the kubeconfig context is not set up correctly for the Kubernetes account.
Please use the commands below:
# Set a variable to the admin kubeconfig file location (fetch the config file with --admin, if possible)
kubeconfig_path="<my-k8s-account-admin-file-path>"
hal config provider kubernetes account add my-k8s-account --provider-version v2 \
--kubeconfig-file "$kubeconfig_path" \
--context $(kubectl config current-context --kubeconfig "$kubeconfig_path")
After executing the command above, you will see a context entry for the account in your hal config file, which is missing from your current config.
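For reference, after re-adding the account the kubernetes provider block in /home/spinnaker/.hal/config should carry an explicit context field, roughly like this (a sketch; the context name is whatever kubectl config current-context reports for your kubeconfig):

```yaml
kubernetes:
  enabled: true
  accounts:
  - name: my-k8s-account
    providerVersion: V2
    # Illustrative context name; hal fills in the value passed via --context.
    context: default
    kubeconfigFile: <my-k8s-account-admin-file-path>
  primaryAccount: my-k8s-account
```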