How to create a private Kubernetes cluster with OKD on AWS without an "allow any" outbound rule in a security group?

10/25/2021

While running the latest OKD 4 openshift-install to deploy a private cluster on AWS, I noticed that the installer created two security groups with an egress rule that allows all ports to all destinations. The installer made the following API call (sensitive information removed):

{
  "eventVersion": "",
  "userIdentity": {
    "type": "",
    "principalId": "",
    "arn": "",
    "accountId": "",
    "accessKeyId": "",
    "userName": "",
    "sessionContext": {
      "sessionIssuer": {},
      "webIdFederationData": {},
      "attributes": {
        "creationDate": "",
        "mfaAuthenticated": ""
      }
    }
  },
  "eventTime": "",
  "eventSource": "ec2.amazonaws.com",
  "eventName": "AuthorizeSecurityGroupEgress",
  "awsRegion": "",
  "sourceIPAddress": "",
  "userAgent": "",
  "requestParameters": {
    "groupId": "",
    "ipPermissions": {
      "items": [
        {
          "ipProtocol": "-1",
          "groups": {},
          "ipRanges": {
            "items": [
              {
                "cidrIp": "0.0.0.0/0"
              }
            ]
          },
          "ipv6Ranges": {},
          "prefixListIds": {}
        }
      ]
    }
  },
  "responseElements": {
    "requestId": "",
    "_return": true,
    "securityGroupRuleSet": {
      "items": [
        {
          "groupOwnerId": "",
          "groupId": "",
          "securityGroupRuleId": "",
          "isEgress": true,
          "ipProtocol": "-1",
          "fromPort": -1,
          "toPort": -1,
          "cidrIpv4": "0.0.0.0/0"
        }
      ]
    }
  }
}
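
For reference, the affected groups can be found with a small boto3 script like the one below. This is only a minimal sketch: it assumes the installer tags its security groups with the usual kubernetes.io/cluster/<infra-id> tag, and INFRA_ID and the region are placeholders for my values.

# Minimal sketch (boto3): list the installer's security groups and flag any
# egress rule that allows all protocols to 0.0.0.0/0.
# Assumes the groups carry a "kubernetes.io/cluster/<infra-id>" tag;
# INFRA_ID is a placeholder.
import boto3

INFRA_ID = "my-cluster-abcde"  # placeholder
ec2 = boto3.client("ec2", region_name="eu-central-1")

resp = ec2.describe_security_groups(
    Filters=[{"Name": "tag-key", "Values": [f"kubernetes.io/cluster/{INFRA_ID}"]}]
)

for sg in resp["SecurityGroups"]:
    for perm in sg.get("IpPermissionsEgress", []):
        allows_all = perm.get("IpProtocol") == "-1" and any(
            r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
        )
        if allows_all:
            print(f"{sg['GroupId']} ({sg['GroupName']}): allow-all egress to 0.0.0.0/0")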

Such a security group rule is not allowed by my company's policy and is deleted automatically while the installer is still running, which causes the cluster creation to fail.
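
To illustrate what happens, the compliance automation presumably does something equivalent to the following. This is a hypothetical boto3 sketch, not the actual tooling, and SG_ID is a placeholder.

# Hypothetical sketch of the remediation that runs in my account:
# revoke an allow-all egress rule (all protocols, 0.0.0.0/0) from a security group.
# SG_ID is a placeholder; the real automation is not under my control.
import boto3

SG_ID = "sg-0123456789abcdef0"  # placeholder
ec2 = boto3.client("ec2", region_name="eu-central-1")

ec2.revoke_security_group_egress(
    GroupId=SG_ID,
    IpPermissions=[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)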

Therefore, I am wondering whether there is a way to prevent the OKD installer from doing this, or to specify the outbound rules more precisely during installation. And, out of curiosity, why does the OKD installer do this (is there any reason it requires this any-to-any rule)?

Here is my install-config.yaml (with some sensitive information removed):

apiVersion: v1
baseDomain: okd.idnr.de
credentialsMode: Manual
controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    aws:
      zones:
        - eu-central-1a
      rootVolume:
        iops: 4000
        size: 500
        type: io1
      type: t2.xlarge
  replicas: 3
compute:
  - hyperthreading: Enabled
    name: worker
    platform:
      aws:
        rootVolume:
          iops: 2000
          size: 500
          type: io1
        type: r5.xlarge
        zones:
          - eu-central-1a
    replicas: 8
metadata:
  name: ...
networking:
  clusterNetwork:
    - cidr: ..
      hostPrefix: 23
  machineNetwork:
    - cidr: ...
  networkType: OVNKubernetes
  serviceNetwork:
    - ...
platform:
  aws:
    region: eu-central-1
    userTags:
      adminContact: ...
      costCenter: ...
    subnets:
      - <some private subnet>
    amiID: ...
sshKey: ...
pullSecret: ...
additionalTrustBundle: |
publish: Internal
imageContentSources:
  - mirrors:
      - ...
    source: ...
  - mirrors:
      - ...
    source: quay.io/openshift/okd-content
-- randy_marsh
amazon-web-services
kubernetes
okd
openshift

0 Answers