I am new to OpenShift Origin. I've installed 1 master and 2 nodes using openshift-ansible. Everything appears to be OK: I can access the dashboard at http://10.1.10.1:8443. But the problem appears when I try to expose a service. Here is my setup:
OS version
CentOS Linux release 7.3.1611 (Core)
OC Version
oc v1.4.1+3f9807a
kubernetes v1.4.0+776c994
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://10.1.10.1:8443
openshift v1.4.1+3f9807a
kubernetes v1.4.0+776c994
ansible / hosts
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
#etcd
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_user=root
ansible_become=true
deployment_type=origin
openshift_release=1.4.1
containerized=true
openshift_router_selector='router=true'
openshift_registry_selector='registry=true'
enable_docker_excluder=false
enable_excluders=false
os_firewall_use_firewalld=false
# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1', 'user': '$apr1$.gw8w9i1$ln9bfTRiD6OwuNTG5LvW50'}
# host group for masters
[masters]
10.1.10.1 openshift_public_hostname=10.1.10.1 openshift_hostname=os-master
# host group for etcd, should run on a node that is not schedulable
#[etcd]
#54.175.0.44
# host group for worker nodes, we list master node here so that
# openshift-sdn gets installed. We mark the master node as not
# schedulable.
[nodes]
10.1.10.1 openshift_hostname=10.1.10.1 openshift_schedulable=false
10.1.10.2 openshift_hostname=10.1.10.2 openshift_node_labels="{'router':'true','registry':'true'}"
10.1.10.3 openshift_hostname=10.1.10.3 openshift_node_labels="{'router':'true','registry':'true'}"
The oc adm diagnostics command shows only 2 warnings:
WARN: [DH0005 from diagnostic MasterConfigCheck@openshift/origin/pkg/diagnostics/host/check_master_config.go:52]
Validation of master config file '/etc/origin/master/master-config.yaml' warned:
assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console
assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console
auditConfig.auditFilePath: Required value: audit can now be logged to a separate file
WARN: [DClu0003 from diagnostic NodeDefinition@openshift/origin/pkg/diagnostics/cluster/node_definitions.go:112]
Node 10.1.10.1 is ready but is marked Unschedulable.
This is usually set manually for administrative reasons.
An administrator can mark the node schedulable with:
oadm manage-node 10.1.10.1 --schedulable=true
While in this state, pods should not be scheduled to deploy on the node.
Existing pods will continue to run until completed or evacuated (see
other options for 'oadm manage-node').
Could you please shed some light on this? Thanks in advance.
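For reference, the expose workflow I am attempting looks roughly like this (the project, service, and hostname are placeholders, not values from my cluster):

```shell
# Generic sketch of exposing a service on OpenShift Origin 1.4.
# "myproject", "myapp", and the hostname are placeholders -- use your own
# service name and a DNS name that resolves to the router node.
oc project myproject                     # switch to the project owning the service
oc expose service myapp \
    --hostname=myapp.apps.example.com    # creates a route served by the router
oc get route myapp                       # verify the route was created and admitted
```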
I have faced a similar kind of issue; I set my cluster up with ansible-container as well. May I know which user you used to deploy the Tomcat app? Did you use the user 'developer/developer' or did you create another one? A possible reason is insufficient rights for your user (that was the case in my experience). I would suggest two things:
1) Check whether your router and service are configured correctly.
2) If your user doesn't have sufficient roles, grant the required permissions so the pulled image can run (example: $ oc adm policy add-scc-to-user anyuid -z default).
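A hedged sketch of both checks, assuming the router and registry were deployed under their default names in the "default" project; <your-service> is a placeholder for the service you exposed:

```shell
# 1) Verify the router/registry pods are running on the labelled nodes
#    and that the exposed service actually has endpoints:
oc get pods -n default -o wide       # router/registry pods should be Running
oc get svc -n default                # the router service should exist
oc describe svc <your-service>       # "Endpoints" must not be empty
oc get route                         # the route should point at your service

# 2) If the image must run as a fixed UID, relax the SCC for the project's
#    default service account (requires cluster-admin privileges):
oc adm policy add-scc-to-user anyuid -z default
```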