OpenShift is Red Hat's Kubernetes distribution, giving you a self-service platform to create, modify, and deploy containerized applications on demand. This guide dives into the installation of OpenShift Origin (OKD) 3.11 on a CentOS 7 VM.
The OpenShift development team has done a commendable job of simplifying cluster setup: a single command is all that's required to get a local OKD cluster running. Before you begin, make sure your server meets the following minimum requirements:
- 8 vCPUs
- 32 GB RAM
- 50 GB free disk space
- CentOS 7 OS
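As a convenience, the list above can be checked with a quick preflight script. This is just a sketch (the thresholds mirror the requirements, and it assumes GNU coreutils as shipped with CentOS 7):

```shell
# Preflight check mirroring the requirements above (sketch, not official).
cpus=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')

echo "vCPUs: ${cpus}, RAM: ${mem_gb} GB, free disk on /: ${disk_gb} GB"
[ "$cpus" -ge 8 ]     || echo "WARN: fewer than 8 vCPUs"
[ "$mem_gb" -ge 31 ]  || echo "WARN: less than 32 GB RAM (31 allows for kernel-reserved memory)"
[ "$disk_gb" -ge 50 ] || echo "WARN: less than 50 GB free disk space"
```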
Step 1: Update CentOS 7 system
Let’s kick off by updating our CentOS 7 machine.

sudo yum -y update
Step 2: Install and Configure Docker
OpenShift requires the Docker engine on the host machine for running containers. Install Docker on CentOS 7 using the commands below.

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
Add your standard user account to the docker group.

sudo usermod -aG docker $USER

Then configure Docker to trust the OpenShift internal registry subnet as insecure:
sudo mkdir /etc/docker /etc/containers
sudo tee /etc/containers/registries.conf <<EOF
[registries.insecure]
registries = ['172.30.0.0/16']
EOF
sudo tee /etc/docker/daemon.json <<EOF
{
"insecure-registries": [
"172.30.0.0/16"
]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
Enable Docker to start at boot.

$ sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Then enable IP forwarding on your system.

echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
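Before moving on, it may help to verify both settings. The snippet below is a guarded sketch, safe to run even on a machine without Docker or systemd:

```shell
# Verify Docker is installed/active and IP forwarding is enabled (guarded sketch).
if command -v docker >/dev/null 2>&1; then
  docker --version
  systemctl is-active docker 2>/dev/null || echo "docker service not active"
else
  echo "docker not found on PATH -- re-check Step 2"
fi

ip_forward=$(cat /proc/sys/net/ipv4/ip_forward)
echo "net.ipv4.ip_forward = ${ip_forward}"   # expect 1 after 'sysctl -p'
```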
Step 3: Configure Firewalld
Ensure that your firewall allows containers access to the OpenShift master API (8443/tcp) and DNS (53/udp) endpoints.

DOCKER_BRIDGE=$(docker network inspect -f "{{range .IPAM.Config }}{{ .Subnet }}{{end}}" bridge)
sudo firewall-cmd --permanent --new-zone dockerc
sudo firewall-cmd --permanent --zone dockerc --add-source $DOCKER_BRIDGE
sudo firewall-cmd --permanent --zone dockerc --add-port={80,443,8443}/tcp
sudo firewall-cmd --permanent --zone dockerc --add-port={53,8053}/udp
sudo firewall-cmd --reload
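You can confirm the new zone took effect with firewall-cmd's list option. This is a guarded sketch in case firewalld is not running (the `-n` flag keeps sudo from prompting):

```shell
# Inspect the dockerc zone created above (guarded sketch).
if command -v firewall-cmd >/dev/null 2>&1 && sudo -n firewall-cmd --state >/dev/null 2>&1; then
  sudo -n firewall-cmd --zone dockerc --list-all
else
  echo "firewalld not available -- skipping verification"
fi
```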
Step 4: Download the Linux oc binary
At this step, download the Linux oc binary from the openshift-origin-client-tools release archive and place it in your PATH.

wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
tar xvf openshift-origin-client-tools*.tar.gz
cd openshift-origin-client*/
sudo mv oc kubectl /usr/local/bin/

Verify installation of the OpenShift client utility.

$ oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
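Optionally, the oc client can generate its own bash completion script, which makes the long subcommands easier to type. A guarded sketch (it assumes the bash-completion package is installed, and does nothing if oc is not yet on your PATH):

```shell
# Enable bash completion for oc (guarded sketch; assumes bash-completion is installed).
if command -v oc >/dev/null 2>&1; then
  oc completion bash | sudo -n tee /etc/bash_completion.d/oc >/dev/null \
    && echo "oc completion installed; open a new shell to use it"
else
  echo "oc not found on PATH -- finish Step 4 first"
fi
```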
Step 5: Start OpenShift Origin (OKD) Local Cluster
Now bootstrap a local single-server OpenShift Origin cluster by running the following command:

$ oc cluster up
The command above will:
- Start an OKD cluster listening on the local interface, 127.0.0.1:8443.
- Start a web console listening on all interfaces at /console (127.0.0.1:8443).
- Launch Kubernetes system components.
- Provision a registry, router, initial templates, and a default project.
- Run the OpenShift cluster as an all-in-one container on a Docker host.
To see all available options, run:

$ oc cluster up --help

On a successful installation, you should get output similar to below.

Login to server …
Creating initial project "myproject" …
Server Information …
OpenShift server started.

The server is accessible via web console at:
    https://127.0.0.1:8443

You are logged in as:
    User:     developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin

The example below uses custom options.
$ oc cluster up --routing-suffix=<ServerPublicIP>.xip.io \
    --public-hostname=<ServerPublicDNSName>

Example.
$ oc cluster up --public-hostname=okd.example.com --routing-suffix='services.example.com'
The OpenShift Origin cluster configuration files will be located inside the openshift.local.clusterup/ directory.

If your cluster setup was successful, you should get a positive output for the following command.
$ oc cluster status
Web console URL: https://okd.example.com:8443/console/
Config is at host directory
Volumes are at host directory
Persistent volumes are at host directory /home/dev/openshift.local.clusterup/openshift.local.pv
Data will be discarded when cluster is destroyed
Step 6: Using OpenShift Origin Cluster
To login as an administrator, use:

$ oc login -u system:admin
Logged into "https://127.0.0.1:8443" as "system:admin" using existing credentials.
You have access to the following projects and can switch between them with 'oc project <projectname>':
* default
kube-dns
kube-proxy
kube-public
kube-system
myproject
openshift
openshift-apiserver
openshift-controller-manager
openshift-core-operators
openshift-infra
openshift-node
openshift-service-cert-signer
openshift-web-console
testproject
Using project "default".
As the system admin user, you can view information such as node status.

$ oc get nodes
NAME STATUS ROLES AGE VERSION
localhost Ready <none> 1h v1.11.0+d4cacc0
$ oc get nodes -o wide
To get more detailed information about a specific node, including the reason for the current condition:

$ oc describe node <node>

To display a summary of the resources you created:

$ oc status
In project default on server https://127.0.0.1:8443
svc/docker-registry - 172.30.1.1:5000
  dc/docker-registry deploys docker.io/openshift/origin-docker-registry:v3.11
    deployment #1 deployed 2 hours ago - 1 pod
svc/kubernetes - 172.30.0.1:443 -> 8443
svc/router - 172.30.235.156 ports 80, 443, 1936
  dc/router deploys docker.io/openshift/origin-haproxy-router:v3.11
    deployment #1 deployed 2 hours ago - 1 pod
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

To return to the regular developer user, login as that user:

$ oc login
Authentication required for https://127.0.0.1:8443 (openshift)
Username: developer
Password: developer
Login successful.

Confirm if the login was successful.

$ oc whoami
developer

Let’s create a test project using the oc new-project command.

$ oc new-project dev --display-name="Project1 - Dev" \
    --description="My Dev Project"
Now using project "dev" on server "https://127.0.0.1:8443".
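With more than one project created, switching between them is a matter of oc project. A guarded sketch of the common commands (harmless if the client is not installed or not logged in):

```shell
# List projects and switch between them (guarded sketch).
if command -v oc >/dev/null 2>&1; then
  oc projects || echo "not logged in to a cluster yet"
  oc project myproject 2>/dev/null || true   # switch back to the default project
else
  echo "oc not found on PATH -- install the client first (Step 4)"
fi
```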
Using OKD Admin Console
OKD includes a web console which you can use for creation and management actions. The web console is served on port 8443 of the server IP or hostname, via HTTPS:

https://<IP|Hostname>:8443/console

You should see an OpenShift Origin window with Username and Password forms, similar to this one:
Login with:
Username: developer
Password: developer

You should see a dashboard similar to below.
If you are redirected to https://127.0.0.1:8443/ when trying to access OpenShift web console, then do this:
1. Stop OpenShift Cluster
$ oc cluster down
2. Edit the OCP configuration file.

$ vi ./openshift.local.clusterup/openshift-controller-manager/openshift-master.kubeconfig

Locate the line "server: https://127.0.0.1:8443", then replace it with:

server: https://serverip:8443

3. Then start the cluster:

$ oc cluster up
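The manual edit in step 2 can also be scripted with sed. The snippet below is a sketch: SERVER_IP is a placeholder you must set to your own server address, and it falls back to a scratch file in /tmp so it can be dry-run safely before the cluster exists.

```shell
# Rewrite the API server address in the kubeconfig (sketch; set SERVER_IP).
KUBECONFIG_FILE="${KUBECONFIG_FILE:-./openshift.local.clusterup/openshift-controller-manager/openshift-master.kubeconfig}"
SERVER_IP="${SERVER_IP:-192.168.1.10}"   # placeholder -- use your server's IP

if [ ! -f "$KUBECONFIG_FILE" ]; then
  # Dry-run on a scratch copy when the real file is absent.
  KUBECONFIG_FILE=/tmp/openshift-master.kubeconfig
  printf 'server: https://127.0.0.1:8443\n' > "$KUBECONFIG_FILE"
fi

sed -i "s|server: https://127.0.0.1:8443|server: https://${SERVER_IP}:8443|" "$KUBECONFIG_FILE"
grep 'server:' "$KUBECONFIG_FILE"
```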
Finally, add the firewall setting below. The web console will not be reachable from outside the CentOS VM unless you open port 8443:
firewall-cmd --zone=public --add-port=8443/tcp --permanent
firewall-cmd --reload
Step 7: Deploy Test Application
We can now deploy a test application in the cluster.

1. Login to the OpenShift cluster:

$ oc login
Authentication required for https://127.0.0.1:8443 (openshift)
Username: developer
Password: developer
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

2. Create a test project.
$ oc new-project test-project
3. Tag an application image from the Docker Hub registry.

$ oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest
Tag deployment-example:latest set to openshift/deployment-example:v2.

4. Deploy the application to OpenShift.
$ oc new-app deployment-example
--> Found image da61bb2 (3 years old) in image stream "test-project/deployment-example" under tag "latest" for "deployment-example"

    * This image will be deployed in deployment config "deployment-example"
    * Port 8080/tcp will be load balanced by service "deployment-example"
      * Other containers can access this service through the hostname "deployment-example"
    * WARNING: Image "test-project/deployment-example:latest" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    deploymentconfig.apps.openshift.io "deployment-example" created
    service "deployment-example" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/deployment-example'
    Run 'oc status' to view your app.

5. Show the application deployment status.
$ oc status
In project test-project on server https://127.0.0.1:8443
svc/deployment-example - 172.30.15.201:8080
  dc/deployment-example deploys istag/deployment-example:latest
    deployment #1 deployed about a minute ago - 1 pod

2 infos identified, use 'oc status --suggest' to see details.

6. Get detailed service information.
$ oc get svc
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
deployment-example   ClusterIP   172.30.15.201   <none>        8080/TCP   18m

$ oc describe svc deployment-example
Name:              deployment-example
Namespace:         test-project
Labels:            app=deployment-example
Annotations:       openshift.io/generated-by=OpenShiftNewApp
Selector:          app=deployment-example,deploymentconfig=deployment-example
Type:              ClusterIP
IP:                172.30.15.201
Port:              8080-tcp  8080/TCP
TargetPort:        8080/TCP
Endpoints:         172.17.0.12:8080
Session Affinity:  None
Events:            <none>

7. Test local access to the application.
curl http://172.30.15.201:8080
8. Show pod status.

$ oc get pods
NAME READY STATUS RESTARTS AGE
deployment-example-1-vmf7t 1/1 Running 0 21m
9. Allow external access to the application.

$ oc expose service/deployment-example
route.route.openshift.io/deployment-example exposed
$ oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
deployment-example deployment-example-testproject.services.computingforgeeks.com deployment-example 8080-tcp None
10. Test external access to the application.

Open the URL shown in your browser. Note that I have a wildcard DNS record for *.services.computingforgeeks.com pointing to the OpenShift Origin server IP address.
11. Delete the test application.

$ oc delete all -l app=deployment-example
pod "deployment-example-1-8n8sd" deleted
replicationcontroller "deployment-example-1" deleted
service "deployment-example" deleted
deploymentconfig.apps.openshift.io "deployment-example" deleted
route.route.openshift.io "deployment-example" deleted

$ oc get pods
No resources found.