Creating and managing persistent storage on OpenShift or Kubernetes

Creating a persistent volume



By default, OpenShift/Kubernetes containers don't store data persistently. When you start a container from an immutable Docker image, OpenShift uses ephemeral storage: all data created by the container is lost when you stop or restart the container. This approach works for stateless scenarios; however, applications like a database or a messaging system need persistent storage that can survive a container crash or restart.

To achieve that, you need an object called a PersistentVolume (PV), which is a storage resource in an OpenShift cluster that is made available to developers via Persistent Volume Claims (PVCs).

A Persistent Volume is shared across the OpenShift cluster, since any PV can potentially be used by any project. A Persistent Volume Claim, on the other hand, is a resource that is specific to a project (namespace).
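You can see the difference in scope with two illustrative commands ("my-project" is a placeholder name):

oc get pv                  # cluster-scoped: the same list is visible from any project
oc get pvc -n my-project   # namespaced: only the claims in the given project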

Under the hood, when you create a PVC, OpenShift tries to find a matching PV based on the size requirements and the access mode (RWO, ROX, RWX). If one is found, the PV is bound to the PVC and can no longer be bound to other volume claims.

Persistent Volumes and Persistent Volume Claims

So, to summarize: a PersistentVolume is an OpenShift API object that describes an existing piece of storage infrastructure (an NFS share, a GlusterFS volume, an iSCSI target, etc.). A PersistentVolumeClaim represents a request made by an end user, which consumes PV resources.

Persistent Volume Types


OpenShift supports a wide range of PersistentVolume types. Some of them, like NFS, you probably already know, and all have their pros and cons:

NFS

Static provisioning only: volumes must be manually pre-provisioned, which makes space allocation inefficient
Well known by system administrators, easy to set up, good for tests
Supports ReadWriteOnce and ReadWriteMany policy

Ceph RBD

Can provision resources dynamically; Ceph block devices are automatically created, presented to the host, formatted, and mounted into the container
Excellent when running Kubernetes on top of OpenStack

Ceph FS

Same as RBD but already a filesystem, a shared one too
Supports ReadWriteMany
Excellent when running Kubernetes on top of OpenStack with Ceph

Gluster FS

Dynamic provisioner
Supports ReadWriteOnce
Available on-premises and in public clouds, with a lower TCO than the public cloud providers' Filesystem-as-a-Service offerings
Supports Container Native Storage

GCE Persistent Disk / AWS EBS / AzureDisk

Dynamic provisioner; block devices are requested via the provider API, then automatically presented to the instance running Kubernetes/OpenShift, formatted, and mounted into the container
Does not support ReadWriteMany
Performance may be problematic at small capacities (<100GB, typical for PVCs)

AWS EFS / AzureFile

Dynamic provisioner, filesystems are requested via the provider API, mounted on the container host and then bind-mounted to the app container
Supports ReadWriteMany
Usually quite expensive

NetApp

Dynamic provisioner called Trident
Supports ReadWriteOnce (block or file-based), ReadWriteMany (file-based), ReadOnlyMany (file-based)
Requires NetApp Data ONTAP or SolidFire storage
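Many of the types above are described as dynamic provisioners. In practice, dynamic provisioning is configured through a StorageClass that a PVC can reference. The following is a minimal, illustrative sketch; the class name "fast" and the AWS EBS provisioner are assumptions, so substitute whatever provisioner your cluster actually runs:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast                         # hypothetical class name
provisioner: kubernetes.io/aws-ebs   # substitute your cluster's provisioner
parameters:
  type: gp2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim
spec:
  storageClassName: fast
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

When such a PVC is created, the provisioner creates a matching PV on the fly instead of binding to a manually pre-provisioned one.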

Two kinds of Storage

There are essentially two types of storage for containers:
✓ Container-ready storage: This is essentially a setup where storage is exposed to a container or a group of containers from an external mount point over the network. Most storage solutions, including SDS, storage area networks (SANs), and network-attached storage (NAS), can be set up this way using standard interfaces. However, this may not offer any additional value to a container environment from a storage perspective. For example, few traditional storage platforms have external application programming interfaces (APIs) that can be leveraged by Kubernetes for dynamic provisioning.
✓ Storage in containers: Storage deployed inside containers, alongside applications running in containers, is an important innovation that benefits both developers and administrators. By containerizing storage services and managing them under a single management plane such as Kubernetes, administrators have fewer housekeeping tasks to deal with, allowing them to focus on more value‐added tasks. In addition, they can run their applications and their storage platform on the same set of infrastructure, which reduces infrastructure expenditure.
Developers benefit by being able to provision application storage that's both highly elastic and developer-friendly. OpenShift takes storage in containers to a new level by integrating Red Hat Gluster Storage into Red Hat OpenShift Container Platform, a solution known as Container-Native Storage. In this tutorial we will use a container-ready storage example that uses an NFS mount point on "/exports/volume1".
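For reference, a container-ready NFS export such as "/exports/volume1" could be prepared on the storage host roughly as follows (a minimal sketch assuming a RHEL/CentOS host; the export options are examples only):

# Run on the NFS server host
mkdir -p /exports/volume1
chmod 777 /exports/volume1
echo "/exports/volume1 *(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra                       # re-export everything listed in /etc/exports
systemctl enable --now nfs-server  # start NFS now and at boot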

Configuring Persistent Volumes (PV)

To start configuring a Persistent Volume you have to switch to the admin account:

$ oc login -u system:admin
We assume that NFS storage is available and mounted at the following path: /mnt/exportfs
The following mypv.yaml provides a Persistent Volume Definition and a related Persistent Volume Claim:


kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
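  # hostPath is used here for simplicity and assumes the NFS share is already
  # mounted locally at this path; a PV backed directly by NFS would use an
  # nfs: block with server and path fields instead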
  hostPath:
    path: "/mnt/exportfs"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi




 You can create both resources using:

oc create -f mypv.yaml
Let's check the list of Persistent Volumes:

$ oc get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
mysql-pv-volume   20Gi       RWO            Recycle          Available
As you can see, the "mysql-pv-volume" Persistent Volume is included in the list. Depending on your installation, you may also find a number of pre-built Persistent Volumes.

Using Persistent Volumes in pods

After creating the Persistent Volume, we can request storage through a PVC and later attach that PVC as a volume to containers in pods. For this purpose, let's create a new project to manage this persistent storage:
oc new-project persistent-storage
Now let's create a MySQL app that contains a reference to our Persistent Volume Claim (save it as mysql-deployment.yaml):
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Create the app using the oc command:
oc create -f mysql-deployment.yaml
This will automatically deploy the MySQL container in a Pod and place the database files on the persistent storage.
$ oc get pods
NAME               READY     STATUS    RESTARTS   AGE
mysql-1-kpgjb    1/1       Running   0          13m
mysql-2-deploy   1/1       Running   0          2m
mysql-2-z6rhf    0/1       Pending   0          2m
Let's check that the Persistent Volume Claim has been bound correctly:
$ oc describe pvc mysql-pv-claim
Name:         mysql-pv-claim
Namespace:    default
StorageClass:
Status:       Bound
Volume:       mysql-pv-volume
Labels:       <none>
Annotations:    pv.kubernetes.io/bind-completed=yes
                pv.kubernetes.io/bound-by-controller=yes
Capacity:     10Gi
Access Modes: RWO
Events:       <none>
Done! Now let's connect to the MySQL Pod using the mysql tool and add a new database:

$ oc rsh mysql-1-kpgjb
sh-4.2$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Let's add a new Database named "sample":
mysql> create database sample;
Query OK, 1 row affected (0.00 sec)
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| openshift          |
| performance_schema |
| sample             |
| test               |
+--------------------+
6 rows in set (0.00 sec)
mysql> exit
As proof of concept, we will kill the Pod where MySQL is running and let OpenShift automatically recreate it:
$ oc delete pod mysql-1-kpgjb
pod "mysqldb-1-kpgjb" deleted
In a few seconds the Pod will restart:
$ oc get pods
NAME               READY     STATUS    RESTARTS   AGE
mysql-1-5jmc5    1/1       Running   0          27s
mysql-2-deploy   1/1       Running   0          3m
mysql-2-z6rhf    0/1       Pending   0          3m
Let's connect again to the Database and check that the "sample" DB is still there:
$ oc rsh mysql-1-5jmc5
sh-4.2$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.6.x MySQL Community Server (GPL)
Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| openshift          |
| performance_schema |
| sample             |
+--------------------+
PVs in OpenShift rely on one of the listed types of network storage to make the storage available across all nodes in a cluster. For the examples in the next few sections, you'll use a PV built with NFS storage. 

Logging in as the admin user

When an OpenShift cluster is installed, it creates a configuration file for a special user named system:admin on the master server. The system:admin user is authenticated using a specific SSL certificate, regardless of the authentication provider that's configured, and has full administrative privileges on an OpenShift cluster. The key and certificate for system:admin are placed in a Kubernetes configuration file when the OpenShift cluster is installed; this makes it easier to run commands as system:admin. To run commands as system:admin, you need to copy this configuration file to the local system where you've been running oc commands.
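For example, on a cluster installed with openshift-ansible, the file is typically found at /etc/origin/master/admin.kubeconfig on the master. A sketch of copying and using it, reusing the host name from the example below:

scp root@ocp-1.192.168.122.100.nip.io:/etc/origin/master/admin.kubeconfig ~/admin.kubeconfig
oc --config ~/admin.kubeconfig get nodes   # any oc command now runs as system:admin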
Example admin.kubeconfig file (certificate and key fields trimmed)




apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tL...
    server: https://ocp-1.192.168.122.100.nip.io:8443
  name: ocp-1-192-168-122-100-nip-io:8443
contexts:
- context:
    cluster: ocp-1-192-168-122-100-nip-io:8443
    namespace: default
    user: system:admin/ocp-1-192-168-122-100-nip-io:8443
  name: default/ocp-1-192-168-122-100-nip-io:8443/system:admin
current-context: default/ocp-1-192-168-122-100-nip-io:8443/system:admin
kind: Config
preferences: {}
users:
- name: system:admin/ocp-1-192-168-122-100-nip-io:8443
  user:
    client-certificate-data:
    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tL...
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLR...

Creating new resources from the command line

OpenShift makes extensive use of configuration files written in YAML format. YAML is a human-readable language that’s often used for configuration files and to serialize data in a way that’s easy for both humans and computers to consume. YAML is the default way to push data into and get data out of OpenShift.
Template to create a PV using the NFS volume created 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /var/nfs-data/pv01
    server: 192.168.122.100
  persistentVolumeReclaimPolicy: Recycle

    Creating a persistent volume

    To create a resource from a YAML template, use the oc create command along with the -f parameter, which specifies the template file you want to process. To create the PV for this example, you'll use the template named pv01.yaml, which contains the definition shown above.
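    Assuming pv01.yaml sits in your current directory, creating the PV looks like this (the confirmation message is typical oc output):
    $ oc --config ~/admin.kubeconfig create -f pv01.yaml
    persistentvolume "pv01" created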
    $ oc --config ~/admin.kubeconfig get pv
    NAME   CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM   REASON   AGE
    pv01   2Gi        RWX           Recycle         Available                    15s
    pv02   2Gi        RWX           Recycle         Available                     9s
    pv03   2Gi        RWX           Recycle         Available                     8s
    pv04   2Gi        RWX           Recycle         Available                     8s
    pv05   2Gi        RWX           Recycle         Available                     8s
     Using persistent storage
    Now that you have PVs configured, it’s time to take advantage of them. In OpenShift, applications consume persistent storage using persistent volume claims (PVCs). A PVC can be added into an application as a volume using the command line or through the web interface. Let’s create a PVC on the command line and add it to an application.
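    Before adding the volume on the command line, the app-cli claim itself can be defined with a short template; the following is a minimal sketch that would match the 2Gi ReadWriteMany NFS PVs created above (the file name app-cli-pvc.yaml and the exact size are assumptions):
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-cli
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 2Gi
    Create it with oc create -f app-cli-pvc.yaml, then attach it as shown in the next section.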

    Adding a volume to an application on the command line

    In OpenShift, a volume is any filesystem, file, or data mounted into an application's pods to provide persistent data. Here, we're concerned with persistent storage volumes; volumes are also used to provide encrypted data, application configurations, and other types of data to applications.
    $ oc volume dc/app-cli --add \
    --type=persistentVolumeClaim \
    --claim-name=app-cli \
    --mount-path=/opt/app-root/src/uploads
    info: Generated volume name: volume-l4dz0
    deploymentconfig "app-cli" updated
    $ oc describe dc/app-cli
    Name:        app-cli
    Namespace:    image-uploader
    Created:    2 hours ago
    Labels:        app=app-cli
    Annotations:    openshift.io/generated-by=OpenShiftNewApp
    Latest Version:    2
    Selector:    app=app-cli,deploymentconfig=app-cli
    Replicas:    1
    Triggers:    Config, Image(app-cli@latest, auto=true)
    Strategy:    Rolling
    Template:
      Labels:    app=app-cli
            deploymentconfig=app-cli
      Annotations:    openshift.io/generated-by=OpenShiftNewApp
      Containers:
       app-cli:
        Image:    172.30.52.103:5000/image-uploader/
         app-cli@sha256:f5ffe8c1...
        Port:    8080/TCP
        Volume Mounts:
          /opt/app-root/src/uploads from volume-l4dz0 (rw)
        Environment Variables:    <none>
      Volumes:
       volume-l4dz0:
        Type:    PersistentVolumeClaim (a reference to a PersistentVolumeClaim in
         the same namespace)
        ClaimName:    app-cli
        ReadOnly:    false
    ...

    Adding persistent storage to an application using the web interface

    Creating PVCs and associating them with applications as persistent volumes is easy to do using the web interface.

    Creating a persistent volume claim with the web interface

    Use the Storage link in the web interface to create a PVC. The required parameters are the name of the claim, the access mode, and the requested size. Create the PVC for app-gui using the web interface.
    Then use the link to the deployment overview to edit your application. The app-gui Deployments page currently shows only the initial deployment for the application; adding the PVC as a volume will trigger a new deployment.
    After adding the new volume, OpenShift redeploys the app-gui application to incorporate the persistent storage in the pod.

     Testing applications after adding persistent storage

    First, the fun stuff. Because app-gui and app-cli are individual instances of the same Image Uploader application, they both have web pages that you can access through your browser. Each application has an active URL that leads you to the application. For both app-gui and app-cli, browse to the web interfaces and upload a few pictures. These pictures are stored in the uploads directory in each application's pod. That means the pictures are stored on the two PVs you just configured. In my case, I uploaded pictures of container ship accidents into app-cli, and I uploaded pictures of my daughter into app-gui.
    On the project overview page, each application’s URL is an active link to the application.
    NOTE
    To upload pictures with the Image Uploader program, use the Choose File button on the main page to select pictures from your workstation.

    Data doesn’t get mixed up

    Notice that you don't see pictures in the wrong places after you upload them. That's because each application deployment is using its own NFS volume to store data. Each NFS volume is mounted into its application's mount namespace, as we discussed earlier, so the application's data is always kept separate. It isn't possible for one application to inadvertently put data, or funny pictures, in the wrong place. The true test will come when you force OpenShift to create a new copy of your application's pod.

     Forcing a pod restart

    Earlier, you deleted a pod and noticed that the uploaded pictures were lost when OpenShift automatically replaced it. Go ahead and repeat the experiment, this time for both applications. Here's the process in action:
    $ oc get pods
    NAME               READY     STATUS      RESTARTS   AGE
    app-cli-1-build    0/1       Completed   0          1d
    app-cli-2-1bwrd    1/1       Running     0          1d
    app-gui-1-build    0/1       Completed   0          3h
    app-gui-2-lkpn0    1/1       Running     0          1h
    $ oc delete pod app-cli-2-1bwrd app-gui-2-lkpn0
    pod "app-cli-2-1bwrd" deleted
    pod "app-gui-2-lkpn0" deleted
    $ oc get pods
    NAME               READY     STATUS      RESTARTS   AGE
    app-cli-1-build    0/1       Completed   0          1d
    app-cli-2-m2k7v    1/1       Running     0          34s
    app-gui-1-build    0/1       Completed   0          3h
    app-gui-2-27h64    1/1       Running     0          34s
    Investigating persistent volume mounts
    Because you’re using NFS server exports as the source for your PVs, it stands to reason that somewhere on the OpenShift node, those NFS volumes are mounted. You can see that this is the case by looking at the following example. SSH into the OpenShift node where the containers are running, run the mount command, and search for mounted volumes from the IP address of the NFS server. In my environment, the IP address of my OpenShift master server is 192.168.122.100:
    $ mount | grep 192.168.122.100
    192.168.122.100:/var/nfs-data/pv05 on /var/lib/origin/openshift.local.
    volumes/pods/b693b1ad-5496-11e7-a7ee-52540092ab8c/volumes/
    kubernetes.io~nfs/pv05 type nfs4 (rw,relatime,vers=4.0,rsize=524288,
    wsize=524288,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,
    clientaddr=192.168.122.101,local_lock=none,addr=192.168.122.100)
    192.168.122.100:/var/nfs-data/pv01 on /var/lib/origin/openshift.local.
    volumes/pods/b69da9c5-5496-11e7-a7ee-52540092ab8c/volumes/
    kubernetes.io~nfs/pv01 type nfs4 (rw,relatime,vers=4.0,rsize=524288,
    wsize=524288,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,
    sec=sys, clientaddr=192.168.122.101,local_lock=none,addr=192.168.122.100)
     Volumes are bind mounted into the container from the host
    Summary
    • When an application pod is removed or dies, OpenShift automatically replaces it with a new instance of the application.
    • OpenShift can use multiple network storage services, including NFS, to provide persistent storage for applications.
    • When using persistent storage, applications in OpenShift can share data and preserve data across upgrades and container replacement.
    • OpenShift uses persistent volumes to represent available network storage volumes.
    • Persistent volume claims are associated with a project and match criteria such as capacity needed and access mode to bind to and reserve a persistent volume.
    • Persistent volume claims can be mounted into OpenShift applications as volumes, mounting the network storage into the container’s filesystem in the desired location.
    • OpenShift manages the persistent volume using the proper network storage protocol and uses bind mounts to present the remote volumes in application containers.

    Deploying WordPress and MySQL with Persistent Volumes

    This tutorial shows you how to deploy a WordPress site and a MySQL database using Minikube. Both applications use PersistentVolumes and PersistentVolumeClaims to store data.
    A PersistentVolume (PV) is a piece of storage in the cluster that has been manually provisioned by an administrator, or dynamically provisioned by Kubernetes using a StorageClass. A PersistentVolumeClaim (PVC) is a request for storage by a user that can be fulfilled by a PV. PersistentVolumes and PersistentVolumeClaims are independent from Pod lifecycles and preserve data through restarting, rescheduling, and even deleting Pods.
    Warning: This deployment is not suitable for production use cases, as it uses single-instance WordPress and MySQL Pods. Consider using the WordPress Helm Chart to deploy WordPress in production.
    Note: The files provided in this tutorial are using GA Deployment APIs and are specific to kubernetes version 1.9 and later. If you wish to use this tutorial with an earlier version of Kubernetes, please update the API version appropriately, or reference earlier versions of this tutorial.
    • Create PersistentVolumeClaims and PersistentVolumes
    • Create a kustomization.yaml with
      • a Secret generator
      • MySQL resource configs
      • WordPress resource configs
    • Apply the kustomization directory with kubectl apply -k ./
    • Clean up

    Before you begin

    You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube, or you can use one of these Kubernetes playgrounds:
    • Katacoda
    • Play with Kubernetes
    To check the version, enter kubectl version.
    The example shown on this page works with kubectl 1.14 and above.

    Create PersistentVolumeClaims and PersistentVolumes

    MySQL and WordPress each require a PersistentVolume to store data. Their PersistentVolumeClaims will be created at the deployment step.
    Many cluster environments have a default StorageClass installed. When a StorageClass is not specified in the PersistentVolumeClaim, the cluster’s default StorageClass is used instead.
    When a PersistentVolumeClaim is created, a PersistentVolume is dynamically provisioned based on the StorageClass configuration.
    Warning: In local clusters, the default StorageClass uses the hostPath provisioner. hostPath volumes are only suitable for development and testing. With hostPath volumes, your data lives in /tmp on the node the Pod is scheduled onto and does not move between nodes. If a Pod dies and gets scheduled to another node in the cluster, or the node is rebooted, the data is lost.
    Note: If you are bringing up a cluster that needs to use the hostPath provisioner, the --enable-hostpath-provisioner flag must be set in the controller-manager component.
    Note: If you have a Kubernetes cluster running on Google Kubernetes Engine, please follow this guide.
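    Before relying on dynamic provisioning, you can check which StorageClass your cluster marks as default; the default class is flagged with "(default)" next to its name:
    kubectl get storageclass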

    Create a kustomization.yaml

    Add a Secret generator

    A Secret is an object that stores a piece of sensitive data, like a password or a key. Since 1.14, kubectl supports the management of Kubernetes objects using a kustomization file, and you can create a Secret with a generator in kustomization.yaml.
    Add a Secret generator to kustomization.yaml with the following command, replacing YOUR_PASSWORD with the password you want to use.
    cat <<EOF >./kustomization.yaml
    secretGenerator:
    - name: mysql-pass
      literals:
      - password=YOUR_PASSWORD
    EOF
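    If you want to preview the generated Secret without applying anything, kubectl can render the kustomization directory (note that the Secret name will carry a content-hash suffix, as shown in the verification step later):
    kubectl kustomize ./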
    

    Add resource configs for MySQL and WordPress

    The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password from the Secret.
    application/wordpress/mysql-deployment.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: wordpress-mysql
      labels:
        app: wordpress
    spec:
      ports:
        - port: 3306
      selector:
        app: wordpress
        tier: mysql
      clusterIP: None
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mysql-pv-claim
      labels:
        app: wordpress
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
    ---
    apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
    kind: Deployment
    metadata:
      name: wordpress-mysql
      labels:
        app: wordpress
    spec:
      selector:
        matchLabels:
          app: wordpress
          tier: mysql
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: wordpress
            tier: mysql
        spec:
          containers:
          - image: mysql:5.6
            name: mysql
            env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
            ports:
            - containerPort: 3306
              name: mysql
            volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
          volumes:
          - name: mysql-persistent-storage
            persistentVolumeClaim:
              claimName: mysql-pv-claim
    
    The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment variable sets the name of the MySQL Service defined above, and WordPress will access the database via the Service. The WORDPRESS_DB_PASSWORD environment variable sets the database password from the Secret kustomize generated.
    application/wordpress/wordpress-deployment.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: wordpress
      labels:
        app: wordpress
    spec:
      ports:
        - port: 80
      selector:
        app: wordpress
        tier: frontend
      type: LoadBalancer
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: wp-pv-claim
      labels:
        app: wordpress
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
    ---
    apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
    kind: Deployment
    metadata:
      name: wordpress
      labels:
        app: wordpress
    spec:
      selector:
        matchLabels:
          app: wordpress
          tier: frontend
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: wordpress
            tier: frontend
        spec:
          containers:
          - image: wordpress:4.8-apache
            name: wordpress
            env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
            ports:
            - containerPort: 80
              name: wordpress
            volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
          volumes:
          - name: wordpress-persistent-storage
            persistentVolumeClaim:
              claimName: wp-pv-claim
    
    1. Download the MySQL deployment configuration file.
      curl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml
      
    2. Download the WordPress configuration file.
      curl -LO https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml
      
    3. Add them to kustomization.yaml file.
    cat <<EOF >>./kustomization.yaml
    resources:
      - mysql-deployment.yaml
      - wordpress-deployment.yaml
    EOF
    

    Apply and Verify

    The kustomization.yaml contains all the resources for deploying a WordPress site and a MySQL database. You can apply the directory by running:
    kubectl apply -k ./
    
    Now you can verify that all objects exist.
    1. Verify that the Secret exists by running the following command:
      kubectl get secrets
      
      The response should be like this:
      NAME                    TYPE                                  DATA   AGE
      mysql-pass-c57bb4t7mf   Opaque                                1      9s
      
    2. Verify that a PersistentVolume got dynamically provisioned.
      kubectl get pvc
      
      Note: It can take up to a few minutes for the PVs to be provisioned and bound.

    The response should be like this:
      ```shell
      NAME             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
      mysql-pv-claim   Bound     pvc-8cbd7b2e-4044-11e9-b2bb-42010a800002   20Gi       RWO            standard           77s
      wp-pv-claim      Bound     pvc-8cd0df54-4044-11e9-b2bb-42010a800002   20Gi       RWO            standard           77s
      ```
    3. Verify that the Pod is running by running the following command:
      kubectl get pods
    Note: It can take up to a few minutes for the Pod’s Status to be RUNNING.
    The response should be like this:
      ```
      NAME                               READY     STATUS    RESTARTS   AGE
      wordpress-mysql-1894417608-x5dzt   1/1       Running   0          40s
      ```
    4. Verify that the Service is running by running the following command:
      kubectl get services wordpress
      
      The response should be like this:

    NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
    wordpress   ClusterIP   10.0.0.89    <pending>     80:32406/TCP   4m
    Note: Minikube can only expose Services through NodePort. The EXTERNAL-IP is always pending.
    5. Run the following command to get the IP Address for the WordPress Service:
      minikube service wordpress --url
      
      The response should be like this:
      http://1.2.3.4:32406
      
    6. Copy the IP address, and load the page in your browser to view your site.
      You should see the WordPress setup page.
    Warning: Do not leave your WordPress installation on this page. If another user finds it, they can set up a website on your instance and use it to serve malicious content. Either install WordPress by creating a username and password or delete your instance.

    Sample file for PV creation

    The file pv-cloud-data-claim.yml below defines a PersistentVolume. Creating it and listing the PVs looks like this:
    oc create -f pv-cloud-data-claim.yml
    persistentvolume "pv-cloud-data-claim" created
    oc get pv
    pv-cloud-data-claim       3Gi        RWO           Retain          Available
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      annotations:
        pv.kubernetes.io/bound-by-controller: "yes"
      creationTimestamp: null
      finalizers:
      - kubernetes.io/pv-protection
      labels:
        filesystem: ext4
        mode: "777"
        mount-options: defaults
        node: node-1-10
        readiness: Ready
        volume-pool: block_device
      name: pv-cloud-data-claim
    spec:
      accessModes:
      - ReadWriteOnce
      capacity:
        storage: 3Gi
      hostPath:
        path: /mnt/storage/pv-cloud-data-claim
      persistentVolumeReclaimPolicy: Retain
    status: {}

    Cleaning up

    1. Run the following command to delete your Secret, Deployments, Services and PersistentVolumeClaims:
      kubectl delete -k ./
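      Depending on the reclaim policy of the default StorageClass, the dynamically provisioned PersistentVolumes are typically deleted along with their claims; you can confirm with:
      kubectl get pv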
