Kubernetes or OpenShift error "mkdir: cannot create directory '/var/lib/pgsql/data/userdata': Permission denied" when deploying PostgreSQL

While deploying a PostgreSQL pod, I was seeing this error:

mkdir: cannot create directory '/var/lib/pgsql/data/userdata': Permission denied

# oc logs my_pod -p
chmod: changing permissions of ‘/var/lib/postgresql/data’: Permission denied

This is most likely an SELinux issue, and the correct SELinux labeling for your environment is needed:
chcon -Rt svirt_sandbox_file_t /path/to/volume

Issue

Unable to deploy PostgreSQL from the OpenShift catalog. The deployment fails with the following error:

mkdir: cannot create directory '/var/lib/pgsql/data/userdata': Permission denied

Before the fix:
[root@localhost ~]# oc get pods -o wide
NAME                      READY     STATUS             RESTARTS   AGE       IP             NODE
pg-common-node1-3-5kgl6   0/1       CrashLoopBackOff   5          5h        10.130.2.172   test-pre1-node-2-2.openshift.managed.test.cloud
pg-common-node2-3-plcpp   0/1       CrashLoopBackOff   5          5h        10.130.2.171   test-pre1-node-2-2.openshift.managed.test.cloud
 
 
[root@localhost ~]# oc get pv
NAME                             CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                                     REASON    AGE
pv-cjm-mongodb                   50Gi       RWO           Retain          Bound       mongodb-main/pvc-mongodb-main-data-0                2d
pv-cjm-postgresql-backup-2       10Gi       RWO           Retain          Available                                                       2d
pv-pg-test-cjm-postgresql-3   10Gi       RWO           Retain          Bound       postgresql-main-2/pg-common-node1-claim             2d
pv-pg-test-cjm-postgresql-4   10Gi       RWO           Retain          Bound       postgresql-main-2/pg-common-node2-claim             2d
 
 
 
[root@localhost ~]# oc describe pv pv-pg-test-cjm-postgresql-3
Name:           pv-pg-test-cjm-postgresql-3
Labels:         filesystem=ext4
                 mode=777
                 mount-options=defaults
                 node=test-pre1-node-2-2.openshift.managed.test.cloud
                 readiness=Ready
                 volume-pool=block_device
StorageClass:
Status:         Bound
Claim:          postgresql-main-2/pg-common-node1-claim
Reclaim Policy: Retain
Access Modes:   RWO
Capacity:       10Gi
Message:
Source:
     Type:       HostPath (bare host directory volume)
     Path:       /var/lib/origin/openshift.local.volumes/pv-pg-test-cjm-postgresql-3
 

It is an SELinux issue.

You can temporarily work around the issue by running

su -c "setenforce 0"

on the host (this disables SELinux enforcement entirely), or, better, add an SELinux rule for the volume by running

chcon -Rt svirt_sandbox_file_t /path/to/volume
 
[root@test-pre1-node-2-2 lib]# ls -ld /var/lib/origin/openshift.local.volumes/pv-pg-test-cjm-postgresql-3
drwxr-xr-x. 2 root root 6 May 29 10:18 /var/lib/origin/openshift.local.volumes/pv-pg-test-cjm-postgresql-3
[root@test-pre1-node-2-2 lib]# chown -R 26:26 /var/lib/origin/openshift.local.volumes/pv-pg-test-cjm-postgresql-3

chcon -R unconfined_u:object_r:container_file_t:s0 /var/lib/origin/openshift.local.volumes/pv-cjm-postgresql-backup-2
chmod 777 /var/lib/origin/openshift.local.volumes/pv-name
chown -R 26:26 /var/lib/origin/openshift.local.volumes/pv-name
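To confirm the new ownership and label, you can inspect the directory; a quick check using the standard coreutils -Z flag (pv-name is a placeholder):

# -Z prints the SELinux security context alongside the ownership
ls -ldZ /var/lib/origin/openshift.local.volumes/pv-name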

 
A similar permission error can also show up in the PostgreSQL backup daemon logs:

[2020-06-02T09:17:18,466][INFO][category=BackupDaemonConfiguration] Loaded PostgreSQL backup configuration: {"schedule": "0 1 * * *", "eviction": "120d/delete", "storage": "/backup-storage", "command": "/opt/backup/postgres_backup.sh %(data_folder)s"}
[2020-06-02T09:17:18,557][INFO][category=Storage] PostgreSQL server version is equal to 11, so will save all backups in pg11 dir
[2020-06-02T09:17:18,558][INFO][category=utils] During initialization of storage next error occurred: [Errno 13] Permission denied: '/backup-storage/pg11'
Traceback (most recent call last):
  File "/usr/local/bin/gunicorn", line 11, in <module>
    sys.exit(run())
  File "/usr/local/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 74, in run
    WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()

 

For OpenShift 3.11:

  • The project is created.
  • The PVs are created.
  • The project is annotated:
     oc annotate --overwrite ns <project-name> openshift.io/sa.scc.uid-range='100600/100600'
  • Use only image versions 2.4.0 and higher.
  • Set the UID/GID on the PV directory:
     chown -R 100600:100600 /var/lib/origin/openshift.local.volumes/<PV-NAME>
  • Set the SELinux exception on the PV directories (a consolidated sketch follows this list):
     chcon -Rt svirt_sandbox_file_t /var/lib/origin/openshift.local.volumes/<PV-NAME>
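Putting these steps together, a minimal sketch (my-project and my-pv are placeholder names):

# annotate the project with the expected UID range
oc annotate --overwrite ns my-project openshift.io/sa.scc.uid-range='100600/100600'

# set ownership and the SELinux label on the PV directory
chown -R 100600:100600 /var/lib/origin/openshift.local.volumes/my-pv
chcon -Rt svirt_sandbox_file_t /var/lib/origin/openshift.local.volumes/my-pv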

After applying these steps, the pods start successfully.

Another solution could be to run the containers in privileged mode.

 

Adding the SELinux rule is the best approach, as running containers in privileged mode is not a good idea in most cases.

WARNING: This solution has security risks.

Try running the container as privileged:

sudo docker run --privileged=true -i -v /data1/Downloads:/Downloads ubuntu bash

Another option (that I have not tried) would be to create a privileged container and then create non-privileged containers inside of it.

 

See also the Docker documentation on the SELinux volume-label issue:
https://docs.docker.com/engine/tutorials/dockervolumes/#/volume-labels

A docker-compose file can be fixed by adding :z at the end of the volume definition:

volumes:
  - /media/dataDemo/db:/var/lib/postgresql/data:z

Typically, permissions issues with a host volume mount are because the uid/gid inside the container does not have access to the file according to the uid/gid permissions of the file on the host. However, this specific case is different.

The dot at the end of the permission string, drwxr-xr-x., indicates SELinux is configured. When using a host mount with SELinux, you need to pass an extra option to the end of the volume definition:

 

  • The z option indicates that the bind mount content is shared among multiple containers.
  • The Z option indicates that the bind mount content is private and unshared.

Your volume mount command would then look like:

sudo docker run -i -v /data1/Downloads:/Downloads:z ubuntu bash
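If the content should instead be private to a single container, the same command with the capital Z option would look like this:

sudo docker run -i -v /data1/Downloads:/Downloads:Z ubuntu bash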

See more about host mounts with SELinux at: https://docs.docker.com/storage/#configure-the-selinux-label


For others that see this issue with containers running as a different user, you need to ensure the uid/gid of the user inside the container has permissions to the file on the host. On production servers, this is often done by controlling the uid/gid in the image build process to match a uid/gid on the host that has access to the files (or even better, do not use host mounts in production).

A named volume is often preferred to host mounts because it will initialize the volume directory from the image directory, including any file ownership and permissions. This happens when the volume is empty and the container is created with the named volume.

macOS users now have osxfs, which handles uid/gid mapping automatically between the Mac host and containers. One place it doesn't help is with files from inside the embedded VM that get mounted into the container, like /var/run/docker.sock.

For development environments where the host uid/gid may change per developer, my preferred solution is to start the container with an entrypoint running as root, fix the uid/gid of the user inside the container to match the host volume uid/gid, and then use gosu to drop from root to the container user to run the application inside the container. The important script for this is fix-perms in my base image scripts, which can be found at: https://github.com/sudo-bmitch/docker-base

The important bit from the fix-perms script is:

# update the uid
if [ -n "$opt_u" ]; then
  # current uid of the user inside the container
  OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
  # uid of the mounted file/directory on the host
  NEW_UID=$(stat -c "%u" "$1")
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      # fix ownership of any other files still owned by the old uid
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi

That gets the uid of the user inside the container, and the uid of the file, and if they do not match, calls usermod to adjust the uid. Lastly it does a recursive find to fix any files which have not changed uid's. I like this better than running a container with a -u $(id -u):$(id -g) flag because the above entrypoint code doesn't require each developer to run a script to start the container, and any files outside of the volume that are owned by the user will have their permissions corrected.
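For comparison, the per-developer alternative mentioned above would look roughly like this (my-image is a placeholder):

# every developer has to pass their own uid/gid at container start
docker run -u "$(id -u):$(id -g)" -v "$(pwd):/app" my-image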


You can also have docker initialize a host directory from an image by using a named volume that performs a bind mount. This directory must exist in advance, and you need to provide an absolute path to the host directory, unlike host volumes in a compose file which can be relative paths. The directory must also be empty for docker to initialize it. Three different options for defining a named volume to a bind mount look like:

  # create the volume in advance
  $ docker volume create --driver local \
      --opt type=none \
      --opt device=/home/user/test \
      --opt o=bind \
      test_vol

  # create on the fly with --mount
  $ docker run -it --rm \
    --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
    foo

  # inside a docker-compose file
  ...
  volumes:
    bind-test:
      driver: local
      driver_opts:
        type: none
        o: bind
        device: /home/user/test
  ...

Lastly, if you try using user namespaces, you'll find that host volumes have permission issues because uid/gid's of the containers are shifted. In that scenario, it's probably easiest to avoid host volumes and only use named volumes.

 

Host volume settings are not portable, since they are host-dependent and might not work on any other machine. For this reason, there is no Dockerfile equivalent for mounting host directories to the container. Also, be aware that the host system has no knowledge of container SELinux policy. Therefore, if SELinux policy is enforced, the mounted host directory is not writable to the container, regardless of the rw setting. Currently, you can work around this by assigning the proper SELinux policy type to the host directory:

chcon -Rt svirt_sandbox_file_t host_dir

Where host_dir is a path to the directory on host system that is mounted to the container.

 

Sometimes this error is caused by a mismatch between the UID on the host and the UID of the container's user. The fix is to pass the host user's UID as an argument to the docker build and create the container's user with the same UID.

In the Dockerfile:

ARG UID=1000
ENV USER="ubuntu"
RUN useradd -u $UID -ms /bin/bash $USER

In the build step:

docker build <path/to/build-context> -t <tag/name> --build-arg UID=$UID
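A minimal sketch of the full flow, assuming the Dockerfile above sits in the current directory (myapp is a placeholder tag):

# build with the host user's UID baked into the image
docker build . -t myapp --build-arg UID=$(id -u)

# verify that the container's user now matches the host UID
docker run --rm myapp id ubuntu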

Kubernetes and volume permissions

You may want to use a persistent volume in your pod. You can claim a volume from a Kubernetes StorageClass and mount it in the pod. This is straightforward if your pod runs as the root user, but if you start the pod as a non-root user, you are in trouble!

By default, a DigitalOcean claim provides storage with root:root ownership. When your pod, running as a non-root user, tries to create directories or files in that volume mount, such as PostgreSQL's /var/lib/postgresql, you will get "permission denied"! The following mountOptions are not supported by DigitalOcean k8s yet:

mountOptions:
- dir_mode=0777
- file_mode=0777

The solution

is to use an initContainer to change the owner/permissions of the persistent volume claim before the main Pod starts.

PVC example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage

By the way, the minimum PersistentVolumeClaim size in DigitalOcean k8s is 1Gi, and you can create at most 10 persistent volume claims by default. You need to contact DigitalOcean support to increase this limit.

kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: mydb
      image: postgres:10.4
      volumeMounts:
        - mountPath: "/var/lib/postgresql"
          name: my-do-volume
  initContainers:
    - name: pgsql-data-permission-fix
      image: busybox
      command: ["/bin/chmod", "-R", "777", "/data"]
      volumeMounts:
        - name: my-do-volume
          mountPath: /data
  volumes:
    - name: my-do-volume
      persistentVolumeClaim:
        claimName: csi-pvc

In the above example:

  1. We run a busybox container.
  2. We mount the my-do-volume claim as the /data directory. You can choose any directory here.
  3. We change the directory permissions to 777. If you use a securityContext in the pod YAML, you can use chown <userid> instead of chmod 777:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000

4. The busybox container is then terminated.

5. The PostgreSQL pod can initialize the database in /var/lib/postgresql successfully:

$ kubectl logs my-csi-app
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix

 

Docker volumes

Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker. Volumes have several advantages over bind mounts:

  • Volumes are easier to back up or migrate than bind mounts.
  • You can manage volumes using Docker CLI commands or the Docker API.
  • Volumes work on both Linux and Windows containers.
  • Volumes can be more safely shared among multiple containers.
  • Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.
  • New volumes can have their content pre-populated by a container.

In addition, volumes are often a better choice than persisting data in a container’s writable layer, because a volume does not increase the size of the containers using it, and the volume’s contents exist outside the lifecycle of a given container.

[Diagram: volumes on the Docker host]

If your container generates non-persistent state data, consider using a tmpfs mount to avoid storing the data anywhere permanently, and to increase the container’s performance by avoiding writing into the container’s writable layer.

Volumes use rprivate bind propagation, and bind propagation is not configurable for volumes.

Choose the -v or --mount flag

Originally, the -v or --volume flag was used for standalone containers and the --mount flag was used for swarm services. However, starting with Docker 17.06, you can also use --mount with standalone containers. In general, --mount is more explicit and verbose. The biggest difference is that the -v syntax combines all the options together in one field, while the --mount syntax separates them. Here is a comparison of the syntax for each flag, followed by a short example.

New users should try --mount syntax which is simpler than --volume syntax.

If you need to specify volume driver options, you must use --mount.

  • -v or --volume: Consists of three fields, separated by colon characters (:). The fields must be in the correct order, and the meaning of each field is not immediately obvious.
    • In the case of named volumes, the first field is the name of the volume, and is unique on a given host machine. For anonymous volumes, the first field is omitted.
    • The second field is the path where the file or directory is mounted in the container.
    • The third field is optional, and is a comma-separated list of options, such as ro. These options are discussed below.
  • --mount: Consists of multiple key-value pairs, separated by commas and each consisting of a <key>=<value> tuple. The --mount syntax is more verbose than -v or --volume, but the order of the keys is not significant, and the value of the flag is easier to understand.
    • The type of the mount, which can be bind, volume, or tmpfs. This topic discusses volumes, so the type is always volume.
    • The source of the mount. For named volumes, this is the name of the volume. For anonymous volumes, this field is omitted. May be specified as source or src.
    • The destination takes as its value the path where the file or directory is mounted in the container. May be specified as destination, dst, or target.
    • The readonly option, if present, causes the bind mount to be mounted into the container as read-only.
    • The volume-opt option, which can be specified more than once, takes a key-value pair consisting of the option name and its value.
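For example, the same named-volume mount expressed both ways (my-vol and /app are placeholders):

# -v: name, container path, and options packed into colon-separated fields
docker run -d -v my-vol:/app:ro nginx:latest

# --mount: the same mount as explicit key=value pairs
docker run -d --mount source=my-vol,destination=/app,readonly nginx:latest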

Escape values from outer CSV parser

If your volume driver accepts a comma-separated list as an option, you must escape the value from the outer CSV parser. To escape a volume-opt, surround it with double quotes (") and surround the entire mount parameter with single quotes (').

For example, the local driver accepts mount options as a comma-separated list in the o parameter. This example shows the correct way to escape the list.

$ docker service create \
    --mount 'type=volume,src=<VOLUME-NAME>,dst=<CONTAINER-PATH>,volume-driver=local,volume-opt=type=nfs,volume-opt=device=<nfs-server>:<nfs-path>,"volume-opt=o=addr=<nfs-address>,vers=4,soft,timeo=180,bg,tcp,rw"' \
    --name myservice \
    <IMAGE>

The examples below show both the --mount and -v syntax where possible, and --mount is presented first.

Differences between -v and --mount behavior

As opposed to bind mounts, all options for volumes are available for both --mount and -v flags.

When using volumes with services, only --mount is supported.

Create and manage volumes

Unlike a bind mount, you can create and manage volumes outside the scope of any container.

Create a volume:

$ docker volume create my-vol

List volumes:

$ docker volume ls

DRIVER              VOLUME NAME
local               my-vol

Inspect a volume:

$ docker volume inspect my-vol
[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-vol/_data",
        "Name": "my-vol",
        "Options": {},
        "Scope": "local"
    }
]

Remove a volume:

$ docker volume rm my-vol

Start a container with a volume

If you start a container with a volume that does not yet exist, Docker creates the volume for you. The following example mounts the volume myvol2 into /app/ in the container.

The -v and --mount examples below produce the same result. You can’t run them both unless you remove the devtest container and the myvol2 volume after running the first one.

$ docker run -d \
  --name devtest \
  --mount source=myvol2,target=/app \
  nginx:latest
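The equivalent -v syntax is:

$ docker run -d \
  --name devtest \
  -v myvol2:/app \
  nginx:latest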

Use docker inspect devtest to verify that the volume was created and mounted correctly. Look for the Mounts section:

"Mounts": [
    {
        "Type": "volume",
        "Name": "myvol2",
        "Source": "/var/lib/docker/volumes/myvol2/_data",
        "Destination": "/app",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],

This shows that the mount is a volume, it shows the correct source and destination, and that the mount is read-write.

Stop the container and remove the volume. Note volume removal is a separate step.

$ docker container stop devtest

$ docker container rm devtest

$ docker volume rm myvol2

Start a service with volumes

When you start a service and define a volume, each service container uses its own local volume. None of the containers can share this data if you use the local volume driver, but some volume drivers do support shared storage. Docker for AWS and Docker for Azure both support persistent storage using the Cloudstor plugin.

The following example starts a nginx service with four replicas, each of which uses a local volume called myvol2.

$ docker service create -d \
  --replicas=4 \
  --name devtest-service \
  --mount source=myvol2,target=/app \
  nginx:latest

Use docker service ps devtest-service to verify that the service is running:

$ docker service ps devtest-service

ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
4d7oz1j85wwn        devtest-service.1   nginx:latest        moby                Running             Running 14 seconds ago

Remove the service, which stops all its tasks:

$ docker service rm devtest-service

Removing the service does not remove any volumes created by the service. Volume removal is a separate step.

SYNTAX DIFFERENCES FOR SERVICES

The docker service create command does not support the -v or --volume flag. When mounting a volume into a service’s containers, you must use the --mount flag.

Populate a volume using a container

If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents are copied into the volume. The container then mounts and uses the volume, and other containers which use the volume also have access to the pre-populated content.

To illustrate this, this example starts an nginx container and populates the new volume nginx-vol with the contents of the container’s /usr/share/nginx/html directory, which is where Nginx stores its default HTML content.

The --mount and -v examples have the same end result.

$ docker run -d \
  --name=nginxtest \
  --mount source=nginx-vol,destination=/usr/share/nginx/html \
  nginx:latest
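The equivalent -v syntax is:

$ docker run -d \
  --name=nginxtest \
  -v nginx-vol:/usr/share/nginx/html \
  nginx:latest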

After running either of these examples, run the following commands to clean up the containers and volumes. Note volume removal is a separate step.

$ docker container stop nginxtest

$ docker container rm nginxtest

$ docker volume rm nginx-vol

Use a read-only volume

For some development applications, the container needs to write into the bind mount so that changes are propagated back to the Docker host. At other times, the container only needs read access to the data. Remember that multiple containers can mount the same volume, and it can be mounted read-write for some of them and read-only for others, at the same time.

This example modifies the one above but mounts the directory as a read-only volume, by adding ro to the (empty by default) list of options, after the mount point within the container. Where multiple options are present, separate them by commas.

The --mount and -v examples have the same result.

$ docker run -d \
  --name=nginxtest \
  --mount source=nginx-vol,destination=/usr/share/nginx/html,readonly \
  nginx:latest
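The equivalent -v syntax adds ro to the options field:

$ docker run -d \
  --name=nginxtest \
  -v nginx-vol:/usr/share/nginx/html:ro \
  nginx:latest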

Use docker inspect nginxtest to verify that the readonly mount was created correctly. Look for the Mounts section:

"Mounts": [
    {
        "Type": "volume",
        "Name": "nginx-vol",
        "Source": "/var/lib/docker/volumes/nginx-vol/_data",
        "Destination": "/usr/share/nginx/html",
        "Driver": "local",
        "Mode": "",
        "RW": false,
        "Propagation": ""
    }
],

Stop and remove the container, and remove the volume. Volume removal is a separate step.

$ docker container stop nginxtest

$ docker container rm nginxtest

$ docker volume rm nginx-vol

Share data among machines

When building fault-tolerant applications, you might need to configure multiple replicas of the same service to have access to the same files.

[Diagram: shared storage distributed among application servers]

There are several ways to achieve this when developing your applications. One is to add logic to your application to store files on a cloud object storage system like Amazon S3. Another is to create volumes with a driver that supports writing files to an external storage system like NFS or Amazon S3.

Volume drivers allow you to abstract the underlying storage system from the application logic. For example, if your services use a volume with an NFS driver, you can update the services to use a different driver, as an example to store data in the cloud, without changing the application logic.

Use a volume driver

When you create a volume using docker volume create, or when you start a container which uses a not-yet-created volume, you can specify a volume driver. The following examples use the vieux/sshfs volume driver, first when creating a standalone volume, and then when starting a container which creates a new volume.

Initial set-up

This example assumes that you have two nodes, the first of which is a Docker host and can connect to the second using SSH.

On the Docker host, install the vieux/sshfs plugin:

$ docker plugin install --grant-all-permissions vieux/sshfs

Create a volume using a volume driver

This example specifies an SSH password, but if the two hosts have shared keys configured, you can omit the password. Each volume driver may have zero or more configurable options, each of which is specified using an -o flag.

$ docker volume create --driver vieux/sshfs \
  -o sshcmd=test@node2:/home/test \
  -o password=testpassword \
  sshvolume

Start a container which creates a volume using a volume driver

This example specifies an SSH password, but if the two hosts have shared keys configured, you can omit the password. Each volume driver may have zero or more configurable options. If the volume driver requires you to pass options, you must use the --mount flag to mount the volume, rather than -v.

$ docker run -d \
  --name sshfs-container \
  --volume-driver vieux/sshfs \
  --mount src=sshvolume,target=/app,volume-opt=sshcmd=test@node2:/home/test,volume-opt=password=testpassword \
  nginx:latest

Create a service which creates an NFS volume

This example shows how you can create an NFS volume when creating a service. This example uses 10.0.0.10 as the NFS server and /var/docker-nfs as the exported directory on the NFS server. Note that the volume driver specified is local.

NFSV3

$ docker service create -d \
  --name nfs-service \
  --mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/var/docker-nfs,volume-opt=o=addr=10.0.0.10' \
  nginx:latest

NFSV4

$ docker service create -d \
    --name nfs-service \
    --mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/var/docker-nfs,"volume-opt=o=10.0.0.10,rw,nfsvers=4,async"' \
    nginx:latest

Backup, restore, or migrate data volumes

Volumes are useful for backups, restores, and migrations. Use the --volumes-from flag to create a new container that mounts that volume.

Backup a container

For example, create a new container named dbstore:

$ docker run -v /dbdata --name dbstore ubuntu /bin/bash

Then in the next command, we:

  • Launch a new container and mount the volume from the dbstore container
  • Mount a local host directory as /backup
  • Pass a command that tars the contents of the dbdata volume to a backup.tar file inside our /backup directory.
$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

When the command completes and the container stops, we are left with a backup of our dbdata volume.
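Before restoring, you can optionally sanity-check the archive contents:

# list the files in the backup without extracting them
tar tvf backup.tar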

Restore container from backup

With the backup just created, you can restore it to the same container, or another that you made elsewhere.

For example, create a new container named dbstore2:

$ docker run -v /dbdata --name dbstore2 ubuntu /bin/bash

Then un-tar the backup file in the new container's data volume:

$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"

You can use the techniques above to automate backup, migration and restore testing using your preferred tools.

Remove volumes

A Docker data volume persists after a container is deleted. There are two types of volumes to consider:

  • Named volumes have a specific source from outside the container, for example awesome:/bar.
  • Anonymous volumes have no specific source, so when the container is deleted, you must instruct the Docker Engine daemon to remove them.

Remove anonymous volumes

To automatically remove anonymous volumes, use the --rm option. For example, this command creates an anonymous /foo volume. When the container is removed, the Docker Engine removes the /foo volume but not the awesome volume.

$ docker run --rm -v /foo -v awesome:/bar busybox top

Remove all volumes

To remove all unused volumes and free up space:

$ docker volume prune
