Deploy an NFS server that is compatible with NFS v3, install and configure the nfs-subdir-external-provisioner, and benchmark NFS performance with Fio.
NFS server compatible with NFS v3
```shell
docker run --privileged -d --name nfs \
    --network kind \
    -v /home/boer/projects/kind-k8s/nfs_data:/data \
    -e NFS_EXPORT_0='/data *(rw,insecure,no_subtree_check,no_root_squash,fsid=1)' \
    -p 2049:2049 -p 2049:2049/udp \
    -p 111:111 -p 111:111/udp \
    -p 32765:32765 -p 32765:32765/udp \
    -p 32767:32767 -p 32767:32767/udp \
    erichough/nfs-server:latest
```
```shell
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=172.30.254.86 \
    --set nfs.path=/data
```
| StorageClass | ReclaimPolicy | Parameters |
| --- | --- | --- |
| nfs-client | Delete | archiveOnDelete=true |
| nfs | Retain | archiveOnDelete=true |
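For reference, the second row of the table corresponds to a StorageClass manifest along the following lines. This is a sketch: the `provisioner` string shown is the chart's default and may differ depending on your Helm release values.

```yaml
# Sketch of the "nfs" StorageClass from the table above (assumed defaults).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: cluster.local/nfs-subdir-external-provisioner  # chart default; verify in your release
reclaimPolicy: Retain
parameters:
  archiveOnDelete: "true"  # released directories are renamed (archived) instead of deleted
```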
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/tree/master/deploy
```shell
helm install -n operators nfs-subdir-external-provisioner \
    --set nfs.server=repository.boer.xyz \
    --set nfs.path=/nfs \
    --set storageClass.name=nfs-storage .
```
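To verify that dynamic provisioning works after the install, a minimal PVC against the `nfs-storage` class can be applied (the claim name here is hypothetical, for a quick smoke test):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc        # hypothetical name, not from the original post
spec:
  accessModes:
    - ReadWriteMany         # NFS-backed volumes support RWX
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 1Gi
```

After `kubectl apply`, the PVC should reach `Bound` and the provisioner should create a subdirectory under `/nfs` on the NFS server.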
Benchmarking a PVC with Fio
1. Reference: benchmarking block storage performance
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fio-test-cm
  namespace: default
data:
  default-fio: |
    [global]
    randrepeat=0
    verify=0
    ioengine=libaio
    direct=1
    gtod_reduce=1
    [job1]
    name=read_iops
    bs=4K
    iodepth=64
    size=2G
    readwrite=randread
    time_based
    ramp_time=2s
    runtime=15s
    [job2]
    name=write_iops
    bs=4K
    iodepth=64
    size=2G
    readwrite=randwrite
    time_based
    ramp_time=2s
    runtime=15s
    [job3]
    name=read_bw
    bs=128K
    iodepth=64
    size=2G
    readwrite=randread
    time_based
    ramp_time=2s
    runtime=15s
    [job4]
    name=write_bw
    bs=128k
    iodepth=64
    size=2G
    readwrite=randwrite
    time_based
    ramp_time=2s
    runtime=15s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fio-test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: standard
---
apiVersion: v1
kind: Pod
metadata:
  name: fio-test-pod
  namespace: default
spec:
  containers:
    - name: fio
      command:
        - /bin/sh
      args:
        - -c
        - tail -f /dev/null
      image: xridge/fio:latest
      imagePullPolicy: Always
      volumeMounts:
        - mountPath: /data
          name: persistent-storage
        - mountPath: /etc/fio-config
          name: config-map
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: fio-test-pvc
    - name: config-map
      configMap:
        name: fio-test-cm
```
```shell
# Run inside the fio-test-pod container, e.g. via `kubectl exec -it fio-test-pod -- sh`.
# Without --directory, fio writes to the current directory (container filesystem):
fio /etc/fio-config/default-fio
# Point fio at the PVC mount to benchmark the NFS-backed volume:
fio --directory /data /etc/fio-config/default-fio --output-format normal

# Ad-hoc 128K random-read / random-write tests against the mounted volume:
fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=128k -numjobs=1 \
    -time_based=1 -runtime=1000 -group_reporting -filename=/data/rr128k -size=2G -name=rr128k
fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=128k -numjobs=1 \
    -time_based=1 -runtime=1000 -group_reporting -filename=/data/rw128k -size=2G -name=rw128k
```
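A quick sanity check when reading fio output: throughput equals IOPS times block size. As a sketch with hypothetical numbers (not measurements from this setup):

```shell
# Hypothetical fio result: 20000 IOPS at a 4K block size (the 4K jobs above).
iops=20000
bs=4096                                   # bytes per I/O
echo "$(( iops * bs / 1048576 )) MiB/s"   # prints "78 MiB/s" (integer MiB/s)
```

The same relation explains why the 128K jobs report higher bandwidth at lower IOPS than the 4K jobs.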
2. Reference: fio
Running Fio via Docker
```shell
docker run --rm \
    -v $(pwd)/test:/data \
    -v /tmp/jobs.fio:/tmp/jobs.fio \
    harbor.boer.xyz/public/xridge_fio:latest /tmp/jobs.fio
```
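The `/tmp/jobs.fio` file mounted above is a standard fio job file. Its contents are not shown in the original post; a minimal sketch targeting the `/data` mount might look like this (job name and sizes are assumptions):

```ini
; Hypothetical jobs.fio for the container above.
[global]
ioengine=libaio
direct=1
directory=/data

[randread-4k]
bs=4K
iodepth=64
size=1G
rw=randread
runtime=15s
time_based
```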