1. Have a Ceph cluster. Assume one is already set up; there are plenty of guides for that online.
2. Integrating Ceph with Kubernetes
2.1 Disable rbd features
An rbd image has five features: layering, exclusive-lock, object-map, fast-diff, and deep-flatten.
Because the current kernel rbd client only supports layering, change the default: on every Ceph node, add the line rbd_default_features = 1 to /etc/ceph/ceph.conf. Images created afterwards will carry only that one feature. To verify:

# ceph --show-config | grep rbd | grep features
rbd_default_features = 1
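These feature flags form a bitmask (layering = 1, exclusive-lock = 4, object-map = 8, fast-diff = 16, deep-flatten = 32), which is why the value 1 leaves only layering. For an image that was already created with the extra features, a minimal sketch of stripping them off afterwards (some-image is just a placeholder name):

# rbd feature disable some-image deep-flatten fast-diff object-map exclusive-lock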
2.2 Create ceph-secret
This Kubernetes Secret object is used by the Kubernetes volume plugin to access the Ceph cluster:
Get the key of client.admin and base64-encode it:
# ceph auth get-key client.admin
AQBRIaFYqWT8AhAAUtmJgeNFW/o1ylUzssQQhA==
# echo "AQBRIaFYqWT8AhAAUtmJgeNFW/o1ylUzssQQhA==" | base64
QVFCUklhRllxV1Q4QWhBQVV0bUpnZU5GVy9vMXlsVXpzc1FRaEE9PQo=
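Note that echo appends a trailing newline that gets encoded along with the key (hence the o= at the end of the output above). Piping the key straight into base64 avoids this; a safer one-liner, assuming the same client.admin key:

# ceph auth get-key client.admin | base64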
Create the file ceph-secret.yaml; the key field under data is the encoded value obtained above:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFCUklhRllxV1Q4QWhBQVV0bUpnZU5GVy9vMXlsVXpzc1FRaEE9PQo=
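Equivalently, kubectl can build the Secret and do the base64 encoding itself; a sketch using create secret generic with the raw (un-encoded) key:

# kubectl create secret generic ceph-secret --from-literal=key='AQBRIaFYqWT8AhAAUtmJgeNFW/o1ylUzssQQhA=='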
Create ceph-secret:
# kubectl create -f ceph-secret.yaml
secret "ceph-secret" created
# kubectl get secret
NAME                  TYPE                                  DATA      AGE
ceph-secret           Opaque                                1         2d
default-token-5vt3n   kubernetes.io/service-account-token   3         106d
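As a sanity check, you can pull the key back out of the cluster and decode it; it should match the output of ceph auth get-key above:

# kubectl get secret ceph-secret -o jsonpath='{.data.key}' | base64 -d
AQBRIaFYqWT8AhAAUtmJgeNFW/o1ylUzssQQhA==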
3. Kubernetes Persistent Volumes and Persistent Volume Claims
Concepts: a PV is a resource in the cluster; a PVC requests that resource and checks whether it is available.
Note: the steps below involve name parameters; they must match exactly.
3.1 Create the disk image (using a JDK stored on Ceph as the example)
# rbd create jdk-image -s 1G
# rbd info jdk-image
rbd image 'jdk-image':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.37642ae8944a
        format: 2
        features: layering
        flags:
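Before handing the image to Kubernetes, it is worth confirming that the node's kernel client can actually map it; if any feature besides layering were still enabled, this is where it would fail. A quick test on a node with /etc/ceph configured (the device name may differ):

# rbd map jdk-image
/dev/rbd0
# rbd unmap /dev/rbd0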
3.2 Create the PV (still using the ceph-secret created earlier)
Create jdk-pv.yaml. monitors: lists the Ceph mons; include every one you have:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jdk-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 10.10.10.1:6789
    pool: rbd
    image: jdk-image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: xfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
Run the create:
# kubectl create -f jdk-pv.yaml
persistentvolume "jdk-pv" created
# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                REASON    AGE
ceph-pv   1Gi        RWO           Recycle         Bound       default/ceph-claim             1d
jdk-pv    2Gi        RWO           Recycle         Available                                  1m

Note that the PV's storage: 2Gi is only matching metadata for claims; it does not resize the 1G rbd image behind it.
3.3 Create the PVC
Create jdk-pvc.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jdk-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Run the create:
# kubectl create -f jdk-pvc.yaml
persistentvolumeclaim "jdk-claim" created
# kubectl get pvc
NAME         STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
ceph-claim   Bound     ceph-pv   1Gi        RWO           2d
jdk-claim    Bound     jdk-pv    2Gi        RWO           39s
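The claim requested 1Gi yet shows 2Gi: binding picks an available PV whose capacity and access mode satisfy the request, so jdk-claim was bound to the 2Gi jdk-pv and reports that PV's capacity. To confirm which volume a claim is bound to:

# kubectl get pvc jdk-claim -o jsonpath='{.spec.volumeName}'
jdk-pv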
3.4 Create a pod that mounts the Ceph rbd:
Create ceph-busyboxpod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: ceph-busybox
spec:
  containers:
    - name: ceph-busybox
      image: busybox
      command: ["sleep", "600000"]
      volumeMounts:
        - name: ceph-vol1
          mountPath: /usr/share/busybox
          readOnly: false
  volumes:
    - name: ceph-vol1
      persistentVolumeClaim:
        claimName: jdk-claim
Run the create:
# kubectl create -f ceph-busyboxpod.yaml
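Once the pod is Running, a quick way to verify the mount and actually push the JDK onto the rbd-backed volume (a sketch; the tarball name is only an example, and kubectl cp assumes a kubectl recent enough to have it plus tar in the container, which busybox provides):

# kubectl exec ceph-busybox -- df -h /usr/share/busybox
# kubectl cp jdk-8u121-linux-x64.tar.gz ceph-busybox:/usr/share/busybox/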
A few notes on Ceph rbd persistence:
1. Stability rests on the Ceph cluster itself.
2. The volume can only be mounted on a single node; it cannot be mounted across nodes.
3. Only one pod can mount it read-write; other pods can only read, as the official documentation describes.
Appendix: the official Kubernetes volume access modes, for reference: ReadWriteOnce (RWO), mounted read-write by a single node; ReadOnlyMany (ROX), mounted read-only by many nodes; ReadWriteMany (RWX), mounted read-write by many nodes. An rbd volume supports RWO and ROX but not RWX, which is exactly the limitation in points 2 and 3 above.