Add CSI driver for gluster block #150
Conversation
Can one of the admins verify this patch?

add to whitelist
cmd/glusterblock/main.go
Outdated
```go
"fmt"
"os"

gfd "github.com/gluster/gluster-csi-driver/pkg/glusterblock"
```
gfd (gluster file driver) to gbd (gluster block driver)

can we rename glusterblock as glusterloopback?

can we call it 'glustervirtualblock' (gvb) instead? I am thinking of avoiding 'loop' in the code.
cmd/glusterblock/main.go
Outdated
```go
cmd := &cobra.Command{
	Use:   "glusterblock-csi-driver",
	Short: "GlusterFS Block CSI driver",
```
GlusterFS Block CSI driver or Gluster loopback CSI driver?

glusterblock-csi-driver to glusterloopback-csi-driver?

again, as said, let's avoid 'loop', and it's fine to call it 'virtual'
```go
gfd "github.com/gluster/gluster-csi-driver/pkg/glusterfs"
"github.com/gluster/gluster-csi-driver/pkg/glusterfs/utils"
"github.com/gluster/gluster-csi-driver/pkg/glusterfs/config"
```
what is the intention behind renaming this folder?

There was common code between glusterfs and glusterblock, which was moved under "pkg/utils/common-utils.go" with package name "utils". Hence config.go was moved into the respective config/config.go with package name "config".
```
@@ -0,0 +1,295 @@
# Copyright 2018 The Gluster CSI Authors.
```
this template needs to be updated to use the latest sidecar CSI images

IMO create the below directory structure:
example/kubernetes/glusterfs/ ---> for glusterfs
example/kubernetes/gluster-loopback/ ---> for loopback
```
@@ -0,0 +1,87 @@
# Copyright 2018 The Gluster CSI Authors.
```
can we use a single Dockerfile for both? I feel there is a lot of code duplication here; can we make use of arguments during docker build?

Dependencies are different for each file. It will be messy with if blocks. I feel having them separate is cleaner.
```go
}

// NodeGetCapabilities returns the supported capabilities of the node server
func (ns *NodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
```
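For context, `NodeGetCapabilities` in CSI drivers typically just returns a static capability list. Below is a minimal sketch of that shape; it uses simplified stand-in types (`NodeServiceCapability`, `NodeGetCapabilitiesResponse`) rather than the real github.com/container-storage-interface/spec/lib/go/csi package, so the names here are illustrative only, not the driver's actual code.

```go
package main

import "fmt"

// Simplified stand-ins for the CSI spec types; the real driver uses
// github.com/container-storage-interface/spec/lib/go/csi.
type NodeServiceCapability struct{ Type string }
type NodeGetCapabilitiesResponse struct{ Capabilities []NodeServiceCapability }

type NodeServer struct{}

// NodeGetCapabilities returns the static set of capabilities the node
// server supports. A driver that does its own staging would advertise
// STAGE_UNSTAGE_VOLUME here.
func (ns *NodeServer) NodeGetCapabilities() *NodeGetCapabilitiesResponse {
	return &NodeGetCapabilitiesResponse{
		Capabilities: []NodeServiceCapability{
			{Type: "STAGE_UNSTAGE_VOLUME"},
		},
	}
}

func main() {
	ns := &NodeServer{}
	for _, c := range ns.NodeGetCapabilities().Capabilities {
		fmt.Println(c.Type)
	}
}
```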
```
@@ -0,0 +1,17 @@
package config
```
can be moved to a common place to avoid code duplication

config of glusterfs csi and glusterblock csi is a little different, hence kept separate
pkg/glusterfs/controllerserver.go
Outdated
```diff
 // CreateSnapshot create snapshot of an existing PV
-func (cs *ControllerServer) CreateSnapshot(ctx context.Context, req *csi.CreateSnapshotRequest) (*csi.CreateSnapshotResponse, error) {
+func (cs *ControllerServer) CreateSnapshot(ctx context.Context, req *csi.CreateSnapshotRequest) (*csi.CreateSnapshotResponse, error) { //nolint: gocyclo
```
//nolint: gocyclo needs to be removed
```go
}

// MountVolume mounts the given source at the target
func MountVolume(srcPath string, targetPath string, fstype string, mo []string) error {
```
can this be used for the glusterfs driver also?

Currently not being used, but yes, it can be.
pkg/utils/common-utils.go
Outdated
```go
}

// Config struct fills the parameters of request or user input
type Config struct {
```
this is not required, I think (avoid code duplication)
Does this supersede #105?
Yes, changed due to gluster/glusterd2#1439
65a1cbc to 60beba9
```yaml
securityContext:
  privileged: true
  capabilities:
    add: ["CAP_MKNOD"]
```
why does the provisioner need these capabilities? (AFAIK it only creates and deletes volumes)
```go
}

// ParseCreateVolRequest parse incoming volume create request and fill ProvisionerConfig.
func (cs *ControllerServer) ParseCreateVolRequest(req *csi.CreateVolumeRequest) (*ProvisionerConfig, error) {
```
can we remove this function, as it doesn't do anything functional?
```go
// CreateVolume creates and starts the volume
func (cs *ControllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	glog.V(2).Infof("request received %+v", req)
```
you need to remove logging of secrets from the request, please check #148
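One way to address this (a sketch only, not necessarily the fix that landed; the real request type comes from the CSI spec package and is stood in for here by a simplified struct) is to log a copy of the request with the secrets blanked:

```go
package main

import "fmt"

// Simplified stand-in for the CSI CreateVolumeRequest; the real type
// carries a Secrets map alongside the volume parameters.
type CreateVolumeRequest struct {
	Name    string
	Secrets map[string]string
}

// sanitizeRequest returns a shallow copy of the request with the
// secrets removed, so the copy can be logged safely while the
// original keeps its credentials for actual use.
func sanitizeRequest(req *CreateVolumeRequest) *CreateVolumeRequest {
	c := *req
	c.Secrets = nil
	return &c
}

func main() {
	req := &CreateVolumeRequest{
		Name:    "pvc-1",
		Secrets: map[string]string{"password": "hunter2"},
	}
	// Log only the sanitized copy; the secrets never reach the log.
	fmt.Printf("request received %+v\n", sanitizeRequest(req))
}
```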
```go
volumeReq := bapi.BlockVolumeCreateRequest{
	BlockVolumeInfo: &blockVolInfo,
}
```

I think one more validation is missing: you need to reject volume creation from snapshot here.
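A sketch of that missing validation could look like the following (using simplified stand-ins for the CSI types; `VolumeContentSource` and the field names here are illustrative, not the driver's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the CSI request types; in the real driver
// these come from the CSI spec package.
type VolumeContentSource struct{ SnapshotID string }
type CreateVolumeRequest struct {
	Name          string
	ContentSource *VolumeContentSource
}

// validateNoSnapshotSource rejects create requests that ask for a
// volume cloned from a snapshot, since the block driver does not
// support that yet.
func validateNoSnapshotSource(req *CreateVolumeRequest) error {
	if req.ContentSource != nil {
		return errors.New("volume creation from snapshot is not supported")
	}
	return nil
}

func main() {
	plain := &CreateVolumeRequest{Name: "vol-1"}
	fromSnap := &CreateVolumeRequest{
		Name:          "vol-2",
		ContentSource: &VolumeContentSource{SnapshotID: "snap-1"},
	}
	fmt.Println(validateNoSnapshotSource(plain))    // accepted
	fmt.Println(validateNoSnapshotSource(fromSnap)) // rejected
}
```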
pkg/gluster-virtblock/driver.go
Outdated
```go
const (
	glusterBlockCSIDriverName    = "org.gluster.glustervirtblock"
	glusterBlockCSIDriverVersion = "0.0.9"
```
need to update driver version to 1.0.0
@JohnStrunk @humblec PTAL
```sh
	/profile.cov >profile.cov
fi

DRIVER="glustervirtblock-csi-driver"
```
The driver name looks a bit confusing to me; if we mean this is a virtual block driver, we could work on better names. 'Virt block' vs 'virtual block': I prefer the latter.

As discussed in the gcs channel, keeping it as glustervirtblock.
```go
)

func init() {
	// #nosec
```
It's an annotation to skip the gosec warning, as error checking is not done for now.
cmd/gluster-virtblock/main.go
Outdated
```go
cmd := &cobra.Command{
	Use:   "gluster-virtual-block-csi-driver",
	Short: "GlusterFS Virtual Block CSI driver",
```
There is no point in keeping the GlusterFS string here; Gluster fits better.
```go
// ProvisionerConfig is the combined configuration of gluster cli vol create request and CSI driver specific input
type ProvisionerConfig struct {
	gdVolReq *api.VolCreateReq
	//csiConf *CsiDrvParam
```
maybe it is more apt to have a block volume request param

Didn't get it, could you elaborate?

please remove this structure, it is not needed at all
@kotreshhr @poornimag do we have any design doc we can point to on the block driver? If yes, please put a pointer here in this PR for reference.
```go
"os"

gvb "github.com/gluster/gluster-csi-driver/pkg/gluster-virtblock"
"github.com/gluster/gluster-csi-driver/pkg/gluster-virtblock/config"
```
I feel either virtualblock or vb suits better than virtblock.

Also, considering the glusterfs and gluster vb drivers use the same config params of a GD2 cluster, config can be an outside library for both to consume once it's imported.

I will do this as part of a code refactor patch.
```yaml
- name: CSI_ENDPOINT
  value: unix://plugin/csi.sock
- name: REST_URL
  value: http://glusterd2-client.gcs:24007
```
What's this URL all about? Is it a constant value for any GD2 cluster?

It is just like the url in the glusterfs-csi deployment file [1]. If it's deployed manually, this has to be changed, which will be mentioned in the README. If the deployment is part of gcs, gcs will take care of updating this.
[1]
```go
if !found {
	source := gs + ":" + volume
	hostPath = path.Join(ns.Config.MntPathPrefix, volume)
	err = utils.MountVolume(source, hostPath, "glusterfs", nil)
```
What are the permissions of this volume? How do we make sure the ACLs are properly configured between different app pod mounts?

Right now, it's mounted with default options, and with that the host volume mount is accessible to different pods as per testing.
```go
	if err != nil {
		return nil, err
	}
	ns.Config.Mounts[volume] = hostPath
```
How do we ensure the reference counts? Also, how do we make sure there is no (stale) supervol mount when the reference count of block volumes on it is "0"?

The logic of creating multiple block hosting volumes is in gd2. If it can't be created in the first one, a second one is created. The volume context will return the correct block hosting volume and is being used. On the actual deletion of the block volume context, @poornimag or @oshankkumar would better answer that.

What humble meant was: if you delete all block volumes, why would you want to keep the mount of the BHV on the node-plugin?
@humblec if you have multiple BHVs (maybe we will end up with stale BHV mounts), that can be fixed later.

Yeah, it's not ref counted yet. There will be stale host volume mounts if all the blocks are deleted. It can be sent as an incremental patch.
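A sketch of what such ref counting could look like (purely illustrative; `mountTracker`, `acquire`, and `release` are names invented here, not part of the driver):

```go
package main

import "fmt"

// mountTracker ref-counts block-volume users per block-hosting volume
// (BHV) so the supervol mount can be dropped once no block volume on
// it is in use, avoiding the stale mounts discussed above.
type mountTracker struct {
	refs map[string]int // BHV name -> number of active block volumes
}

func newMountTracker() *mountTracker {
	return &mountTracker{refs: make(map[string]int)}
}

// acquire records one more block volume using the BHV; the caller
// mounts the BHV only when this returns true (first user).
func (m *mountTracker) acquire(bhv string) (firstUser bool) {
	m.refs[bhv]++
	return m.refs[bhv] == 1
}

// release drops one user; the caller unmounts the BHV when this
// returns true (count reached zero).
func (m *mountTracker) release(bhv string) (lastUser bool) {
	m.refs[bhv]--
	if m.refs[bhv] == 0 {
		delete(m.refs, bhv)
		return true
	}
	return false
}

func main() {
	mt := newMountTracker()
	fmt.Println(mt.acquire("bhv-1")) // first block volume: mount the BHV
	fmt.Println(mt.acquire("bhv-1")) // second user: already mounted
	fmt.Println(mt.release("bhv-1")) // one user remains: keep the mount
	fmt.Println(mt.release("bhv-1")) // last user gone: unmount the BHV
}
```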
@kotreshhr @poornimag we also need a README for sure :)
cbfdedc to 7179dc4
```diff
-	"pkg/util/feature"
+	"pkg/util/feature",
 ]
 pruneopts = "NUT"
```
```
[root@kotresh gluster-csi-driver $] dep version
dep:
 version     : v0.5.0
 build date  : 2018-07-26
 git hash    : 224a564
 go version  : go1.10.3
 go compiler : gc
 platform    : linux/amd64
 features    : ImportDuringSolve=false
```
README.md
Outdated
### 1. RW0 Volume Claim (Gluster Virtual Block CSI driver)

```
[root@localhost]# kubectl create -f csi-deployment.yaml
```
in which directory is csi-deployment.yaml present? please update
README.md
Outdated
```yaml
metadata:
  name: glustervirtblock-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
```
do we need to make this storage class the default one?

I think we can't have two default storage classes in the same cluster; please verify this.
README.md
Outdated
```
local-storage   kubernetes.io/no-provisioner   29h
```

Unset glusterfs-csi as the default CSI driver if it's the default
ah, this is not required; please make the gluster-block storage class the non-default one
```yaml
  name: glusterblock-csi-pv
  annotations:
    storageClassName: glustervirtblock-csi
spec:
```
instead of storageClassName in the annotations, use spec.storageClassName; check https://github.com/gluster/gluster-csi-driver/pull/154/files#diff-04c6e90faac2675aa89e2176d2eec7d8R119
```
    requests:
      storage: 100Mi

[root@localhost]# kubectl create -f pvc.yaml
```
cat and create should be in separate blocks
README.md
Outdated
```yaml
spec:
  containers:
  - name: smallfile
    image: quay.io/ekuric/smallfile:latest
```
what does this container image do? can't we use the existing redis image?
README.md
Outdated
```
  persistentVolumeClaim:
    claimName: glusterblock-csi-pv

[root@localhost cluster]# kubectl create -f app.yaml
```
please keep [root@localhost] consistent across the README
README.md
Outdated
```
NAME        READY   STATUS    RESTARTS   AGE
gluster-0   1/1     Running   0          38s

[root@localhost]# kubectl describe pods/gluster-0
```
instead of describe, just use `kubectl get po`; if the pod is in Running state then it's fine
7179dc4 to 3712dda
This patch intends to add the gluster-virtblock csi driver, which exports a block device provided by gluster. The current implementation expects a mount point as a parameter, where the block devices will be created.

Co-Authored-by: Poornima G <pgurusid@redhat.com>
Co-Authored-by: Kotresh HR <khiremat@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3712dda to 0e1f281
amarts
left a comment
LGTM. I personally think we should go ahead and get this moving, and make further improvements as follow-up PRs.
Madhu-1
left a comment
LGTM, need to do code cleanup once this PR is merged.
JohnStrunk
left a comment
I'm ok with this. There will need to be a follow-up to clean up the build process.
This patch intends to add the glusterblock csi driver, which exports a loopback device formatted as xfs. The current implementation expects a mount point as a parameter, where the block devices will be created.
The original patch was taken from [1]
[1] master...Madhu-1:lo-block
Co-Authored-by: Poornima G <pgurusid@redhat.com>
Co-Authored-by: Kotresh HR <khiremat@redhat.com>
Signed-off-by: Poornima G <pgurusid@redhat.com>