Add first Longhorn tests #389
base: main
Conversation
Force-pushed from c8c01b3 to 0cda9b4
// If Longhorn was installed by a previous test in this same session, uninstall it so it can be installed again with a custom configuration.
// If Longhorn was installed prior to this test run, leave it be and skip this test. This way we allow running the
// next tests on top of a manually installed Longhorn and avoid accidentally uninstalling something important.
if chart.IsAlreadyInstalled {
	if l.installedLonghorn {
		l.T().Log("Uninstalling Longhorn that was installed by a previous test.")
		err = charts.UninstallLonghornChart(l.client, longhornNamespace, l.cluster.ID, l.payloadOpts.Host)
		require.NoError(l.T(), err)
	} else {
		l.T().Skip("Skipping installation test because Longhorn is already installed")
	}
}
This decision was tricky and had some discussion surrounding it. From the outcome of the discussions I believe this is the way to go. This way we achieve:
1. Running all tests at once with the suite;
2. Running some tests with a custom installation of Longhorn;
3. Keeping the cleanup logic neatly isolated.
cc @rancher/qa-pit-crew
One last nit here: put the chart_install test in its own file and move the rest out (same structure as validation/fleet/provisioning).
This way, the non-install tests can have the chart check + install/uninstall as part of setup instead of in each test.
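A minimal sketch of what that setup-based structure could look like for the non-install suite, assuming testify's suite lifecycle. `charts.UninstallLonghornChart`, `chart.IsAlreadyInstalled`, and the suite fields are taken from the diff above, while `charts.InstallLonghornChart`, the suite name, and how `chart` is obtained here are purely illustrative:

```go
// Illustrative sketch only, not the PR's actual code.
// The non-install suite installs Longhorn once in SetupSuite and only
// uninstalls it in TearDownSuite if this run was the one that installed it.
func (l *LonghornOperationsTestSuite) SetupSuite() {
	// chart would be fetched the same way the install test does it; the
	// IsAlreadyInstalled check mirrors the logic in the diff above.
	if !chart.IsAlreadyInstalled {
		// InstallLonghornChart is a hypothetical helper name.
		err := charts.InstallLonghornChart(l.client, longhornNamespace, l.cluster.ID, l.payloadOpts.Host)
		require.NoError(l.T(), err)
		l.installedLonghorn = true
	}
}

func (l *LonghornOperationsTestSuite) TearDownSuite() {
	if l.installedLonghorn {
		err := charts.UninstallLonghornChart(l.client, longhornNamespace, l.cluster.ID, l.payloadOpts.Host)
		require.NoError(l.T(), err)
	}
}
```

With something like this in place, the uninstall/skip branching shown above would only be needed in the chart install test itself.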
So are you saying we should put the chart install test in a separate file and suite, and install Longhorn in the setup of the suite containing all the tests except the chart install one? If so, I assume the test that installs Longhorn again with a static custom configuration would share a suite with the simple chart install test, and whichever of the two runs last would uninstall Longhorn before proceeding if it was installed in the same test run.
If that is correct, I see the benefits of doing this, but having them in two suites would make it impossible to run them all in a single Jenkins run, so now I am trying to think of a way to get the best of both worlds.
Force-pushed from 264b9a2 to b1dbeaa
rancher-max left a comment
Looks really good! Couple of minor things, but otherwise this has my approval 👍
floatingman left a comment
Looks good overall. You should consider adding a README for the longhorn_test, so others will know how to run the tests and whether any specific cattle-config settings are needed.
Force-pushed from b1dbeaa to e7d7e69
Force-pushed from e7d7e69 to 0ebe37f
rancher-max left a comment
LGTM -- left one comment that I feel is a nit so will leave up to you if you want to update or not
@floatingman at this point there are no additional settings required for these tests, so I don't think this is needed yet. That said, there are a lot more tests to go, so I will keep that in mind moving forward.
Force-pushed from bc06906 to 978e0eb
Force-pushed from 978e0eb to 0a11914
Signed-off-by: hamistao <pedro.ribeiro@suse.com>
This is not required for the Longhorn tests.
Signed-off-by: hamistao <pedro.ribeiro@suse.com>
The dash is already added by `AppendRandomString`.
Signed-off-by: hamistao <pedro.ribeiro@suse.com>
Signed-off-by: hamistao <pedro.ribeiro@suse.com>
Signed-off-by: hamistao <pedro.ribeiro@suse.com>
Signed-off-by: hamistao <pedro.ribeiro@suse.com>
Signed-off-by: hamistao <pedro.ribeiro@suse.com>
Signed-off-by: hamistao <pedro.ribeiro@suse.com>
Signed-off-by: hamistao <pedro.ribeiro@suse.com>
And move CreatePVCWorkload there.
Signed-off-by: hamistao <pedro.ribeiro@suse.com>
Signed-off-by: hamistao <pedro.ribeiro@suse.com>
Signed-off-by: hamistao <pedro.ribeiro@suse.com>
This adds the following tests:
- Install Longhorn Using Rancher Charts
- Install Longhorn with Custom Configuration
- Create Longhorn Volume with Rancher Workloads
- Test RBAC Integration with Longhorn
- Test Scaling a StatefulSet with a Longhorn PVC
Signed-off-by: hamistao <pedro.ribeiro@suse.com>
Force-pushed from 0a11914 to aa8f72c
slickwarren left a comment
Some more nits, but pretty simple ones. We can meet to go over them if you would like.
storage.CheckVolumeAllocation(l.T(), l.client, l.cluster.ID, namespace.Name, l.longhornTestConfig.LonghornTestStorageClass, volumeSourceName, storage.MountPath)

var statefulSetPodReplicas int32 = 5
statefulSet.Spec.Replicas = &statefulSetPodReplicas
statefulSet, err = statefulset.UpdateStatefulSet(l.client, l.cluster.ID, namespace.Name, statefulSet, true)
require.NoError(l.T(), err)

storage.CheckVolumeAllocation(l.T(), l.client, l.cluster.ID, namespace.Name, l.longhornTestConfig.LonghornTestStorageClass, volumeSourceName, storage.MountPath)
there's some weirdness in the logs around this function but I can't figure out exactly where:
09:09:24 2025/12/24 17:09:23 (0xc000efe000) (0xc000b600a0) Stream added, broadcasting: 3
09:09:24 2025/12/24 17:09:23 (0xc000efe000) Reply frame received for 3
09:09:24 2025/12/24 17:09:23 (0xc000efe000) (0xc000a663c0) Create stream
09:09:24 2025/12/24 17:09:23 (0xc000efe000) (0xc000a663c0) Stream added, broadcasting: 5
09:09:24 2025/12/24 17:09:23 (0xc000efe000) Reply frame received for 5
t.Logf("Write to mounted volume under the path [%v] on pod [%v]", mountpoint, podName)
writeToMountedVolume := []string{"touch", mountpoint + "/" + testFileName}
output, err := kubeconfig.KubectlExec(restConfig, podName, namespace, writeToMountedVolume)
Does this log the streams for some reason?
```yaml
longhorn:
  testProject: "longhorn-custom-test"
  testStorageClass: "longhorn" # Can be either "longhorn" or "longhorn-static"
```
does it matter which?
Tbh I am not sure. Since most operations performed over the course of the tests can be done with either, I figured I should add this option in case someone wants to test using longhorn-static specifically, but I am not sure which scenarios would demand that.
	longhornTestConfigConfigurationFileKey = "longhorn"
	LonghornTestDefaultProject             = "longhorn-test"
	LonghornTestDefaultStorageClass        = "longhorn"
)
Why not set this at the TestConfig level instead, i.e. `json:"testProject,default=longhorn-test"` (not sure if that's exactly the format, see other files for an exact reference)?
Hey I like this approach. I am not sure whether it works if the configuration is missing entirely instead of just missing a few fields, but either way there is room for improvement here.
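As a rough sketch of the suggestion, assuming the config loader honors `default` struct tags (for example via a library like creasty/defaults); the tag format is an assumption and would need to be checked against how other test configs in the repo declare defaults:

```go
// Hypothetical TestConfig with defaults declared on the struct itself.
// Field names and config keys mirror the ones in this PR; whether the
// loader applies the `default` tag (and whether it does so when the whole
// "longhorn" key is absent) is an assumption to verify.
type TestConfig struct {
	LonghornTestProject      string `json:"testProject" yaml:"testProject" default:"longhorn-test"`
	LonghornTestStorageClass string `json:"testStorageClass" yaml:"testStorageClass" default:"longhorn"`
}
```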
	if r := recover(); r != nil {
		longhornTestConfig = TestConfig{
			LonghornTestProject:      LonghornTestDefaultProject,
			LonghornTestStorageClass: LonghornTestDefaultStorageClass,
		}
Why would this ever fail? Just curious, as this isn't typically in other funcs that load in a struct.
If the Longhorn test configuration in the cattle config file is not set, LoadConfig panics, and then we recover by falling back to the default values.
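For context, a minimal sketch of the overall pattern being described, assuming shepherd's `config.LoadConfig` is the loader in play; the function name and wiring here are illustrative, and only the recover block is taken from the diff above:

```go
// Illustrative sketch: LoadConfig panics when the "longhorn" key is missing
// from the cattle config, so the deferred recover falls back to defaults.
func loadLonghornTestConfig() (longhornTestConfig TestConfig) {
	defer func() {
		if r := recover(); r != nil {
			longhornTestConfig = TestConfig{
				LonghornTestProject:      LonghornTestDefaultProject,
				LonghornTestStorageClass: LonghornTestDefaultStorageClass,
			}
		}
	}()

	// longhornTestConfigConfigurationFileKey = "longhorn", as defined above.
	config.LoadConfig(longhornTestConfigConfigurationFileKey, &longhornTestConfig)
	return longhornTestConfig
}
```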
	if !slices.Contains(storage.LonghornStorageClasses, longhornTestConfig.LonghornTestStorageClass) {
		panic(fmt.Errorf("Invalid storage class %s", longhornTestConfig.LonghornTestStorageClass))
	}
Won't this fail if it's empty? Or maybe it never will be empty if you use the json suggestion above...?
If it is empty then we would have entered the previous if block.
		longhornTestConfig.LonghornTestProject = LonghornTestDefaultProject
	}

	if longhornTestConfig.LonghornTestStorageClass == "" {
I think this will never be reached if it is indeed empty due to the panic above.
This is for the case where the longhorn test configuration is there but one of the fields is empty.
rancher/qa-tasks#2006
This adds the first automated Longhorn interoperability tests, namely:
- Install Longhorn Using Rancher Charts
- Install Longhorn with Custom Configuration
- Create Longhorn Volume with Rancher Workloads
- Test RBAC Integration with Longhorn
- Test Scaling a StatefulSet with a Longhorn PVC