CORENET-6613: ClusterNetworkConnect DevPreview FG
#2880
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: master
Conversation
Did `go get github.com/openshift/api@596420900030c87b81be311e02b4b8726ff5d56c` to get openshift/api#2629, followed by `go mod vendor` and `go mod tidy`. Then when I ran `make` it failed with:

surya@fedora:~/go/src/github.com/openshift/cluster-network-operator$ make
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
go build -mod=vendor -trimpath -ldflags "-X github.com/openshift/cluster-network-operator/pkg/version.versionFromGit="v0.0.0-unknown-00ab125d3" -X github.com/openshift/cluster-network-operator/pkg/version.commitFromGit="00ab125d3" -X github.com/openshift/cluster-network-operator/pkg/version.gitTreeState="dirty" -X github.com/openshift/cluster-network-operator/pkg/version.buildDate="2026-01-15T13:06:12Z" " github.com/openshift/cluster-network-operator/cmd/cluster-network-check-endpoints
vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/gathererconfig.go:13:18: undefined: configv1.GathererState
vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/gathererconfig.go:33:69: undefined: configv1.GathererState
vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/gatherconfig.go:12:24: undefined: configv1.DataPolicyOption
vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/gatherconfig.go:26:76: undefined: configv1.DataPolicyOption
vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/gatherers.go:12:19: undefined: configv1.GatheringMode
vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/gatherers.go:25:63: undefined: configv1.GatheringMode
vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/storage.go:12:29: undefined: configv1.StorageType
vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/gcpserviceendpoint.go:12:17: undefined: configv1.GCPServiceEndpointName
vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/gcpserviceendpoint.go:25:72: undefined: configv1.GCPServiceEndpointName
vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/insightsdatagather.go:43:61: undefined: configv1.InsightsDataGather
vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/insightsdatagather.go:43:61: too many errors
make: *** [vendor/github.com/openshift/build-machinery-go/make/targets/golang/build.mk:16: build] Error 1

Hence had to do:

go get -u github.com/openshift/client-go
go mod tidy
go mod vendor

Signed-off-by: Surya Seetharaman <suryaseetharaman.9@gmail.com>
@tssurya: This pull request references CORENET-6613, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Walkthrough: Adds NetworkConnect feature wiring: a new ClusterNetworkConnect CRD, template and startup-flag gating via OVN_NETWORK_CONNECT_ENABLE, feature-gate propagation into render data and tests, and bumped OpenShift module dependencies.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: tssurya. The full list of commands accepted by this bot can be found here. The pull request process is described here.
 fi

network_connect_enabled_flag=
if [[ "{{.OVN_NETWORK_CONNECT_ENABLE}}" == "true" && "${OVN_NODE_MODE}" != "dpu-host" ]]; then
What's up with "${OVN_NODE_MODE}" != "dpu-host"? I think it's only present here.
hmm it seems the DPF folks added it to all of them?
I see:
multi_network_enabled_flag=
if [[ "{{.OVN_MULTI_NETWORK_ENABLE}}" == "true" && "${OVN_NODE_MODE}" != "dpu-host" ]]; then
multi_network_enabled_flag="--enable-multi-network"
fi
network_segmentation_enabled_flag=
if [[ "{{.OVN_NETWORK_SEGMENTATION_ENABLE}}" == "true" && "${OVN_NODE_MODE}" != "dpu-host" ]]; then
multi_network_enabled_flag="--enable-multi-network"
network_segmentation_enabled_flag="--enable-network-segmentation"
fi
multi_network_policy_enabled_flag=
if [[ "{{.OVN_MULTI_NETWORK_POLICY_ENABLE}}" == "true" && "${OVN_NODE_MODE}" != "dpu-host" ]]; then
multi_network_policy_enabled_flag="--enable-multi-networkpolicy"
fi
admin_network_policy_enabled_flag=
if [[ "{{.OVN_ADMIN_NETWORK_POLICY_ENABLE}}" == "true" && "${OVN_NODE_MODE}" != "dpu-host" ]]; then
admin_network_policy_enabled_flag="--enable-admin-network-policy"
fi
not sure why some have it some don't?
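Rendered out, the gating pattern quoted above can be sketched as a standalone script. This is only an illustration: the variable values below are assumptions standing in for what CNO actually substitutes into the template.

```shell
#!/usr/bin/env bash
# Sketch of the rendered gating pattern, with the Go-template variable
# already substituted. Values are illustrative assumptions.
OVN_NETWORK_CONNECT_ENABLE="true"   # stand-in for the rendered {{.OVN_NETWORK_CONNECT_ENABLE}}
OVN_NODE_MODE="full"                # "dpu-host" on DPU host nodes

network_connect_enabled_flag=
if [[ "${OVN_NETWORK_CONNECT_ENABLE}" == "true" && "${OVN_NODE_MODE}" != "dpu-host" ]]; then
  network_connect_enabled_flag="--enable-network-connect"
fi
echo "node flags: ${network_connect_enabled_flag}"

# On a dpu-host node the same check leaves the flag empty, so the
# feature is never passed to ovnkube there:
OVN_NODE_MODE="dpu-host"
network_connect_enabled_flag=
if [[ "${OVN_NETWORK_CONNECT_ENABLE}" == "true" && "${OVN_NODE_MODE}" != "dpu-host" ]]; then
  network_connect_enabled_flag="--enable-network-connect"
fi
echo "dpu-host flags: ${network_connect_enabled_flag}"
```

So whichever templates carry the extra `"${OVN_NODE_MODE}" != "dpu-host"` clause simply never enable the feature on DPU hosts, even when the gate is on.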
{{- if .OVN_PRE_CONF_UDN_ADDR_ENABLE }}
enable-preconfigured-udn-addresses=true
{{- end }}
{{if .OVN_NETWORK_CONNECT_ENABLE }}
Something doesn't look right, where is the {{- end }}?
I think in the master code there were two `{{- end }}`s:

{{- end }}

so I got the closing one for free? But I do need to add the `-` in front of the `if`, my bad, will repush.
94dc54c to db2a8db (force-pushed)
Actionable comments posted: 1
♻️ Duplicate comments (1)
bindata/network/ovn-kubernetes/managed/004-config.yaml (1)
130-140: Missing `{{- end }}` makes enable-network-connect incorrectly depend on segmentation.
`enable-network-connect=true` is currently nested under `{{- if .OVN_NETWORK_SEGMENTATION_ENABLE }}` because the block is not closed. This will suppress the new flag unless segmentation is enabled.
Suggested fix:

 {{- if .OVN_NETWORK_SEGMENTATION_ENABLE }}
 {{- if not .OVN_MULTI_NETWORK_ENABLE }}
 enable-multi-network=true
 {{- end }}
 enable-network-segmentation=true
+{{- end }}
 {{- if .OVN_PRE_CONF_UDN_ADDR_ENABLE }}
 enable-preconfigured-udn-addresses=true
 {{- end }}
📜 Review details
📒 Files selected for processing (9)
- bindata/network/ovn-kubernetes/common/001-crd.yaml
- bindata/network/ovn-kubernetes/common/008-script-lib.yaml
- bindata/network/ovn-kubernetes/managed/004-config.yaml
- bindata/network/ovn-kubernetes/managed/ovnkube-control-plane.yaml
- bindata/network/ovn-kubernetes/self-hosted/004-config.yaml
- bindata/network/ovn-kubernetes/self-hosted/ovnkube-control-plane.yaml
- pkg/network/cluster_config_test.go
- pkg/network/ovn_kubernetes.go
- pkg/network/ovn_kubernetes_test.go
🚧 Files skipped from review as they are similar to previous changes (5)
- pkg/network/ovn_kubernetes.go
- bindata/network/ovn-kubernetes/self-hosted/004-config.yaml
- pkg/network/cluster_config_test.go
- bindata/network/ovn-kubernetes/common/008-script-lib.yaml
- pkg/network/ovn_kubernetes_test.go
🔇 Additional comments (3)
bindata/network/ovn-kubernetes/managed/004-config.yaml (1)
48-50: LGTM: feature-flagged enablement is cleanly scoped.
bindata/network/ovn-kubernetes/managed/ovnkube-control-plane.yaml (1)
207-210: Feature-gated NetworkConnect flag is wired consistently. Matches the existing flag pattern and cleanly injects the option into the ovnkube command. Also applies to: 256.
bindata/network/ovn-kubernetes/self-hosted/ovnkube-control-plane.yaml (1)
158-161: NetworkConnect flag addition is consistent with existing gating. No issues; the exec wiring is straightforward and aligned with the managed manifest. Also applies to: 213.
  networkPrefix:
    description: |-
      NetworkPrefix specifies the prefix length for every connected network.
      This prefix length should be equal to or longer than the length of the CIDR prefix.
      For example, if the CIDR is 10.0.0.0/16 and the networkPrefix is 24,
      then the connect subnet for each connected layer3 network will be 10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24 etc.
      For layer2 networks we will allocate the next available /networkPrefix range
      that is then split into /31 or /127 slices for each layer2 network
      A good practice is to set this to a value that ensures it contains more
      than twice the number of maximum nodes planned to be deployed in the cluster.
      Each node gets a /31 subnet for the layer3 networks, hence networkPrefix should
      contain enough IPs for 4 times the maximum nodes planned
      Example - recommended values:
      if you plan to deploy 10 nodes, set the networkPrefix to /26 (40+ IPs)
      if you plan to deploy 100 nodes, set the networkPrefix to /23 (400+ IPs)
      if you plan to deploy 1000 nodes, set the networkPrefix to /20 (4000+ IPs)
      if you plan to deploy 5000 nodes, set the networkPrefix to /17 (20000+ IPs)
      This field restricts the maximum number of nodes that can be deployed in the cluster
      and hence its good to plan this value carefully along with the CIDR.
    format: int32
    maximum: 127
    minimum: 1
    type: integer
required:
- cidr
- networkPrefix
type: object
x-kubernetes-validations:
- message: NetworkPrefix must be smaller than CIDR subnet
  rule: '!has(self.networkPrefix) || !isCIDR(self.cidr) || self.networkPrefix
    > cidr(self.cidr).prefixLength()'
Align networkPrefix validation with the documented “equal or longer” rule.
The description allows networkPrefix to be equal to the CIDR prefix length, but the validation rule requires it to be strictly greater. This will reject documented-valid configs (e.g., /16 with networkPrefix=16).
Suggested fix
- - message: NetworkPrefix must be smaller than CIDR subnet
-   rule: '!has(self.networkPrefix) || !isCIDR(self.cidr) || self.networkPrefix
-     > cidr(self.cidr).prefixLength()'
+ - message: NetworkPrefix must be equal to or longer than the CIDR prefix
+   rule: '!has(self.networkPrefix) || !isCIDR(self.cidr) || self.networkPrefix
+     >= cidr(self.cidr).prefixLength()'
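The sizing guidance in the field description can be sanity-checked by applying its 4-IPs-per-node rule of thumb directly. This is a rough sketch only: it reproduces the arithmetic of the recommendation, not the allocator's exact accounting.

```shell
#!/usr/bin/env bash
# Check which networkPrefix values fit a planned node count, using the
# documented rule of thumb: budget ~4 IPs per node inside the range.
planned_nodes=100
needed_ips=$(( planned_nodes * 4 ))          # 400 for 100 nodes

for prefix in 26 23 20 17; do
  ips=$(( 1 << (32 - prefix) ))              # total IPv4 addresses in a /prefix
  if (( ips >= needed_ips )); then
    echo "/${prefix} fits ${planned_nodes} nodes (${ips} IPs >= ${needed_ips})"
  else
    echo "/${prefix} too small for ${planned_nodes} nodes (${ips} IPs < ${needed_ips})"
  fi
done
```

For 100 nodes this flags /26 as too small and /23 as the first fit, which matches the recommended values in the description.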
oh nice catch :D I'll fix it u/s and do a small change for this
@tssurya Thanks! While you’re updating it, please also adjust the validation message to match the docs and swap the operator to allow equality.
YAML tweak:
x-kubernetes-validations:
- message: NetworkPrefix must be equal to or longer than the CIDR prefix
  rule: '!has(self.networkPrefix) || !isCIDR(self.cidr) || self.networkPrefix >= cidr(self.cidr).prefixLength()'

Notes:
- The dual-stack host-bits equality rule you already have remains valid.
- Keep the IPv4 cap rule (networkPrefix < 32) as-is.
Ping when pushed and I’ll re-check.
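The boundary case driving this suggestion can be checked outside CEL. A plain-bash approximation of the comparison, using the field names from the CRD snippet above:

```shell
#!/usr/bin/env bash
# Approximate the CEL rule in bash for the flagged boundary case:
# networkPrefix equal to the CIDR's prefix length.
cidr="10.0.0.0/16"
networkPrefix=16
cidrPrefix="${cidr#*/}"   # extract "16" from "10.0.0.0/16"

if (( networkPrefix > cidrPrefix )); then
  echo "strict '>' : accepted"
else
  echo "strict '>' : rejected"   # current rule wrongly rejects /16 with networkPrefix=16
fi

if (( networkPrefix >= cidrPrefix )); then
  echo "'>=' : accepted"         # matches the documented "equal or longer" contract
else
  echo "'>=' : rejected"
fi
```

With `>` the documented-valid config is rejected; with `>=` it is accepted, while shorter prefixes (e.g. networkPrefix=8 for a /16) still fail both forms.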
7207d29 to fe79b4b (force-pushed)
Actionable comments posted: 1
♻️ Duplicate comments (1)
bindata/network/ovn-kubernetes/common/001-crd.yaml (1)
5174-5205: Align `networkPrefix` validation with the "equal or longer" rule. The description allows equality, but the validation rule requires `networkPrefix` to be strictly greater than the CIDR prefix length, rejecting valid configs (e.g., /16 with networkPrefix=16).
Proposed fix:

- - message: NetworkPrefix must be smaller than CIDR subnet
-   rule: '!has(self.networkPrefix) || !isCIDR(self.cidr) || self.networkPrefix
-     > cidr(self.cidr).prefixLength()'
+ - message: NetworkPrefix must be equal to or longer than the CIDR prefix
+   rule: '!has(self.networkPrefix) || !isCIDR(self.cidr) || self.networkPrefix
+     >= cidr(self.cidr).prefixLength()'
📜 Review details
📒 Files selected for processing (9): the same nine files as the previous review.
🚧 Files skipped from review as they are similar to previous changes (2)
- pkg/network/cluster_config_test.go
- bindata/network/ovn-kubernetes/managed/004-config.yaml
🔇 Additional comments (11)
bindata/network/ovn-kubernetes/common/008-script-lib.yaml (2)
625-629: Network-connect gating matches existing feature flags.
735: Flag is correctly wired into the ovnkube invocation.
bindata/network/ovn-kubernetes/self-hosted/004-config.yaml (1)
55-57: Config template emits enable-network-connect under the gate.
bindata/network/ovn-kubernetes/self-hosted/ovnkube-control-plane.yaml (2)
158-161: Network-connect flag initialization looks consistent.
213: Flag inclusion in ovnkube args is correct.
pkg/network/ovn_kubernetes.go (1)
337-342: Feature gate is correctly surfaced into render data.
bindata/network/ovn-kubernetes/managed/ovnkube-control-plane.yaml (2)
207-210: Network-connect flag definition is consistent with other gates.
256: Flag is correctly wired into the command line.
pkg/network/ovn_kubernetes_test.go (3)
76-85: LGTM! The feature gate is correctly added to the disabled list with an appropriate comment indicating its DevPreview status.
971-978: LGTM! The feature gate is correctly added to `knownFeatureGates` in alphabetical order, ensuring proper test coverage for the NetworkConnect feature.
3869-3878: LGTM! Consistent with the default feature gates configuration, keeping NetworkConnect disabled when testing unrelated features.
{
	desc: "enable network connect (DevPreview)",
	expected: `
[default]
mtu="1500"
cluster-subnets="10.128.0.0/15/23,10.0.0.0/14/24"
encap-port="8061"
enable-lflow-cache=true
lflow-cache-limit-kb=1048576
enable-udp-aggregation=true
udn-allowed-default-services="default/kubernetes,openshift-dns/dns-default"
[kubernetes]
service-cidrs="172.30.0.0/16"
ovn-config-namespace="openshift-ovn-kubernetes"
apiserver="https://testing.test:8443"
host-network-namespace="openshift-host-network"
platform-type="GCP"
healthz-bind-address="0.0.0.0:10256"
dns-service-namespace="openshift-dns"
dns-service-name="dns-default"
[ovnkubernetesfeature]
egressip-node-healthcheck-port=9107
enable-network-segmentation=true
enable-network-connect=true
[gateway]
mode=shared
nodeport=true
[logging]
libovsdblogfile=/var/log/ovnkube/libovsdb.log
logfile-maxsize=100
logfile-maxbackups=5
logfile-maxage=0`,
	enabledFeatureGates: []configv1.FeatureGateName{
		apifeatures.FeatureGateNetworkSegmentation,
		apifeatures.FeatureGateNetworkConnect,
	},
},
Missing controlPlaneReplicaCount field.
All other test cases in this table specify controlPlaneReplicaCount: 2 (or 1 for SNO scenarios). This test case omits it, defaulting to 0, which may not represent a valid cluster configuration.
💡 Proposed fix
{
desc: "enable network connect (DevPreview)",
expected: `
[default]
...
logfile-maxage=0`,
enabledFeatureGates: []configv1.FeatureGateName{
apifeatures.FeatureGateNetworkSegmentation,
apifeatures.FeatureGateNetworkConnect,
},
+ controlPlaneReplicaCount: 2,
},
YAML was copied from https://github.com/ovn-kubernetes/ovn-kubernetes/blob/master/dist/templates/k8s.ovn.org_clusternetworkconnects.yaml.j2
Signed-off-by: Surya Seetharaman <suryaseetharaman.9@gmail.com>
fe79b4b to aeb049c (force-pushed)
/retest-required
@tssurya: The following tests failed, say
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Consumes openshift/api#2629