Remove kubeadm dependency when machinepool enabled #4868

Conversation

salasberryfin

/kind bug

What this PR does / why we need it:

When installing the provider with the MachinePool feature enabled, the controller watches Kubeadm resources, which forces the user to use Kubeadm exclusively as the bootstrap provider. As described in #4854, users may need to use a bootstrap provider of their choice. Other infrastructure providers, such as CAPA, don't have this dependency on Kubeadm. Unless there is a specific need for this watcher, I suggest we remove the dependency.
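
The dependency boils down to the controller setup unconditionally registering a watch on KubeadmConfig at startup, roughly like this simplified sketch (not the verbatim code):

```go
// Simplified sketch of the current behavior: setup always watches
// KubeadmConfig, so the kubeadm bootstrap CRDs must be installed
// or controller setup fails with an error.
if err := c.Watch(
	source.Kind(mgr.GetCache(), &kubeadmv1.KubeadmConfig{}),
	handler.EnqueueRequestsFromMapFunc(KubeadmConfigToInfrastructureMapFunc(ctx, ampr.Client, log)),
	predicate.ResourceVersionChangedPredicate{},
); err != nil {
	return errors.Wrap(err, "error watching KubeadmConfig")
}
```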

Which issue(s) this PR fixes:
Fixes #4854

Special notes for your reviewer:

TODOs:

  • squashed commits
  • includes documentation
  • adds unit tests

Release note:

kubeadm is no longer required when enabling `MachinePool`

Signed-off-by: Carlos Salas <carlos.salas@suse.com>
@k8s-ci-robot added the kind/bug (Categorizes issue or PR as related to a bug), release-note (Denotes a PR that will be considered when it comes time to generate release notes), and cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA) labels on May 22, 2024
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign sbueringer for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the size/XS label (Denotes a PR that changes 0-9 lines, ignoring generated files) on May 22, 2024
@salasberryfin
Author

/retest

codecov bot commented May 22, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 62.01%. Comparing base (3c61283) to head (0155ed9).
Report is 92 commits behind head on main.

Additional details and impacted files
```
@@            Coverage Diff             @@
##             main    #4868      +/-   ##
==========================================
- Coverage   62.04%   62.01%   -0.03%     
==========================================
  Files         201      201              
  Lines       16878    16873       -5     
==========================================
- Hits        10472    10464       -8     
- Misses       5623     5627       +4     
+ Partials      783      782       -1     
```


@jackfrancis
Contributor

/assign @nojnhuh @willie-yao

@mboersma added this to the v1.16 milestone on May 23, 2024
Contributor

@willie-yao left a comment


This change was made in #3134 to fix issues with custom data changes. @jackfrancis Since it looks like you reviewed that PR, wouldn't this break that use case?

@mboersma
Contributor

AFAICT merging this would break the fix in #3134, as without the watch the token won't be refreshed. Having said that, it's bad design to tie the CAPZ implementation to Kubeadm, as @salasberryfin points out.

Could we somehow make adding the watch conditional on whether kubeadm is actually being used for bootstrap? I assume it's not the import statements that are specifically problematic.
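
One hypothetical shape for that (a sketch, not actual CAPZ code; it treats the presence of the KubeadmConfig CRD as a rough proxy for kubeadm being the bootstrap provider):

```go
import (
	kubeadmv1 "sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
)

// kubeadmConfigInstalled is a hypothetical helper: it reports whether
// the KubeadmConfig CRD is known to the API server, which we could use
// as a signal that kubeadm is the bootstrap provider in this cluster.
func kubeadmConfigInstalled(mgr ctrl.Manager) bool {
	gvk := kubeadmv1.GroupVersion.WithKind("KubeadmConfig")
	_, err := mgr.GetRESTMapper().RESTMapping(gvk.GroupKind(), gvk.Version)
	return err == nil
}
```

Setup would then add the KubeadmConfig watch only when kubeadmConfigInstalled(mgr) returns true, with the caveat that the REST mapper may cache discovery results, so a CRD installed after the controller starts wouldn't be picked up.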

Or is there another fix we can imagine for #3134 that avoids the watch entirely?

@salasberryfin
Author

Hey @mboersma, thanks for the detailed explanation. I think it's worth investigating how to avoid this dependency, since it blocks anyone who wants to use CAPZ + MachinePool without Kubeadm. If there is a way to make the watch conditional, as you mentioned, that would be great, but I don't know whether such a change is feasible. Do you have any proposals for implementing it? Perhaps other CAPI projects have components that solve a similar problem?

@mboersma
Contributor

mboersma commented Jun 3, 2024

any other components in other CAPI projects that implement a similar solution?

I've done some looking around, but don't yet see an obvious way to know about the BootstrapProvider from down in the AMP controller code. Nor have I found any relevant code in CAPI...but it's entirely possible I missed something.

What if we moved the kubeadm watch code out of this block where we call Build() and into its own code block just afterward, where it can log a warning instead of returning an error?

@mboersma
Contributor

@salasberryfin maybe moving the kubeadm watch code into its own block that just logs the error would work? Something like this:

```go
...
		// watch for changes in AzureManagedControlPlane resources
		Watches(
			&infrav1.AzureManagedControlPlane{},
			handler.EnqueueRequestsFromMapFunc(azureManagedControlPlaneMapper),
		).
		Build(r)
	if err != nil {
		return errors.Wrap(err, "error creating controller")
	}

	// watch for changes in KubeadmConfig to sync bootstrap token
	if err := c.Watch(
		source.Kind(mgr.GetCache(), &kubeadmv1.KubeadmConfig{}),
		handler.EnqueueRequestsFromMapFunc(KubeadmConfigToInfrastructureMapFunc(ctx, ampr.Client, log)),
		predicate.ResourceVersionChangedPredicate{},
	); err != nil {
		log.Error(err, "failed adding a watch for KubeadmConfig")
	}

	if err := c.Watch(
		source.Kind(mgr.GetCache(), &infrav1exp.AzureMachinePoolMachine{}),
...
```

@alexander-demicev
Contributor

@mboersma That should work. Can we maybe log a warning instead of an error?

@mboersma
Contributor

maybe log a warning instead of an error?

Yes, it's meant to be a warning. I think this logger only gives us Info() and Error(), so maybe use Info() instead?
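
Roughly, under the same assumptions as my snippet above:

```go
// Same watch as before, but a failure is surfaced at info level
// instead of aborting controller setup.
if err := c.Watch(
	source.Kind(mgr.GetCache(), &kubeadmv1.KubeadmConfig{}),
	handler.EnqueueRequestsFromMapFunc(KubeadmConfigToInfrastructureMapFunc(ctx, ampr.Client, log)),
	predicate.ResourceVersionChangedPredicate{},
); err != nil {
	log.Info("skipping KubeadmConfig watch; bootstrap tokens will not be refreshed", "reason", err.Error())
}
```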

@alexander-demicev
Contributor

@mboersma +1. I also think this watch has to be configurable depending on which bootstrap provider people are using. Kubeadm is the most popular one, but the same problem appears with the k3s provider, for example. Ideally, we need to find a way to make it configurable on the user side. I believe the issue will appear for every other provider that supports machine pools.
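
For example, something like a manager flag could gate the watch (the flag name is hypothetical, not an existing CAPZ option):

```go
import "flag"

// Hypothetical flag: lets users on k3s or other non-kubeadm bootstrap
// providers opt out of the KubeadmConfig watch entirely.
var enableKubeadmWatch = flag.Bool(
	"enable-kubeadm-bootstrap-watch",
	true,
	"watch KubeadmConfig resources to refresh bootstrap tokens; disable for non-kubeadm bootstrap providers",
)
```

Controller setup would then register the KubeadmConfig watch only when *enableKubeadmWatch is set.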

@mboersma
Contributor

See #4931

@k8s-ci-robot added the needs-rebase label (Indicates a PR cannot be merged because it has merge conflicts with HEAD) on Jun 28, 2024
@k8s-ci-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@nojnhuh
Contributor

nojnhuh commented Jun 28, 2024

Per #4931 (review),

/close

@k8s-ci-robot
Contributor

@nojnhuh: Closed this PR.

In response to this:

Per #4931 (review),

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
