
host is set to localhost when loading from kube config file in v12 #1284

Closed
limonkufu opened this issue Oct 15, 2020 · 17 comments

Labels
kind/bug Categorizes issue or PR as related to a bug.
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@limonkufu

What happened (please include outputs or screenshots):
We are using the following code to set up the client:

import logging

from kubernetes import client, config, watch

config.load_kube_config(kubeconfig)  # kubeconfig: path to our kube-config file
host = config.kube_config.Configuration().host
logging.info("HOST INFO: {}".format(host))

The kubeconfig file has the server field set correctly. This works correctly with version 11.0.0, but after changing to version 12.0.0 it returns:

HOST INFO: http://localhost

I tried to investigate it; according to kube_config.py, the host should be set correctly in the _load_cluster_info method, but it is not being applied to the configuration.

What you expected to happen:
The host URL to be set according to the kube-config file.

How to reproduce it (as minimally and precisely as possible):

  • Install version 11.0.0 and run the above code with a valid kube-config file
  • See the host URL set correctly
  • Update to version 12.0.0 and run the code again
  • See the host URL incorrectly set to http://localhost

Anything else we need to know?:

Environment:

  • Kubernetes version (kubectl version): v1.18.2
  • OS (e.g., MacOS 10.13.6): Ubuntu 18.04
  • Python version (python --version): 3.7
  • Python client version (pip list | grep kubernetes): 12.0.0
@limonkufu limonkufu added the kind/bug Categorizes issue or PR as related to a bug. label Oct 15, 2020
@felixhuettner

felixhuettner commented Oct 15, 2020

We just encountered the same thing. It seems this change now requires you to explicitly get the default configuration.

So your line would need to change to something like:
host = config.kube_config.Configuration.get_default_copy().host

Something like this should probably be mentioned in the release notes.
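
Put together, a minimal sketch of the snippet from the report updated for v12 (assuming kubeconfig is the path variable from the original post):

import logging

from kubernetes import config

# In v12 the load functions populate the *default* Configuration object;
# a fresh Configuration() no longer reflects it, so copy the default instead.
config.load_kube_config(kubeconfig)
host = config.kube_config.Configuration.get_default_copy().host
logging.info("HOST INFO: {}".format(host))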

@limonkufu
Author

@felixhuettner thanks, this solves the issue, but needing get_default_copy() instead of a direct Configuration() call seems counter-intuitive after loading the configuration from a file. (Note that this change is also not documented in the readme/examples.)

@rasisuku

I am also facing this issue.

@chancez

chancez commented Oct 21, 2020

At the very least, this needs a mention in the changelog so that users know how to fix this.

@ntavares

load_incluster_config() broke too in 12.0.0. It's not clear to me yet how to retain the same mechanism...

@DontWorry33

My process for setting up my configuration when using load_incluster_config() was:

from kubernetes import client, config, utils
from kubernetes.client import Configuration

config.load_incluster_config()
c = Configuration()
c.assert_hostname = False
Configuration.set_default(c)

which stopped working in 12.0.0: a fresh Configuration() no longer carries the settings that load_incluster_config() loaded, so setting it as the default discards them. But if I don't set the new Configuration object as default and only have:

config.load_incluster_config()

it seems to use the correct cluster configuration.
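
If you still need the assert_hostname tweak, a sketch of what should work under v12's semantics is to copy the populated default, modify it, and set it back (this mirrors the get_default_copy() pattern mentioned above; not an officially documented recipe):

from kubernetes import client, config
from kubernetes.client import Configuration

config.load_incluster_config()        # populates the default Configuration
c = Configuration.get_default_copy()  # the copy keeps the loaded in-cluster settings
c.assert_hostname = False             # apply the local tweak
Configuration.set_default(c)          # make the modified copy the default

v1 = client.CoreV1Api()               # clients created now pick up the tweak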

@roycaihw
Member

/assign
cc @palnabarun @yliaog

collivier added a commit to collivier/xrally-kubernetes that referenced this issue Nov 21, 2020
As highlighted by [1], kubernetes v12.0.0 and newer asks for the use of
Configuration.get_default_copy() [2].

It detects if Configuration.get_default_copy() is defined to preserve
the backward compatibility.

[1] kubernetes-client/python#1284
[2] kubernetes-client/python@b4d11b0#diff-59aff6ce4d28aa662f8b411b9d0dfe4f3e949c32a5edaf8e08905b58e7a41ee3L69-R71

Signed-off-by: Cédric Ollivier <cedric.ollivier@orange.com>
andreykurilin pushed a commit to xrally/xrally-kubernetes that referenced this issue Nov 24, 2020
As highlighted by [1], kubernetes v12.0.0 and newer asks for the use of
Configuration.get_default_copy() [2]. It also removes the deprecated
backward compatibility and then forces kubernetes >= 12.0.0.

It also renames a few imports as warned by 12.0.0 and older.

[1] kubernetes-client/python#1284
[2] kubernetes-client/python@b4d11b0#diff-59aff6ce4d28aa662f8b411b9d0dfe4f3e949c32a5edaf8e08905b58e7a41ee3L69-R71

Signed-off-by: Cédric Ollivier <cedric.ollivier@orange.com>

Co-authored-by: Cédric Ollivier <cedric.ollivier@orange.com>
@LuCatIsFun

LuCatIsFun commented Dec 24, 2020

I have the same problem. When I use client 8.0.0 to get the Kubernetes version info, it works normally.

from sdk.v8.kubernetes import client
from sdk.v8.kubernetes import config

config.load_kube_config()
configuration = client.Configuration()
configuration.verify_ssl = False

api_client = client.ApiClient(configuration=configuration)
version_api = client.VersionApi(api_client)
print(version_api.get_code())

# output
{
   'build_date': '2019-04-22T11:34:20Z',
   'compiler': 'gc',
   'git_commit': '8cb561c',
   'git_tree_state': '',
   'git_version': 'v1.12.6-aliyun.1',
   'go_version': 'go1.10.8',
   'major': '1',
   'minor': '12+',
   'platform': 'linux/amd64'
}

but when I use client 12.0.0, an exception occurs:

urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /version/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9c7cf6f070>: Failed to establish a new connection: [Errno 61] Connection refused'))

It looks like a configuration error. After referring to the answer above, I modified it to:

from sdk.v12.kubernetes import client
from sdk.v12.kubernetes import config

config.load_kube_config()
configuration = client.Configuration.get_default_copy()
configuration.verify_ssl = False

api_client = client.ApiClient(configuration=configuration)
version_api = client.VersionApi(api_client)
print(version_api.get_code())

That solved my problem. I think that part of the configuration logic has changed.

@riceluxs1t

@ntavares I confirm this. Have you found a workaround that works with either v12.0.0 or v12.0.1? I am currently using the following code snippet and it fails:

from kubernetes import config, client
from kubernetes.client import ApiClient
from kubernetes.dynamic import DynamicClient

config.load_incluster_config()
configuration = client.Configuration.get_default_copy()

k8s_client = ApiClient(configuration=configuration)
dyn_client = DynamicClient(k8s_client)
dyn_client.resources.get(kind="Secret").create(...)

[screenshot of the resulting error traceback omitted]

Version: 12.0.1
Python Version: 3.7.6
K8s version: v1.18.9

@chrisegb

chrisegb commented Feb 19, 2021

I fixed this by downgrading the Kubernetes Python client from version 12 to 11.
Check the installed client version with:

pip3 list | grep kubernetes

This problem is related to client version 12; if you see version 12, downgrade to 11 with:

pip3 install kubernetes==11

@limonkufu
Author

limonkufu commented Feb 19, 2021

@chrisegb We are doing the exact same thing right now, but this needs to be addressed in v12 as well so that we can eventually upgrade the client version.

olegeech-me added a commit to olegeech-me/rally-plugins that referenced this issue Mar 30, 2021
Update the Configuration logic for kubernetes v12.0.0

As highlighted by [1], kubernetes v12.0.0 and newer asks for the use of
Configuration.get_default_copy() [2]. It also removes the deprecated
backward compatibility and then forces kubernetes >= 12.0.0.

It also renames a few imports as warned by 12.0.0 and older.

[1] kubernetes-client/python#1284
[2] kubernetes-client/python@b4d11b0#diff-59aff6ce4d28aa662f8b411b9d0dfe4f3e949c32a5edaf8e08905b58e7a41ee3L69-R71
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 20, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 19, 2021
@TobiasGoerke

TobiasGoerke commented Jul 5, 2021

I'm experiencing the same issue and came up with the following workaround:

from kubernetes.config.kube_config import (
    KUBE_CONFIG_DEFAULT_LOCATION, KubeConfigLoader, KubeConfigMerger)

# Load and merge the kube-config file(s) the same way kubernetes.config does
config_loader = KubeConfigLoader(
    config_dict=KubeConfigMerger(KUBE_CONFIG_DEFAULT_LOCATION).config,
    config_base_path=None)

# Look up the server URL of the cluster behind the current context
current_cluster_name = config_loader.current_context["context"]["cluster"]
current_cluster_url = [
    cluster.value["cluster"]["server"]
    for cluster in config_loader._config.value["clusters"]
    if cluster.value["name"] == current_cluster_name][0]

It's not pretty and I am not sure how portable this solution is. However, it uses code similar to what kubernetes.config does internally, and it works for me. Hope this helps.

@sxddhxrthx

It seems to work when the kubeconfig file is loaded before instantiating the Kubernetes API client. But keep in mind that the kubeconfig file has to be loaded from the default .kube directory and not from anywhere else.

from kubernetes import client, config

config.load_kube_config()

kbn_client = client.CoreV1Api()

ctx_namespaces = kbn_client.list_namespace(watch=False, pretty=True)
namespace_list = [i.metadata.name for i in ctx_namespaces.items]
print(namespace_list)

configuration = client.Configuration()
print(configuration.host)

Output:

> ['default', 'kube-node-lease', 'kube-public', 'kube-system', 'cloud-ctx-namespace1', 'cloud-ctx-namespace2']
> 'https://kubernetes.cloud-provider.com'

The issue I am still facing is loading the kubeconfig file from any location other than the default .kube directory.
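
One approach that might address this (untested on my side) is passing the path explicitly through load_kube_config's config_file argument and then reading the default copy, per the pattern earlier in this thread; the path below is just a placeholder:

from kubernetes import client, config

# load_kube_config accepts an explicit path via config_file
config.load_kube_config(config_file="/path/to/custom/kubeconfig")

# Under v12, read the populated default rather than a fresh Configuration()
configuration = client.Configuration.get_default_copy()
print(configuration.host)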

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

(the triage message quoted above)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
