This repository has been archived by the owner on Sep 7, 2022. It is now read-only.

Can't install on CoreOS 1632.3.0 #460

Open
mbaquer6 opened this issue Feb 26, 2018 · 10 comments

@mbaquer6

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
Hi,
I was trying to install on CoreOS 1632.3.0, but the install fails because the version-check regex can't handle the "+" in the "1.8+" version string.
I removed the check that chooses between "tpr" and "crd" so that it always uses "crd" (the resource definition supported in Kubernetes 1.8), but the install then fails in phase 6:
What you expected to happen:
The installation finishes.
How to reproduce it (as minimally and precisely as possible):
Try to install on Kubernetes 1.8+ (CoreOS 1632.3.0).
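The failure mode can be sketched as follows (a hypothetical minimal example, not the installer's actual code): kubectl reports the minor version of v1.8.4+coreos.0 as "8+", and a strict numeric comparison on that string errors out.

```shell
#!/bin/bash
# Hypothetical sketch: on v1.8.4+coreos.0 the reported minor version is "8+".
MINOR='8+'

# A strict integer test such as `[ "$MINOR" -ge 8 ]` fails with
# "integer expression expected". Stripping the non-digit suffix first
# makes the TPR/CRD selection tolerant of "+" versions:
MINOR_NUM="${MINOR%%[^0-9]*}"

if [ "$MINOR_NUM" -ge 8 ]; then
    echo "use crd"   # CRDs are supported from Kubernetes 1.8 onward
else
    echo "use tpr"
fi
```

Running this prints "use crd" instead of aborting on the "+".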

Anything else we need to know?:
This is the log of a daemonset in phase 6:

  • echo '[INFO] successfully created vSphere.conf file at : /host/tmp/vsphere.conf'
  • PHASE='[PHASE 6] Update Manifest files and service configuration file'
    [INFO] successfully created vSphere.conf file at : /host/tmp/vsphere.conf
  • update_VcpConfigStatus vcp-daementset-5dh7m '[PHASE 6] Update Manifest files and service configuration file' RUNNING ''
  • POD_NAME=vcp-daementset-5dh7m
  • PHASE='[PHASE 6] Update Manifest files and service configuration file'
  • STATUS=RUNNING
  • ERROR=
  • '[' RUNNING == FAILED ']'
  • echo 'apiVersion: "vmware.com/v1"
    kind: VcpStatus
    metadata:
    name: vcp-daementset-5dh7m
    spec:
    phase: "[PHASE' '6]' Update Manifest files and service configuration 'file"
    status: "RUNNING"
    error: ""'
  • retry_attempt=1
  • kubectl apply -f /tmp/vcp-daementset-5dh7m_daemonset_status_update.yaml
  • ls /host//etc/kubernetes/manifests/kube-apiserver.yaml
  • '[' 1 -eq 0 ']'
  • ls /host//etc/kubernetes/manifests/kube-controller-manager.yaml
  • '[' 1 -eq 0 ']'
  • ls /host//etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  • '[' 1 -eq 0 ']'
  • IS_CONFIGURATION_UPDATED=false
  • UPDATED_MANIFEST_FILE=/host/tmp/kube-controller-manager.json
  • '[' yaml == yaml ']'
  • UPDATED_MANIFEST_FILE=/host/tmp/kube-controller-manager.yaml
  • '[' -f /host/tmp/kube-controller-manager.yaml ']'
  • UPDATED_MANIFEST_FILE=/host/tmp/kube-apiserver.json
  • '[' yaml == yaml ']'
  • UPDATED_MANIFEST_FILE=/host/tmp/kube-apiserver.yaml
  • '[' -f /host/tmp/kube-apiserver.yaml ']'
  • '[' -f /host/tmp/vsphere.conf ']'
  • cp /host/tmp/vsphere.conf /host//etc/vsphereconf/vsphere.conf
  • '[' 0 -ne 0 ']'
  • '[' -f /host/tmp/kubelet-service-configuration ']'
  • '[' false == false ']'
  • ERROR_MSG='No configuration change is observed'
  • update_VcpConfigStatus vcp-daementset-5dh7m '[PHASE 6] Update Manifest files and service configuration file' FAILED 'No configuration change is observed'
  • POD_NAME=vcp-daementset-5dh7m
  • PHASE='[PHASE 6] Update Manifest files and service configuration file'
  • STATUS=FAILED
  • ERROR='No configuration change is observed'
  • '[' FAILED == FAILED ']'
  • echo '[ERROR] No configuration change is observed'
    [ERROR] No configuration change is observed
  • echo 'apiVersion: "vmware.com/v1"
    kind: VcpStatus
    metadata:
    name: vcp-daementset-5dh7m
    spec:
    phase: "[PHASE' '6]' Update Manifest files and service configuration 'file"
    status: "FAILED"
    error: "No' configuration change is 'observed"'
  • retry_attempt=1
  • kubectl apply -f /tmp/vcp-daementset-5dh7m_daemonset_status_update.yaml
  • exit 29
    [INFO] Done with all tasks. Sleeping Infinity.
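The trace above suggests phase 6 only marks the configuration as updated when it finds rewritten manifest files under /host/tmp. A minimal reconstruction of that decision (the loop structure is an assumption; file paths and messages are taken from the log):

```shell
#!/bin/bash
# Reconstruction of the phase-6 check visible in the trace (assumption:
# the real script sets the flag when an updated manifest file exists).
IS_CONFIGURATION_UPDATED=false

for f in /host/tmp/kube-controller-manager.yaml /host/tmp/kube-apiserver.yaml; do
    if [ -f "$f" ]; then
        IS_CONFIGURATION_UPDATED=true   # an updated manifest was produced
    fi
done

# On Tectonic there are no static manifests under /etc/kubernetes/manifests,
# so no updated copies are generated and this branch is always taken:
if [ "$IS_CONFIGURATION_UPDATED" == false ]; then
    echo "[ERROR] No configuration change is observed"
    # the real daemonset script then exits with code 29
fi
```

This would explain why the pod reports FAILED with exit 29 even though vsphere.conf itself was copied successfully.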

Environment:

  • Kubernetes version (use kubectl version): 1.9
  • Cloud provider or hardware configuration: vSphere
  • OS (e.g. from /etc/os-release): CoreOS 1632.3.0
  • Kernel (e.g. uname -a): 4.14.19-coreos
  • Install tools:
  • Others:

Thanks.


pshahzeb commented Mar 5, 2018

related to #429

@venilnoronha

@mbaquer6 could you please post the exact versions of your kubectl and kubelet? You could use kubectl version and kubelet --version for the same.

@abrarshivani

@mbaquer6 Can you please let us know whether you used kubeadm to configure your Kubernetes cluster?
Also, can you check whether the path "/etc/kubernetes/manifests/kube-apiserver.yaml" exists on the master?

@venilnoronha

@mbaquer6 we tried to test the enable-vcp-uxi script end-to-end, and it worked fine.

Following is our setup:

  • vSphere v6.5
  • ESXi v6.5
  • Master VM OS: CoreOS 1576.5.0
  • Node VM OS: CoreOS 1576.5.0
  • Kubernetes v1.9.4
  • Kubeadm v1.9.4

Also, following is the output produced by kubectl get VcpSummary --namespace=vmware -o json.

{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "vmware.com/v1",
            "kind": "VcpSummary",
            "metadata": {
                "annotations": {
                    "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"vmware.com/v1\",\"kind\":\"VcpSummary\",\"metadata\":{\"annotations\":{},\"name\":\"vcpinstallstatus\",\"namespace\":\"\"},\"spec\":{\"nodes_being_configured\":\"0\",\"nodes_failed_to_configure\":\"0\",\"nodes_in_phase1\":\"0\",\"nodes_in_phase2\":\"0\",\"nodes_in_phase3\":\"0\",\"nodes_in_phase4\":\"0\",\"nodes_in_phase5\":\"0\",\"nodes_in_phase6\":\"0\",\"nodes_in_phase7\":\"0\",\"nodes_sucessfully_configured\":\"2\",\"total_number_of_nodes\":\"2\"}}\n"
                },
                "clusterName": "",
                "creationTimestamp": "2018-03-13T20:08:45Z",
                "generation": 0,
                "name": "vcpinstallstatus",
                "namespace": "",
                "resourceVersion": "5741",
                "selfLink": "/apis/vmware.com/v1/vcpinstallstatus",
                "uid": "5567ec52-26fa-11e8-9780-005056a1839f"
            },
            "spec": {
                "nodes_being_configured": "0",
                "nodes_failed_to_configure": "0",
                "nodes_in_phase1": "0",
                "nodes_in_phase2": "0",
                "nodes_in_phase3": "0",
                "nodes_in_phase4": "0",
                "nodes_in_phase5": "0",
                "nodes_in_phase6": "0",
                "nodes_in_phase7": "0",
                "nodes_sucessfully_configured": "2",
                "total_number_of_nodes": "2"
            }
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": "",
        "selfLink": ""
    }
}


mbaquer6 commented Mar 13, 2018

Thanks for all your responses
@venilnoronha I deployed the cluster with Tectonic 1.8.4 using the CLI (Terraform), which installs Kubernetes 1.8.4 on Container Linux by CoreOS stable (1632.3.0).
vSphere 6.0
I saw in the guides that the manual installation of the vSphere cloud provider differs between Kubernetes 1.9 and 1.8. I tried that too, but I don't have the /etc/kubernetes/manifests/kube-apiserver.yaml files, so I edited the running kube-apiserver config to add the cloud-provider lines, and edited the kube-controller-manager as well, but that didn't work either.

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T20:00:41Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.4+coreos.0", GitCommit:"4292f9682595afddbb4f8b1483673449c74f9619", GitTreeState:"clean", BuildDate:"2017-11-21T17:22:25Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
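For reference, the manual setup described above adds roughly these settings to both the kube-apiserver and kube-controller-manager invocations (a sketch; the flag placement is an assumption, and the path matches the one the installer's log copies vsphere.conf to):

```shell
# Standard Kubernetes cloud-provider flags (path is an assumed mount point):
--cloud-provider=vsphere
--cloud-config=/etc/vsphereconf/vsphere.conf
```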

@mbaquer6

@abrarshivani I used Tectonic to configure the cluster.
/etc/kubernetes/manifests/kube-apiserver.yaml doesn't exist on the masters; there is just the pod checkpointer.json.

@abrarshivani

@mbaquer6 Where is your running kube-apiserver config located?

@mbaquer6

@abrarshivani I edited it with kubectl edit daemonset kube-apiserver -n=kube-system

@abrarshivani

@mbaquer6 Can you please share checkpointer.json? Also, do you know where the Kubernetes manifests are located on the master?

@mbaquer6

Looking at the files that Tectonic created, I see all the manifests on my machine but not on the master. I'll try copying them to the masters, re-run this tool, and see what happens.
