
errcode:(executable file not found in $PATH)** output:() #93

Closed
wrahmann opened this issue Jan 29, 2022 · 12 comments · Fixed by #96

Comments

@wrahmann

Hi,

I need some support. I used the example scripts to attach a pod with volumes using csi-driver-iscsi, but I am seeing the following error:

Normal Scheduled 9m4s default-scheduler Successfully assigned default/nginx to worker3
Warning FailedMount 7m1s kubelet Unable to attach or mount volumes: unmounted volumes=[iscsi-volume], unattached volumes=[default-token-lpfwk iscsi-volume]: timed out waiting for the condition
Warning FailedMount 49s (x12 over 9m4s) kubelet MountVolume.SetUp failed for volume "iscsiplugin-pv" : rpc error: code = Internal desc = format of disk "/dev/sdb" failed: type:("ext4") target:("/var/lib/kubelet/pods/3c4c4497-2867-42bc-917a-1041fa5dfc09/volumes/kubernetes.io~csi/iscsiplugin-pv/mount") options:("rw,defaults") errcode:(executable file not found in $PATH) output:()
Warning FailedMount 10s (x3 over 4m44s) kubelet Unable to attach or mount volumes: unmounted volumes=[iscsi-volume], unattached volumes=[iscsi-volume default-token-lpfwk]: timed out waiting for the condition

Can anyone point out what executable is required?

Wajih
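For context, the format step in the Kubernetes mount helper shells out to mkfs.<fstype> for the requested filesystem, so the missing executable here is most likely mkfs.ext4. A minimal sketch for checking which of the usual binaries are visible on $PATH, run inside the plugin container (e.g. via kubectl exec into the iscsi container, as shown later in this thread):

```shell
# Check which formatting/mount binaries the container can actually see.
# The mount helper invokes mkfs.<fstype> (mkfs.ext4 for an ext4 volume),
# so a "missing" line for it would explain the $PATH error above.
for bin in mkfs mkfs.ext4 mkfs.xfs mount umount; do
  if path=$(command -v "$bin" 2>/dev/null); then
    echo "found:   $bin -> $path"
  else
    echo "missing: $bin"
  fi
done
```

Any binary reported as missing would have to be added to the driver image (or the image build that strips it fixed).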

@andyzhangx
Member

@humblec looks like driver image build is broken, which driver image are you using? @wrahmann

@humblec
Contributor

humblec commented Jan 31, 2022

@humblec looks like driver image build is broken, which driver image are you using? @wrahmann

Yeah, it looks like it.

But the error says mkfs does not exist in the container, which is confusing.
@wrahmann, can you please change the image to v0.1.0 instead of canary and give it a try? Meanwhile, let me check in my setup.
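For reference, switching the tag means changing the iscsi container image in the node DaemonSet. A minimal strategic-merge patch, assuming the DaemonSet is named csi-iscsi-node in kube-system as in the deploy manifests:

```yaml
# patch-iscsiplugin-image.yaml -- apply with:
#   kubectl -n kube-system patch ds csi-iscsi-node -p "$(cat patch-iscsiplugin-image.yaml)"
spec:
  template:
    spec:
      containers:
        - name: iscsi
          image: gcr.io/k8s-staging-sig-storage/iscsiplugin:v0.1.0
```

Running kubectl edit ds csi-iscsi-node -n kube-system and changing the image field by hand is equivalent.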

@humblec
Contributor

humblec commented Jan 31, 2022

[root@hchiramm csi-driver-iscsi]# kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
csi-iscsi-node-kckcg   3/3     Running   0          5m13s
[root@hchiramm csi-driver-iscsi]# kubectl exec -ti csi-iscsi-node-kckcg -c iscsi sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# mkfs --help

Usage:
 mkfs [options] [-t <type>] [fs-options] <device> [<size>]

Make a Linux filesystem.

Options:
 -t, --type=<type>  filesystem type; when unspecified, ext2 is used
     fs-options     parameters for the real filesystem builder
     <device>       path to the device to be used
     <size>         number of blocks to be used on the device
 -V, --verbose      explain what is being done;
                      specifying -V more than once will cause a dry-run
 -h, --help         display this help
 -V, --version      display version

For more details see mkfs(8).
# exit	
[root@hchiramm csi-driver-iscsi]# kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
csi-iscsi-node-kckcg   3/3     Running   0          5m42s
[root@hchiramm csi-driver-iscsi]# kubectl describe pod csi-iscsi-node-kckcg |grep -i image
    Image:         k8s.gcr.io/sig-storage/livenessprobe:v2.1.0
    Image ID:      docker-pullable://k8s.gcr.io/sig-storage/livenessprobe@sha256:b0a9eb3e489150d79bff1c97681f34c579e6fcb1a4ed0289c19180eb83ebc83d
    Image:         k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0
    Image ID:      docker-pullable://k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f
    Image:         gcr.io/k8s-staging-sig-storage/iscsiplugin:v0.1.0
    Image ID:      docker-pullable://gcr.io/k8s-staging-sig-storage/iscsiplugin@sha256:8ff2016fd563f0365ccf6ad4ab09f09a1941cbede5c62c5692e95589c0ed41de
  Normal  Pulled     5m50s  kubelet            Container image "k8s.gcr.io/sig-storage/livenessprobe:v2.1.0" already present on machine
  Normal  Pulled     5m50s  kubelet            Container image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0" already present on machine
  Normal  Pulling    5m50s  kubelet            Pulling image "gcr.io/k8s-staging-sig-storage/iscsiplugin:v0.1.0"
  Normal  Pulled     5m36s  kubelet            Successfully pulled image "gcr.io/k8s-staging-sig-storage/iscsiplugin:v0.1.0" in 13.39723499s
[root@hchiramm csi-driver-iscsi]# 

@wrahmann
Author

wrahmann commented Feb 1, 2022

@humblec, I will try changing the image to v0.1.0 as you recommended and will update here soon.

@wrahmann
Author

wrahmann commented Feb 1, 2022

@humblec, I changed it to v0.1.0 but the error remains the same:

Normal Scheduled 6s default-scheduler Successfully assigned default/nginx to worker3
Warning FailedMount 2s (x4 over 5s) kubelet MountVolume.SetUp failed for volume "iscsiplugin-pv" : rpc error: code = Internal desc = format of disk "/dev/sdb" failed: type:("ext4") target:("/var/lib/kubelet/pods/b6601410-cbe3-47b3-9240-306d39178f0b/volumes/kubernetes.io~csi/iscsiplugin-pv/mount") options:("rw,defaults") errcode:(executable file not found in $PATH) output:()

@andyzhangx
Member

andyzhangx commented Feb 4, 2022

try andyzhangx/iscsi-csi:v0.2.0 which was built by #96

@humblec
Contributor

humblec commented Feb 4, 2022

@wrahmann or you can go back to the canary image and check once again. If it still fails, please share the spec of the node where the node plugin is running: OS, release version, etc.

@wrahmann
Author

wrahmann commented Feb 4, 2022

@andyzhangx / @humblec,

I can check both images and will update the results here

@humblec
Contributor

humblec commented Feb 4, 2022

That would be awesome, thanks!

@wrahmann
Author

@humblec

I tried the canary version as well. I just used the latest csi-iscsi-node.yaml from GitHub. The problem is that the pods seem to be in CrashLoopBackOff:

wajih@master:~/Training/CSI$ kubectl get pods -n kube-system
NAME                                       READY   STATUS             RESTARTS   AGE
calico-kube-controllers-558995777d-g2z9z   1/1     Running            20         47d
calico-node-9lc7r                          1/1     Running            20         47d
calico-node-b5tfr                          1/1     Running            21         47d
calico-node-ckmm5                          1/1     Running            20         47d
calico-node-v29g4                          1/1     Running            20         47d
coredns-f9fd979d6-22zs9                    1/1     Running            20         47d
coredns-f9fd979d6-pns62                    1/1     Running            20         47d
csi-iscsi-node-p97rz                       2/3     CrashLoopBackOff   37         19h
csi-iscsi-node-sxvsd                       2/3     Error              9          19h
csi-iscsi-node-t4v96                       2/3     Error              9          19h
etcd-master                                0/1     Running            20         47d
kube-apiserver-master                      0/1     Running            22         47d
kube-controller-manager-master             0/1     Running            22         47d
kube-proxy-57n78                           1/1     Running            20         47d
kube-proxy-5w5n5                           1/1     Running            20         47d
kube-proxy-7h5mk                           1/1     Running            20         47d
kube-proxy-ntvrb                           1/1     Running            20         47d

wajih@master:~/Training/CSI$ kubectl describe pod -n kube-system csi-iscsi-node-p97rz
Name:           csi-iscsi-node-p97rz
Namespace:      kube-system
Priority:       0
Node:           worker1/172.16.1.131
Start Time:     Sat, 12 Feb 2022 05:34:56 +1100
Labels:         app=csi-iscsi-node
                controller-revision-hash=54f67b876c
                pod-template-generation=1
Annotations:    <none>
Status:         Running
IP:             172.16.1.131
IPs:
  IP:           172.16.1.131
Controlled By:  DaemonSet/csi-iscsi-node
Containers:
  liveness-probe:
    Container ID:   docker://606026eb4147807a55448828dca9fd4229f726a0c7d7963cd7196cf7cfbe4440
    Image:          k8s.gcr.io/sig-storage/livenessprobe:v2.1.0
    Image ID:       docker-pullable://k8s.gcr.io/sig-storage/livenessprobe@sha256:b0a9eb3e489150d79bff1c97681f34c579e6fcb1a4ed0289c19180eb83ebc83d
    Port:           <none>
    Host Port:      <none>
    Args:
      --csi-address=/csi/csi.sock
      --probe-timeout=3s
      --health-port=29753
      --v=2
    State:          Running
      Started:      Sun, 13 Feb 2022 10:10:27 +1100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Sat, 12 Feb 2022 05:34:58 +1100
      Finished:     Sat, 12 Feb 2022 05:40:55 +1100
    Ready:          True
    Restart Count:  1
    Limits:
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Environment:  <none>
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-glrsh (ro)
  node-driver-registrar:
    Container ID:   docker://85564d0254275b8beff47a7ea0c8b2d967a72fa3206ad5708e2b4e68bbda33ad
    Image:          k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0
    Image ID:       docker-pullable://k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f
    Port:           <none>
    Host Port:      <none>
    Args:
      --v=2
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=/var/lib/kubelet/plugins/iscsi.csi.k8s.io/csi.sock
    State:          Running
      Started:      Sat, 12 Feb 2022 23:11:24 +1100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Sat, 12 Feb 2022 05:34:59 +1100
      Finished:     Sat, 12 Feb 2022 05:40:55 +1100
    Ready:          True
    Restart Count:  1
    Limits:
      memory:  200Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from socket-dir (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-glrsh (ro)
  iscsi:
    Container ID:   docker://186a7a83ed4067c7b75bf8ef2af41704be62a85d45d9cbda05a02a86ebe0bd5f
    Image:          gcr.io/k8s-staging-sig-storage/iscsiplugin:canary
    Image ID:       docker-pullable://gcr.io/k8s-staging-sig-storage/iscsiplugin@sha256:58ba21b1338c53ef18d31d4608ee141297633c20ec6df341f9aef8559a91b0a9
    Port:           29753/TCP
    Host Port:      29753/TCP
    Args:
      --nodeid=$(NODE_ID)
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 13 Feb 2022 01:19:48 +1100
      Finished:     Sun, 13 Feb 2022 01:19:48 +1100
    Ready:          False
    Restart Count:  35
    Limits:
      memory:  300Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Liveness:  http-get http://:healthz/healthz delay=30s timeout=10s period=30s #success=1 #failure=5
    Environment:
      NODE_ID:       (v1:spec.nodeName)
      CSI_ENDPOINT:  unix:///csi/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /dev from host-dev (rw)
      /host from host-root (rw)
      /sbin/iscsiadm from chroot-iscsiadm (rw,path="iscsiadm")
      /var/lib/kubelet/ from mountpoint-dir (rw)
      /var/run/iscsi.csi.k8s.io from iscsi-csi-run-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-glrsh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/iscsi.csi.k8s.io
    HostPathType:  DirectoryOrCreate
  mountpoint-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/
    HostPathType:  DirectoryOrCreate
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry
    HostPathType:  DirectoryOrCreate
  host-dev:
    Type:          HostPath (bare host directory volume)
    Path:          /dev
    HostPathType:
  host-root:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:  Directory
  chroot-iscsiadm:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      configmap-csi-iscsiadm
    Optional:  false
  iscsi-csi-run-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/iscsi.csi.k8s.io
    HostPathType:
  default-token-glrsh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-glrsh
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  19h                     default-scheduler  Successfully assigned kube-system/csi-iscsi-node-p97rz to worker1
  Normal   Pulled     19h                     kubelet            Container image "k8s.gcr.io/sig-storage/livenessprobe:v2.1.0" already present on machine
  Normal   Created    19h                     kubelet            Created container liveness-probe
  Normal   Started    19h                     kubelet            Started container liveness-probe
  Normal   Pulled     19h                     kubelet            Container image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0" already present on machine
  Normal   Created    19h                     kubelet            Created container node-driver-registrar
  Normal   Started    19h                     kubelet            Started container node-driver-registrar
  Normal   Pulled     19h (x4 over 19h)       kubelet            Container image "gcr.io/k8s-staging-sig-storage/iscsiplugin:canary" already present on machine
  Normal   Created    19h (x4 over 19h)       kubelet            Created container iscsi
  Normal   Started    19h (x4 over 19h)       kubelet            Started container iscsi
  Warning  BackOff    19h (x28 over 19h)      kubelet            Back-off restarting failed container
  Warning  BackOff    4m47s (x591 over 129m)  kubelet            Back-off restarting failed container

@wrahmann
Author

@humblec

Here is my environment:

wajih@master:~/Training/CSI$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   47d   v1.19.4
worker1   Ready    <none>   47d   v1.19.4
worker2   Ready    <none>   47d   v1.19.4
worker3   Ready    <none>   47d   v1.19.4

@andyzhangx
Member

try kubectl edit ds csi-iscsi-node -n kube-system and then replace the canary image with andyzhangx/iscsi-csi:v0.2.0; we need to figure out why the canary image is not being rebuilt later
