When creating the dashboard, k8s reports that the image cannot be pulled, even though the image already exists locally!

Symptoms:
1. After installing k8s, deploy the dashboard UI:
kubectl create -f kubernetes-dashboard.yaml
2. Check the pod status:
[root@k8s-master software]# kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-3852341777-135gm 0/1 ContainerCreating 0 5m
3. Check the error details:
[root@k8s-master software]# kubectl describe pod kubernetes-dashboard-3852341777-135gm --namespace kube-system
Name: kubernetes-dashboard-3852341777-135gm
Namespace: kube-system
Node: 10.1.108.78/10.1.108.78
Start Time: Tue, 27 Jun 2017 17:18:35 +0800
Labels: k8s-app=kubernetes-dashboard
        pod-template-hash=3852341777
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kubernetes-dashboard-3852341777","uid":"985001e1-5b19-11e7-a...
Status: Pending
IP:
Controllers: ReplicaSet/kubernetes-dashboard-3852341777
Containers:
kubernetes-dashboard:

Container ID:
Image:              registry.sinosafe.com.cn:5000/sinosafe/kubernetes/kubernetes-dashboard:1.5.1
Image ID:
Port:               9090/TCP
Args:
  --apiserver-host=http://10.1.108.77:8080
State:              Waiting
  Reason:           ContainerCreating
Ready:              False
Restart Count:      0
Liveness:           http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:        <none>
Mounts:             <none>

Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes: <none>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master=:NoSchedule
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5m 5m 1 default-scheduler Normal Scheduled Successfully assigned kubernetes-dashboard-3852341777-135gm to 10.1.108.78
5m 20s 6 kubelet, 10.1.108.78 Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "kubernetes-dashboard-3852341777-135gm_kube-system(9858b797-5b19-11e7-a69f-842b2b5e0f11)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kubernetes-dashboard-3852341777-135gm_kube-system(9858b797-5b19-11e7-a69f-842b2b5e0f11)\" failed: rpc error: code = 2 desc = unable to pull sandbox image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: {\"message\":\"Get https://gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout\"}"

5m 7s 7 kubelet, 10.1.108.78 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
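
The event that matters is the CreatePodSandboxError: the kubelet on 10.1.108.78 is timing out while pulling gcr.io/google_containers/pause-amd64:3.0, the pod sandbox (pause) image, not the dashboard image itself. To confirm this on the node directly, something like the following can be run there (a quick check, assuming the kubelet runs under systemd):

journalctl -u kubelet --no-pager | grep -i pause    # kubelet-side pull errors for the sandbox image
docker images | grep pause-amd64                    # what is actually present in the local Docker cache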
4. Check the local images. The gcr.io/google_containers/pause-amd64:3.0 image is already present locally; it was downloaded with docker pull daocloud.io/daocloud/google_containers_pause-amd64:3.0 (the pull and re-tag commands are sketched after the listing below):

[root@k8s-master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.sinosafe.com.cn:5000/sinosafe/jenkins latest 681ef98a247f 5 weeks ago 704 MB
registry.sinosafe.com.cn:5000/sinosafe/centos 7.1.1 d9c3f227207e 3 months ago 605 MB
daocloud.io/gfkchinanetquest/kubernetes-dashboard-amd64 v1.5.1 1180413103fd 5 months ago 104 MB
registry.sinosafe.com.cn:5000/sinosafe/kubernetes/kubernetes-dashboard 1.5.1 1180413103fd 5 months ago 104 MB
daocloud.io/daocloud/google_containers_pause-amd64 3.0 99e59f495ffa 13 months ago 747 kB
gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 13 months ago 747 kB
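
The local gcr.io/google_containers/pause-amd64:3.0 entry above was presumably produced by pulling the daocloud mirror and re-tagging it under the gcr.io name; a minimal sketch (both names share image ID 99e59f495ffa in the listing, which is what a re-tag produces):

docker pull daocloud.io/daocloud/google_containers_pause-amd64:3.0
docker tag daocloud.io/daocloud/google_containers_pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker images | grep pause-amd64    # both names now point at the same IMAGE ID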

  • Solved. On the node hosts, edit the kubelet.service startup arguments and add --pod-infra-container-image=registry.sinosafe.com.cn:5000/sinosafe/pause-amd64:3.0; that resolves the problem (a sketch of the change follows below).
    2017-06-28
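
A minimal sketch of that change, assuming an RPM-packaged kubelet whose startup flags live in /etc/kubernetes/kubelet (the file path and variable name are assumptions; on other installs the flag belongs in the kubelet systemd unit or its drop-in file instead):

# /etc/kubernetes/kubelet (assumed location of the kubelet arguments on each node)
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.sinosafe.com.cn:5000/sinosafe/pause-amd64:3.0"

# apply on every node
systemctl daemon-reload
systemctl restart kubelet

The pause (sandbox) image is pulled by the kubelet itself, not through the pod spec, so it has to be redirected at the kubelet level; an imagePullPolicy in the dashboard yaml cannot stop the gcr.io pull attempt for the sandbox image.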

2 answers

luck_libiao  Systems Engineer, Hua An
[root@k8s-master software]# cat kubernetes-dashboard.yaml
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.6 (RBAC enabled).
#
# Example usage: kubectl create -f <this_file>

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.sinosafe.com.cn:5000/sinosafe/kubernetes/kubernetes-dashboard:1.5.1
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=http://10.1.108.77:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
Insurance · 2017-06-27
wanggeng  Systems Operations Engineer, a bank

My initial suspicion is that this is a problem with your Docker environment. Check the Docker environment, for example the network proxy settings.

Banking · 2017-06-27
  • What network proxy settings do you mean? How do I check them?
    2017-06-27
  • Specify the image pull policy in the yaml file; the default is Always, change it to IfNotPresent (the relevant fragment is isolated after these comments).
    2017-06-27
  • [This comment has been deleted]
    2017-06-27
  • After making the change, the error message reported is still the same:
[root@k8s-master software]# kubectl delete -f kubernetes-dashboard.yaml
serviceaccount "kubernetes-dashboard" deleted
clusterrolebinding "kubernetes-dashboard" deleted
deployment "kubernetes-dashboard" deleted
service "kubernetes-dashboard" deleted
[root@k8s-master software]# kubectl create -f kubernetes-dashboard.yaml
serviceaccount "kubernetes-dashboard" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@k8s-master software]# kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-3852341777-r9txq 0/1 ContainerCreating 0 10s
[root@k8s-master software]# kubectl describe pod kubernetes-dashboard-3852341777-r9txq --namespace kube-system
Name: kubernetes-dashboard-3852341777-r9txq
Namespace: kube-system
Node: 10.1.108.79/10.1.108.79
Start Time: Tue, 27 Jun 2017 20:14:02 +0800
Labels: k8s-app=kubernetes-dashboard
        pod-template-hash=3852341777
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kubernetes-dashboard-3852341777","uid":"1a9f1b34-5b32-11e7-a...
Status: Pending
IP:
Controllers: ReplicaSet/kubernetes-dashboard-3852341777
Containers:
kubernetes-dashboard:
Container ID:
Image: registry.sinosafe.com.cn:5000/sinosafe/kubernetes/kubernetes-dashboard:1.5.1
Image ID:
Port: 9090/TCP
Args:
  --apiserver-host=http://10.1.108.77:8080
State: Waiting
  Reason: ContainerCreating
Ready: False
Restart Count: 0
Liveness: http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes: <none>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master=:NoSchedule
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
6m 6m 1 default-scheduler Normal Scheduled Successfully assigned kubernetes-dashboard-3852341777-r9txq to 10.1.108.79
5m 5m 1 kubelet, 10.1.108.79 Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "kubernetes-dashboard-3852341777-r9txq_kube-system(1aa5e3fc-5b32-11e7-a69f-842b2b5e0f11)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kubernetes-dashboard-3852341777-r9txq_kube-system(1aa5e3fc-5b32-11e7-a69f-842b2b5e0f11)\" failed: rpc error: code = 2 desc = unable to pull sandbox image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: {\"message\":\"Get https://gcr.io/v1/_ping: dial tcp 74.125.203.82:443: i/o timeout\"}"
4m 26s 5 kubelet, 10.1.108.79 Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "kubernetes-dashboard-3852341777-r9txq_kube-system(1aa5e3fc-5b32-11e7-a69f-842b2b5e0f11)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kubernetes-dashboard-3852341777-r9txq_kube-system(1aa5e3fc-5b32-11e7-a69f-842b2b5e0f11)\" failed: rpc error: code = 2 desc = unable to pull sandbox image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: {\"message\":\"Get https://gcr.io/v1/_ping: dial tcp 74.125.204.82:443: i/o timeout\"}"
6m 11s 7 kubelet, 10.1.108.79 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy
    2017-06-27
  • [This comment has been deleted]
    2017-06-27
  • The yaml file is as follows:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.sinosafe.com.cn:5000/sinosafe/kubernetes/kubernetes-dashboard:1.5.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=http://10.1.108.77:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
    2017-06-27
  • It kept reporting the error during testing yesterday, but when I checked again this morning it had recovered. I suspect it is related to the image pull policy, since the pull policy was the only change made yesterday. It is running normally now:
[root@k8s-master ~]# kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-3852341777-r9txq 1/1 Running 0 12h
    2017-06-28
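
For reference, the pull-policy change discussed in these comments lands in the Deployment's container spec; a minimal fragment isolating the relevant lines from the full file quoted above:

      containers:
      - name: kubernetes-dashboard
        image: registry.sinosafe.com.cn:5000/sinosafe/kubernetes/kubernetes-dashboard:1.5.1
        imagePullPolicy: IfNotPresent   # pull only when the image is missing locally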
