
Deploying a Kubernetes Cluster with One Master and Two Nodes

2018-12-06 15:42 | Source: 潇湘夜雨 | Author: 华嵩阳
k8s core concepts
 
Master
The management node of the k8s cluster: it manages the cluster and provides the entry point for access to the cluster's resource data.
It hosts the etcd storage service (optional) and runs the API Server process, the Controller Manager service process, and the Scheduler service process, and it manages the associated worker Nodes.
The Kubernetes API Server is the key process exposing the HTTP REST interface; it is the sole entry point for adding, deleting, modifying, and querying every Kubernetes resource, and it is also the entry point for cluster control.
The Kubernetes Controller Manager is the automated control center for all Kubernetes resource objects.
The Kubernetes Scheduler is the process responsible for resource scheduling (Pod scheduling).
 
Node
 
A Node is a service node that runs Pods in the Kubernetes cluster architecture (also called an agent or minion).
A Node is the operational unit of a Kubernetes cluster: it hosts the Pods assigned to it and serves as their host machine.
A Node is managed by the Master and carries a name, an IP address, and system resource information. It runs the docker engine service, the kubelet daemon, and the kube-proxy load balancer.
 
Every Node runs the following set of key processes:
 
kubelet: handles creating, starting, and stopping the containers that belong to a Pod.
kube-proxy: the key component implementing communication and load balancing for Kubernetes Services.
Docker Engine (Docker): the Docker engine, responsible for creating and managing containers on the local machine.
Nodes can be added to a Kubernetes cluster dynamically at runtime. By default, the kubelet registers itself with the Master; this is the Node-management approach Kubernetes recommends. The kubelet process periodically reports its own status to the Master (operating system, Docker version, CPU and memory, which Pods are running, and so on), so the Master knows each Node's resource usage and can implement an efficient, balanced scheduling policy.
 
 
Pod
 
A combination of one or more related containers running on a Node.
The containers inside a Pod run on the same host, share the same network namespace, IP address, and port space, and can communicate with one another through localhost.
The Pod is the smallest unit that Kubernetes creates, schedules, and manages; it provides a higher level of abstraction than a container, which makes deployment and management more flexible.
A Pod may contain a single container or several related containers.
 
There are actually two kinds of Pods: ordinary Pods and static Pods. The latter are special: they do not exist in Kubernetes' etcd store but in a specific file on a specific Node, and they start only on that Node. Once an ordinary Pod is created it is put into the etcd store, then scheduled by the Kubernetes Master and bound to a specific Node, where the corresponding kubelet process instantiates it as a group of related Docker containers and starts them. By default, when a container inside a Pod stops, Kubernetes automatically detects the problem and restarts the Pod (restarting all containers in the Pod); if the Node a Pod lives on goes down, all Pods on that Node are rescheduled to other nodes.
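
To make the Pod concept concrete, here is a minimal sketch (not part of the original deployment; the pod name demo-pod, container name web, and the nginx image are illustrative choices). An ordinary single-container Pod can be created straight from a manifest:

# A minimal single-container Pod manifest, applied from stdin.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: nginx:1.14-alpine
    ports:
    - containerPort: 80
EOF

# After the scheduler binds the Pod to a Node, the kubelet there starts it:
kubectl get pod demo-pod -o wide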
 
Replication Controller
 
A Replication Controller manages Pod replicas and guarantees that the specified number of Pod replicas exists in the cluster.
If the cluster holds more replicas than specified, the surplus containers are stopped; if it holds fewer, new containers are started until the specified count is reached, keeping the number constant.
The Replication Controller is the core mechanism behind elastic scaling, dynamic expansion, and rolling upgrades.
 
Service
 
A Service defines a logical set of Pods and a policy for accessing that set; it is an abstraction over the real service.
A Service provides a unified service entry point together with service proxying and discovery. It associates the Pods that carry the same Label, so users do not need to know how the backing Pods actually run.
 

Environment:
master.k8s 192.168.0.20
node1.k8s 192.168.0.21
node2.k8s 192.168.0.22
Prerequisites:
1. Host-name based communication via /etc/hosts:
192.168.0.20  master.k8s
192.168.0.21  node1.k8s
192.168.0.22  node2.k8s
 
2. Time synchronization.
3. firewalld and iptables.service disabled.
OS: CentOS 7.3.1611, with packages from the Extras repository.
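
A sketch of these prerequisites as commands to run on every node (chrony is one possible NTP choice here, an assumption; adjust to your environment):

# 1. Host-name resolution: append the three hosts to /etc/hosts on each node.
cat >> /etc/hosts <<'EOF'
192.168.0.20  master.k8s
192.168.0.21  node1.k8s
192.168.0.22  node2.k8s
EOF

# 2. Time synchronization, here via chrony:
yum install -y chrony
systemctl enable chronyd && systemctl start chronyd

# 3. Disable the firewall services:
systemctl stop firewalld && systemctl disable firewalld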
Installation and configuration outline (package-based approach, for reference):
1. etcd cluster, on the master node only.
2. flannel, on every node of the cluster.
3. Configure the k8s master (master node only):
   kubernetes-master
   services started:
   kube-apiserver, kube-scheduler, kube-controller-manager
4. Configure each k8s Node:
   kubernetes-node
   start the docker service first;
   k8s services started:
   kube-proxy, kubelet
 
 
 
 
Installing and deploying a Kubernetes cluster with kubeadm:
 Prerequisites:
 1. Time is synchronized across all nodes.
 2. Host names resolve on all nodes, via DNS or the hosts file.
 3. iptables and firewalld are disabled on all nodes.
 
 I. Install the packages on each node
 
 1. Generate the yum repository configuration
 
 First fetch the docker-ce repository configuration file:
 # wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -P /etc/yum.repos.d/
 
 Generate the kubernetes yum repository configuration file /etc/yum.repos.d/kubernetes.repo with the following content:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
 
 2. Install the required packages
[root@master ~]# yum install docker-ce kubelet kubeadm kubectl
 # Configure a docker registry mirror (accelerator):
 tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://je18uwqu.mirror.aliyuncs.com"]
}
EOF
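
Restart (or start) docker afterwards so the mirror setting takes effect; a quick check that it was picked up (the grep pattern assumes the "Registry Mirrors" section name in docker info output):

systemctl daemon-reload
systemctl restart docker
# The configured mirror should now appear under "Registry Mirrors":
docker info | grep -A1 'Registry Mirrors'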
 II. Initialize the master node
 1. Either configure an Environment variable in the docker unit file to define HTTPS_PROXY, or import the required image files in advance. Three options:
 Option 1: use a proxy:
[root@master ~]# sed -i  '10i Environment="HTTPS_PROXY=http://www.ik8s.io:10080" ' /usr/lib/systemd/system/docker.service
 Option 2: load a pre-saved image archive: # docker load -i master-component-imgs.gz
 Option 3: pull the images from some third-party registry, then re-tag them.
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl start docker
[root@master ~]# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 18.09.0
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: c4446665cb9c30056f4998ed953e6d4ff22c7c39
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: fec3683
Security Options:
seccomp
  Profile: default
Kernel Version: 3.10.0-862.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.81GiB
Name: master.k8s
ID: X7DH:5MK5:AN5J:EPJD:Q7RX:EBLI:5INN:2VP2:FCW3:TPKH:RQDG:TGW4
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
HTTPS Proxy: http://www.ik8s.io:10080
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
 
 
[root@master ~]# vim /etc/sysctl.d/k8s.conf
 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
 
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
 
[root@master ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables 
1
[root@master ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables 
1
 
 2. Edit the kubelet configuration file /etc/sysconfig/kubelet so that the swap-enabled preflight error is ignored; content:
 KUBELET_EXTRA_ARGS="--fail-swap-on=false"
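
Alternatively, swap can simply be disabled outright, which is what kubeadm expects by default; a minimal sketch (the sed pattern is an assumption about a typical /etc/fstab layout):

# Turn swap off immediately:
swapoff -a
# Comment out swap lines so the change survives reboots:
sed -ri 's/^([^#].*swap.*)$/#\1/' /etc/fstab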
 
 # Enable ipvs mode for Services (optional):
 KUBE_PROXY_MODE=ipvs
 
 ipvs mode requires these kernel modules: ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, nf_conntrack_ipv4; see the sketch below for loading them.
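
The modules can be loaded ahead of time; a minimal sketch based on the list above:

# Load the ipvs-related kernel modules:
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe $mod
done
# Verify that they are loaded:
lsmod | grep -E '^ip_vs|nf_conntrack_ipv4'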
 
 3. Set docker and kubelet to start on boot:
[root@master ~]# systemctl enable docker kubelet
[root@master ~]# systemctl is-enabled docker kubelet
enabled
enabled
 
 4. Initialize the master node:
 # kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[root@master ~]# kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swape,NumCPU
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
 
[root@master ~]# swapoff -a
[root@master ~]# kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swape,NumCPU
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
 
Cause of the error: the Google registry (k8s.gcr.io) cannot be reached.
Workaround: pull the images from somewhere else and re-tag them.
 
[root@master Kubernetes]# kubeadm config images list
I1211 22:08:11.535531   10113 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1211 22:08:11.535714   10113 version.go:95] falling back to the local client version: v1.13.0
k8s.gcr.io/kube-apiserver:v1.13.0
k8s.gcr.io/kube-controller-manager:v1.13.0
k8s.gcr.io/kube-scheduler:v1.13.0
k8s.gcr.io/kube-proxy:v1.13.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6
 
 
docker pull registry.cn-qingdao.aliyuncs.com/baizhuanshuang/kube-apiserver:v1.13.0
docker pull registry.cn-qingdao.aliyuncs.com/baizhuanshuang/kube-controller-manager:v1.13.0
docker pull registry.cn-qingdao.aliyuncs.com/baizhuanshuang/kube-scheduler:v1.13.0
docker pull registry.cn-qingdao.aliyuncs.com/baizhuanshuang/kube-proxy:v1.13.0
docker pull registry.cn-qingdao.aliyuncs.com/baizhuanshuang/pause:3.1
docker pull registry.cn-qingdao.aliyuncs.com/baizhuanshuang/etcd:3.2.24
docker pull registry.cn-qingdao.aliyuncs.com/baizhuanshuang/coredns:1.2.6
 
 
 
docker tag registry.cn-qingdao.aliyuncs.com/baizhuanshuang/kube-apiserver:v1.13.0 k8s.gcr.io/kube-apiserver:v1.13.0 
docker tag registry.cn-qingdao.aliyuncs.com/baizhuanshuang/kube-controller-manager:v1.13.0 k8s.gcr.io/kube-controller-manager:v1.13.0
docker tag registry.cn-qingdao.aliyuncs.com/baizhuanshuang/kube-scheduler:v1.13.0 k8s.gcr.io/kube-scheduler:v1.13.0
docker tag registry.cn-qingdao.aliyuncs.com/baizhuanshuang/kube-proxy:v1.13.0 k8s.gcr.io/kube-proxy:v1.13.0
docker tag registry.cn-qingdao.aliyuncs.com/baizhuanshuang/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-qingdao.aliyuncs.com/baizhuanshuang/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.cn-qingdao.aliyuncs.com/baizhuanshuang/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
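
The pull-and-retag sequence can also be written as a loop; a sketch assuming the same mirror namespace as above (the MIRROR variable name is just for illustration):

# Pull each image from the reachable mirror and re-tag it as k8s.gcr.io/...
MIRROR=registry.cn-qingdao.aliyuncs.com/baizhuanshuang
for img in kube-apiserver:v1.13.0 kube-controller-manager:v1.13.0 \
           kube-scheduler:v1.13.0 kube-proxy:v1.13.0 \
           pause:3.1 etcd:3.2.24 coredns:1.2.6; do
    docker pull $MIRROR/$img
    docker tag  $MIRROR/$img k8s.gcr.io/$img
done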
 
 
 
 
 
[root@master ~]# kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swape,NumCPU
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master.k8s kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.20]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master.k8s localhost] and IPs [192.168.0.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master.k8s localhost] and IPs [192.168.0.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 39.513372 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master.k8s" as an annotation
[kubelet-check] Initial timeout of 40s passed.
[mark-control-plane] Marking the node master.k8s as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master.k8s as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: clg6u7.uew37hd06alm7gnh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes master has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
You can now join any number of machines by running the following on each node
as root:
 
  kubeadm join 192.168.0.20:6443 --token o3q465.cp71qq0uam2qxz3d --discovery-token-ca-cert-hash sha256:079253fc46bf505da4732ef306a73c4dd0d954eff2d0ca52f3ade4f4e930a6dd
 
If the cluster ends up in a bad state, it can be reset with kubeadm reset, followed by cleaning up the leftover network interfaces:
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
 
 Note: record the complete kubeadm join command printed at the end of the output.
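
If the join command gets lost (the default bootstrap token expires after 24 hours), a new one can be generated on the master:

# Creates a fresh token and prints a ready-to-use join command:
kubeadm token create --print-join-command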
 
 [root@master ~]# ss -tnl
State       Recv-Q Send-Q                           Local Address:Port                                          Peer Address:Port              
LISTEN      0      128                                  127.0.0.1:10248                                                    *:*                  
LISTEN      0      128                                  127.0.0.1:10249                                                    *:*                  
LISTEN      0      128                               192.168.0.20:2379                                                     *:*                  
LISTEN      0      128                                  127.0.0.1:2379                                                     *:*                  
LISTEN      0      128                                  127.0.0.1:10251                                                    *:*                  
LISTEN      0      128                               192.168.0.20:2380                                                     *:*                  
LISTEN      0      128                                  127.0.0.1:10252                                                    *:*                  
LISTEN      0      128                                          *:22                                                       *:*                  
LISTEN      0      128                                  127.0.0.1:35383                                                    *:*                  
LISTEN      0      100                                  127.0.0.1:25                                                       *:*                  
LISTEN      0      128                                         :::10250                                                   :::*                  
LISTEN      0      128                                         :::6443                                                    :::*                  
LISTEN      0      128                                         :::10256                                                   :::*                  
LISTEN      0      128                                         :::10257                                                   :::*                  
LISTEN      0      128                                         :::10259                                                   :::*                  
LISTEN      0      128                                         :::22                                                      :::* 
 
 5. Initialize kubectl
 
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 
 Check the cluster status:
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
[root@master ~]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
master.k8s   NotReady   master   10h   v1.13.0
#NotReady means the cluster is not yet ready, because the flannel network image has not been initialized
 
 6. Add the flannel network add-on
 
[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master ~]# vim kube-flannel.yml
 
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33 # specify the local NIC here, otherwise DNS resolution will fail
 
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
 
[root@master ~]# docker images|grep flannel
quay.io/coreos/flannel                                                    v0.10.0-amd64       f0fad859c909        10 months ago       44.6MB
 
 7. Verify that the master node is Ready
[root@master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
master.k8s   Ready    master   11h   v1.13.0
 
[root@master ~]# kubectl get ns
NAME          STATUS   AGE
default       Active   11h
kube-public   Active   11h
kube-system   Active   11h
[root@master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-bcph4             1/1     Running   0          11h
coredns-86c58d9df4-t9445             1/1     Running   0          11h
etcd-master.k8s                      1/1     Running   0          11h
kube-apiserver-master.k8s            1/1     Running   0          11h
kube-controller-manager-master.k8s   1/1     Running   0          11h
kube-flannel-ds-amd64-k28cx          1/1     Running   0          6m26s
kube-proxy-lkjpd                     1/1     Running   0          11h
kube-scheduler-master.k8s            1/1     Running   0          11h
 
[root@master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-kwr27             1/1     Running   0          4m39s   10.244.0.2     master.k8s   <none>           <none>
kube-system   coredns-86c58d9df4-plvw2             1/1     Running   0          4m39s   10.244.0.3     master.k8s   <none>           <none>
kube-system   etcd-master.k8s                      1/1     Running   0          3m52s   192.168.0.20   master.k8s   <none>           <none>
kube-system   kube-apiserver-master.k8s            1/1     Running   0          4m1s    192.168.0.20   master.k8s   <none>           <none>
kube-system   kube-controller-manager-master.k8s   1/1     Running   0          4m4s    192.168.0.20   master.k8s   <none>           <none>
kube-system   kube-flannel-ds-amd64-5s5s5          1/1     Running   0          27s     192.168.0.20   master.k8s   <none>           <none>
kube-system   kube-proxy-knmbg                     1/1     Running   0          4m39s   192.168.0.20   master.k8s   <none>           <none>
kube-system   kube-scheduler-master.k8s            1/1     Running   0          3m54s   192.168.0.20   master.k8s   <none>           <none>
 
 
 
III. Join the nodes to the cluster
 
 node1:
[root@node1 ~]# yum install docker-ce kubelet kubeadm kubectl
[root@node1 ~]# systemctl enable docker kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@node1 ~]# systemctl is-enabled docker kubelet
enabled
enabled
[root@node1 ~]# systemctl start docker
 
[root@node1 ~]# vim /etc/sysctl.d/k8s.conf
 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
 
[root@node1 ~]# sysctl -p /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
 
#Log in to the custom k8s registry
[root@node1 ~]# docker login --username=136856246@qq.com registry.cn-qingdao.aliyuncs.com
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
 
Login Succeeded
 
#Pull the images
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-apiserver:v1.13.0
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-controller-manager:v1.13.0
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-scheduler:v1.13.0
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-proxy:v1.13.0
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/pause:3.1
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/etcd:3.2.24
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/coredns:1.2.6
 
 
#Re-tag the images
docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-apiserver:v1.13.0 k8s.gcr.io/kube-apiserver:v1.13.0 
docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-controller-manager:v1.13.0 k8s.gcr.io/kube-controller-manager:v1.13.0
docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-scheduler:v1.13.0 k8s.gcr.io/kube-scheduler:v1.13.0
docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-proxy:v1.13.0 k8s.gcr.io/kube-proxy:v1.13.0
docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
 
[root@node1 ~]# docker images|grep gcr
k8s.gcr.io/pause                                                   3.1                 7f0b4eec8ca3        42 hours ago        742kB
k8s.gcr.io/kube-scheduler                                          v1.13.0             2962c00d86a0        42 hours ago        79.6MB
k8s.gcr.io/kube-proxy                                              v1.13.0             a45e89f6a343        42 hours ago        80.2MB
k8s.gcr.io/kube-apiserver                                          v1.13.0             49e8d13421ab        42 hours ago        181MB
k8s.gcr.io/etcd                                                    3.2.24              8f2fba290c70        42 hours ago        220MB
k8s.gcr.io/kube-controller-manager                                 v1.13.0             c29fb5c95c62        42 hours ago        146MB
k8s.gcr.io/coredns                                                 1.2.6               b1526d1ab00a        42 hours ago        40MB
 
 
[root@node1 ~]# swapoff -a
[root@node1 ~]#   kubeadm join 192.168.0.20:6443 --token o3q465.cp71qq0uam2qxz3d --discovery-token-ca-cert-hash sha256:079253fc46bf505da4732ef306a73c4dd0d954eff2d0ca52f3ade4f4e930a6dd
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.0.20:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.20:6443"
[discovery] Requesting info from "https://192.168.0.20:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.20:6443"
[discovery] Successfully established connection with API Server "192.168.0.20:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1.k8s" as an annotation
 
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
 
Run 'kubectl get nodes' on the master to see this node join the cluster.
 
 
[root@master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
master.k8s   Ready      master   11h     v1.13.0
node1.k8s    NotReady   <none>   4m32s   v1.13.0
 
#NotReady because node1 is still initializing the flannel image
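
This wait can be shortened by pre-pulling the flannel image on each node (the tag matches the one shown on the master above):

docker pull quay.io/coreos/flannel:v0.10.0-amd64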
[root@master ~]# kubectl get pods -n kube-system -o wide # show the pods and the nodes they run on
NAME                                 READY   STATUS     RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
coredns-86c58d9df4-bcph4             1/1     Running    0          11h     10.244.0.2     master.k8s   <none>           <none>
coredns-86c58d9df4-t9445             1/1     Running    0          11h     10.244.0.3     master.k8s   <none>           <none>
etcd-master.k8s                      1/1     Running    0          11h     192.168.0.20   master.k8s   <none>           <none>
kube-apiserver-master.k8s            1/1     Running    0          11h     192.168.0.20   master.k8s   <none>           <none>
kube-controller-manager-master.k8s   1/1     Running    0          11h     192.168.0.20   master.k8s   <none>           <none>
kube-flannel-ds-amd64-c2mv4          0/1     Init:0/1   0          4m45s   192.168.0.21   node1.k8s    <none>           <none>
kube-flannel-ds-amd64-k28cx          1/1     Running    0          33m     192.168.0.20   master.k8s   <none>           <none>
kube-proxy-lkjpd                     1/1     Running    0          11h     192.168.0.20   master.k8s   <none>           <none>
kube-proxy-qwp5b                     1/1     Running    0          4m45s   192.168.0.21   node1.k8s    <none>           <none>
kube-scheduler-master.k8s            1/1     Running    0          11h     192.168.0.20   master.k8s   <none>           <none>
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                                 READY   STATUS                  RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
coredns-86c58d9df4-bcph4             1/1     Running                 0          11h     10.244.0.2     master.k8s   <none>           <none>
coredns-86c58d9df4-t9445             1/1     Running                 0          11h     10.244.0.3     master.k8s   <none>           <none>
etcd-master.k8s                      1/1     Running                 0          11h     192.168.0.20   master.k8s   <none>           <none>
kube-apiserver-master.k8s            1/1     Running                 0          11h     192.168.0.20   master.k8s   <none>           <none>
kube-controller-manager-master.k8s   1/1     Running                 0          11h     192.168.0.20   master.k8s   <none>           <none>
kube-flannel-ds-amd64-c2mv4          0/1     Init:ImagePullBackOff   0          5m51s   192.168.0.21   node1.k8s    <none>           <none>
kube-flannel-ds-amd64-k28cx          1/1     Running                 0          34m     192.168.0.20   master.k8s   <none>           <none>
kube-proxy-lkjpd                     1/1     Running                 0          11h     192.168.0.20   master.k8s   <none>           <none>
kube-proxy-qwp5b                     1/1     Running                 0          5m51s   192.168.0.21   node1.k8s    <none>           <none>
kube-scheduler-master.k8s            1/1     Running                 0          11h     192.168.0.20   master.k8s   <none>           <none>
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS   AGE    IP             NODE         NOMINATED NODE   READINESS GATES
coredns-86c58d9df4-bcph4             1/1     Running   0          11h    10.244.0.2     master.k8s   <none>           <none>
coredns-86c58d9df4-t9445             1/1     Running   0          11h    10.244.0.3     master.k8s   <none>           <none>
etcd-master.k8s                      1/1     Running   0          11h    192.168.0.20   master.k8s   <none>           <none>
kube-apiserver-master.k8s            1/1     Running   0          11h    192.168.0.20   master.k8s   <none>           <none>
kube-controller-manager-master.k8s   1/1     Running   0          11h    192.168.0.20   master.k8s   <none>           <none>
kube-flannel-ds-amd64-c2mv4          1/1     Running   0          6m3s   192.168.0.21   node1.k8s    <none>           <none>
kube-flannel-ds-amd64-k28cx          1/1     Running   0          34m    192.168.0.20   master.k8s   <none>           <none>
kube-proxy-lkjpd                     1/1     Running   0          11h    192.168.0.20   master.k8s   <none>           <none>
kube-proxy-qwp5b                     1/1     Running   0          6m3s   192.168.0.21   node1.k8s    <none>           <none>
kube-scheduler-master.k8s            1/1     Running   0          11h    192.168.0.20   master.k8s   <none>           <none>
[root@master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
master.k8s   Ready    master   11h     v1.13.0
node1.k8s    Ready    <none>   6m19s   v1.13.0
 
 
Initialize node2
 
 
[root@node2 ~]# yum install docker-ce kubelet kubeadm kubectl
[root@node2 ~]# systemctl enable docker kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@node2 ~]# systemctl is-enabled docker kubelet
enabled
enabled
[root@node2 ~]# vim /etc/sysctl.d/k8s.conf
[root@node2 ~]# sysctl -p /etc/sysctl.d/k8s.conf 
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
[root@node2 ~]# systemctl start docker
[root@node2 ~]# sysctl -p /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
#the bridge sysctls only exist once docker has started and the bridge kernel module is loaded
[root@node2 ~]# docker login --username=136856246@qq.com registry.cn-qingdao.aliyuncs.com
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
 
Login Succeeded
 
#Pull the images
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-apiserver:v1.13.0
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-controller-manager:v1.13.0
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-scheduler:v1.13.0
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-proxy:v1.13.0
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/pause:3.1
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/etcd:3.2.24
docker pull registry.cn-qingdao.aliyuncs.com/lzh_k8s/coredns:1.2.6
 
#Re-tag the images
[root@node2 ~]# docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-apiserver:v1.13.0 k8s.gcr.io/kube-apiserver:v1.13.0
[root@node2 ~]# docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-controller-manager:v1.13.0 k8s.gcr.io/kube-controller-manager:v1.13.0
[root@node2 ~]# docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-scheduler:v1.13.0 k8s.gcr.io/kube-scheduler:v1.13.0
[root@node2 ~]# docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/kube-proxy:v1.13.0 k8s.gcr.io/kube-proxy:v1.13.0
[root@node2 ~]# docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/pause:3.1 k8s.gcr.io/pause:3.1
[root@node2 ~]# docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
[root@node2 ~]# docker tag registry.cn-qingdao.aliyuncs.com/lzh_k8s/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
 
[root@node2 ~]# swapoff -a
[root@node2 ~]# kubeadm join 192.168.0.20:6443 --token o3q465.cp71qq0uam2qxz3d --discovery-token-ca-cert-hash sha256:079253fc46bf505da4732ef306a73c4dd0d954eff2d0ca52f3ade4f4e930a6dd
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[WARNING Hostname]: hostname "node2.k8s" could not be reached
[WARNING Hostname]: hostname "node2.k8s": lookup node2.k8s on 114.114.114.114:53: no such host
[discovery] Trying to connect to API Server "192.168.0.20:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.20:6443"
[discovery] Requesting info from "https://192.168.0.20:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.20:6443"
[discovery] Successfully established connection with API Server "192.168.0.20:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2.k8s" as an annotation
 
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
 
Run 'kubectl get nodes' on the master to see this node join the cluster.
 
[root@master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
master.k8s   Ready    master   11h     v1.13.0
node1.k8s    Ready    <none>   21m     v1.13.0
node2.k8s    Ready    <none>   4m42s   v1.13.0
 
IV. kubectl management commands
 
1. View node information
[root@master ~]# kubectl describe node node1.k8s
Name:               node1.k8s
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=node1.k8s
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"96:24:70:fa:36:e1"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.0.21
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 12 Dec 2018 11:08:45 +0800
Taints:             <none>
Unschedulable:      false
 
[root@master ~]# kubectl version # show client and server versions
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
[root@master ~]# kubectl cluster-info # show cluster information
Kubernetes master is running at https://192.168.0.20:6443
KubeDNS is running at https://192.168.0.20:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
 
 
2. Test DNS
 
[root@master ~]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-66959f6557-qxnl2:/ ]$ ping www.baidu.com
PING www.baidu.com (112.34.112.41): 56 data bytes
64 bytes from 112.34.112.41: seq=0 ttl=50 time=43.305 ms
64 bytes from 112.34.112.41: seq=1 ttl=50 time=43.064 ms
64 bytes from 112.34.112.41: seq=2 ttl=50 time=43.737 ms
--- www.baidu.com ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 43.064/43.597/44.026 ms
Test resolution of a Service name:
[ root@curl-66959f6557-qxnl2:/ ]$ nslookup kubernetes.default 
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
 
Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
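
Short names like kubernetes.default resolve because the pod's /etc/resolv.conf points at kube-dns and carries the cluster search domains; you can confirm this from inside the pod:

# Inside the curl pod: expect nameserver 10.96.0.10 plus search domains
# such as default.svc.cluster.local, svc.cluster.local, cluster.local.
cat /etc/resolv.conf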
 
 
 
 
3. Create a deployment
Usage:
  kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [--command] -- [COMMAND] [args...] [options]
 
[root@master ~]# kubectl run nginx-deploy --image=nginx:1.14-alpine --port 80 --replicas=1 --dry-run=true # with --dry-run=true nothing is actually created
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-deploy created (dry run)
 
[root@master ~]# kubectl get deployment
No resources found.
[root@master ~]# kubectl run nginx-deploy --image=nginx:1.14-alpine --port 80 --replicas=1
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-deploy created
[root@master ~]# kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   0/1     1            0           9s
#a freshly created deployment needs some time to initialize
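
Instead of polling kubectl get deployment, you can block until the rollout finishes:

# Waits until all replicas of the deployment are available:
kubectl rollout status deployment nginx-deploy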
 
[root@master ~]# kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1/1     1            1           3m39s
 
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
nginx-deploy-84cbfc56b6-r5sz2   1/1     Running   0          5m25s   10.244.1.2   node1.k8s   <none>           <none>
 
[root@node1 ~]# ifconfig 
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::6060:6dff:fefa:e15d  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:01:01  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:2c:2b:d2:96  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 
#a cni0 interface has been created on node1
 
[root@node1 ~]# curl -I 10.244.1.2
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Wed, 12 Dec 2018 07:43:26 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Thu, 06 Dec 2018 00:24:40 GMT
Connection: keep-alive
ETag: "5c086c48-264"
Accept-Ranges: bytes
 
[root@node2 ~]# curl -I 10.244.1.2 # node2 can reach it as well
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Wed, 12 Dec 2018 07:44:38 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Thu, 06 Dec 2018 00:24:40 GMT
Connection: keep-alive
ETag: "5c086c48-264"
Accept-Ranges: bytes
 
[root@master ~]# curl -I 10.244.1.2
curl: (7) Failed connect to 10.244.1.2:80; Connection timed out
 
Note: the created containers are reachable from inside the cluster nodes.
 
4. Delete pods
 
[root@master ~]# kubectl delete pods nginx-deploy-84cbfc56b6-r5sz2 # delete removes resources by file name, stdin, resource name, or label selector
pod "nginx-deploy-84cbfc56b6-r5sz2" deleted
 
[root@master ~]# kubectl get pods -o wide # after deletion the pod is recreated, though NAME, node, and IP may change; --restart='Always' (the default) triggers the recreation
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
curl-66959f6557-qxnl2           1/1     Running   0          21m   10.244.1.20   node1.k8s   <none>           <none>
nginx-deploy-84cbfc56b6-prdb8   1/1     Running   0          10m   10.244.1.21   node1.k8s   <none>           <none>
 
Deleting pods for good:
 
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS             RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
client-58c9b895c-88b45          0/1     CrashLoopBackOff   4          3m59s   10.244.1.12   node1.k8s   <none>           <none> # deleting the pods alone still triggers recreation
nginx-deploy-84cbfc56b6-mf576   1/1     Running            0          3h4m    10.244.2.3    node2.k8s   <none>           <none>
[root@master ~]# kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
client         0/1     1            0           28m
nginx-deploy   1/1     1            1           4h30m
[root@master ~]# kubectl delete deployment client # delete the deployment itself
deployment.extensions "client" deleted
[root@master ~]# kubectl get deployment # now it is gone for good
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1/1     1            1           4h30m
 
 
5. Create a Service object to expose the Deployment
[root@master ~]# kubectl expose deployment nginx-deploy --name=nginx --port 80 --target-port=80 # load-balances TCP traffic on port 80
service/nginx exposed
[root@master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   26m
nginx        ClusterIP   10.109.202.23   <none>        80/TCP    3m13s
 
[root@master ~]# kubectl describe service nginx
Name:              nginx
Namespace:         default
Labels:            run=nginx-deploy
Annotations:       <none>
Selector:          run=nginx-deploy
Type:              ClusterIP
IP:                10.109.202.23
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.21:80
Session Affinity:  None
Events:            <none>
 
 
[root@master ~]# kubectl get service -n kube-system # check the DNS service in the kube-system namespace
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   16h
 
#Enter the pod
[root@master ~]# kubectl exec -it curl-66959f6557-kzsrp /bin/sh
/bin/sh: shopt: not found
[ root@curl-66959f6557-kzsrp:/ ]$ 
 
[ root@curl-66959f6557-qxnl2:/ ]$ nslookup nginx # resolve the Service name
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
 
Name:      nginx
Address 1: 10.109.202.23 nginx.default.svc.cluster.local
 
 
[ root@curl-66959f6557-qxnl2:/ ]$ curl -I nginx
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Wed, 12 Dec 2018 15:36:34 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Thu, 06 Dec 2018 00:24:40 GMT
Connection: keep-alive
ETag: "5c086c48-264"
Accept-Ranges: bytes
 
 
 
[root@master ~]# kubectl delete pods nginx-deploy-84cbfc56b6-prdb8
pod "nginx-deploy-84cbfc56b6-prdb8" deleted
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
curl-66959f6557-qxnl2           1/1     Running   0          23m   10.244.1.20   node1.k8s   <none>           <none>
nginx-deploy-84cbfc56b6-cf42p   1/1     Running   0          86s   10.244.2.7    node2.k8s   <none>           <none>
[root@master ~]# kubectl get service # the cluster IP is unchanged after the pod deletion
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   30m
nginx        ClusterIP   10.109.202.23   <none>        80/TCP    7m5s
 
[ root@curl-66959f6557-qxnl2:/ ]$ curl -I http://nginx/ # still reachable after deletion and recreation
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Wed, 12 Dec 2018 15:43:21 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Thu, 06 Dec 2018 00:24:40 GMT
Connection: keep-alive
ETag: "5c086c48-264"
Accept-Ranges: bytes
 
 
#Access still works after the pod is recreated because the label run=nginx-deploy is unchanged, so the Service selector keeps matching
[root@master ~]# kubectl get pods --show-labels
NAME                            READY   STATUS    RESTARTS   AGE   LABELS
curl-66959f6557-qxnl2           1/1     Running   0          11h   pod-template-hash=66959f6557,run=curl
nginx-deploy-84cbfc56b6-cf42p   1/1     Running   0          10h   pod-template-hash=84cbfc56b6,run=nginx-deploy
[root@master ~]# kubectl delete pods nginx-deploy-84cbfc56b6-cf42p
pod "nginx-deploy-84cbfc56b6-cf42p" deleted
[root@master ~]# kubectl get pods --show-labels
NAME                            READY   STATUS    RESTARTS   AGE   LABELS
curl-66959f6557-qxnl2           1/1     Running   0          11h   pod-template-hash=66959f6557,run=curl
nginx-deploy-84cbfc56b6-88xdr   1/1     Running   0          15s   pod-template-hash=84cbfc56b6,run=nginx-deploy
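
The Endpoints object makes this visible: the Service selector matches on the label, so whichever pod currently carries it is listed as a backend:

# Shows the pod IP:port pairs currently backing the Service:
kubectl get endpoints nginx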
 
 
6. Multi-replica load balancing
[root@master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/myapp created
[root@master ~]# kubectl get deployment 
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
curl           1/1     1            1           11h
myapp          2/2     2            2           6m51s
nginx-deploy   1/1     1            1           11h
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
curl-66959f6557-qxnl2           1/1     Running   0          11h     10.244.1.20   node1.k8s   <none>           <none>
myapp-9b4987d5-6bgth            1/1     Running   0          2m46s   10.244.2.10   node2.k8s   <none>           <none>
myapp-9b4987d5-zmshm            1/1     Running   0          8m50s   10.244.1.24   node1.k8s   <none>           <none>
nginx-deploy-84cbfc56b6-88xdr   1/1     Running   0          23m     10.244.1.22   node1.k8s   <none>           <none>
 
[root@master ~]# curl 10.244.1.24/hostname.html
myapp-9b4987d5-zmshm
[root@master ~]# curl 10.244.2.10/hostname.html
myapp-9b4987d5-6bgth
 
 
[root@master ~]# kubectl expose deployment myapp --name=myappslb --port=80
service/myappslb exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   12h
myappslb     ClusterIP   10.110.68.247   <none>        80/TCP    26s
nginx        ClusterIP   10.109.202.23   <none>        80/TCP    11h
[root@master ~]# curl 10.110.68.247/hostname.html
myapp-9b4987d5-zmshm
[root@master ~]# curl 10.110.68.247/hostname.html
myapp-9b4987d5-6bgth
[root@master ~]# curl 10.110.68.247/hostname.html
myapp-9b4987d5-zmshm
[root@master ~]# curl 10.110.68.247/hostname.html
 
 
#The service can likewise be accessed by name
[ root@curl-66959f6557-qxnl2:/ ]$ curl myappslb/hostname.html
myapp-9b4987d5-zmshm
[ root@curl-66959f6557-qxnl2:/ ]$ curl myappslb/hostname.html
myapp-9b4987d5-6bgth
 
 
#Scale the replicas up
[root@master ~]# kubectl scale --replicas=5 deployment myapp
deployment.extensions/myapp scaled
[root@master ~]# kubectl get pods -o wide # scaled up to 5 replicas almost immediately
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
curl-66959f6557-qxnl2           1/1     Running   0          12h   10.244.1.20   node1.k8s   <none>           <none>
myapp-9b4987d5-6bgth            1/1     Running   0          39m   10.244.2.10   node2.k8s   <none>           <none>
myapp-9b4987d5-8vq67            1/1     Running   0          15s   10.244.1.25   node1.k8s   <none>           <none>
myapp-9b4987d5-cpqrh            1/1     Running   0          15s   10.244.1.26   node1.k8s   <none>           <none>
myapp-9b4987d5-qs5qq            1/1     Running   0          15s   10.244.2.11   node2.k8s   <none>           <none>
myapp-9b4987d5-zmshm            1/1     Running   0          45m   10.244.1.24   node1.k8s   <none>           <none>
nginx-deploy-84cbfc56b6-88xdr   1/1     Running   0          60m   10.244.1.22   node1.k8s   <none>           <none>
 
 
[ root@curl-66959f6557-qxnl2:/ ]$ while true;do curl myappslb/hostname.html;sleep 1;done
 
myapp-9b4987d5-cpqrh
myapp-9b4987d5-6bgth
myapp-9b4987d5-zmshm
myapp-9b4987d5-qs5qq
myapp-9b4987d5-zmshm
myapp-9b4987d5-8vq67
myapp-9b4987d5-zmshm
myapp-9b4987d5-cpqrh
myapp-9b4987d5-cpqrh
myapp-9b4987d5-6bgth
myapp-9b4987d5-8vq67
myapp-9b4987d5-8vq67
#several more backends now answer the requests
 
 
#Scale the replicas down
[root@master ~]# kubectl scale --replicas=2 deployment myapp
deployment.extensions/myapp scaled
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS        RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
curl-66959f6557-qxnl2           1/1     Running       0          12h    10.244.1.20   node1.k8s   <none>           <none>
myapp-9b4987d5-6bgth            1/1     Running       0          41m    10.244.2.10   node2.k8s   <none>           <none>
myapp-9b4987d5-cpqrh            0/1     Terminating   0          116s   10.244.1.26   node1.k8s   <none>           <none>
myapp-9b4987d5-qs5qq            0/1     Terminating   0          116s   10.244.2.11   node2.k8s   <none>           <none>
myapp-9b4987d5-zmshm            1/1     Running       0          47m    10.244.1.24   node1.k8s   <none>           <none>
nginx-deploy-84cbfc56b6-88xdr   1/1     Running       0          62m    10.244.1.22   node1.k8s   <none>           <none>
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS        RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
curl-66959f6557-qxnl2           1/1     Running       0          12h   10.244.1.20   node1.k8s   <none>           <none>
myapp-9b4987d5-6bgth            1/1     Running       0          41m   10.244.2.10   node2.k8s   <none>           <none>
myapp-9b4987d5-cpqrh            0/1     Terminating   0          2m    10.244.1.26   node1.k8s   <none>           <none>
myapp-9b4987d5-zmshm            1/1     Running       0          47m   10.244.1.24   node1.k8s   <none>           <none>
nginx-deploy-84cbfc56b6-88xdr   1/1     Running       0          62m   10.244.1.22   node1.k8s   <none>           <none>
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
curl-66959f6557-qxnl2           1/1     Running   0          12h   10.244.1.20   node1.k8s   <none>           <none>
myapp-9b4987d5-6bgth            1/1     Running   0          41m   10.244.2.10   node2.k8s   <none>           <none>
myapp-9b4987d5-zmshm            1/1     Running   0          47m   10.244.1.24   node1.k8s   <none>           <none>
nginx-deploy-84cbfc56b6-88xdr   1/1     Running   0          62m   10.244.1.22   node1.k8s   <none>           <none>
 
Note: scaling down proceeds comparatively slowly.
 
 
7. Version upgrades and rollbacks
 
[root@master ~]# kubectl set image deployment myapp myapp=ikubernetes/myapp:v2 # upgrade the version
deployment.extensions/myapp image updated
[root@master ~]# kubectl rollout status deployment myapp # the update has completed (in fact it is rolled out gradually, pod by pod)
deployment "myapp" successfully rolled out
 
 
[ root@curl-66959f6557-qxnl2:/ ]$ while true;do curl myappslb;sleep 1;done
#the backends are upgraded to v2 step by step
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
 
 
[root@master ~]# kubectl get pods -o wide # after the update all the pods have changed, because they were recreated
NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
curl-66959f6557-qxnl2           1/1     Running   0          12h     10.244.1.20   node1.k8s   <none>           <none>
myapp-65899575cd-q2tsp          1/1     Running   0          3m40s   10.244.2.12   node2.k8s   <none>           <none>
myapp-65899575cd-z9grp          1/1     Running   0          3m36s   10.244.1.27   node1.k8s   <none>           <none>
nginx-deploy-84cbfc56b6-88xdr   1/1     Running   0          75m     10.244.1.22   node1.k8s   <none>           <none>
 
 
[root@master ~]# kubectl rollout undo deployment myapp # roll back to the previous version
deployment.extensions/myapp rolled back
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS        RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
curl-66959f6557-qxnl2           1/1     Running       0          12h     10.244.1.20   node1.k8s   <none>           <none>
myapp-65899575cd-q2tsp          1/1     Running       0          6m33s   10.244.2.12   node2.k8s   <none>           <none>
myapp-65899575cd-z9grp          1/1     Terminating   0          6m29s   10.244.1.27   node1.k8s   <none>           <none>
myapp-9b4987d5-4t24q            0/1     Pending       0          0s      <none>        node1.k8s   <none>           <none>
myapp-9b4987d5-wc67b            1/1     Running       0          1s      10.244.2.13   node2.k8s   <none>           <none>
nginx-deploy-84cbfc56b6-88xdr   1/1     Running       0          78m     10.244.1.22   node1.k8s   <none>           <none>
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
curl-66959f6557-qxnl2           1/1     Running   0          12h   10.244.1.20   node1.k8s   <none>           <none>
myapp-9b4987d5-4t24q            1/1     Running   0          17s   10.244.1.28   node1.k8s   <none>           <none>
myapp-9b4987d5-wc67b            1/1     Running   0          18s   10.244.2.13   node2.k8s   <none>           <none>
nginx-deploy-84cbfc56b6-88xdr   1/1     Running   0          78m   10.244.1.22   node1.k8s   <none>           <none>
[root@master ~]# kubectl rollout status deployment myapp
 
#The rollback is noticeably fast, probably because the image is already downloaded. The curl loop in the other terminal shows the switch back to v1:
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
 
 
 
8. Change the Service exposure type
[root@master ~]# kubectl edit svc myappslb
Change: type: NodePort
 
service/myappslb edited
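
The same change can be made non-interactively with kubectl patch (equivalent to the edit above):

kubectl patch svc myappslb -p '{"spec":{"type":"NodePort"}}'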
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        13h
myappslb     NodePort    10.110.68.247   <none>        80:30689/TCP   69m # the service is now reachable on every node at host-ip:30689
nginx        ClusterIP   10.109.202.23   <none>        80/TCP         12h
 
[root@master ~]# curl 192.168.0.22:30689/hostname.html
myapp-9b4987d5-wc67b
[root@master ~]# curl 192.168.0.22:30689/hostname.html
myapp-9b4987d5-4t24q
[root@master ~]# curl 192.168.0.21:30689/hostname.html
myapp-9b4987d5-4t24q
[root@master ~]# curl 192.168.0.21:30689/hostname.html
myapp-9b4987d5-4t24q
[root@master ~]# curl 192.168.0.20:30689/hostname.html
myapp-9b4987d5-wc67b
[root@master ~]# curl 192.168.0.20:30689/hostname.html
myapp-9b4987d5-wc67b
[root@master ~]# curl 192.168.0.20:30689/hostname.html
myapp-9b4987d5-4t24q
 
#every node is now listening on the NodePort
[root@master ~]# netstat -tnlp |grep 30689
tcp6       0      0 :::30689                :::*                    LISTEN      12792/kube-proxy  
[root@node1 ~]# netstat -tnlp |grep 30689
tcp6       0      0 :::30689                :::*                    LISTEN      1869/kube-proxy    
 
[root@node2 ~]# netstat -tnlp |grep 30689
tcp6       0      0 :::30689                :::*                    LISTEN      1840/kube-proxy