One-Click Installation Script for a Highly Available K8S Cluster

筱晶哥哥
2023-01-26
Note: this article was last updated on 2024-03-23. If any content or images are broken, please leave a comment.

Containers are the best way to package and run applications. In production, we need to manage the containers that run our applications and make sure they never go down. If a container fails, another one has to be started by hand, which is tedious; a system that handles this for us would be far more convenient.

Kubernetes solves exactly this set of problems. It provides a framework for running distributed systems resiliently, taking care of scaling, failover, deployment patterns, and more.

Building a K8S cluster can be difficult even for experienced operations engineers, so this article provides a one-click installation script for a highly available K8S cluster.

Preface

Kubernetes provides the following capabilities:

  • Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load-balance and distribute the network traffic so that the deployment stays stable.

  • Storage orchestration: Kubernetes lets us automatically mount a storage system of our choice, such as local storage or a public cloud provider.

  • Automated rollouts and rollbacks: we describe the desired state of deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, we can automate Kubernetes to create new containers for a deployment, remove existing containers, and adopt all their resources into the new ones.

  • Automatic bin packing: Kubernetes lets us specify how much CPU and memory (RAM) each container needs. When containers declare resource requests, Kubernetes can make better decisions about placing and managing them.

  • Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to user-defined health checks, and does not advertise them to clients until they are ready to serve.

  • Secret and configuration management: Kubernetes lets us store and manage sensitive information such as passwords, OAuth2 tokens, and SSH keys. We can deploy and update secrets and application configuration without rebuilding container images and without exposing secrets in the stack configuration.

  • ……
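Most of these capabilities map directly onto fields of an ordinary Deployment manifest. A minimal illustrative fragment (all names and values here are hypothetical examples, not part of the script below):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical name
spec:
  replicas: 3               # desired state: K8S keeps 3 pods running (self-healing)
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.25   # example image
        resources:
          requests:         # used for automatic bin packing
            cpu: "250m"
            memory: "128Mi"
        readinessProbe:     # pod is not advertised to clients until this passes
          httpGet:
            path: /
            port: 80
```

A matching Service object would then provide the DNS name and load balancing mentioned in the first bullet.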

The Script

This script is built on the Chinese open-source project kubeasz and installs everything from binaries. Credit to its maintainers; it saves a huge amount of environment-setup time.

GitHub: https://github.com/easzlab/kubeasz

All of this script's parameters are configured inside the script itself. If you want to pass parameters at run time, you can adapt the script yourself, or contact me for free customization. WeChat: jing2427259171; WeChat official account: 程序员阿晶. I am a pure open-source enthusiast.

Note: this script requires an internet connection, because it pulls some dependencies and images from the public network.
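The script pins `release="3.3.2"` against `k8s_ver="v1.24.1"`, following the compatibility table it quotes from the kubeasz README. If you change the Kubernetes version, pick the matching kubeasz release. A small sketch of that mapping (the function name is my own invention; the version pairs come from the table):

```shell
#!/bin/bash
# Sketch of the kubeasz/Kubernetes compatibility table quoted in the script.
# kubeasz_release_for is a hypothetical helper, not part of the script itself.
kubeasz_release_for() {
  local ver="${1#v}"   # accept "1.24.1" or "v1.24.1"
  case "$ver" in
    1.22*) echo "3.1.1" ;;
    1.23*) echo "3.2.0" ;;
    1.24*) echo "3.3.2" ;;
    1.25*) echo "3.4.3" ;;
    1.26*) echo "3.5.0" ;;
    *)     echo "unknown"; return 1 ;;
  esac
}

kubeasz_release_for v1.24.1   # → 3.3.2
```

For example, setting `k8s_ver` to a v1.25.x release would call for kubeasz 3.4.3 instead.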

#!/bin/bash
# author: lijing
# description: one-click binary installation and deployment of K8S

# Recommended kubeasz version per Kubernetes release
# https://github.com/easzlab/kubeasz
# Kubernetes version	1.22	1.23	1.24	1.25	1.26
# kubeasz version	   3.1.1	3.2.0	3.3.2	3.4.3	3.5.0

# Variable definitions
export release="3.3.2" 
export BASE="/etc/kubeasz" 
# if k8s version >= 1.24, docker is not supported
export k8s_ver="v1.24.1"  # k8s version

clusterName="k8s-cluster-test" # cluster name
rootPasswd="vagrant" # root password for all cluster machines

netNum="172.16.15" # network prefix used for docker's trusted HTTP registries
# if k8s version >= 1.24, docker is not supported.
cri="containerd" #[containerd|docker]
cni="calico" #[calico|flannel]

isCN=true # whether the servers are located in China (use CN mirrors)

if ls -1v ./kubeasz*.tar.gz &>/dev/null;then software_packet="$(ls -1v ./kubeasz*.tar.gz )";else software_packet="";fi
pwd="/etc/kubeasz"

masterNode=(
    "192.168.56.201"
    # "192.168.56.202"
)

slaveNode=(
    "192.168.56.203"
    "192.168.56.204"
    # "192.168.56.205"
)

etcdNode=(
    "192.168.56.201"
    "192.168.56.203"
    "192.168.56.204"
)

allNode=("${masterNode[@]}" "${slaveNode[@]}")

# Update package repositories and install base tools
if cat /etc/redhat-release &>/dev/null;then
    yum update -y && yum install -y git vim curl
else
    if [[ $isCN == true ]]; then
        sed -i 's/security.debian.org/mirrors.ustc.edu.cn/' /etc/apt/sources.list
    fi
    apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y && apt-get install -y git vim curl
    [ $? -ne 0 ] && apt-get -yf install
fi

# Check the python environment
python -V &>/dev/null
if [ $? -ne 0 ];then
    if cat /etc/redhat-release &>/dev/null;then
        yum install -y python2
        ln -s /usr/bin/python2 /usr/bin/python
        ln -s /usr/bin/pip2 /usr/bin/pip
    else
        apt-get install -y python2
        ln -s /usr/bin/python2 /usr/bin/python
    fi
fi

if [[ $isCN == true ]]; then
# Configure a pip mirror to speed up installs
mkdir ~/.pip
cat > ~/.pip/pip.conf <<CB
[global]
index-url = https://mirrors.aliyun.com/pypi/simple
[install]
trusted-host=mirrors.aliyun.com
CB
fi


# Install required packages
if cat /etc/redhat-release &>/dev/null;then
    yum install git vim sshpass net-tools tar wget -y
    [ -f ./get-pip.py ] && python ./get-pip.py || {
    wget https://bootstrap.pypa.io/pip/2.7/get-pip.py && python get-pip.py
    }
else
    apt-get install git vim sshpass net-tools tar -y
    [ -f ./get-pip.py ] && python ./get-pip.py || {
    wget https://bootstrap.pypa.io/pip/2.7/get-pip.py && python get-pip.py
    }
fi
python -m pip install --upgrade pip
pip -V
pip install --no-cache-dir ansible netaddr


# Set up passwordless SSH to all other nodes
for host in  ${allNode[@]}
do
    echo "============ ${host} ===========";

    if [[ ${USER} == 'root' ]];then
        [ ! -f /${USER}/.ssh/id_rsa ] &&\
        ssh-keygen -t rsa -P '' -f /${USER}/.ssh/id_rsa
    else
        [ ! -f /home/${USER}/.ssh/id_rsa ] &&\
        ssh-keygen -t rsa -P '' -f /home/${USER}/.ssh/id_rsa
    fi
    sshpass -p ${rootPasswd} ssh-copy-id -o StrictHostKeyChecking=no ${USER}@${host}

    if cat /etc/redhat-release &>/dev/null;then
        ssh -o StrictHostKeyChecking=no ${USER}@${host} "yum update -y && yum install -y git vim curl"
    else
        if [[ $isCN == true ]]; then
            ssh -o StrictHostKeyChecking=no ${USER}@${host} "sed -i 's/security.debian.org/mirrors.ustc.edu.cn/' /etc/apt/sources.list"
        fi
        ssh -o StrictHostKeyChecking=no ${USER}@${host} "apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y && apt-get install -y git vim curl"
        [ $? -ne 0 ] && ssh -o StrictHostKeyChecking=no ${USER}@${host} "apt-get -yf install"
    fi
done


# Download the k8s binary installation tooling

if [[ ${software_packet} == '' ]];then
    curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
    sed -ri "s+^(K8S_BIN_VER=).*$+\1${k8s_ver}+g" ezdown
    chmod +x ./ezdown
    # download everything with the helper script
    ./ezdown -D && ./ezdown -P
else
    tar xvf ${software_packet} -C /etc/
    chmod +x ${pwd}/{ezctl,ezdown}
fi

# Initialize a k8s cluster configuration with the given name

CLUSTER_NAME="$clusterName"
${pwd}/ezctl new ${CLUSTER_NAME}
if [[ $? -ne 0 ]];then
    echo "cluster name [${CLUSTER_NAME}] already exists in ${pwd}/clusters/${CLUSTER_NAME}."
    exit 1
fi

if [[ ${software_packet} != '' ]];then
    # enable offline installation
    sed -i 's/^INSTALL_SOURCE.*$/INSTALL_SOURCE: "offline"/g' ${pwd}/clusters/${CLUSTER_NAME}/config.yml
fi


# to check ansible service
ansible all -m ping

#---------------------------------------------------------------------------------------------------




# Adjust the installer configuration: config.yml

sed -ri "s+^(CLUSTER_NAME:).*$+\1 \"${CLUSTER_NAME}\"+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml

## Keeping k8s logs and container data on a separate disk (based on Alibaba Cloud's practice)

[ ! -d /var/lib/container ] && mkdir -p /var/lib/container/{kubelet,docker,containerd}

## cat /etc/fstab     
# UUID=105fa8ff-bacd-491f-a6d0-f99865afc3d6 /                       ext4    defaults        1 1
# /dev/vdb /var/lib/container/ ext4 defaults 0 0
# /var/lib/container/kubelet /var/lib/kubelet none defaults,bind 0 0
# /var/lib/container/docker /var/lib/docker none defaults,bind 0 0

## tree -L 1 /var/lib/container
# /var/lib/container
# ├── docker
# ├── kubelet
# └── lost+found

# docker data dir
DOCKER_STORAGE_DIR="/var/lib/container/docker"
sed -ri "s+^(DOCKER_STORAGE_DIR:).*$+DOCKER_STORAGE_DIR: \"${DOCKER_STORAGE_DIR}\"+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml
# containerd data dir
CONTAINERD_STORAGE_DIR="/var/lib/container/containerd"
sed -ri "s+^(CONTAINERD_STORAGE_DIR:).*$+CONTAINERD_STORAGE_DIR: \"${CONTAINERD_STORAGE_DIR}\"+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml
# kubelet logs dir
KUBELET_ROOT_DIR="/var/lib/container/kubelet"
sed -ri "s+^(KUBELET_ROOT_DIR:).*$+KUBELET_ROOT_DIR: \"${KUBELET_ROOT_DIR}\"+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml
if [[ $isCN == true ]]; then
    # docker aliyun repo
    REG_MIRRORS="https://pqbap4ya.mirror.aliyuncs.com"
    sed -ri "s+^REG_MIRRORS:.*$+REG_MIRRORS: \'[\"${REG_MIRRORS}\"]\'+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml
fi
# [docker] trusted HTTP registries
sed -ri "s+127.0.0.1/8+${netNum}.0/24+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml
# disable dashboard auto install; change "no" to "yes" below if you want it installed
sed -ri "s+^(dashboard_install:).*$+\1 \"no\"+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml


# Prepare the merged configuration
CLUSTER_WEBSITE="${CLUSTER_NAME}k8s.gtapp.xyz"
lb_num=$(grep -wn '^MASTER_CERT_HOSTS:' ${pwd}/clusters/${CLUSTER_NAME}/config.yml |awk -F: '{print $1}')
lb_num1=$(expr ${lb_num} + 1)
lb_num2=$(expr ${lb_num} + 2)
sed -ri "${lb_num1}s+.*$+  - "${CLUSTER_WEBSITE}"+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml
sed -ri "${lb_num2}s+(.*)$+#\1+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml

# maximum number of pods per node
MAX_PODS="120"
sed -ri "s+^(MAX_PODS:).*$+\1 ${MAX_PODS}+g" ${pwd}/clusters/${CLUSTER_NAME}/config.yml



# Adjust the installer configuration: hosts
# remove the template's placeholder IPs
sed -ri '/192.168.1.1/d' ${pwd}/clusters/${CLUSTER_NAME}/hosts
sed -ri '/192.168.1.2/d' ${pwd}/clusters/${CLUSTER_NAME}/hosts
sed -ri '/192.168.1.3/d' ${pwd}/clusters/${CLUSTER_NAME}/hosts
sed -ri '/192.168.1.4/d' ${pwd}/clusters/${CLUSTER_NAME}/hosts

# Add the etcd cluster hosts
for host in ${etcdNode[@]}
do
    echo $host
    sed -i "/\[etcd/a $host"  ${pwd}/clusters/${CLUSTER_NAME}/hosts
done

# Add the kube_master hosts
for host in ${masterNode[@]}
do
    echo $host
    sed -i "/\[kube_master/a $host"  ${pwd}/clusters/${CLUSTER_NAME}/hosts
done

# Add the kube_node hosts
for host in ${slaveNode[@]}
do
    echo $host
    sed -i "/\[kube_node/a $host"  ${pwd}/clusters/${CLUSTER_NAME}/hosts
done

# Configure the CNI network plugin
case ${cni} in
    flannel|calico)
    sed -ri "s+^CLUSTER_NETWORK=.*$+CLUSTER_NETWORK=\"${cni}\"+g" ${pwd}/clusters/${CLUSTER_NAME}/hosts
    ;;
    *)
    echo "cni must be flannel or calico."
    exit 11
esac

# Ensure the 'crontab' group exists for the ownership change below
if grep -q "^crontab:" /etc/group;then
  echo "group crontab already exists, will not be created."
else
  groupadd crontab
fi
# Configure a daily cron job to back up the K8S etcd data
if cat /etc/redhat-release &>/dev/null;then
    if ! grep -w '94.backup.yml' /var/spool/cron/root &>/dev/null;then
        echo "00 00 * * * $(which ansible-playbook) ${pwd}/playbooks/94.backup.yml &> /dev/null" >> /var/spool/cron/root
    else
        echo exists
    fi
    chown root:crontab /var/spool/cron/root
    chmod 600 /var/spool/cron/root
else
    if ! grep -w '94.backup.yml' /var/spool/cron/crontabs/root &>/dev/null;then
        echo "00 00 * * * $(which ansible-playbook) ${pwd}/playbooks/94.backup.yml &> /dev/null" >> /var/spool/cron/crontabs/root
    else
        echo exists
    fi
    chown root:crontab /var/spool/cron/crontabs/root
    chmod 600 /var/spool/cron/crontabs/root
fi
rm -f /var/run/cron.reboot
service crond restart 2>/dev/null || service cron restart




#---------------------------------------------------------------------------------------------------
# Ready to start the installation
rm -rf ${pwd}/{dockerfiles,docs,.gitignore,pics} &&\
find ${pwd}/ -name '*.md'|xargs rm -f
#read -p "Enter to continue deploy k8s to all nodes >>>" YesNobbb

# now start deploying the k8s cluster
cd ${pwd}/

# to prepare CA/certs & kubeconfig & other system settings
echo "step 01"
${pwd}/ezctl setup ${CLUSTER_NAME} 01
sleep 1
# to setup the etcd cluster
echo "step 02"
${pwd}/ezctl setup ${CLUSTER_NAME} 02
sleep 1
# to setup the container runtime (docker or containerd)
echo "step 03"
case ${cri} in
    containerd|docker)
    sed -ri "s+^CONTAINER_RUNTIME=.*$+CONTAINER_RUNTIME=\"${cri}\"+g" ${pwd}/clusters/${CLUSTER_NAME}/hosts
    ${pwd}/ezctl setup ${CLUSTER_NAME} 03
    ;;
    *)
    echo "cri must be containerd or docker."
    exit 11
esac
sleep 1
# to setup the master nodes
echo "step 04"
${pwd}/ezctl setup ${CLUSTER_NAME} 04
sleep 1
# to setup the worker nodes
echo "step 05"
${pwd}/ezctl setup ${CLUSTER_NAME} 05
sleep 1
# to setup the network plugin (flannel, calico, ...)
echo "step 06"
${pwd}/ezctl setup ${CLUSTER_NAME} 06
sleep 1
# to setup other useful plugins (metrics-server, coredns, ...)
echo "step 07"
${pwd}/ezctl setup ${CLUSTER_NAME} 07
sleep 1
# [optional] OS-level security hardening for all cluster nodes  https://github.com/dev-sec/ansible-os-hardening
#ansible-playbook roles/os-harden/os-harden.yml
#sleep 1
#cd `dirname ${software_packet:-/tmp}`


k8s_bin_path='/opt/kube/bin'


echo "-------------------------  k8s version list  ---------------------------"
${k8s_bin_path}/kubectl version
echo
echo "-------------------------  All Healthy status check  -------------------"
${k8s_bin_path}/kubectl get componentstatus
echo
echo "-------------------------  k8s cluster info list  ----------------------"
${k8s_bin_path}/kubectl cluster-info
echo
echo "-------------------------  k8s all nodes list  -------------------------"
${k8s_bin_path}/kubectl get node -o wide
echo
echo "-------------------------  k8s all-namespaces's pods list   ------------"
${k8s_bin_path}/kubectl get pod --all-namespaces
echo
echo "-------------------------  k8s all-namespaces's service network   ------"
${k8s_bin_path}/kubectl get svc --all-namespaces
echo
echo "-------------------------  k8s welcome for you   -----------------------"
echo

# alias kubectl to k for convenience
echo "alias k=kubectl && complete -F __start_kubectl k" >> ~/.bashrc

# get dashboard url
${k8s_bin_path}/kubectl cluster-info|grep dashboard|awk '{print $NF}'|tee -a /root/k8s_results

# get login token
${k8s_bin_path}/kubectl -n kube-system describe secret $(${k8s_bin_path}/kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')|grep 'token:'|awk '{print $NF}'|tee -a /root/k8s_results
echo
echo "you can review the dashboard URL and token again at  >>> /root/k8s_results <<<"
#echo ">>>>>>>>>>>>>>>>> You can execute command [ source ~/.bashrc ] <<<<<<<<<<<<<<<<<<<<"
echo ">>>>>>>>>>>>>>>>> You need to execute command [ reboot ] to restart all nodes <<<<<<<<<<<<<<<<<<<<"
exit 0
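After the final reboot the script asks for, you may want to confirm that every node has re-registered as Ready. A small standalone helper (my own addition, not part of the script) that counts Ready nodes from `kubectl get node --no-headers` output:

```shell
#!/bin/bash
# Hypothetical post-install check: count nodes whose STATUS column is
# exactly "Ready" in `kubectl get node --no-headers` output (read from stdin).
count_ready_nodes() {
  awk '$2 == "Ready" {n++} END {print n+0}'
}

# Usage against a live cluster (requires kubectl access):
#   /opt/kube/bin/kubectl get node --no-headers | count_ready_nodes
```

Comparing the result with the combined size of `masterNode` and `slaveNode` tells you whether the whole cluster came back up.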
