Kubernetes Container Orchestration and Operations Skill (kubernetes)

This skill covers deployment, management, and operations on the Kubernetes platform, including resource architecture design, security best practices, Helm chart development, troubleshooting, and production operations. Keywords: Kubernetes, k8s, deployment, security, Helm, cluster, RBAC, network policy, production operations, container orchestration, DevOps, cloud native.

Docker/K8s · Updated 3/24/2026

Name: kubernetes. Description: Kubernetes deployment, cluster architecture, security, and operations. Covers manifests, Helm charts, RBAC, network policies, troubleshooting, and production best practices. Trigger keywords: kubernetes, k8s, pod, deployment, statefulset, daemonset, service, ingress, configmap, secret, pvc, namespace, helm, chart, kubectl, cluster, rbac, networkpolicy, podsecurity, operator, crd, job, cronjob, hpa, pdb, kustomize. Allowed tools: Read, Find, Grep, Edit, Write, Bash

Kubernetes

Overview

This skill covers Kubernetes resource configuration, deployment strategies, cluster architecture, security hardening, Helm chart development, and production operations. It helps create production-ready manifests, resolve cluster issues, and implement security best practices.

Agent Delegation

Software Engineer (Sonnet) - writes K8s manifests, implements configuration
Senior Software Engineer (Opus) - application architecture on Kubernetes, multi-service design, cluster architecture and implementation, operators, CRDs, RBAC, network policies, Pod Security Standards, admission control
Code Reviewer (Sonnet) - reviews K8s YAML for best-practice adherence and security issues

Note: for cluster architecture and security hardening, use the Senior Software Engineer with the /kubernetes, /security-audit, and /threat-model skills.

Instructions

1. Design the resource architecture

  • Plan namespace organization and boundaries
  • Define resource requests/limits per workload
  • Design the service mesh topology (where applicable)
  • Plan for high availability (replicas, affinity, PDBs)
  • Choose workload types (Deployment/StatefulSet/DaemonSet/Job)
  • Design the storage strategy (PVCs, StorageClasses, ephemeral volumes)
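
The namespace and resource-planning steps above can be sketched as a namespace plus a LimitRange that supplies default requests/limits for containers that omit them. The team name and the numbers are illustrative, not prescriptive:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a              # hypothetical team namespace
  labels:
    team: team-a
---
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: team-a
spec:
  limits:
    - type: Container
      defaultRequest:       # applied when a container omits requests
        cpu: 100m
        memory: 128Mi
      default:              # applied when a container omits limits
        cpu: 500m
        memory: 512Mi
```

Defaults like these also guarantee every pod lands in a QoS class, which matters for eviction ordering under node pressure.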

2. Create resource manifests

  • Write Deployments/StatefulSets/DaemonSets with proper lifecycle handling
  • Configure Services (ClusterIP/NodePort/LoadBalancer) and Ingress
  • Set up ConfigMaps and Secrets (Sealed Secrets, External Secrets)
  • Define RBAC policies (ServiceAccounts, Roles, RoleBindings)
  • Implement NetworkPolicies for pod-to-pod communication
  • Configure PodDisruptionBudgets to guarantee availability
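
The PodDisruptionBudget item above might look like this minimal sketch, assuming the three-replica api-server used in the examples later in this document:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-server
  namespace: production
spec:
  minAvailable: 2           # keep at least 2 pods running during voluntary disruptions
  selector:
    matchLabels:
      app: api-server
```

A PDB only constrains voluntary disruptions (node drains, upgrades); it does not protect against node crashes.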

3. Develop Helm charts

  • Structure the chart (Chart.yaml, values.yaml, templates/)
  • Use template functions (_helpers.tpl for common labels)
  • Parameterize configuration (replica count, image tags, resources)
  • Define dependencies (Chart.yaml dependencies; requirements.yaml only for legacy Helm v2)
  • Implement hooks (pre-install, post-upgrade, etc.)
  • Test charts locally (helm template, helm lint, helm install --dry-run)
  • Package and publish charts to a registry
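
A hook as mentioned above could be sketched as a pre-upgrade database-migration Job; the image values and the /app/migrate entrypoint are hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-db-migrate"
  annotations:
    "helm.sh/hook": pre-upgrade             # run before the release is upgraded
    "helm.sh/hook-weight": "0"              # ordering among hooks of the same event
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          command: ["/app/migrate"]         # hypothetical migration entrypoint
```

Because hook resources are not part of the release's regular lifecycle, the delete policy keeps completed Jobs from accumulating.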

4. Apply security best practices

  • Run as a non-root user (runAsUser, runAsNonRoot)
  • Drop all capabilities (securityContext.capabilities.drop: [ALL])
  • Use a read-only root filesystem (readOnlyRootFilesystem: true)
  • Prevent privilege escalation (allowPrivilegeEscalation: false)
  • Enforce Pod Security Standards (restricted profile)
  • Use NetworkPolicies for zero-trust networking
  • Scan images for vulnerabilities (integrate Trivy, Snyk)
  • Rotate secrets regularly (External Secrets Operator)
  • Audit RBAC permissions (principle of least privilege)
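
Enforcing the restricted Pod Security Standard mentioned above is typically done with namespace labels, for example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject non-compliant pods
    pod-security.kubernetes.io/warn: restricted      # warn on kubectl apply
    pod-security.kubernetes.io/audit: restricted     # record violations in audit logs
```

Rolling out warn/audit first, then enforce, avoids breaking existing workloads.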

5. Configure observability

  • Add Prometheus annotations for scraping
  • Configure logging (stdout/stderr, fluentd/loki)
  • Implement health probes (liveness, readiness, startup)
  • Export metrics (custom metrics for HPA)
  • Set up distributed tracing (OpenTelemetry)
  • Configure alerting rules
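
A startup probe (listed above, but not shown in the deployment example later in this document) can be added to a container spec as a fragment like the following; the /health/live path is assumed to reuse the liveness endpoint:

```yaml
# Container spec fragment: gives a slow-starting app up to
# 30 × 10 s = 300 s before liveness checks take over.
startupProbe:
  httpGet:
    path: /health/live      # assumed same endpoint as the liveness probe
    port: http
  failureThreshold: 30
  periodSeconds: 10
```

While a startup probe is failing, liveness and readiness probes are suspended, so slow boots are not killed prematurely.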

6. Troubleshoot issues

  • Pod crashes: kubectl logs, kubectl describe pod, events
  • Image pull failures: check imagePullSecrets, registry authentication
  • Networking: test DNS resolution, check NetworkPolicies
  • Resource constraints: check node capacity, pod evictions
  • Scheduling failures: node selectors, taints/tolerations, affinity
  • Performance: use kubectl top, Prometheus metrics
  • CrashLoopBackOff: check startup probes, init containers
  • Pending pods: describe the pod to find scheduling failures

7. Production operations

  • Rolling updates with a proper rollback strategy
  • Blue-green deployments (multiple Services)
  • Canary deployments (traffic splitting via a service mesh)
  • Backup and restore (Velero for cluster state)
  • Disaster recovery planning (multi-region, etcd backups)
  • Capacity planning (ResourceQuotas, LimitRanges)
  • Cost optimization (right-sizing, spot instances, cluster autoscaler)
  • Upgrade planning (test in staging, rolling node upgrades)
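
The capacity-planning step can be sketched with a ResourceQuota; the numbers below are placeholders to be sized per cluster and team:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "20"             # sum of all pod CPU requests in the namespace
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
    persistentvolumeclaims: "20"   # cap PVC count to bound storage growth
```

Note that once a quota covers compute resources, every pod in the namespace must declare requests/limits (or inherit them from a LimitRange), or it will be rejected.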

Best Practices

  1. Use namespaces: organize resources logically, enable RBAC boundaries
  2. Set resource limits: prevent resource exhaustion, enable QoS classes
  3. Health probes: configure liveness, readiness, and startup probes
  4. Rolling updates: zero-downtime deployments with maxSurge/maxUnavailable
  5. Secret management: use external secrets, never hardcode credentials
  6. Label everything: enables filtering, selection, and monitoring
  7. Use Helm/Kustomize: template and manage manifests as code
  8. Security contexts: always run as non-root with least privilege
  9. Network policies: default deny, allow explicitly for zero trust
  10. PodDisruptionBudgets: protect availability during disruptions
  11. ResourceQuotas: prevent runaway resource consumption per namespace
  12. Admission control: use OPA/Kyverno for policy enforcement
  13. Immutable infrastructure: never SSH into pods; replace them instead
  14. GitOps: use ArgoCD/Flux for declarative deployments
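
For the GitOps practice, a minimal ArgoCD Application might look like this sketch; the repository URL and chart path are assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-server
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/deploy.git   # hypothetical Git repo
    targetRevision: main
    path: charts/api-server                                # hypothetical chart path
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With automated sync, the Git repository becomes the single source of truth and manual kubectl changes are reverted.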

Examples

Example 1: Production Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
  namespace: production
  labels:
    app: api-server
    version: v1.2.0
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
        version: v1.2.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      serviceAccountName: api-server
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000

      containers:
        - name: api
          image: myregistry.io/api-server:v1.2.0
          imagePullPolicy: IfNotPresent

          ports:
            - name: http
              containerPort: 8080
              protocol: TCP

          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: api-secrets
                  key: database-url
            - name: LOG_LEVEL
              valueFrom:
                configMapKeyRef:
                  name: api-config
                  key: log-level

          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi

          livenessProbe:
            httpGet:
              path: /health/live
              port: http
            initialDelaySeconds: 15
            periodSeconds: 20
            timeoutSeconds: 5
            failureThreshold: 3

          readinessProbe:
            httpGet:
              path: /health/ready
              port: http
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 3
            failureThreshold: 3

          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL

          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: cache
              mountPath: /app/cache

      volumes:
        - name: tmp
          emptyDir: {}
        - name: cache
          emptyDir:
            sizeLimit: 100Mi

      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: api-server
                topologyKey: kubernetes.io/hostname

      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: api-server

Example 2: Service and Ingress

apiVersion: v1
kind: Service
metadata:
  name: api-server
  namespace: production
  labels:
    app: api-server
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: api-server

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-server
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/limit-rpm: "100"   # 100 requests per minute per client IP
spec:
  ingressClassName: nginx   # replaces the deprecated kubernetes.io/ingress.class annotation
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls-cert
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-server
                port:
                  number: 80

Example 3: ConfigMap and Secret

apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
  namespace: production
data:
  log-level: "info"
  max-connections: "100"
  cache-ttl: "3600"
  feature-flags: |
    {
      "new-checkout": true,
      "beta-features": false
    }

---
apiVersion: v1
kind: Secret
metadata:
  name: api-secrets
  namespace: production
type: Opaque
stringData:
  database-url: "postgresql://user:password@db-host:5432/myapp"
  api-key: "super-secret-key"

Example 4: Horizontal Pod Autoscaler

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-server
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
        - type: Pods
          value: 4
          periodSeconds: 15
      selectPolicy: Max

Example 5: Helm Chart Structure

mychart/
  Chart.yaml          # chart metadata
  values.yaml         # default configuration values
  templates/
    _helpers.tpl      # template helpers
    deployment.yaml   # Deployment template
    service.yaml      # Service template
    ingress.yaml      # Ingress template
    configmap.yaml    # ConfigMap template
    secret.yaml       # Secret template
    NOTES.txt         # post-install notes
  charts/             # chart dependencies
  .helmignore         # files to exclude from packaging

Chart.yaml:

apiVersion: v2
name: api-server
description: Production API server Helm chart
type: application
version: 1.2.0
appVersion: "1.2.0"
keywords:
  - api
  - backend
maintainers:
  - name: Platform Team
    email: platform@example.com
dependencies:
  - name: postgresql
    version: "12.x.x"
    repository: "https://charts.bitnami.com/bitnami"
    condition: postgresql.enabled

values.yaml:

replicaCount: 3

image:
  repository: myregistry.io/api-server
  tag: "1.2.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80
  targetPort: 8080

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: api.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: api-tls-cert
      hosts:
        - api.example.com

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80

postgresql:
  enabled: true
  auth:
    username: apiuser
    database: apidb

templates/_helpers.tpl:

{{- define "api-server.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{- define "api-server.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{- define "api-server.labels" -}}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/name: {{ include "api-server.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
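
These helpers are consumed in templates/deployment.yaml roughly like this (a trimmed sketch, not the full template):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "api-server.fullname" . }}
  labels:
    {{- include "api-server.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "api-server.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
```

Using include with nindent keeps the shared labels correctly indented wherever the helper is reused.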

Example 6: NetworkPolicy (Zero Trust)

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-server-netpol
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
    - Egress

  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080

  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgresql
      ports:
        - protocol: TCP
          port: 5432

    - to:
        - podSelector:
            matchLabels:
              k8s-app: kube-dns
          namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: UDP
          port: 53

    - to:
        - namespaceSelector: {}
      ports:
        - protocol: TCP
          port: 443

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

Example 7: RBAC Configuration

apiVersion: v1
kind: ServiceAccount
metadata:
  name: api-server
  namespace: production
  labels:
    app: api-server

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: api-server
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]

  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["api-secrets", "database-creds"]
    verbs: ["get"]

  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: api-server
  namespace: production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: api-server
subjects:
  - kind: ServiceAccount
    name: api-server
    namespace: production

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods-global
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-reader
subjects:
  - kind: ServiceAccount
    name: monitoring
    namespace: observability

Example 8: StatefulSet with Persistent Storage

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql
  namespace: production
spec:
  serviceName: postgresql-headless
  replicas: 3
  selector:
    matchLabels:
      app: postgresql

  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
        - name: postgres
          image: postgres:15-alpine
          ports:
            - containerPort: 5432
              name: postgres
          env:
            - name: POSTGRES_DB
              value: myapp
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-creds
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-creds
                  key: password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: 1000m
              memory: 2Gi

  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-ssd
        resources:
          requests:
            storage: 50Gi

---
apiVersion: v1
kind: Service
metadata:
  name: postgresql-headless
  namespace: production
spec:
  clusterIP: None
  selector:
    app: postgresql
  ports:
    - port: 5432
      targetPort: postgres
      name: postgres

Troubleshooting Commands

# Pod debugging
kubectl get pods -n production
kubectl describe pod api-server-abc123 -n production
kubectl logs api-server-abc123 -n production
kubectl logs api-server-abc123 -n production --previous
kubectl logs api-server-abc123 -c container-name -n production
kubectl exec -it api-server-abc123 -n production -- /bin/sh

# Events and status
kubectl get events -n production --sort-by='.lastTimestamp'
kubectl get events -n production --field-selector involvedObject.name=api-server-abc123

# Resource usage
kubectl top nodes
kubectl top pods -n production
kubectl describe node node-1

# Network debugging
kubectl run -it --rm debug --image=nicolaka/netshoot --restart=Never -- /bin/bash
kubectl exec -it api-server-abc123 -n production -- nslookup kubernetes.default
kubectl exec -it api-server-abc123 -n production -- curl -v http://other-service

# Configuration
kubectl get configmap api-config -n production -o yaml
kubectl get secret api-secrets -n production -o jsonpath='{.data}'

# Deployments and rollbacks
kubectl rollout status deployment/api-server -n production
kubectl rollout history deployment/api-server -n production
kubectl rollout undo deployment/api-server -n production
kubectl rollout restart deployment/api-server -n production

# RBAC debugging
kubectl auth can-i get pods --as=system:serviceaccount:production:api-server -n production
kubectl get rolebindings,clusterrolebindings --all-namespaces -o json | jq '.items[] | select(.subjects[]?.name=="api-server")'

# NetworkPolicy debugging
kubectl describe networkpolicy api-server-netpol -n production

# Helm operations
helm list -n production
helm get values api-server -n production
helm history api-server -n production
helm upgrade api-server ./mychart -n production --values values.yaml
helm rollback api-server 2 -n production
helm uninstall api-server -n production

# Cluster information
kubectl cluster-info
kubectl get nodes -o wide
kubectl get componentstatuses   # deprecated since v1.19; prefer component health endpoints
kubectl api-resources
kubectl api-versions

Common Issues and Solutions

Issue                   | Cause                                            | Solution
ImagePullBackOff        | Missing registry auth or image not found         | Check imagePullSecrets, verify the image exists
CrashLoopBackOff        | App crashes on startup                           | Check logs, verify config, add a startup probe
Pending pod             | Insufficient resources or scheduling constraints | Check node capacity, taints, tolerations, affinity
OOMKilled               | Memory limit exceeded                            | Raise the memory limit, fix memory leaks
Service unreachable     | Wrong selector or wrong port                     | Verify the selector matches pod labels, check ports
DNS resolution fails    | CoreDNS issue or blocking NetworkPolicy          | Check CoreDNS pods, verify NetworkPolicy egress
PVC pending             | Missing StorageClass or no available volume      | Verify the StorageClass, check the provisioner
RBAC permission denied  | ServiceAccount lacks required permissions        | Add a Role/RoleBinding with the needed verbs
Readiness probe failing | App not ready or probe misconfigured             | Adjust initialDelaySeconds, check the probe endpoint
HPA not scaling         | metrics-server missing or no resource requests   | Install metrics-server, set resource requests