Kustomize Configuration Management Skill (kustomize)

Kustomize is a Kubernetes-native configuration management tool built for multi-environment deployments, resource patching, and GitOps workflows. It manages environment-specific configuration declaratively, without templates, and integrates directly into kubectl. Suited to DevOps and cloud-native scenarios. Keywords: Kubernetes, configuration management, Kustomize, multi-environment deployment, GitOps, DevOps, container orchestration, cloud native, resource patching, declarative configuration.

DevOps · Updated 3/24/2026

Name: kustomize. Description: Kubernetes-native configuration management with Kustomize. Use for environment-specific configuration, resource patching, manifest organization, multi-environment deployments, and GitOps workflows. Trigger words: kustomize, kustomization, overlay, base, patch, strategic merge, json patch, json6902, configmap generator, secret generator, namespace, namePrefix, nameSuffix, commonLabels, commonAnnotations, component, transformer, replacement, multi-environment, dev/staging/prod configs, k8s manifest management. Allowed tools: Read, Grep, Glob, Edit, Write, Bash

Kustomize Skill

Overview

Kustomize is a Kubernetes-native configuration management tool that uses declarative customization to manage environment-specific configuration without templates. It follows declarative application management principles and integrates directly with kubectl.

Primary use cases: multi-environment deployments, resource patching, GitOps workflows, ConfigMap/Secret generation, cross-cutting transformations.

What This Skill Covers

  • Multi-environment overlays: dev/staging/prod patterns using base + overlays
  • Patch strategies: strategic merge vs. JSON 6902 patches, and when to use each
  • Generators: ConfigMap and Secret generation with automatic content hashing
  • Transformers: cross-cutting changes (labels, annotations, namespace, name prefix/suffix, images, replica counts)
  • Components: reusable opt-in feature bundles (monitoring, ingress, debug tooling)
  • Replacements: dynamic field substitution for propagating values between resources
  • GitOps integration: ArgoCD, Flux, CI/CD pipeline patterns
  • Security: secret management, image pinning, validation, RBAC patterns

Core Concepts

  • Base: a directory containing a kustomization.yaml and a set of resources (typically the common/shared configuration)
  • Overlay: a directory with a kustomization.yaml that references a base and applies customizations (e.g., environment-specific configuration for dev/staging/prod)
  • Patch: a partial resource definition that modifies an existing resource (strategic merge or JSON patch)
  • Component: a reusable customization bundle that can be included in multiple kustomizations (e.g., monitoring, ingress)
  • Generator: creates ConfigMaps and Secrets from files, literals, or env files, with automatic content hashing
  • Transformer: modifies resources across the board (labels, annotations, namespace, name prefix, name suffix, replicas, images)
  • Replacement: dynamic field substitution that propagates values between resources (e.g., a ConfigMap name with its hash suffix)

Key Principles

  1. Bases are reusable: define common configuration once, customize per environment
  2. Overlays are composable: stack multiple customizations for different environments
  3. Resources are never modified: the original base files stay untouched
  4. No templates: declarative merging instead of variable substitution
  5. kubectl integration: kubectl apply -k <directory> supports Kustomize natively
  6. Content hashing: ConfigMaps and Secrets get automatic name suffixes based on their content, enabling immutable rollouts
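Principle 6 in action, as a minimal sketch — the hash suffix shown in the output comment is illustrative, not the value a real build produces:

```yaml
# kustomization.yaml
configMapGenerator:
  - name: app-config
    literals:
      - LOG_LEVEL=info

# `kustomize build` then emits something like:
#
#   kind: ConfigMap
#   metadata:
#     name: app-config-6ct58987ht   # suffix changes whenever the content changes
#
# and every configMapRef to app-config in the same build is rewritten to match,
# so a config change rolls out as a new Deployment template.
```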

Directory Structure

Recommended Layout

k8s/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   └── configmap.yaml
├── overlays/
│   ├── dev/
│   │   ├── kustomization.yaml
│   │   ├── patch-replicas.yaml
│   │   └── config-values.env
│   ├── staging/
│   │   ├── kustomization.yaml
│   │   ├── patch-replicas.yaml
│   │   └── config-values.env
│   └── prod/
│       ├── kustomization.yaml
│       ├── patch-replicas.yaml
│       ├── patch-resources.yaml
│       └── config-values.env
└── components/
    ├── monitoring/
    │   ├── kustomization.yaml
    │   └── servicemonitor.yaml
    └── ingress/
        ├── kustomization.yaml
        └── ingress.yaml

Multi-Service Structure

k8s/
├── base/
│   ├── kustomization.yaml (references all services)
│   ├── namespace.yaml
│   └── services/
│       ├── api/
│       │   ├── kustomization.yaml
│       │   ├── deployment.yaml
│       │   └── service.yaml
│       └── worker/
│           ├── kustomization.yaml
│           ├── deployment.yaml
│           └── service.yaml
└── overlays/
    ├── dev/
    │   └── kustomization.yaml
    ├── staging/
    │   └── kustomization.yaml
    └── prod/
        └── kustomization.yaml
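A minimal sketch of the multi-service base's kustomization.yaml, aggregating the per-service kustomizations shown in the tree (paths assumed from the layout above):

```yaml
# k8s/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - namespace.yaml
  - services/api     # a directory containing its own kustomization.yaml
  - services/worker
```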

Quick Reference

Common Operations

| Task | Command |
|---|---|
| Build manifests | kustomize build k8s/overlays/prod |
| Preview changes | kubectl diff -k k8s/overlays/prod |
| Apply to cluster | kubectl apply -k k8s/overlays/prod |
| Validate syntax | kustomize build k8s/overlays/prod \| kubectl apply --dry-run=client -f - |
| Update image tag | kustomize edit set image myapp=registry/myapp:v1.2.3 |
| Add a resource | kustomize edit add resource deployment.yaml |
| Add a ConfigMap | kustomize edit add configmap app-config --from-literal=KEY=value |

Multi-Environment Pattern

k8s/
├── base/              # shared configuration
├── overlays/
│   ├── dev/          # dev-specific (low resources, debug enabled)
│   ├── staging/      # staging-specific (medium resources, monitoring)
│   └── prod/         # prod-specific (high resources, HA, security)
└── components/       # optional features (monitoring, ingress, debug tooling)

Generator Quick Start

| Generator type | Use case | Example |
|---|---|---|
| ConfigMap from literals | Simple key/value config | configMapGenerator: - name: app-config literals: - LOG_LEVEL=info |
| ConfigMap from files | Config files | configMapGenerator: - name: app-config files: - application.properties |
| ConfigMap from env file | Environment variables | configMapGenerator: - name: app-config envs: - config.env |
| Secret from literals | Simple secrets | secretGenerator: - name: app-secrets literals: - username=admin |
| Secret from files | Certs/key files | secretGenerator: - name: tls-secrets files: - tls.crt - tls.key type: kubernetes.io/tls |

Transformer Quick Start

| Transformer | Use case | Example |
|---|---|---|
| namespace | Set the namespace on all resources | namespace: production |
| namePrefix | Prefix resource names | namePrefix: myapp- |
| nameSuffix | Suffix resource names | nameSuffix: -v2 |
| commonLabels | Add labels to all resources | commonLabels: app: myapp team: platform |
| commonAnnotations | Add annotations to all resources | commonAnnotations: managed-by: kustomize |
| images | Update container images | images: - name: myapp newTag: v1.2.3 |
| replicas | Set replica counts | replicas: - name: myapp count: 5 |

Workflow

1. Create the Base Configuration

Start with the common resources that apply to every environment:

mkdir -p k8s/base
cd k8s/base
# Create resource files (deployment.yaml, service.yaml, etc.)
# Create a kustomization.yaml that references them
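The kustomization.yaml created here can start as little more than a resource list — a minimal sketch (filenames taken from the comments above):

```yaml
# k8s/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml
```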

2. Build and Validate the Base

kustomize build k8s/base
# or
kubectl kustomize k8s/base

3. Create Environment Overlays

mkdir -p k8s/overlays/dev
cd k8s/overlays/dev
# Create a kustomization.yaml that references the base
# Add patches and customizations

4. Apply to the Cluster

# Preview changes
kubectl diff -k k8s/overlays/dev

# Apply
kubectl apply -k k8s/overlays/dev

# Delete
kubectl delete -k k8s/overlays/dev

5. Iterate and Refactor

  • Extract common patterns into components
  • Use generators for ConfigMaps and Secrets
  • Apply transformers for cross-cutting concerns

Patch Strategies

Strategic Merge Patch (Default)

Strategic merge is the default patch strategy; it uses Kubernetes-aware merge logic.

Strategic Merge Characteristics

  • Merges maps/objects by key
  • Merges lists by merge key where the API defines one (e.g., containers by name); otherwise replaces arrays
  • Supports the $patch: delete and $patch: replace directives
  • More intuitive for Kubernetes resources

Strategic Merge Use Cases

  • Simple field updates (replicas, image, environment variables)
  • Adding or replacing containers
  • Updating resource limits
  • The most common cases
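The $patch directives listed above do not appear in the later examples, so here is a minimal sketch of one — a strategic merge patch that deletes a container by its merge key (the container name is assumed):

```yaml
# patch-remove-sidecar.yaml — strategic merge patch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        # the element whose merge key (name) matches is removed, not merged
        - name: log-shipper
          $patch: delete
```

$patch: replace works analogously: the matched element is swapped for the patch content instead of being merged.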

JSON Patch (RFC 6902)

JSON patches provide precise array and field operations.

JSON Patch Characteristics

  • Operations: add, remove, replace, move, copy, test
  • Uses JSON Pointer paths (e.g., /spec/template/spec/containers/0/image)
  • Targets array elements precisely
  • More verbose, but more exact

JSON Patch Use Cases

  • Precise array element manipulation
  • Conditional patches (the test operation)
  • Complex nested updates
  • When strategic merge is too coarse

Component Patterns

Components are reusable customization bundles that can be selectively included in overlays. They are a good fit for optional features that only some environments need.

When to Use Components

| Pattern | Use a component for | Use a patch for |
|---|---|---|
| Optional features | Monitoring, ingress, debug tooling | Required modifications |
| Cross-environment reuse | Features enabled in staging + prod | Environment-specific changes |
| Clean composition | Self-contained feature sets | Tweaks to existing resources |
| Conditional inclusion | Enabled per environment | Always applied in the overlay |

Component Structure

# k8s/components/monitoring/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

resources:
  - servicemonitor.yaml
  - prometheusrule.yaml

patches:
  - path: patch-metrics.yaml
    target:
      kind: Deployment

labels:
  - pairs:
      prometheus.io/scrape: "true"

Including Components

# k8s/overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

components:
  - ../../components/monitoring
  - ../../components/ingress
  - ../../components/security-hardening

Common Component Patterns

  • Monitoring component: ServiceMonitor, PrometheusRules, metrics-port patch
  • Ingress component: Ingress resource, TLS configuration, service annotations
  • Debug tooling component: debug environment variables, profiling ports, verbose logging
  • Security hardening component: SecurityContext, NetworkPolicy, PodSecurityPolicy
  • Backup component: backup CronJob, PersistentVolume, ServiceAccount with backup permissions

Component vs. Overlay Decision Tree

Need an optional feature included? → component
Need environment-specific values? → overlay
Need both? → component for the feature + overlay patches for the values

Best Practices

Directory Organization

  1. Keep bases generic: avoid environment-specific values in the base
  2. One concern per patch: create separate patch files for distinct modifications
  3. Use descriptive names: patch-replicas.yaml and patch-monitoring.yaml, not patch1.yaml
  4. Group related resources: keep a service's Deployment, Service, and config together
  5. Use components for features: extract optional features (monitoring, ingress) into components

Patch Hygiene

  1. Minimize patch size: include only the fields being changed
  2. Document complex patches: add comments explaining why a patch is needed
  3. Prefer strategic merge: use JSON patches only when necessary
  4. Validate patches: run kustomize build to verify the output
  5. Test compositions: make sure patches combine correctly

Resource Management

  1. Use generators for dynamic data: ConfigMaps and Secrets should come from generators
  2. Keep name suffix hashing on: content hashes on ConfigMap/Secret names enable immutability
  3. Reference by resource: use nameReference configuration for automatic name updates
  4. Common labels: apply consistent labels across all resources
  5. Namespace management: set the namespace in the kustomization, not on individual resources
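Point 3's nameReference (referenced from kustomization.yaml via a configurations: entry) teaches kustomize about name fields it does not already know; built-in kinds are handled automatically, so this matters mainly for CRDs. A sketch, assuming a hypothetical MyApp CRD that points at a ConfigMap via spec.configRef.name:

```yaml
# kustomizeconfig.yaml — listed under `configurations:` in kustomization.yaml
nameReference:
  - kind: ConfigMap
    version: v1
    fieldSpecs:
      - kind: MyApp
        path: spec/configRef/name
```

With this in place, hash-suffixed or prefixed ConfigMap names propagate into MyApp resources just as they do for Deployments.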

Version Control

  1. Commit rendered manifests: consider committing kustomize build output for GitOps
  2. Document dependencies: note any external resources or ordering requirements
  3. Pin versions: reference remote bases by version/tag
  4. Review rendered output: always inspect the final manifests before applying

Security

  1. Never commit secrets: use sealed-secrets, external-secrets, or secret generators with gitignored files
  2. Use RBAC: restrict who can modify bases vs. overlays
  3. Validate resources: enforce policy with kustomize plugins or OPA
  4. Isolate sensitive overlays: consider a separate repository for production configuration

Examples

Basic Base Kustomization

k8s/base/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: myapp

commonLabels:
  app: myapp
  managed-by: kustomize

commonAnnotations:
  contact: team@example.com

resources:
  - deployment.yaml
  - service.yaml
  - serviceaccount.yaml

configMapGenerator:
  - name: app-config
    literals:
      - LOG_LEVEL=info
      - MAX_CONNECTIONS=100

images:
  - name: myapp
    newName: registry.example.com/myapp
    newTag: latest

k8s/base/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      serviceAccountName: myapp
      containers:
        - name: app
          image: myapp
          ports:
            - containerPort: 8080
              name: http
          envFrom:
            - configMapRef:
                name: app-config
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5

k8s/base/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: myapp

k8s/base/serviceaccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp

Development Overlay

k8s/overlays/dev/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: myapp-dev

namePrefix: dev-
nameSuffix: -v1

commonLabels:
  environment: dev
  version: v1

resources:
  - ../../base

patches:
  - path: patch-replicas.yaml
    target:
      kind: Deployment
      name: myapp

configMapGenerator:
  - name: app-config
    behavior: merge
    literals:
      - LOG_LEVEL=debug
      - ENABLE_DEBUG_ROUTES=true
    envs:
      - config-values.env

images:
  - name: myapp
    newTag: dev-latest

replicas:
  - name: myapp
    count: 1

k8s/overlays/dev/patch-replicas.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: app
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"

k8s/overlays/dev/config-values.env

DATABASE_URL=postgres://localhost:5432/myapp_dev
REDIS_URL=redis://localhost:6379
ENABLE_PROFILING=true

Staging Overlay

k8s/overlays/staging/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: myapp-staging

commonLabels:
  environment: staging

resources:
  - ../../base

patches:
  - path: patch-replicas.yaml
  - path: patch-tolerations.yaml

configMapGenerator:
  - name: app-config
    behavior: merge
    envs:
      - config-values.env

secretGenerator:
  - name: app-secrets
    envs:
      - secrets.env # gitignored file

images:
  - name: myapp
    newTag: v1.2.3-rc1

replicas:
  - name: myapp
    count: 2

components:
  - ../../components/monitoring

k8s/overlays/staging/patch-replicas.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: app
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"

k8s/overlays/staging/patch-tolerations.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      tolerations:
        - key: "workload"
          operator: "Equal"
          value: "staging"
          effect: "NoSchedule"
      nodeSelector:
        environment: staging

Production Overlay

k8s/overlays/prod/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: myapp-prod

commonLabels:
  environment: prod
  criticality: high

commonAnnotations:
  oncall: sre-team@example.com
  runbook: https://wiki.example.com/myapp-runbook

resources:
  - ../../base
  - poddisruptionbudget.yaml
  - horizontalpodautoscaler.yaml
  - networkpolicy.yaml

patches:
  - path: patch-replicas.yaml
  - path: patch-resources.yaml
  - path: patch-affinity.yaml
  - path: patch-pdb.yaml
  - path: patch-security.yaml

configMapGenerator:
  - name: app-config
    behavior: merge
    envs:
      - config-values.env

secretGenerator:
  - name: app-secrets
    envs:
      - secrets.env # managed by external secret tooling

images:
  - name: myapp
    newTag: v1.2.3
    digest: sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

replicas:
  - name: myapp
    count: 5

components:
  - ../../components/monitoring
  - ../../components/ingress

k8s/overlays/prod/patch-resources.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 5
  template:
    spec:
      containers:
        - name: app
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1000m"

k8s/overlays/prod/patch-affinity.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: myapp
              topologyKey: kubernetes.io/hostname
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: workload-type
                    operator: In
                    values:
                      - production

k8s/overlays/prod/patch-security.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: app
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: cache
              mountPath: /app/cache
      volumes:
        - name: tmp
          emptyDir: {}
        - name: cache
          emptyDir: {}

k8s/overlays/prod/poddisruptionbudget.yaml

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp

k8s/overlays/prod/horizontalpodautoscaler.yaml

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 30
        - type: Pods
          value: 2
          periodSeconds: 30
      selectPolicy: Max

k8s/overlays/prod/networkpolicy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-netpol
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: database
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53

JSON 6902 Patch Example

Note: the standalone patchesJson6902 field is deprecated in Kustomize v5 in favor of patches with a target; the syntax below still works but emits a deprecation warning.

k8s/overlays/prod/kustomization.yaml (excerpt)

patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: myapp
    path: json-patch-containers.yaml

k8s/overlays/prod/json-patch-containers.yaml

# Add a sidecar container
- op: add
  path: /spec/template/spec/containers/-
  value:
    name: log-shipper
    image: fluent/fluent-bit:2.0
    volumeMounts:
      - name: logs
        mountPath: /var/log/app
        readOnly: true

# Replace the main (first) container's image
- op: replace
  path: /spec/template/spec/containers/0/image
  value: registry.example.com/myapp:v1.2.3

# Add an environment variable to a specific container
- op: add
  path: /spec/template/spec/containers/0/env/-
  value:
    name: FEATURE_FLAG_X
    value: "enabled"

# Remove a specific environment variable (by index)
- op: remove
  path: /spec/template/spec/containers/0/env/3

# Test that a value exists before patching (conditional patch)
- op: test
  path: /spec/replicas
  value: 1
- op: replace
  path: /spec/replicas
  value: 5

# Add a volume
- op: add
  path: /spec/template/spec/volumes/-
  value:
    name: logs
    emptyDir: {}

ConfigMap Generator Examples

Literals

configMapGenerator:
  - name: app-config
    literals:
      - LOG_LEVEL=info
      - MAX_RETRIES=3
      - TIMEOUT=30s

ConfigMap from Files

configMapGenerator:
  - name: app-config
    files:
      - application.properties
      - config.json
      - tls.crt=certs/server.crt

ConfigMap from an Env File

configMapGenerator:
  - name: app-config
    envs:
      - config.env

Merging ConfigMaps in an Overlay

configMapGenerator:
  - name: app-config
    behavior: merge # options: create (default), replace, merge
    literals:
      - LOG_LEVEL=debug # overrides the base value

Disabling the Name Suffix Hash

configMapGenerator:
  - name: app-config
    options:
      disableNameSuffixHash: true
    literals:
      - KEY=value

Secret Generator Examples

Secret from Literals

secretGenerator:
  - name: app-secrets
    literals:
      - username=admin
      - password=changeme

Secret from Files

secretGenerator:
  - name: tls-secrets
    files:
      - tls.crt
      - tls.key
    type: kubernetes.io/tls

Secret from an Env File (Gitignored)

secretGenerator:
  - name: app-secrets
    envs:
      - secrets.env # file is not committed to git

Component Example: Monitoring

k8s/components/monitoring/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

resources:
  - servicemonitor.yaml
  - prometheusrule.yaml

patches:
  - path: patch-metrics.yaml
    target:
      kind: Deployment

labels:
  - pairs:
      prometheus.io/scrape: "true"
    includeSelectors: false

k8s/components/monitoring/servicemonitor.yaml

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics

k8s/components/monitoring/prometheusrule.yaml

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: myapp-alerts
spec:
  groups:
    - name: myapp
      interval: 30s
      rules:
        - alert: HighErrorRate
          expr: |
            rate(http_requests_total{status=~"5.."}[5m]) > 0.05
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: High error rate detected
            description: Error rate is {{ $value }} req/s

k8s/components/monitoring/patch-metrics.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: not-important
spec:
  template:
    spec:
      containers:
        - name: app
          ports:
            - containerPort: 9090
              name: metrics
              protocol: TCP

Component Example: Ingress

k8s/components/ingress/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

resources:
  - ingress.yaml

patches:
  - path: patch-service.yaml
    target:
      kind: Service

k8s/components/ingress/ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rate-limit: "100"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  name: http

k8s/components/ingress/patch-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: not-important
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http

Transformer Examples

Using the Built-in Transformers

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Prefix all resource names
namePrefix: myapp-

# Suffix all resource names
nameSuffix: -v2

# Set the namespace on all resources
namespace: production

# Add labels to all resources
commonLabels:
  app: myapp
  team: platform
  environment: prod

# Add annotations to all resources
commonAnnotations:
  managed-by: kustomize
  contact: team@example.com

# Transform images
images:
  - name: nginx
    newName: my-registry/nginx
    newTag: 1.21.0
  - name: redis
    newName: my-registry/redis
    digest: sha256:a4d4e6f8c9b0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b7c8d9e0f1a2b3c4d5e6

# Set replica counts for deployments
replicas:
  - name: myapp
    count: 3
  - name: worker
    count: 2

# Add labels to selected resource fields
labels:
  - pairs:
      version: v2
    includeSelectors: true
    includeTemplates: true

# Propagate one resource's name into other resources
replacements:
  - source:
      kind: ConfigMap
      name: app-config
      fieldPath: metadata.name
    targets:
      - select:
          kind: Deployment
        fieldPaths:
          - spec.template.spec.volumes.[name=config].configMap.name

Advanced: Dynamic References with Replacements

Base Configuration with Replacements

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml

replacements:
  # Replace the service name in the Ingress with the actual Service name
  - source:
      kind: Service
      name: myapp
      fieldPath: metadata.name
    targets:
      - select:
          kind: Ingress
        fieldPaths:
          - spec.rules.[host=myapp.example.com].http.paths.[path=/].backend.service.name

  # Propagate the ConfigMap name (including its hash suffix) into the Deployment
  - source:
      kind: ConfigMap
      name: app-config
      fieldPath: metadata.name
    targets:
      - select:
          kind: Deployment
        fieldPaths:
          - spec.template.spec.volumes.[name=config].configMap.name

Using Remote Bases

Remote Base References

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  # GitHub repository
  - https://github.com/org/repo/k8s/base?ref=v1.0.0

  # A specific path within a repository
  - github.com/org/repo/manifests?ref=main

patches:
  - path: local-patch.yaml

Multi-Environment with Shared Components

Base Configuration (shared)

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml

Dev Overlay (with shared components)

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: dev

resources:
  - ../../base

components:
  - ../../components/debug-tools

replicas:
  - name: myapp
    count: 1

Prod Overlay (with shared components)

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: prod

resources:
  - ../../base

components:
  - ../../components/monitoring
  - ../../components/ingress

replicas:
  - name: myapp
    count: 5

k8s/components/debug-tools/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

patches:
  - path: patch-debug.yaml

configMapGenerator:
  - name: app-config
    behavior: merge
    literals:
      - DEBUG=true
      - ENABLE_PPROF=true

Common Tasks

Validating a Kustomization

# Build and validate
kustomize build k8s/overlays/prod

# Using kubectl's embedded kustomize
kubectl kustomize k8s/overlays/prod

# Validate without applying to the cluster
kubectl apply -k k8s/overlays/prod --dry-run=server

# Check the diff before applying
kubectl diff -k k8s/overlays/prod

Extracting Common Configuration

When you notice duplication across overlays:

  1. Identify the common patch or resource
  2. Move it into the base, or create a component
  3. Reference it from the overlays
# Before: the same patch in dev, staging, and prod
# After: moved into a component
mkdir -p k8s/components/common-settings
# Create the component kustomization
# Reference it from each overlay
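A minimal sketch of what that component might contain (the patch filename is assumed):

```yaml
# k8s/components/common-settings/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

patches:
  - path: patch-common.yaml
    target:
      kind: Deployment
```

Each overlay then lists ../../components/common-settings under components: instead of carrying its own copy of the patch.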

Debugging Name Transformations

# Inspect the final resource names after transformation
kustomize build k8s/overlays/prod | grep "^  name:"

# Check ConfigMap/Secret names with their hashes
kustomize build k8s/overlays/prod | grep -A 2 "kind: ConfigMap"

Converting Existing Manifests

# Generate a kustomization.yaml from existing resources
cd k8s/base
kustomize create --autodetect

# Or specify resources manually
kustomize create --resources deployment.yaml,service.yaml

Updating an Image Tag

# Update the image tag in kustomization.yaml
cd k8s/overlays/prod
kustomize edit set image myapp=registry.example.com/myapp:v1.3.0

# Then render and apply the overlay
kubectl apply -k .

Adding a Resource

cd k8s/base
kustomize edit add resource new-deployment.yaml

Adding a ConfigMap Generator

cd k8s/overlays/dev
kustomize edit add configmap app-config --from-literal=KEY=value

Integration Patterns

GitOps with ArgoCD

argocd-application.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/repo
    targetRevision: main
    path: k8s/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp-prod
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

GitOps with Flux

kustomization.yaml

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: myapp-prod
  namespace: flux-system
spec:
  interval: 10m
  path: ./k8s/overlays/prod
  prune: true
  sourceRef:
    kind: GitRepository
    name: myapp
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: myapp
      namespace: myapp-prod

CI/CD Pipeline

#!/bin/bash
# build-and-validate.sh

set -euo pipefail

OVERLAY=${1:-dev}
OUTPUT_DIR="manifests/${OVERLAY}"
mkdir -p "${OUTPUT_DIR}"

# Build the manifests
kustomize build "k8s/overlays/${OVERLAY}" > "${OUTPUT_DIR}/all.yaml"

# Validate with kubeval
kubeval --strict "${OUTPUT_DIR}/all.yaml"

# Score with kube-score
kube-score score "${OUTPUT_DIR}/all.yaml"

# Policy checks with OPA/Conftest
conftest test "${OUTPUT_DIR}/all.yaml"

# Commit the rendered manifests for GitOps
git add "${OUTPUT_DIR}/all.yaml"

Helm Integration

# Customize Helm chart output with kustomize (requires building with the --enable-helm flag)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
  - name: postgresql
    repo: https://charts.bitnami.com/bitnami
    version: 12.1.2
    releaseName: myapp-db
    namespace: database
    valuesInline:
      auth:
        username: myapp
        database: myapp_prod

patches:
  - path: patch-postgresql.yaml
    target:
      kind: StatefulSet
      name: myapp-db-postgresql

Troubleshooting

Common Errors

Error: accumulating resources

accumulating resources: accumulation err='accumulating resources from '../../base':
evalsymlink failure on '/path/to/base' : lstat /path/to/base: no such file or directory'
  • Fix: check that the base path in the overlay's kustomization.yaml is correct
  • Paths are relative to the kustomization.yaml's location

Error: no matches for OriginalId

no matches for OriginalId ~G_v1_ConfigMap|~X|app-config;
failed to find unique target for patch
  • Fix: make sure the resource being patched exists in the base
  • Check that the resource name and kind match exactly

Error: patch conflicts

conflict: multiple matches for ...
  • Fix: make the patch target more specific using metadata
  • Use a JSON patch for precise targeting
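One way to disambiguate, sketched here with assumed names: fully qualify the patch's target, optionally narrowing it with a label selector:

```yaml
patches:
  - path: patch-resources.yaml
    target:
      group: apps
      version: v1
      kind: Deployment
      name: myapp
      namespace: myapp-prod
      labelSelector: "app=myapp"
```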

Error: cyclic dependency

base 'overlays/dev' refers to base '../../base' which refers back to 'overlays/dev'
  • Fix: check for circular references in your bases
  • A base should never reference an overlay

Debugging Techniques

# Relax load restrictions / enable alpha plugins when a build fails due to file-loading limits
kustomize build k8s/overlays/prod --enable-alpha-plugins --load-restrictor=LoadRestrictionsNone

# Compare resources before and after transformation
kustomize build k8s/base > base.yaml
kustomize build k8s/overlays/prod > overlay.yaml
diff base.yaml overlay.yaml

# Validate specific resources
kustomize build k8s/overlays/prod | kubectl apply --dry-run=client -f -

# Check for YAML syntax errors
kustomize build k8s/overlays/prod | yamllint -

# Inspect ConfigMap hash generation
kustomize build k8s/overlays/prod | grep -A 10 "kind: ConfigMap"

Performance Optimization

Large Kustomizations

  1. Modularize with components: break large kustomizations into components
  2. Avoid deep overlay chains: keep the hierarchy shallow (base -> overlay, not base -> overlay1 -> overlay2)
  3. Cache remote bases: keep local copies of frequently referenced remote bases
  4. Parallelize builds: build multiple overlays concurrently in CI
  5. Limit scope: don't run resources through kustomize if they need no customization

Build-Time Optimization

# Use --load-restrictor=LoadRestrictionsNone to allow loading files outside the kustomization root
kustomize build --load-restrictor=LoadRestrictionsNone k8s/overlays/prod

# Build multiple environments in parallel
parallel kustomize build k8s/overlays/{} ::: dev staging prod

Security Considerations

  1. Secret management: never commit secrets to git

    • Use an external secrets operator (sealed-secrets, external-secrets-operator)
    • Use secret generators with gitignored files
    • Consider SOPS for encrypted secrets in git
  2. Image security: pin images by digest in production

    images:
      - name: myapp
        digest: sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    
  3. RBAC: separate access to bases vs. overlays

    • Bases: restricted to the platform team
    • Overlays: application teams can customize
  4. Validation: use admission controllers

    • OPA Gatekeeper
    • Kyverno policies
    • Custom admission webhooks
  5. Auditing: track kustomization changes

    • Git commit history
    • CI/CD logs
    • Kubernetes audit logs

Summary

Kustomize offers a declarative, Kubernetes-native approach to configuration management:

  • Use bases for shared, environment-agnostic configuration
  • Use overlays for environment-specific customization
  • Use components for optional, reusable features
  • Use generators for ConfigMaps and Secrets with content hashing
  • Use transformers for cross-cutting modifications
  • Prefer strategic merge for simplicity, JSON patches for precision
  • Keep the structure shallow to avoid complexity
  • Validate early with kustomize build and kubectl diff
  • Secure secrets with external tooling; never commit sensitive data
  • Document patterns so teammates understand the customization strategy

Kustomize integrates seamlessly with kubectl, GitOps tools (ArgoCD, Flux), and CI/CD pipelines, making it an excellent choice for Kubernetes configuration management.