LoggingBestPractices logging-best-practices

Best practices for structured logging, covering log levels, JSON formatting, contextual logging, PII handling, and centralized logging, aimed at improving application observability and debuggability.

Backend Development · Updated 3/4/2026

Logging Best Practices

Overview

A comprehensive guide to implementing structured, secure, and performant logging in applications. Covers log levels, structured log formats, contextual information, PII protection, and centralized logging systems.

When to Use

  • Establishing application logging infrastructure
  • Implementing structured logging
  • Configuring log levels per environment
  • Managing sensitive data in logs
  • Setting up centralized logging
  • Implementing distributed tracing
  • Debugging production issues
  • Complying with logging regulations

Instructions

1. Log Levels

Standard Log Levels

// logger.ts
enum LogLevel {
  DEBUG = 0,   // Detailed information for debugging
  INFO = 1,    // General informational messages
  WARN = 2,    // Warnings about potentially harmful situations
  ERROR = 3,   // Errors; the application can continue
  FATAL = 4    // Critical errors; the application must stop
}

class Logger {
  constructor(private minLevel: LogLevel = LogLevel.INFO) {}

  debug(message: string, context?: object) {
    if (this.minLevel <= LogLevel.DEBUG) {
      this.log(LogLevel.DEBUG, message, context);
    }
  }

  info(message: string, context?: object) {
    if (this.minLevel <= LogLevel.INFO) {
      this.log(LogLevel.INFO, message, context);
    }
  }

  warn(message: string, context?: object) {
    if (this.minLevel <= LogLevel.WARN) {
      this.log(LogLevel.WARN, message, context);
    }
  }

  error(message: string, error?: Error, context?: object) {
    if (this.minLevel <= LogLevel.ERROR) {
      this.log(LogLevel.ERROR, message, {
        ...context,
        error: {
          message: error?.message,
          stack: error?.stack,
          name: error?.name
        }
      });
    }
  }

  fatal(message: string, error?: Error, context?: object) {
    this.log(LogLevel.FATAL, message, {
      ...context,
      error: {
        message: error?.message,
        stack: error?.stack,
        name: error?.name
      }
    });
    process.exit(1);
  }

  private log(level: LogLevel, message: string, context?: object) {
    const logEntry = {
      timestamp: new Date().toISOString(),
      level: LogLevel[level],
      message,
      ...context
    };
    console.log(JSON.stringify(logEntry));
  }
}

// Usage
const logger = new Logger(
  process.env.NODE_ENV === 'production' ? LogLevel.INFO : LogLevel.DEBUG
);

logger.debug('Processing request', { userId: '123', requestId: 'abc' });
logger.info('User logged in', { userId: '123' });
logger.warn('Approaching rate limit', { userId: '123', count: 95 });
logger.error('Database connection failed', dbError, { query: 'SELECT ...' });
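The constructor call above hard-codes the level per environment; in practice the level usually arrives as a string via an environment variable. A minimal sketch of mapping that string to the enum (the `parseLogLevel` helper is illustrative, not part of any library; it restates the enum so the snippet is self-contained):

```typescript
// Hypothetical helper: map a LOG_LEVEL env string to the enum above
enum LogLevel { DEBUG = 0, INFO = 1, WARN = 2, ERROR = 3, FATAL = 4 }

function parseLogLevel(value: string | undefined, fallback: LogLevel = LogLevel.INFO): LogLevel {
  switch ((value ?? '').trim().toUpperCase()) {
    case 'DEBUG': return LogLevel.DEBUG;
    case 'INFO':  return LogLevel.INFO;
    case 'WARN':  return LogLevel.WARN;
    case 'ERROR': return LogLevel.ERROR;
    case 'FATAL': return LogLevel.FATAL;
    default:      return fallback;  // unknown values fall back safely
  }
}

console.log(parseLogLevel('debug'));               // 0 (DEBUG)
console.log(parseLogLevel(process.env.LOG_LEVEL)); // defaults to 1 (INFO) when unset
```

Falling back to a safe default keeps a typo in the deployment config from silently enabling DEBUG output in production.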

2. Structured Logging (JSON)

Node.js with Winston

// winston-logger.ts
import winston from 'winston';

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  defaultMeta: {
    service: 'user-service',
    environment: process.env.NODE_ENV
  },
  transports: [
    // Write to the console
    new winston.transports.Console({
      format: winston.format.combine(
        winston.format.colorize(),
        winston.format.simple()
      )
    }),
    // Write to files
    new winston.transports.File({
      filename: 'logs/error.log',
      level: 'error',
      maxsize: 5242880, // 5MB
      maxFiles: 5
    }),
    new winston.transports.File({
      filename: 'logs/combined.log',
      maxsize: 5242880,
      maxFiles: 5
    })
  ]
});

// Usage
logger.info('User created', {
  userId: user.id,
  email: user.email,
  requestId: req.id
});

logger.error('Payment processing failed', {
  error: error.message,
  stack: error.stack,
  orderId: order.id,
  amount: order.total,
  userId: user.id
});

Python with structlog

# logger.py
import structlog
import logging

# Configure structlog
structlog.configure(
    processors=[
        structlog.stdlib.filter_by_level,
        structlog.stdlib.add_logger_name,
        structlog.stdlib.add_log_level,
        structlog.stdlib.PositionalArgumentsFormatter(),
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.StackInfoRenderer(),
        structlog.processors.format_exc_info,
        structlog.processors.UnicodeDecoder(),
        structlog.processors.JSONRenderer()
    ],
    context_class=dict,
    logger_factory=structlog.stdlib.LoggerFactory(),
    cache_logger_on_first_use=True,
)

logger = structlog.get_logger()

# Usage
logger.info("user_created",
    user_id=user.id,
    email=user.email,
    request_id=request.id
)

logger.error("payment_failed",
    error=str(error),
    order_id=order.id,
    amount=order.total,
    user_id=user.id
)

Go with zap

// logger.go
package main

import (
    "go.uber.org/zap"
)

func main() {
    // Production config (JSON output)
    logger, _ := zap.NewProduction()
    defer logger.Sync()

    // Development config (human-readable)
    // logger, _ := zap.NewDevelopment()

    logger.Info("User created",
        zap.String("userId", user.ID),
        zap.String("email", user.Email),
        zap.String("requestId", req.ID),
    )

    logger.Error("Payment processing failed",
        zap.Error(err),
        zap.String("orderId", order.ID),
        zap.Float64("amount", order.Total),
        zap.String("userId", user.ID),
    )

    // Sugared logger for less structured logging
    sugar := logger.Sugar()
    sugar.Infow("User login",
        "userId", user.ID,
        "ip", req.IP,
    )
}

3. Contextual Logging

Request Context Middleware

// request-logger.ts
import { v4 as uuidv4 } from 'uuid';
import { AsyncLocalStorage } from 'async_hooks';

const asyncLocalStorage = new AsyncLocalStorage();

// Middleware that attaches request context
export function requestLogger(req, res, next) {
  const requestId = req.headers['x-request-id'] || uuidv4();
  const context = {
    requestId,
    method: req.method,
    path: req.path,
    ip: req.ip,
    userAgent: req.headers['user-agent'],
    userId: req.user?.id
  };

  asyncLocalStorage.run(context, () => {
    req.startTime = Date.now();
    logger.info('Request started', context);

    // Log the response when it finishes
    res.on('finish', () => {
      logger.info('Request completed', {
        ...context,
        statusCode: res.statusCode,
        duration: Date.now() - req.startTime
      });
    });

    next();
  });
}

// Logger that includes the stored context
export function getLogger() {
  const context = asyncLocalStorage.getStore();
  return {
    info: (message: string, meta?: object) =>
      logger.info(message, { ...context, ...meta }),
    error: (message: string, error: Error, meta?: object) =>
      logger.error(message, { ...context, error, ...meta }),
    warn: (message: string, meta?: object) =>
      logger.warn(message, { ...context, ...meta }),
    debug: (message: string, meta?: object) =>
      logger.debug(message, { ...context, ...meta })
  };
}

// Usage in a route handler
app.get('/api/users/:id', async (req, res) => {
  const log = getLogger();

  log.info('Fetching user', { userId: req.params.id });

  try {
    const user = await userService.findById(req.params.id);
    log.info('User found', { userId: user.id });
    res.json(user);
  } catch (error) {
    log.error('Failed to fetch user', error, { userId: req.params.id });
    res.status(500).json({ error: 'Internal server error' });
  }
});

Correlation IDs

// correlation-id.ts
import { AsyncLocalStorage } from 'async_hooks';
import { v4 as uuidv4 } from 'uuid';

export class CorrelationIdManager {
  private static storage = new AsyncLocalStorage<string>();

  static run<T>(correlationId: string, callback: () => T): T {
    return this.storage.run(correlationId, callback);
  }

  static get(): string | undefined {
    return this.storage.getStore();
  }
}

// Middleware
app.use((req, res, next) => {
  const correlationId = req.headers['x-correlation-id'] || uuidv4();
  res.setHeader('x-correlation-id', correlationId);

  CorrelationIdManager.run(correlationId, () => {
    next();
  });
});

// Correlation-aware logger
const enhancedLogger = {
  info: (message: string, meta?: object) =>
    logger.info(message, {
      correlationId: CorrelationIdManager.get(),
      ...meta
    })
};
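Because AsyncLocalStorage ships with the Node.js standard library, the propagation mechanism above can be demonstrated without any framework. A minimal self-contained sketch (the `formatLogLine` helper is illustrative):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

const store = new AsyncLocalStorage<string>();

// Illustrative helper: a log line picks up the ambient correlation ID
// without the caller having to pass it through every function signature
function formatLogLine(message: string): string {
  return JSON.stringify({ correlationId: store.getStore() ?? null, message });
}

store.run('req-42', () => {
  console.log(formatLogLine('inside request scope'));
  // {"correlationId":"req-42","message":"inside request scope"}
});

console.log(formatLogLine('outside'));
// {"correlationId":null,"message":"outside"}
```

Every call made (synchronously or asynchronously) inside the `run` callback sees the same store value, which is exactly why the middleware pattern above works across awaited database calls.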

4. PII and Sensitive Data Handling

Data Sanitization

// sanitizer.ts
const SENSITIVE_FIELDS = [
  'password',
  'token',
  'apiKey',
  'ssn',
  'creditCard',
  'email',  // depending on regulations
  'phone'   // depending on regulations
];

function sanitize(obj: any): any {
  if (typeof obj !== 'object' || obj === null) {
    return obj;
  }

  if (Array.isArray(obj)) {
    return obj.map(sanitize);
  }

  const sanitized: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    if (SENSITIVE_FIELDS.some(field =>
      key.toLowerCase().includes(field.toLowerCase())
    )) {
      sanitized[key] = '[REDACTED]';
    } else if (typeof value === 'object') {
      sanitized[key] = sanitize(value);
    } else {
      sanitized[key] = value;
    }
  }
  return sanitized;
}

// Usage
logger.info('User data', sanitize({
  userId: '123',
  email: 'user@example.com',  // will be redacted
  password: 'secret123',       // will be redacted
  name: 'John Doe'             // will be logged
}));

// Output:
// {
//   "userId": "123",
//   "email": "[REDACTED]",
//   "password": "[REDACTED]",
//   "name": "John Doe"
// }

Email/PII Masking

// masking.ts
function maskEmail(email: string): string {
  const [local, domain] = email.split('@');
  if (local.length <= 2) {
    return `${local[0]}***@${domain}`;  // avoid negative repeat counts on short locals
  }
  const maskedLocal = local[0] + '*'.repeat(local.length - 2) + local[local.length - 1];
  return `${maskedLocal}@${domain}`;
}

function maskPhone(phone: string): string {
  return phone.replace(/\d(?=\d{4})/g, '*');
}

function maskCreditCard(cc: string): string {
  return cc.replace(/\d(?=\d{4})/g, '*');
}

// Usage
logger.info('User registered', {
  userId: user.id,
  email: maskEmail(user.email),           // u***r@example.com
  phone: maskPhone(user.phone),            // ******1234
  creditCard: maskCreditCard(user.card)    // ************1234
});

5. Performance Logging

// performance-logger.ts
class PerformanceLogger {
  private timers = new Map<string, number>();

  start(operation: string) {
    this.timers.set(operation, Date.now());
  }

  end(operation: string, metadata?: object) {
    const startTime = this.timers.get(operation);
    if (!startTime) return;

    const duration = Date.now() - startTime;
    this.timers.delete(operation);

    logger.info(`Performance: ${operation}`, {
      operation,
      durationMs: duration,
      ...metadata
    });

    // Warn if the operation was slow
    if (duration > 1000) {
      logger.warn(`Slow operation: ${operation}`, {
        operation,
        durationMs: duration,
        thresholdMs: 1000,
        ...metadata
      });
    }
  }

  async measure<T>(operation: string, fn: () => Promise<T>, metadata?: object): Promise<T> {
    this.start(operation);
    try {
      return await fn();
    } finally {
      this.end(operation, metadata);
    }
  }
}

// Usage
const perfLogger = new PerformanceLogger();

// Manual timing
perfLogger.start('database-query');
const users = await db.query('SELECT * FROM users');
perfLogger.end('database-query', { count: users.length });

// Automatic timing
const result = await perfLogger.measure(
  'complex-operation',
  async () => await processData(),
  { userId: '123' }
);
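Date.now() only has millisecond resolution, which is too coarse for fast operations. On Node.js, process.hrtime.bigint() provides a monotonic nanosecond clock. A minimal sketch (the `timeMs` helper is my own, not part of the PerformanceLogger above):

```typescript
// Hypothetical helper: time a synchronous function with a monotonic ns clock
function timeMs(fn: () => void): number {
  const start = process.hrtime.bigint();
  fn();
  const elapsedNs = process.hrtime.bigint() - start;
  return Number(elapsedNs) / 1e6;  // convert nanoseconds to fractional milliseconds
}

const durationMs = timeMs(() => {
  for (let i = 0; i < 1_000_000; i++) {}  // busy work
});
console.log(`loop took ${durationMs.toFixed(3)} ms`);
```

A monotonic clock also avoids a subtle bug with Date.now(): system clock adjustments (NTP corrections) can make wall-clock-based durations negative.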

6. Centralized Logging

ELK Stack (Elasticsearch, Logstash, Kibana)

# docker-compose.yml
version: '3'
services:
  elasticsearch:
    image: elasticsearch:8.0.0
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"

  logstash:
    image: logstash:8.0.0
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    ports:
      - "5000:5000"
    depends_on:
      - elasticsearch

  kibana:
    image: kibana:8.0.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

# logstash.conf
input {
  tcp {
    port => 5000
    codec => json
  }
}

filter {
  # Parse the timestamp
  date {
    match => ["timestamp", "ISO8601"]
  }

  # Add geolocation when an IP is present
  if [ip] {
    geoip {
      source => "ip"
    }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}

Shipping Logs to ELK

// winston-elk.ts
import winston from 'winston';
import 'winston-logstash';

const logger = winston.createLogger({
  transports: [
    new winston.transports.Logstash({
      port: 5000,
      host: 'logstash',
      node_name: 'user-service',
      max_connect_retries: -1
    })
  ]
});

AWS CloudWatch Logs

// cloudwatch-logger.ts
import winston from 'winston';
import WinstonCloudWatch from 'winston-cloudwatch';

const logger = winston.createLogger({
  transports: [
    new WinstonCloudWatch({
      logGroupName: '/aws/lambda/user-service',
      logStreamName: () => {
        const date = new Date().toISOString().split('T')[0];
        return `${date}-${process.env.LAMBDA_VERSION}`;
      },
      awsRegion: 'us-east-1',
      jsonMessage: true
    })
  ]
});

7. Distributed Tracing

// tracing.ts
// Note: these package names come from pre-1.0 OpenTelemetry JS releases
import * as opentelemetry from '@opentelemetry/api';
import type { Span } from '@opentelemetry/api';
import { NodeTracerProvider } from '@opentelemetry/node';
import { SimpleSpanProcessor } from '@opentelemetry/tracing';
import { JaegerExporter } from '@opentelemetry/exporter-jaeger';

// Set up the tracer provider
const provider = new NodeTracerProvider();
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new JaegerExporter({
      serviceName: 'user-service',
      endpoint: 'http://jaeger:14268/api/traces'
    })
  )
);
provider.register();

const tracer = opentelemetry.trace.getTracer('user-service');

// Usage in the application
app.get('/api/users/:id', async (req, res) => {
  const span = tracer.startSpan('get-user', {
    attributes: {
      'http.method': req.method,
      'http.url': req.url,
      'user.id': req.params.id
    }
  });

  try {
    const user = await fetchUser(req.params.id, span);
    span.setStatus({ code: opentelemetry.SpanStatusCode.OK });
    res.json(user);
  } catch (error) {
    span.setStatus({
      code: opentelemetry.SpanStatusCode.ERROR,
      message: error.message
    });
    res.status(500).json({ error: 'Internal server error' });
  } finally {
    span.end();
  }
});

async function fetchUser(userId: string, parentSpan: Span) {
  const span = tracer.startSpan('database-query', {
    parent: parentSpan,
    attributes: { 'db.statement': 'SELECT * FROM users WHERE id = ?' }
  });

  try {
    const user = await db.query('SELECT * FROM users WHERE id = ?', [userId]);
    return user;
  } finally {
    span.end();
  }
}
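For trace context to survive service boundaries, the IDs travel in the W3C `traceparent` HTTP header, formatted as `version-traceId-spanId-flags`. A parsing sketch based on the W3C Trace Context specification (the function name is my own):

```typescript
// Parse a W3C traceparent header: "00-<32 hex traceId>-<16 hex spanId>-<2 hex flags>"
function parseTraceparent(
  header: string
): { traceId: string; spanId: string; sampled: boolean } | null {
  const match = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header);
  if (!match) return null;
  const [, , traceId, spanId, flags] = match;
  // Bit 0 of the flags byte is the "sampled" flag
  return { traceId, spanId, sampled: (parseInt(flags, 16) & 0x01) === 1 };
}

const parsed = parseTraceparent('00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01');
console.log(parsed?.traceId);  // 4bf92f3577b34da6a3ce929d0e0e4736
console.log(parsed?.sampled);  // true
```

In practice the OpenTelemetry SDK's HTTP instrumentation handles this extraction automatically; the sketch just shows what flows between services and why logging the traceId alongside application logs links logs to traces.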

8. Log Sampling (High-Volume Services)

// log-sampler.ts
class SamplingLogger {
  constructor(
    private logger: Logger,
    private sampleRate: number = 0.1 // 10% sampling
  ) {}

  info(message: string, meta?: object) {
    if (this.shouldSample()) {
      this.logger.info(message, meta);
    }
  }

  // Always log warnings and errors
  warn(message: string, meta?: object) {
    this.logger.warn(message, meta);
  }

  error(message: string, error: Error, meta?: object) {
    this.logger.error(message, error, meta);
  }

  private shouldSample(): boolean {
    return Math.random() < this.sampleRate;
  }

  // Sample by user ID (consistent sampling)
  infoSampled(userId: string, message: string, meta?: object) {
    const hash = this.hashUserId(userId);
    if (hash % 100 < this.sampleRate * 100) {
      this.logger.info(message, { ...meta, sampled: true });
    }
  }

  private hashUserId(userId: string): number {
    let hash = 0;
    for (let i = 0; i < userId.length; i++) {
      hash = ((hash << 5) - hash) + userId.charCodeAt(i);
      hash |= 0;
    }
    return Math.abs(hash);
  }
}
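The point of consistent sampling is that a given user is always either in or out, so a sampled user's entire request flow stays intact in the logs instead of appearing as random fragments. A standalone check of that property, using the same hash as `infoSampled` above:

```typescript
// Same hash function as infoSampled above, extracted for illustration
function hashUserId(userId: string): number {
  let hash = 0;
  for (let i = 0; i < userId.length; i++) {
    hash = ((hash << 5) - hash) + userId.charCodeAt(i);
    hash |= 0;  // keep the intermediate value within 32-bit integer range
  }
  return Math.abs(hash);
}

function isSampled(userId: string, sampleRate: number): boolean {
  return hashUserId(userId) % 100 < sampleRate * 100;
}

// Deterministic: repeated calls for the same user always agree
console.log(isSampled('user-123', 0.1) === isSampled('user-123', 0.1)); // true
```

Contrast this with the Math.random() path in `shouldSample()`, where each log call is an independent coin flip and a single user's trail ends up with arbitrary gaps.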

Best Practices

✅ Do

  • Use structured (JSON) logging in production
  • Include a correlation/request ID in every log entry
  • Log at the appropriate level (don't overuse DEBUG)
  • Redact sensitive data (PII, passwords, tokens)
  • Include context (userId, requestId, etc.)
  • Log errors with full stack traces
  • Use centralized logging in distributed systems
  • Set up log rotation to manage disk space
  • Monitor log volume and cost
  • Use asynchronous logging for performance
  • Include ISO 8601 timestamps
  • Log business events (user actions, transactions)
  • Set up alerts on error patterns
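Several of the points above (ISO 8601 timestamps, context fields, structured JSON) can be checked mechanically in a unit test. A sketch of a small, testable entry builder (the `makeLogEntry` function is illustrative):

```typescript
// Illustrative: build one structured entry that satisfies the checklist above
function makeLogEntry(
  level: string,
  message: string,
  context: Record<string, unknown>
): Record<string, unknown> {
  return {
    timestamp: new Date().toISOString(),  // ISO 8601, UTC
    level,
    message,
    ...context  // requestId, userId, etc.
  };
}

const entry = makeLogEntry('info', 'User logged in', { requestId: 'abc', userId: '123' });
console.log(JSON.stringify(entry));
```

Keeping entry construction in one function like this makes the "do" list enforceable: a single test asserting the timestamp format and required fields covers every log call in the service.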

❌ Don't

  • Don't log passwords, tokens, or other sensitive data
  • Don't use console.log in production
  • Don't default to DEBUG level in production
  • Don't log inside tight loops (use sampling)
  • Don't include PII without anonymization
  • Don't ignore log rotation (the disk will fill up)
  • Don't use synchronous logging on hot paths
  • Don't use multiple log transports unnecessarily
  • Don't forget to include error stack traces
  • Don't log binary data or large objects
  • Don't build messages by string concatenation (use structured fields)
  • Don't log every request in high-volume APIs

Common Patterns

Pattern 1: Error Boundary Logging

class ErrorBoundary {
  static async handle(fn: () => Promise<void>) {
    try {
      await fn();
    } catch (error) {
      logger.error('Unhandled error', error, {
        function: fn.name,
        stack: error.stack
      });
      throw error;
    }
  }
}

Pattern 2: Audit Logging

function auditLog(action: string, resource: string) {
  return function(target: any, propertyKey: string, descriptor: PropertyDescriptor) {
    const originalMethod = descriptor.value;

    descriptor.value = async function(...args: any[]) {
      const result = await originalMethod.apply(this, args);

      logger.info('Audit', {
        action,
        resource,
        userId: this.userId,
        timestamp: new Date().toISOString(),
        result: sanitize(result)
      });

      return result;
    };

    return descriptor;
  };
}

// Usage
class UserService {
  @auditLog('DELETE', 'user')
  async deleteUser(userId: string) {
    // ...
  }
}

Tools and Resources

  • Winston: versatile Node.js logger
  • Pino: fast JSON logger for Node.js
  • structlog: structured logging for Python
  • zap: fast structured logging for Go
  • Logback: Java logging framework
  • ELK Stack: Elasticsearch, Logstash, Kibana
  • Splunk: enterprise log management
  • Datadog: cloud monitoring and logging
  • CloudWatch: AWS log management
  • Jaeger: distributed tracing