Fairlearn Fairness Detector Skill: fairlearn-bias-detector

This is a machine learning model fairness assessment and bias mitigation tool built on the Fairlearn library. It detects prediction bias with respect to sensitive attributes (such as gender or race), evaluates key fairness metrics such as demographic parity and equalized odds, and provides several algorithms for bias mitigation. The skill can also generate compliance reports, helping financial institutions, technology companies, and other organizations ensure their AI models meet ethical and regulatory requirements. Keywords: machine learning fairness, AI bias detection, fairness assessment, model compliance, Fairlearn, algorithmic fairness, bias mitigation, AI ethics, sensitive attribute analysis, compliance reporting.


name: fairlearn-bias-detector
description: A fairness assessment skill for bias detection, mitigation, and compliance reporting using Fairlearn.
allowed-tools:

  • Read
  • Write
  • Bash
  • Glob
  • Grep

fairlearn-bias-detector

Overview

A fairness assessment skill for bias detection, mitigation, and compliance reporting in machine learning models, using Fairlearn.

Capabilities

  • Demographic parity assessment
  • Equalized odds assessment
  • Disparity metric computation
  • Bias mitigation algorithms (pre-processing, in-processing, post-processing)
  • Fairness-constrained optimization
  • Compliance documentation generation
  • Intersectional fairness analysis
  • Fairness threshold optimization
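The first two assessments above can be illustrated without any dependencies. Below is a minimal sketch of the demographic parity difference (the largest gap in positive-prediction rate between any two sensitive groups); the data and function names are illustrative, not part of the skill's API. Fairlearn itself exposes this metric as `fairlearn.metrics.demographic_parity_difference`.

```python
from collections import defaultdict

def selection_rates(y_pred, groups):
    """Fraction of positive predictions per sensitive group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, g in zip(y_pred, groups):
        totals[g] += 1
        positives[g] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(y_pred, groups):
    """Max gap in selection rate between any two groups (0 = perfectly fair)."""
    rates = selection_rates(y_pred, groups)
    return max(rates.values()) - min(rates.values())

# Toy example: group "F" is selected at 0.75, group "M" at 0.25.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]
print(demographic_parity_difference(y_pred, groups))  # → 0.5
```

Equalized odds is assessed the same way, except the rates are computed separately on the true-positive and false-positive subsets, so both error rates must match across groups.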

Target Workflows

  • Model evaluation and validation frameworks
  • Model interpretability and explainability analysis
  • A/B testing frameworks for machine learning models

Tools & Libraries

  • Fairlearn
  • scikit-learn
  • pandas

Input Schema

{
  "type": "object",
  "required": ["modelPath", "dataPath", "sensitiveFeatures"],
  "properties": {
    "modelPath": {
      "type": "string",
      "description": "Path to the trained model"
    },
    "dataPath": {
      "type": "string",
      "description": "Path to the evaluation data"
    },
    "sensitiveFeatures": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Column names of the sensitive attributes"
    },
    "labelColumn": {
      "type": "string",
      "description": "Name of the target/label column"
    },
    "assessmentConfig": {
      "type": "object",
      "properties": {
        "metrics": {
          "type": "array",
          "items": {
            "type": "string",
            "enum": ["demographic_parity", "equalized_odds", "true_positive_rate", "false_positive_rate", "accuracy"]
          }
        },
        "threshold": { "type": "number" }
      }
    },
    "mitigationConfig": {
      "type": "object",
      "properties": {
        "method": {
          "type": "string",
          "enum": ["threshold_optimizer", "exponentiated_gradient", "grid_search", "reductions"]
        },
        "constraint": { "type": "string" },
        "gridSize": { "type": "integer" }
      }
    }
  }
}
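A caller can check a payload against the required fields of this schema before invoking the skill. The sketch below uses only the standard library (a full JSON Schema validator such as the `jsonschema` package would be stricter); the function name and paths are illustrative.

```python
REQUIRED = ("modelPath", "dataPath", "sensitiveFeatures")

def validate_input(payload: dict) -> list:
    """Return a list of violations of the schema's required fields (empty = valid)."""
    errors = [f"missing required field: {k}" for k in REQUIRED if k not in payload]
    sf = payload.get("sensitiveFeatures")
    if sf is not None and (not isinstance(sf, list) or not all(isinstance(s, str) for s in sf)):
        errors.append("sensitiveFeatures must be an array of strings")
    return errors

payload = {
    "modelPath": "models/loan_model.pkl",
    "dataPath": "data/test.csv",
    "sensitiveFeatures": ["gender", "race"],
}
print(validate_input(payload))  # → []
```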

Output Schema

{
  "type": "object",
  "required": ["status", "assessment"],
  "properties": {
    "status": {
      "type": "string",
      "enum": ["success", "error"]
    },
    "assessment": {
      "type": "object",
      "properties": {
        "overallMetrics": { "type": "object" },
        "groupMetrics": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "group": { "type": "string" },
              "count": { "type": "integer" },
              "metrics": { "type": "object" }
            }
          }
        },
        "disparityMetrics": {
          "type": "object",
          "properties": {
            "demographicParityDiff": { "type": "number" },
            "equalizedOddsDiff": { "type": "number" }
          }
        },
        "fairnessScore": { "type": "number" }
      }
    },
    "mitigation": {
      "type": "object",
      "properties": {
        "method": { "type": "string" },
        "improvedModel": { "type": "string" },
        "beforeMetrics": { "type": "object" },
        "afterMetrics": { "type": "object" }
      }
    },
    "complianceReport": {
      "type": "string",
      "description": "Path to the generated compliance report"
    }
  }
}
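For orientation, a success payload matching this schema could be assembled as below. All numbers are made up, and the `fairnessScore` formula (one minus the worst disparity, floored at zero) is a hypothetical aggregation, not one the schema prescribes.

```python
def fairness_score(disparity: dict) -> float:
    """Hypothetical aggregate score: 1 minus the worst disparity, floored at 0."""
    return max(0.0, 1.0 - max(disparity.values()))

assessment = {
    "overallMetrics": {"accuracy": 0.91},
    "groupMetrics": [
        {"group": "gender=F", "count": 480, "metrics": {"selection_rate": 0.42}},
        {"group": "gender=M", "count": 520, "metrics": {"selection_rate": 0.55}},
    ],
    "disparityMetrics": {"demographicParityDiff": 0.13, "equalizedOddsDiff": 0.09},
}
assessment["fairnessScore"] = fairness_score(assessment["disparityMetrics"])

# "status" and "assessment" are the schema's required fields.
result = {"status": "success", "assessment": assessment}
```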

Usage Example

{
  kind: 'skill',
  title: '评估模型公平性',
  skill: {
    name: 'fairlearn-bias-detector',
    context: {
      modelPath: 'models/loan_model.pkl',
      dataPath: 'data/test.csv',
      sensitiveFeatures: ['gender', 'race'],
      labelColumn: 'approved',
      assessmentConfig: {
        metrics: ['demographic_parity', 'equalized_odds'],
        threshold: 0.8
      },
      mitigationConfig: {
        method: 'threshold_optimizer',
        constraint: 'demographic_parity'
      }
    }
  }
}
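Conceptually, the `threshold_optimizer` method requested above (backed by Fairlearn's `ThresholdOptimizer` post-processor) picks a separate decision threshold per sensitive group so that their selection rates line up, satisfying the `demographic_parity` constraint. The toy, dependency-free sketch below shows that idea only; it is not Fairlearn's actual algorithm, and the scores and group labels are invented.

```python
def group_thresholds(scores, groups, target_rate):
    """Per group, pick the threshold that selects roughly target_rate of its members."""
    thresholds = {}
    for g in set(groups):
        g_scores = sorted(s for s, grp in zip(scores, groups) if grp == g)
        n = len(g_scores)
        # Selecting the top-k scores yields rate k/n; choose k nearest the target.
        k = min(max(round(target_rate * n), 0), n)
        # Threshold at the k-th highest score (or above the max if k == 0).
        thresholds[g] = g_scores[n - k] if k > 0 else g_scores[-1] + 1.0
    return thresholds

scores = [0.9, 0.6, 0.8, 0.3, 0.7, 0.4, 0.2, 0.5]
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]
thr = group_thresholds(scores, groups, target_rate=0.5)
preds = [int(s >= thr[g]) for s, g in zip(scores, groups)]
# Both groups now have the same selection rate (2 of 4), so the
# demographic parity difference of the post-processed predictions is 0.
```

Note the trade-off this illustrates: equalizing selection rates uses different thresholds per group, which can lower overall accuracy; the `beforeMetrics`/`afterMetrics` pair in the output schema exists to make that cost visible.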