---
name: ai-governance
description: Guide to AI governance and compliance, covering EU AI Act risk classification, the NIST AI RMF, responsible AI principles, AI ethics review, and regulatory compliance for AI systems.
allowed-tools: Read, Glob, Grep, Task
---
# AI Governance

A comprehensive guide to AI governance, regulatory compliance, and responsible AI practice, covering the EU AI Act and the NIST AI Risk Management Framework (AI RMF).
## When to Use This Skill

- Classifying AI systems under the EU AI Act risk categories
- Assessing AI risk with the NIST AI RMF
- Implementing responsible AI principles
- Preparing for AI compliance audits
- Creating AI system documentation and model cards
- Establishing an AI governance framework
- Conducting AI ethics reviews
## Quick Reference

### EU AI Act Risk Classification

| Risk Tier | Description | Examples | Requirements |
|-----------|-------------|----------|--------------|
| Unacceptable | Prohibited practices | Social scoring, subliminal manipulation, exploiting vulnerabilities | Banned outright |
| High | Impacts safety or fundamental rights | Employment AI, credit scoring, biometric ID, critical infrastructure | Strict compliance obligations |
| Limited | Transparency needed | Chatbots, emotion recognition, deepfakes | Disclosure required |
| Minimal | Little or no regulation | Spam filters, game AI, recommender systems | Voluntary codes of conduct |
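As a rough illustration of the tiering above, the mapping can be sketched as a lookup. This is a sketch only: the use-case labels are hypothetical, and real classification requires legal analysis, not string matching.

```csharp
using System;

// Illustrative sketch: map a use-case label to its EU AI Act risk tier.
// The string labels are hypothetical stand-ins for a proper legal assessment.
string ClassifyTier(string useCase) => useCase switch
{
    "social-scoring" or "subliminal-manipulation" or "vulnerability-exploitation"
        => "unacceptable",
    "employment-screening" or "credit-scoring" or "biometric-id" or "critical-infrastructure"
        => "high",
    "chatbot" or "emotion-recognition" or "deepfake"
        => "limited",
    _ => "minimal" // spam filters, game AI, recommender systems
};

Console.WriteLine(ClassifyTier("credit-scoring")); // high
```

The `ClassifyRisk` method later in this document shows the same idea with proper types and the prohibited-practice checks applied first.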
### NIST AI RMF Functions

| Function | Purpose | Key Activities |
|----------|---------|----------------|
| Govern | Cultivate a risk culture | Policies, accountability, governance structures |
| Map | Understand context | Stakeholders, impacts, constraints, requirements |
| Measure | Assess and track | Risk metrics, testing, monitoring, evaluation |
| Manage | Prioritize and act | Mitigations, response, documentation |
### Responsible AI Principles

| Principle | Description | Implementation |
|-----------|-------------|----------------|
| Fairness | Equitable treatment, bias mitigation | Fairness metrics, bias testing, diverse data |
| Transparency | Explainable decisions | XAI methods, model cards, documentation |
| Accountability | Clear ownership and oversight | Governance roles, audit trails, escalation |
| Privacy | Data protection, consent | PII handling, anonymization, consent management |
| Safety | Reliable, secure operation | Testing, monitoring, incident response |
| Human Oversight | Meaningful control | HITL design, override mechanisms, review |
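One of the fairness metrics the table alludes to can be sketched concretely. The example below computes the demographic parity difference (the gap in positive-outcome rates between two groups); the decision data is invented for illustration.

```csharp
using System;
using System.Linq;

// Illustrative sketch: demographic parity difference, a common fairness metric.
// A value near 0 means the two groups receive positive outcomes at similar rates.
double PositiveRate(bool[] outcomes) => outcomes.Count(o => o) / (double)outcomes.Length;

double DemographicParityDifference(bool[] groupA, bool[] groupB) =>
    Math.Abs(PositiveRate(groupA) - PositiveRate(groupB));

// Hypothetical approval decisions for two demographic groups.
bool[] a = { true, true, false, true };   // 75% approved
bool[] b = { true, false, false, false }; // 25% approved
Console.WriteLine(DemographicParityDifference(a, b)); // 0.5
```

In practice this is one metric among several (equalized odds, calibration, etc.), and the acceptable threshold is a governance decision, not a technical one.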
## EU AI Act Compliance

### Prohibited AI Practices (Article 5)

```yaml
prohibited_practices:
  social_scoring:
    description: "General-purpose social credit systems"
    applies_to: "Scoring of citizens by public authorities"
    prohibition: "Absolute - no exceptions"
  subliminal_manipulation:
    description: "AI exploiting subliminal techniques to cause harm"
    applies_to: "Systems using techniques beyond a person's awareness"
    prohibition: "Absolute - no exceptions"
  vulnerability_exploitation:
    description: "AI exploiting age, disability, or social situation"
    applies_to: "Systems targeting vulnerable groups"
    prohibition: "Absolute - no exceptions"
  real_time_biometric_identification:
    description: "Remote biometric ID in publicly accessible spaces"
    applies_to: "Law enforcement use"
    exceptions:
      - "Searching for missing children"
      - "Preventing terrorist attacks"
      - "Identifying criminal suspects"
    authorization: "Requires prior judicial or administrative approval"
  emotion_inference_workplace:
    description: "Emotion recognition in workplaces/education"
    applies_to: "Employee/student monitoring"
    exceptions:
      - "Medical or safety purposes"
  predictive_policing:
    description: "Individual crime-risk prediction based solely on profiling"
    applies_to: "Predictive law enforcement"
    prohibition: "Absolute when based solely on profiling/traits"
  facial_recognition_scraping:
    description: "Untargeted collection of facial images"
    applies_to: "Databases built by scraping the internet/CCTV"
    prohibition: "Absolute - no exceptions"
```
### High-Risk AI Classification (Annex III)

```yaml
high_risk_categories:
  biometrics:
    - "Remote biometric identification systems"
    - "Biometric categorization (race, political views, religion)"
    - "Emotion recognition systems"
  critical_infrastructure:
    - "Safety components in road traffic"
    - "Management of water, gas, heating, and electricity"
    - "Safety components of digital infrastructure"
  education_training:
    - "Decisions on access to education/vocational training"
    - "Assessment of exams (learning outcomes)"
    - "Monitoring of behavior in institutions"
  employment:
    - "Recruitment and candidate screening"
    - "Targeting of job advertisements"
    - "Evaluation of applications"
    - "Promotion/termination decisions"
    - "Task allocation based on behavior/traits"
    - "Performance monitoring"
  essential_services:
    - "Credit scoring and creditworthiness"
    - "Risk assessment in life/health insurance"
    - "Prioritization of emergency service dispatch"
  law_enforcement:
    - "Individual risk assessment (recidivism)"
    - "Polygraphs and similar tools"
    - "Assessment of evidence reliability"
    - "Crime prediction (individuals/groups)"
    - "Profiling in investigations"
  migration_asylum:
    - "Polygraphs and similar tools at borders"
    - "Risk assessments (security, health, irregular entry)"
    - "Verification of travel document authenticity"
    - "Processing of asylum/visa/residence applications"
  justice_democracy:
    - "AI-assisted legal research/interpretation"
    - "AI-assisted application of law to facts"
    - "Alternative dispute resolution"
    - "Influencing elections/referendums"
```
### High-Risk AI Requirements

```csharp
namespace Security.AIGovernance;

/// <summary>
/// EU AI Act requirements for high-risk AI systems.
/// </summary>
public static class HighRiskRequirements
{
    /// <summary>
    /// Risk management system requirements (Article 9).
    /// </summary>
    public static readonly RiskManagementRequirements RiskManagement = new(
        ContinuousProcess: true,
        IdentifyKnownRisks: true,
        EstimateRiskLevels: true,
        EvaluateEmergingRisks: true,
        AdoptMitigations: true,
        DocumentDecisions: true,
        TestingRequirements: [
            "Testing against defined metrics",
            "Testing with representative data",
            "Testing against foreseeable misuse",
            "Testing by independent parties where appropriate"
        ]
    );

    /// <summary>
    /// Data and data governance requirements (Article 10).
    /// </summary>
    public static readonly DataGovernanceRequirements DataGovernance = new(
        TrainingDataDocumentation: true,
        DataQualityManagement: true,
        BiasExamination: true,
        RelevanceVerification: true,
        RepresentativenessCheck: true,
        SpecialCategoryDataHandling: [
            "Strictly limited to bias detection",
            "Subject to appropriate safeguards",
            "Not used for any other purpose"
        ]
    );

    /// <summary>
    /// Technical documentation requirements (Article 11).
    /// </summary>
    public static readonly TechnicalDocumentationRequirements Documentation = new(
        GeneralDescription: true,
        IntendedPurpose: true,
        DesignSpecifications: true,
        SystemArchitecture: true,
        DataRequirements: true,
        TrainingMethodologies: true,
        ValidationProcedures: true,
        PerformanceMetrics: true,
        RiskManagementSystem: true,
        Cybersecurity: true,
        ModificationLog: true
    );

    /// <summary>
    /// Record-keeping requirements (Article 12).
    /// </summary>
    public static readonly RecordKeepingRequirements RecordKeeping = new(
        AutomaticLogging: true,
        OperationalLogs: true,
        IdentityOfUsers: true,
        DateTimeOfUse: true,
        ReferenceInputData: true,
        OutputData: true,
        RetentionPeriod: "Appropriate to the intended purpose"
    );

    /// <summary>
    /// Transparency requirements (Article 13).
    /// </summary>
    public static readonly TransparencyRequirements Transparency = new(
        ClearInstructions: true,
        ProviderIdentity: true,
        SystemCapabilities: true,
        SystemLimitations: true,
        AccuracyLevels: true,
        ForeseeableRisks: true,
        HumanOversightMeasures: true,
        MaintenanceRequirements: true
    );

    /// <summary>
    /// Human oversight requirements (Article 14).
    /// </summary>
    public static readonly HumanOversightRequirements HumanOversight = new(
        DesignedForOversight: true,
        OperatorTools: [
            "Understand system capabilities and limitations",
            "Properly monitor operation",
            "Detect automation bias",
            "Correctly interpret output",
            "Override or interrupt the system",
            "Decide not to use, or to disregard, the output"
        ],
        Proportionate: "Commensurate with the level of risk and autonomy"
    );
}

public sealed record RiskManagementRequirements(
    bool ContinuousProcess,
    bool IdentifyKnownRisks,
    bool EstimateRiskLevels,
    bool EvaluateEmergingRisks,
    bool AdoptMitigations,
    bool DocumentDecisions,
    string[] TestingRequirements);

public sealed record DataGovernanceRequirements(
    bool TrainingDataDocumentation,
    bool DataQualityManagement,
    bool BiasExamination,
    bool RelevanceVerification,
    bool RepresentativenessCheck,
    string[] SpecialCategoryDataHandling);

public sealed record TechnicalDocumentationRequirements(
    bool GeneralDescription,
    bool IntendedPurpose,
    bool DesignSpecifications,
    bool SystemArchitecture,
    bool DataRequirements,
    bool TrainingMethodologies,
    bool ValidationProcedures,
    bool PerformanceMetrics,
    bool RiskManagementSystem,
    bool Cybersecurity,
    bool ModificationLog);

public sealed record RecordKeepingRequirements(
    bool AutomaticLogging,
    bool OperationalLogs,
    bool IdentityOfUsers,
    bool DateTimeOfUse,
    bool ReferenceInputData,
    bool OutputData,
    string RetentionPeriod);

public sealed record TransparencyRequirements(
    bool ClearInstructions,
    bool ProviderIdentity,
    bool SystemCapabilities,
    bool SystemLimitations,
    bool AccuracyLevels,
    bool ForeseeableRisks,
    bool HumanOversightMeasures,
    bool MaintenanceRequirements);

public sealed record HumanOversightRequirements(
    bool DesignedForOversight,
    string[] OperatorTools,
    string Proportionate);
```
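As a small illustration of the Article 12 record-keeping obligations (automatic logging of the period of use, user identity, and references to input and output data), a log entry might be shaped like the sketch below. The field names are assumptions for illustration, not a mandated schema.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of an Article 12-style automatic log entry.
// Field names are hypothetical; the Act specifies what to record, not a schema.
Dictionary<string, string> LogEntry(string userId, string inputRef, string outputRef) =>
    new()
    {
        ["timestamp"] = DateTimeOffset.UtcNow.ToString("o"), // date and time of use
        ["userId"] = userId,                                 // identity of the user
        ["inputRef"] = inputRef,                             // reference to input data
        ["outputRef"] = outputRef                            // reference to output data
    };

var entry = LogEntry("operator-42", "input/123", "output/123");
Console.WriteLine(entry["userId"]); // operator-42
```

Retention of such entries should follow the `RetentionPeriod` guidance above: appropriate to the intended purpose.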
## NIST AI Risk Management Framework

### Govern Function

```yaml
govern_function:
  description: "Cultivate a risk management culture"
  govern_1:
    name: "Policies and procedures"
    activities:
      - "Establish AI governance policies"
      - "Define AI risk tolerance"
      - "Create AI development standards"
      - "Document ethical guidelines"
    outputs:
      - "AI governance policy"
      - "Risk appetite statement"
      - "Development standards"
  govern_2:
    name: "Accountability structures"
    activities:
      - "Define AI ownership roles"
      - "Establish oversight committees"
      - "Create escalation paths"
      - "Assign compliance responsibilities"
    outputs:
      - "RACI matrix for AI systems"
      - "Governance org chart"
      - "Escalation procedures"
  govern_3:
    name: "Workforce diversity"
    activities:
      - "Diversify team composition"
      - "Adopt inclusive development practices"
      - "Provide bias-awareness training"
      - "Foster cross-functional collaboration"
    outputs:
      - "Diversity metrics"
      - "Training records"
      - "Team composition reports"
  govern_4:
    name: "Organizational culture"
    activities:
      - "Promote responsible AI values"
      - "Encourage ethical deliberation"
      - "Support risk identification"
      - "Foster transparency"
    outputs:
      - "Culture assessment results"
      - "Ethics training completion"
      - "Feedback mechanisms"
  govern_5:
    name: "Stakeholder engagement"
    activities:
      - "Identify affected stakeholders"
      - "Establish feedback channels"
      - "Incorporate stakeholder input"
      - "Communicate AI decisions"
    outputs:
      - "Stakeholder register"
      - "Engagement records"
      - "Communication plan"
  govern_6:
    name: "Legal compliance"
    activities:
      - "Map regulatory requirements"
      - "Monitor regulatory changes"
      - "Verify compliance"
      - "Maintain audit readiness"
    outputs:
      - "Compliance matrix"
      - "Regulatory tracker"
      - "Audit schedule"
```
### Map Function

```yaml
map_function:
  description: "Understand context and impacts"
  map_1:
    name: "Intended purpose"
    activities:
      - "Document business objectives"
      - "Define use-case boundaries"
      - "Identify target users"
      - "Specify deployment context"
    outputs:
      - "Use-case specification"
      - "User personas"
      - "Deployment plan"
  map_2:
    name: "Categorization"
    activities:
      - "Classify the AI system type"
      - "Determine the risk category"
      - "Identify regulatory applicability"
      - "Assess criticality level"
    outputs:
      - "Risk classification"
      - "Regulatory mapping"
      - "Criticality assessment"
  map_3:
    name: "Impacts and affected parties"
    activities:
      - "Identify potential harms"
      - "Map affected populations"
      - "Assess disparate impacts"
      - "Consider cumulative effects"
    outputs:
      - "Impact assessment"
      - "Affected-party analysis"
      - "Fairness considerations"
  map_4:
    name: "Dependencies"
    activities:
      - "Document data sources"
      - "Identify third-party components"
      - "Map system integrations"
      - "Assess supply-chain risks"
    outputs:
      - "Dependency inventory"
      - "Third-party risk assessment"
      - "Integration diagram"
  map_5:
    name: "Risk identification"
    activities:
      - "Enumerate potential risks"
      - "Consider failure modes"
      - "Assess adversarial threats"
      - "Evaluate misuse potential"
    outputs:
      - "Risk register"
      - "Threat model"
      - "Misuse scenarios"
```
### Measure Function

```yaml
measure_function:
  description: "Assess and track risks"
  measure_1:
    name: "Risk metrics"
    activities:
      - "Define risk metrics"
      - "Establish measurement methods"
      - "Set thresholds and tolerances"
      - "Build monitoring dashboards"
    outputs:
      - "KRI definitions"
      - "Measurement protocols"
      - "Threshold documentation"
  measure_2:
    name: "Testing and evaluation"
    activities:
      - "Conduct bias testing"
      - "Evaluate model performance"
      - "Test edge cases"
      - "Assess robustness"
    outputs:
      - "Test results"
      - "Performance metrics"
      - "Robustness report"
  measure_3:
    name: "Continuous monitoring"
    activities:
      - "Monitor model drift"
      - "Track performance degradation"
      - "Detect anomalies"
      - "Log incidents"
    outputs:
      - "Monitoring reports"
      - "Drift analyses"
      - "Incident logs"
  measure_4:
    name: "Independent assessment"
    activities:
      - "Conduct internal audits"
      - "Engage external reviewers"
      - "Facilitate red-team exercises"
      - "Perform algorithmic audits"
    outputs:
      - "Audit reports"
      - "External review findings"
      - "Red-team results"
```
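The "monitor model drift" activity in `measure_3` can be made concrete with a distribution-shift statistic. The sketch below uses the Population Stability Index (PSI) over binned score distributions; the bin shares and the 0.1/0.25 thresholds are common rules of thumb, not a NIST requirement.

```csharp
using System;
using System.Linq;

// Illustrative sketch: Population Stability Index between a baseline (training-time)
// score distribution and the live (production) distribution, both as bin shares.
double Psi(double[] expected, double[] actual) =>
    expected.Zip(actual, (e, a) => (a - e) * Math.Log(a / e)).Sum();

double[] baseline = { 0.25, 0.25, 0.25, 0.25 }; // hypothetical training bin shares
double[] live     = { 0.30, 0.25, 0.25, 0.20 }; // hypothetical production bin shares

double psi = Psi(baseline, live);
// Common interpretation: < 0.1 stable, 0.1-0.25 investigate, > 0.25 drifted.
Console.WriteLine(psi < 0.1 ? "stable" : psi < 0.25 ? "investigate" : "drifted");
```

A drift analysis output (per `measure_3`) would track this value per feature and per score over time, with alerts wired to the thresholds documented under `measure_1`.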
### Manage Function

```yaml
manage_function:
  description: "Prioritize and respond to risks"
  manage_1:
    name: "Risk prioritization"
    activities:
      - "Rank risks by severity"
      - "Assess likelihood and impact"
      - "Prioritize mitigation efforts"
      - "Allocate resources"
    outputs:
      - "Prioritized risk register"
      - "Resource allocation plan"
      - "Mitigation roadmap"
  manage_2:
    name: "Risk response"
    activities:
      - "Implement mitigations"
      - "Develop contingency plans"
      - "Create rollback procedures"
      - "Document decisions"
    outputs:
      - "Mitigation implementations"
      - "Contingency plans"
      - "Rollback procedures"
  manage_3:
    name: "Residual risk"
    activities:
      - "Assess remaining risk"
      - "Obtain risk acceptance"
      - "Document limitations"
      - "Communicate constraints"
    outputs:
      - "Residual risk assessment"
      - "Risk acceptance records"
      - "Limitations documentation"
  manage_4:
    name: "Documentation and communication"
    activities:
      - "Maintain risk documentation"
      - "Report to stakeholders"
      - "Share lessons learned"
      - "Update governance artifacts"
    outputs:
      - "Risk documentation"
      - "Stakeholder reports"
      - "Lessons learned"
```
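The "rank risks by severity" step in `manage_1` is often a simple likelihood-times-impact score. The sketch below shows one way to build a prioritized risk register from it; the risk names and 1-5 scales are hypothetical.

```csharp
using System;
using System.Linq;

// Illustrative sketch: prioritize risks by likelihood x impact (1-5 scales).
// Risk entries are invented examples, not a recommended register.
var risks = new[]
{
    (Name: "Training-data bias", Likelihood: 4, Impact: 5),
    (Name: "Model drift",        Likelihood: 3, Impact: 3),
    (Name: "Prompt injection",   Likelihood: 2, Impact: 4)
};

var prioritized = risks.OrderByDescending(r => r.Likelihood * r.Impact).ToList();

foreach (var r in prioritized)
    Console.WriteLine($"{r.Name}: {r.Likelihood * r.Impact}");
```

Real registers usually add mitigation status and an owner per entry, feeding the resource allocation plan listed among the `manage_1` outputs.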
## AI Risk Assessment

### Risk Classification Model

```csharp
namespace Security.AIGovernance;

/// <summary>
/// AI system risk classification and assessment.
/// </summary>
public sealed class AIRiskAssessment
{
    /// <summary>
    /// Classifies an AI system's risk level based on its characteristics.
    /// </summary>
    public static RiskClassification ClassifyRisk(AISystemCharacteristics system)
    {
        // Check for prohibited practices first
        if (IsProhibited(system))
        {
            return new RiskClassification(
                Level: RiskLevel.Unacceptable,
                Reasoning: "System falls under practices prohibited by the EU AI Act",
                Requirements: ["System must not be deployed"],
                ComplianceActions: ["Halt development", "Review alternative approaches"]);
        }

        // Check high-risk categories
        if (IsHighRisk(system))
        {
            return new RiskClassification(
                Level: RiskLevel.High,
                Reasoning: "System falls under an EU AI Act Annex III high-risk category",
                Requirements: [
                    "Implement a risk management system",
                    "Ensure data governance",
                    "Create technical documentation",
                    "Implement logging and record keeping",
                    "Ensure transparency toward users",
                    "Enable human oversight",
                    "Ensure accuracy, robustness, and cybersecurity",
                    "Conduct a conformity assessment"
                ],
                ComplianceActions: GetHighRiskActions(system));
        }

        // Check limited risk (transparency obligations)
        if (IsLimitedRisk(system))
        {
            return new RiskClassification(
                Level: RiskLevel.Limited,
                Reasoning: "System carries transparency obligations",
                Requirements: [
                    "Disclose AI interaction to users",
                    "Label AI-generated content where applicable",
                    "Inform about emotion recognition/biometric categorization"
                ],
                ComplianceActions: ["Implement disclosure mechanisms", "Update user interfaces"]);
        }

        // Minimal/no risk
        return new RiskClassification(
            Level: RiskLevel.Minimal,
            Reasoning: "System falls outside regulated categories",
            Requirements: ["Consider voluntary codes of conduct"],
            ComplianceActions: ["Document the risk assessment decision", "Monitor regulatory changes"]);
    }

    private static bool IsProhibited(AISystemCharacteristics system)
    {
        return system.UseCase switch
        {
            AIUseCase.SocialScoring => system.DeployedBy == DeploymentContext.PublicAuthority,
            AIUseCase.SubliminalManipulation => true,
            AIUseCase.VulnerabilityExploitation => true,
            AIUseCase.FacialRecognitionScraping => true,
            AIUseCase.PredictivePolicing => system.BasedSolelyOnProfiling,
            AIUseCase.EmotionRecognition => system.Context is DeploymentContext.Workplace or DeploymentContext.Education
                && !system.ForMedicalOrSafetyPurposes,
            _ => false
        };
    }

    private static bool IsHighRisk(AISystemCharacteristics system)
    {
        return system.Category is
            AICategory.Biometrics or
            AICategory.CriticalInfrastructure or
            AICategory.Education or
            AICategory.Employment or
            AICategory.EssentialServices or
            AICategory.LawEnforcement or
            AICategory.MigrationAsylum or
            AICategory.JusticeDemocracy;
    }

    private static bool IsLimitedRisk(AISystemCharacteristics system)
    {
        return system.UseCase is
            AIUseCase.Chatbot or
            AIUseCase.EmotionRecognition or
            AIUseCase.DeepfakeGeneration or
            AIUseCase.ContentGeneration;
    }

    private static string[] GetHighRiskActions(AISystemCharacteristics system)
    {
        var actions = new List<string>
        {
            "Establish a risk management system",
            "Document training data governance",
            "Create Annex IV-compliant technical documentation",
            "Implement automatic logging",
            "Create instructions for use",
            "Design for human oversight"
        };

        if (system.Category == AICategory.Biometrics)
        {
            actions.Add("Conduct a fundamental rights impact assessment");
            actions.Add("Register in the EU AI database");
        }

        return [.. actions];
    }
}

public sealed record AISystemCharacteristics(
    AICategory Category,
    AIUseCase UseCase,
    DeploymentContext Context,
    DeploymentContext? DeployedBy = null,
    bool BasedSolelyOnProfiling = false,
    bool ForMedicalOrSafetyPurposes = false);

public sealed record RiskClassification(
    RiskLevel Level,
    string Reasoning,
    string[] Requirements,
    string[] ComplianceActions);

public enum RiskLevel { Minimal, Limited, High, Unacceptable }

public enum AICategory
{
    Biometrics,
    CriticalInfrastructure,
    Education,
    Employment,
    EssentialServices,
    LawEnforcement,
    MigrationAsylum,
    JusticeDemocracy,
    General
}

public enum AIUseCase
{
    SocialScoring,
    SubliminalManipulation,
    VulnerabilityExploitation,
    FacialRecognitionScraping,
    PredictivePolicing,
    EmotionRecognition,
    BiometricIdentification,
    CreditScoring,
    RecruitmentScreening,
    PerformanceMonitoring,
    Chatbot,
    DeepfakeGeneration,
    ContentGeneration,
    RecommendationSystem,
    GameAI,
    SpamFilter,
    Other
}

public enum DeploymentContext
{
    PublicAuthority,
    PrivateSector,
    Workplace,
    Education,
    Healthcare,
    LawEnforcement,
    General
}
```
## Model Cards and Documentation

### Model Card Template

```yaml
model_card_template:
  model_details:
    name: ""
    version: ""
    type: "" # classification, regression, generative, etc.
    developer: ""
    license: ""
    release_date: ""
  intended_use:
    primary_use_cases: []
    intended_users: []
    out_of_scope_uses: []
  factors:
    relevant_factors: []
    evaluation_factors: []
  metrics:
    performance_measures: []
    decision_thresholds: []
    variation_approaches: []
  evaluation_data:
    datasets: []
    motivation: ""
    preprocessing: ""
  training_data:
    datasets: []
    motivation: ""
    preprocessing: ""
  quantitative_analyses:
    unitary_results: []
    intersectional_results: []
  ethical_considerations:
    sensitive_use_cases: []
    known_limitations: []
    bias_mitigations: []
  caveats_recommendations:
    known_issues: []
    recommendations: []
    additional_testing: []
```
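A governance pipeline can gate publication on the template above being fully filled in. The sketch below checks a model card for the template's required top-level sections; the partially filled card is a hypothetical example.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch: verify a model card contains every top-level section
// of the template above before it is published.
string[] required =
{
    "model_details", "intended_use", "factors", "metrics",
    "evaluation_data", "training_data", "quantitative_analyses",
    "ethical_considerations", "caveats_recommendations"
};

// Hypothetical, incomplete card: only three sections filled in so far.
var card = new HashSet<string> { "model_details", "intended_use", "metrics" };

var missing = required.Where(s => !card.Contains(s)).ToList();
Console.WriteLine(missing.Count == 0 ? "complete" : $"missing {missing.Count} sections");
```

In practice the same check would also validate that each present section is non-empty, not merely that the key exists.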
### AI System Documentation

```csharp
namespace Security.AIGovernance;

/// <summary>
/// AI system documentation for compliance and transparency.
/// </summary>
public sealed record AISystemDocumentation
{
    // General description
    public required string SystemName { get; init; }
    public required string Version { get; init; }
    public required string Description { get; init; }
    public required string IntendedPurpose { get; init; }
    public required RiskLevel RiskClassification { get; init; }
    public required DateTimeOffset DocumentDate { get; init; }

    // Provider information
    public required OrganizationInfo Provider { get; init; }
    public required ContactInfo TechnicalContact { get; init; }
    public required ContactInfo ComplianceContact { get; init; }

    // Technical specification
    public required SystemArchitecture Architecture { get; init; }
    public required ModelSpecification Model { get; init; }
    public required DataSpecification TrainingData { get; init; }
    public required PerformanceMetrics Performance { get; init; }

    // Risk management
    public required RiskAssessment Risks { get; init; }
    public required MitigationMeasures Mitigations { get; init; }
    public required HumanOversightDesign HumanOversight { get; init; }

    // Compliance
    public required ComplianceStatus Compliance { get; init; }
    public required List<AuditRecord> AuditHistory { get; init; }
    public required List<string> ApplicableRegulations { get; init; }
}

public sealed record OrganizationInfo(
    string Name,
    string Address,
    string Country,
    string RegistrationNumber);

public sealed record ContactInfo(
    string Name,
    string Email,
    string Phone);

public sealed record SystemArchitecture(
    string Description,
    List<string> Components,
    List<string> ExternalDependencies,
    List<string> IntegrationPoints);

public sealed record ModelSpecification(
    string ModelType,
    string Algorithm,
    string Framework,
    string TrainingApproach,
    DateTimeOffset LastTrainingDate);

public sealed record DataSpecification(
    string DataSources,
    long RecordCount,
    string DataTypes,
    string QualityMeasures,
    string BiasAssessment,
    bool ContainsSensitiveData,
    string SensitiveDataHandling);

public sealed record PerformanceMetrics(
    Dictionary<string, double> Metrics,
    string EvaluationMethodology,
    string LimitationsAndFailureModes);

public sealed record RiskAssessment(
    List<IdentifiedRisk> Risks,
    string OverallRiskLevel,
    DateTimeOffset AssessmentDate);

public sealed record IdentifiedRisk(
    string Description,
    string Likelihood,
    string Impact,
    string MitigationStatus);

public sealed record MitigationMeasures(
    List<string> TechnicalMeasures,
    List<string> OrganizationalMeasures,
    List<string> MonitoringMeasures);

public sealed record HumanOversightDesign(
    string OversightModel, // human-in-the-loop, human-on-the-loop, human-in-command
    List<string> OversightMechanisms,
    List<string> OverrideCapabilities,
    string TrainingRequirements);

public sealed record ComplianceStatus(
    bool EUAIActCompliant,
    string ConformityAssessmentStatus,
    string CertificationStatus,
    DateTimeOffset LastComplianceReview);

public sealed record AuditRecord(
    DateTimeOffset Date,
    string AuditType,
    string Auditor,
    string Findings,
    string CorrectiveActions);
```
## Compliance Checklists

### EU AI Act High-Risk Compliance Checklist

```yaml
eu_ai_act_high_risk_checklist:
  risk_management:
    - task: "Establish a risk management system"
      status: "pending"
      evidence: ""
    - task: "Document known and foreseeable risks"
      status: "pending"
      evidence: ""
    - task: "Implement risk mitigations"
      status: "pending"
      evidence: ""
    - task: "Conduct risk assessment testing"
      status: "pending"
      evidence: ""
  data_governance:
    - task: "Document training data sources"
      status: "pending"
      evidence: ""
    - task: "Implement data quality management"
      status: "pending"
      evidence: ""
    - task: "Conduct bias examination"
      status: "pending"
      evidence: ""
    - task: "Verify data representativeness"
      status: "pending"
      evidence: ""
  technical_documentation:
    - task: "Create Annex IV-compliant documentation"
      status: "pending"
      evidence: ""
    - task: "Document the system architecture"
      status: "pending"
      evidence: ""
    - task: "Document training methodologies"
      status: "pending"
      evidence: ""
    - task: "Document performance metrics"
      status: "pending"
      evidence: ""
  record_keeping:
    - task: "Implement automatic logging"
      status: "pending"
      evidence: ""
    - task: "Record user interactions"
      status: "pending"
      evidence: ""
    - task: "Define retention periods"
      status: "pending"
      evidence: ""
  transparency:
    - task: "Create instructions for use"
      status: "pending"
      evidence: ""
    - task: "Document capabilities and limitations"
      status: "pending"
      evidence: ""
    - task: "Specify accuracy levels"
      status: "pending"
      evidence: ""
  human_oversight:
    - task: "Design oversight mechanisms"
      status: "pending"
      evidence: ""
    - task: "Implement override capabilities"
      status: "pending"
      evidence: ""
    - task: "Define operator training requirements"
      status: "pending"
      evidence: ""
  accuracy_robustness_cybersecurity:
    - task: "Validate performance metrics"
      status: "pending"
      evidence: ""
    - task: "Test robustness"
      status: "pending"
      evidence: ""
    - task: "Conduct a security assessment"
      status: "pending"
      evidence: ""
  conformity_assessment:
    - task: "Complete self-assessment or third-party assessment"
      status: "pending"
      evidence: ""
    - task: "Prepare the EU declaration of conformity"
      status: "pending"
      evidence: ""
    - task: "Register in the EU database (if applicable)"
      status: "pending"
      evidence: ""
```
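Checklists like the one above are easy to track programmatically. The sketch below computes a completion percentage from task statuses; the task list and statuses are invented for illustration, and a real tracker would also require an evidence link before counting a task as done.

```csharp
using System;
using System.Linq;

// Illustrative sketch: completion tracking for a compliance checklist.
// Tasks and statuses are hypothetical examples.
var tasks = new[]
{
    (Task: "Establish a risk management system", Status: "done"),
    (Task: "Document known risks",               Status: "done"),
    (Task: "Implement automatic logging",        Status: "pending"),
    (Task: "Design oversight mechanisms",        Status: "pending")
};

double completion = 100.0 * tasks.Count(t => t.Status == "done") / tasks.Length;
Console.WriteLine($"{completion}% complete"); // 50% complete
```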
### NIST AI RMF Implementation Checklist

```yaml
nist_ai_rmf_checklist:
  govern:
    - task: "Establish AI governance policies"
      status: "pending"
    - task: "Define accountability structures"
      status: "pending"
    - task: "Create risk management procedures"
      status: "pending"
    - task: "Establish stakeholder engagement processes"
      status: "pending"
    - task: "Map legal and regulatory requirements"
      status: "pending"
  map:
    - task: "Document intended purpose and use cases"
      status: "pending"
    - task: "Categorize AI systems by risk"
      status: "pending"
    - task: "Identify potential impacts and affected parties"
      status: "pending"
    - task: "Document data and model dependencies"
      status: "pending"
    - task: "Identify and enumerate risks"
      status: "pending"
  measure:
    - task: "Define risk metrics and indicators"
      status: "pending"
    - task: "Conduct bias and fairness testing"
      status: "pending"
    - task: "Evaluate model performance"
      status: "pending"
    - task: "Implement continuous monitoring"
      status: "pending"
    - task: "Schedule independent assessments"
      status: "pending"
  manage:
    - task: "Prioritize risks by severity"
      status: "pending"
    - task: "Implement risk mitigations"
      status: "pending"
    - task: "Develop contingency and rollback plans"
      status: "pending"
    - task: "Document residual risks and acceptance"
      status: "pending"
    - task: "Establish ongoing communication processes"
      status: "pending"
```
## References

- EU AI Act details: see `references/eu-ai-act-requirements.md` for the full mapping to regulatory text
- NIST AI RMF: see `references/nist-ai-rmf-profiles.md` for industry-specific profiles
- Model cards: see `references/model-card-examples.md` for completed examples

## Related Skills

- Threat Modeling - security threat analysis for AI systems
- DevSecOps Practices - integrating AI governance into pipelines
- Vulnerability Management - managing AI system vulnerabilities

Last updated: 2025-12-26